
DYNAMIC COLOR RESTORATION METHOD IN REAL TIME IMAGE SYSTEM EQUIPPED WITH DIGITAL IMAGE SENSORS

Li-Cheng Chiu* and Chiou-Shann Fuh

ABSTRACT

When digital image sensors are used to capture images or videos, the captured colors differ noticeably from the colors observed by human vision. This phenomenon results from the distinct response curves of digital image sensors and human vision, so color restoration is an essential step when images are captured by digital image sensors. In real time image systems, such as digital still cameras or video camcorders, accurate color restoration is an arduous task because of highly changeable environmental illumination. This paper introduces a color restoration method to dynamically reproduce image colors. Our proposed method employs a two-step restoration procedure: white balance of the captured image is the first step, followed by image color correction in the second step. Our method achieves accurate color restoration, verified through empirical analysis, and runs very fast owing to its low computational complexity.

Key Words: digital image sensor, color restoration, illumination, two-step restoration procedure.

*Corresponding author. (Tel: 886-932-234973; Fax: 886-2-26590077; Email: d93010@csie.ntu.edu.tw)

The authors are with the Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan, R.O.C.

I. INTRODUCTION

The color of an object is observed through the combination of image sensor sensitivity and object surface reflectance under various illuminations. Distinct sensor sensitivities will produce different image colors even under the same illumination. It is a great challenge to compensate for the inconsistent responses between image sensors and human vision. Gray world assumptions (Forsyth, 1990; Funt et al., 1996; Finlayson et al., 1995; Barnard et al., 2002) and perfect reflector assumptions (Forsyth, 1990; Funt et al., 1996; Finlayson et al., 1995; Barnard et al., 2002) recover image colors by a diagonal matrix.

This diagonal matrix can correct the unbalanced sensitivity of each channel of a digital image sensor, but it is not sufficient to restore true colors in real scenarios. Color calibration (Chang and Reid, 1996; Vrhel and Trussell, 1999) corrects color discrepancies between standard color targets and devices. Least square approximation (LSA) (Wolf, 2003) uses a recursive algorithm to acquire an approximate solution and usually obtains satisfactory restoration results. Both methods are hard to implement in real time image systems because they need a standard color target in various scenarios. The prior information method (Hu et al., 2001) and neural networks (Yin and Cooperstock, 2004) detect color casts under different illuminations, but they are limited because they rely on a great deal of prior information.

This paper introduces a fast and dynamic color restoration method for real time image systems with digital image sensors. Our method first recovers the unbalanced sensitivity of each color channel; image color is then further adjusted through dynamic estimation of the color correction matrix. This paper is organized as follows. The problem formulation of color restoration is described in Section II. Our proposed method is discussed in Section III. The experiments in Section IV show the results of our method, followed by conclusions in Section V.

II. PROBLEM FORMULATION AND PREVIOUS RELATED WORKS

Many papers (Horn, 1984; Wandell, 1987; Viggiano, 2001; Zhang and Brainard, 2004) have explored the color relationship between digital image sensors and human vision. Here we give a brief review and define the notations as follows.

When an object with surface reflectance R is illuminated by a light source I, the sensor response P for this object can be described as

$$P_i = \int_w I(\lambda)R(\lambda)S_i(\lambda)\,d\lambda, \quad i \in \{r, g, b\}, \tag{1}$$

where S is the image sensor sensitivity function and the wavelength λ is integrated over the visible spectrum w. Assume that the sensor sensitivity S is a Dirac delta function; Eq. (1) can then be rewritten as

$$P_i = I(\lambda_i)R(\lambda_i), \quad i \in \{r, g, b\}. \tag{2}$$

Eq. (2) shows that, under the narrow-band sensor sensitivity assumption, the sensor response correlates only with the environmental illumination and the object surface reflectance. Each RGB channel of the sensor response can therefore be transformed to an identical value through one diagonal matrix transformation:

$$\begin{bmatrix} I(\lambda_g)R(\lambda_g) \\ I(\lambda_g)R(\lambda_g) \\ I(\lambda_g)R(\lambda_g) \end{bmatrix} = \begin{bmatrix} \dfrac{I(\lambda_g)R(\lambda_g)}{I(\lambda_r)R(\lambda_r)} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \dfrac{I(\lambda_g)R(\lambda_g)}{I(\lambda_b)R(\lambda_b)} \end{bmatrix} \begin{bmatrix} I(\lambda_r)R(\lambda_r) \\ I(\lambda_g)R(\lambda_g) \\ I(\lambda_b)R(\lambda_b) \end{bmatrix}. \tag{3}$$

The determination of the diagonal matrix is called white balance for the environmental illumination. Gray world and perfect reflector assumptions are both well-known white balance algorithms for obtaining the diagonal matrix under various illuminations. To simplify the notation, Eq. (3) can be described as

$$C_i = DP_i, \quad i \in \{r, g, b\}, \tag{4}$$

where C represents the sensor response after white balance and D denotes the diagonal matrix. Unfortunately, the sensor sensitivity function is not a narrow-band Dirac delta function and usually covers broad-band wavelengths of light, so the sensor response after white balance will still show color discrepancies from human vision even when the diagonal matrix is obtained accurately. Fortunately, some studies (Worthey and Brill, 1986; Finlayson et al., 1994; Finlayson and Funt, 1996) showed that the white-balanced sensor response can approximate human vision well through a 3 × 3 matrix correction. The relation between the human vision response X and the white-balanced sensor response C is given by

$$X_i \approx MC_i, \quad i \in \{r, g, b\}, \tag{5}$$

where M is a 3 × 3 color correction matrix. From Eqs. (4) and (5), image color can be restored well when the diagonal and color correction matrices are both obtained correctly.
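As a concrete illustration of the two-step model of Eqs. (4) and (5), the sketch below applies a diagonal white balance matrix and then a 3 × 3 correction matrix to the response of a single white pixel. All numeric values, including the correction matrix M, are invented for illustration; they are not calibration data from the paper.

```python
import numpy as np

# Hypothetical sensor response (R, G, B) for a white patch; values are illustrative.
P = np.array([0.42, 0.80, 0.35])

# Step 1: white balance. A diagonal matrix D equalizes channel sensitivities (Eq. 4);
# for a white patch, the balanced channels become identical.
D = np.diag([0.80 / 0.42, 1.0, 0.80 / 0.35])
C = D @ P

# Step 2: a 3x3 color correction matrix M maps the balanced response toward
# the human visual response (Eq. 5). This M is a placeholder whose rows sum
# to 1, a common constraint that preserves neutral grays.
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [-0.1, -0.5,  1.6]])
X = M @ C  # approximated human vision response
```

Composing the two steps, X ≈ M D P, which is exactly the compound matrix T = MD of Eq. (6).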

Some studies (Chang and Reid, 1996) skip the diagonal matrix calculation and directly compute the compound matrix T as

T = MD. (6)

In a static image system, least square approximation (LSA) (Wolf, 2003) usually gives a satisfactory result for compound matrix calculation. LSA obtains an optimal transformation matrix A with minimum error between the original vector V and the target vector O. The approximation can be defined as

$$O \approx \tilde{O} = AV, \tag{7}$$

where Õ is the result of the original vector V multiplied by the optimal transformation matrix A. For a 3 × 3 color correction matrix and a three-channel sensor response, Eq. (7) has nine unknown matrix entries. If sufficiently many independent samples are used, the system of Eq. (7) is over-determined and the LSA solution can be given by

$$O \approx AV \;\Rightarrow\; OV^T \approx AVV^T \;\Rightarrow\; A \approx OV^T(VV^T)^{-1}. \tag{8}$$

In a dynamic image system, a standard color target is usually unavailable. Therefore our proposed method adopts a two-step procedure to restore image colors. The details of our method are described in the following section.
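The LSA solution of Eq. (7) can be sketched with NumPy's least-squares solver. The true matrix and the sample responses below are synthetic, not data from the paper; with noiseless samples the estimate recovers the matrix exactly.

```python
import numpy as np

# Least square approximation: find A minimizing ||O - A V|| over more than
# nine independent color samples. V holds original sensor responses (3 x n),
# O the target responses (3 x n); all values here are synthetic.
rng = np.random.default_rng(0)
A_true = np.array([[ 1.5, -0.3, -0.2],
                   [-0.2,  1.4, -0.2],
                   [-0.1, -0.4,  1.5]])
V = rng.uniform(0.0, 1.0, size=(3, 12))  # 12 color samples
O = A_true @ V                           # noiseless targets

# Solve the over-determined system column-wise: V^T A^T ~ O^T.
A_est, *_ = np.linalg.lstsq(V.T, O.T, rcond=None)
A_est = A_est.T
```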

III. OUR PROPOSED METHOD

For a real time image system, the color rendition of an image sensor depends on the environmental illumination and the sensor sensitivity functions. To precisely restore image colors, our proposed method detects the color temperature of the light source and corrects image colors toward human vision based on the detected environmental color temperature. The algorithm flow is demonstrated in Fig. 1. At first, a calibration model is constructed under specific light sources. This calibration model is used to recognize the color temperature of the environmental illumination and to balance the sensitivity discrepancies of the RGB channels. In the second step, image colors are corrected through the relationship between the detected environmental color temperature and the calibration model. Our method does not rely on standard color targets. It reduces computation complexity and performs excellent color restoration.

1. White Balance Algorithm

Image white balance aims at accurate acquisition of the diagonal matrix in Eq. (4). Under various illuminations, the RGB values of white colors are always consistent, so white is usually regarded as an index color for white balance adjustment. Our proposed white balance model is employed in the G/R-G/B color space. When white points are captured by an image sensor under various light sources, their locations in the G/R-G/B coordinate can be represented as in Fig. 2. These white-point locations are concentrated into a band, named the white-point color temperature band. This band is characteristic of a digital image sensor.

The G/R-G/B coordinate depicts the color temperature behavior of light sources. When the color temperature of the light source is higher, the blue component is stronger and the red component is weaker. On the contrary, when the color temperature of the light source is lower, the blue component is weaker and the red component is stronger. Aside from color temperature coordinates, pixel luminance is an important factor in the G/R-G/B coordinates. Image sensor current noise is a zero-mean vibration and is added into the RGB channels when the raw image is output. Therefore, when pixel luminance is higher, the noise effect is lower; when pixel luminance is lower, the noise effect is relatively higher. It is concluded that white points in a scene will be located in a band in the luminance-G/R-G/B three-dimensional color temperature coordinate.

The calibration model of our white balance is built by capturing a white chart with the image sensor under five specific light sources. In our experiments, these five light sources correspond to the commonly used color temperatures 7500 K, 6500 K, 5000 K, 4100 K, and 3100 K. The five locations in G/R-G/B coordinates are demonstrated in Fig. 3 and the five-point curve is called the white point color temperature curve (WPCTC). Let the five color temperature coordinates be (x1, y1), (x2, y2), ..., (x5, y5) and define the color temperature distance (CTD) and minimum color temperature distance (MCTD) of an arbitrary pixel (x0, y0) as

$$\mathrm{CTD}(x_0, y_0, i) = \frac{|m_i(x_0 - x_i) - (y_0 - y_i)|}{\sqrt{m_i^2 + 1}}, \quad i = 1, \ldots, 4, \tag{9}$$

$$\mathrm{MCTD}(x_0, y_0) = \begin{cases} \sqrt{(x_0 - x_1)^2 + (y_0 - y_1)^2} & \text{if } x_0 < x_1 \text{ and } y_0 > y_1 \\ \sqrt{(x_0 - x_5)^2 + (y_0 - y_5)^2} & \text{if } x_0 > x_5 \text{ and } y_0 < y_5 \\ \min_i(\mathrm{CTD}(x_0, y_0, i)) & \text{otherwise,} \end{cases} \tag{10}$$

Fig. 1 The scheme of our proposed color restoration method. The first step is white balance adjustment and the second step is color correction.

Fig. 2 White point allocations in the G/R-G/B coordinate.

Fig. 3 A white chart under five specific light sources in G/R-G/B coordinates.


where mi represents the slope between coordinates (xi, yi) and (xi+1, yi+1). CTD(x0, y0, i) and MCTD(x0, y0) denote the four projection distances and the shortest projection distance from an arbitrary pixel to the line segments of the WPCTC. Once the MCTD is calculated, the white-point detection mechanism can be expressed as

$$\text{White Point} = \begin{cases} \text{Yes}, & \text{if } \mathrm{MCTD} \le \text{distance threshold and } L \ge \text{luminance threshold} \\ \text{No}, & \text{otherwise.} \end{cases} \tag{11}$$

An image pixel is regarded as a white point when its luminance L exceeds the luminance threshold and its MCTD is under the distance threshold. Experiments show satisfactory results in most scenarios and sensors with a luminance threshold of 70 and a distance threshold of 0.25.
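A minimal sketch of the white-point test of Eqs. (9)-(11), assuming an already calibrated WPCTC. The five node coordinates below are invented for illustration; a real curve would come from the calibration described above.

```python
import math

# Hypothetical WPCTC: five (G/R, G/B) calibration nodes from high to low
# color temperature. Thresholds follow the values reported in the text.
WPCTC = [(0.5, 2.6), (0.8, 2.1), (1.2, 1.5), (1.8, 1.0), (2.6, 0.6)]
DIST_THRESH, LUMA_THRESH = 0.25, 70

def mctd(x0, y0):
    """Minimum color temperature distance from (x0, y0) to the WPCTC (Eq. 10)."""
    (x1, y1), (x5, y5) = WPCTC[0], WPCTC[-1]
    if x0 < x1 and y0 > y1:          # beyond the high color temperature end
        return math.hypot(x0 - x1, y0 - y1)
    if x0 > x5 and y0 < y5:          # beyond the low color temperature end
        return math.hypot(x0 - x5, y0 - y5)
    ctds = []
    for (xi, yi), (xj, yj) in zip(WPCTC, WPCTC[1:]):
        mi = (yj - yi) / (xj - xi)   # slope of segment i (Eq. 9)
        ctds.append(abs(mi * (x0 - xi) - (y0 - yi)) / math.sqrt(mi**2 + 1))
    return min(ctds)

def is_white_point(g_over_r, g_over_b, luminance):
    """White-point examination of Eq. (11)."""
    return mctd(g_over_r, g_over_b) <= DIST_THRESH and luminance >= LUMA_THRESH
```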

Our white balance method can be divided into three parts: white-point color temperature curve construction, the white-point detection mechanism, and white balance adjustment. The detailed flow of our white balance is demonstrated in Fig. 4. First, the G/R-G/B coordinates of the white chart under the five light sources are calibrated for an image sensor, and the WPCTC of this sensor is composed of these five coordinates. The WPCTC is the characteristic curve of a sensor and need not be reconstructed after the first calibration. The second part is the white-point detection mechanism. An image pixel is regarded as a white point when it passes the examination of Eq. (11); the pixel values of its RGB channels are then accumulated separately. If a pixel fails the examination, nothing is done and the next image pixel enters the mechanism. The examination is repeated until all image pixels have passed through the white-point detection mechanism. The accumulated RGB pixel values of qualified white points in an image can be represented as

$$R_a = \sum_{j=1}^{m} R_q, \quad G_a = \sum_{j=1}^{m} G_q, \quad B_a = \sum_{j=1}^{m} B_q. \tag{12}$$

The accumulated RGB pixel values are denoted as Ra, Ga, and Ba; the pixel values of qualified white points are represented as Rq, Gq, and Bq; m is the number of qualified white points. In the last step, the compensation diagonal matrix is obtained by

$$D = \begin{bmatrix} G_a/R_a & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & G_a/B_a \end{bmatrix}. \tag{13}$$
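The accumulation of Eq. (12) and the diagonal matrix of Eq. (13) can be sketched as follows. The three qualified white-point pixels are synthetic; applying D to their mean drives all three channels to the same value, i.e., a neutral white.

```python
import numpy as np

# Synthetic qualified white points; each row holds one pixel's (Rq, Gq, Bq).
white_points = np.array([[200, 240, 170],
                         [190, 236, 165],
                         [205, 244, 175]], dtype=float)

# Eq. (12): accumulate the channel sums over the m qualified white points.
Ra, Ga, Ba = white_points.sum(axis=0)

# Eq. (13): compensation diagonal matrix.
D = np.diag([Ga / Ra, 1.0, Ga / Ba])

# Applying D to the mean white point yields equal channels (neutral white).
balanced = D @ white_points.mean(axis=0)
```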

2. Color Correction Algorithm

After white balance adjustment, the diagonal matrix D is known. The next step is to obtain the color correction matrix M for further color restoration. From Eqs. (2), (4), and (5), we know that the diagonal and color correction matrices alter with the illumination on an object. This implies that the color correction matrix changes with the diagonal matrix.

Let the five color correction matrices M1, M2, ..., M5 be the approximation solutions of Eq. (5) under the five specific illuminations used in the WPCTC construction. These five matrices can be easily calculated through least square approximation. Let the white balance ratios (Ga/Ra, Ga/Ba) be the color temperature coordinates of the current environmental illumination. We can then acquire a color correction matrix under an arbitrary illumination by

$$M = \alpha M_j + (1 - \alpha) M_{j+1}, \tag{14}$$

where j and j + 1 denote the two consecutive nodes in the WPCTC closest to the environmental color temperature coordinate (Ga/Ra, Ga/Ba), and α represents the distance weight between the environmental

color temperature coordinate and these two coordinates.

Fig. 4 The scheme of our proposed white balance algorithm.

The distance weight is defined as

$$\alpha = \frac{\mathrm{CTD}_{j+1}}{\mathrm{CTD}_j + \mathrm{CTD}_{j+1}}. \tag{15}$$

Eqs. (14) and (15) show that when the color temperature coordinates of two light sources are spatially closer, the corresponding color correction matrices are quantitatively more similar.
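The interpolation of Eqs. (14) and (15) can be sketched as follows. The two calibrated matrices and the two color temperature distances are hypothetical placeholders; in practice they would come from the LSA calibration and the MCTD computation above.

```python
import numpy as np

# Hypothetical correction matrices calibrated at the two WPCTC nodes nearest
# to the detected environmental color temperature.
M_j  = np.array([[ 1.7, -0.5, -0.2], [-0.3, 1.6, -0.3], [-0.1, -0.6, 1.7]])
M_j1 = np.array([[ 1.5, -0.3, -0.2], [-0.2, 1.4, -0.2], [-0.1, -0.4, 1.5]])

# Hypothetical distances from the current (Ga/Ra, Ga/Ba) point to the nodes.
ctd_j, ctd_j1 = 0.06, 0.18

alpha = ctd_j1 / (ctd_j + ctd_j1)       # Eq. (15): the nearer node weighs more
M = alpha * M_j + (1.0 - alpha) * M_j1  # Eq. (14): blended correction matrix
```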

In a real time image system, frequently changing environmental illumination always makes color restoration difficult. In our proposed method, the color correction matrix is obtained through the color temperature distance between the environmental illumination and the calibrated color temperature coordinates. Although no standard color target is available for color correction matrix calculation, the matrix acquired from Eqs. (14) and (15) is very close to the approximation result of LSA. Experiments in the following section demonstrate the color behavior of LSA and our method.

IV. EXPERIMENTAL RESULTS

1. Algorithm Simulation

To evaluate our method, we prepared unprocessed raw images from digital still cameras under various illuminations. Unprocessed raw images can be used to compare various color restoration algorithms. Because LSA is considered an accurate method for static color reproduction, here we compare LSA with our method on the same raw images. The root mean squared error (RMSE) in Eq. (16) between restored images and the standard color chart, the GretagMacbeth ColorChecker, is used for quantitative evaluation.

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum \left[ (R_r - R_s)^2 + (G_r - G_s)^2 + (B_r - B_s)^2 \right]}, \tag{16}$$

where N is the total number of pixels. The restored image in RGB color space is described as (Rr, Gr, Br) and the standard color chart as (Rs, Gs, Bs). Fig. 5 demonstrates the RMSE between the standard color target and the images restored by LSA and by our method. Besides restored images, the RMSE between the standard color target and the raw images is also shown in this figure. The RMSE data in Fig. 5 represent accumulated errors over blocks 13 to 19 of the GretagMacbeth ColorChecker under various illuminations. LSA restores image colors by least squared approximation between each raw image and the standard colors of the GretagMacbeth ColorChecker, so it performs excellent color restoration. Based on the same raw images, our method closely matches

LSA performance with the two-step restoration procedure. Fig. 6 demonstrates one set of these compared images. The unprocessed raw image is displayed in Fig. 6(a). Image colors in the raw image are very greenish because the green channel of a digital image sensor usually has a broader range and higher sensitivity than the other two channels.

Fig. 6(b) shows the raw image after our white balance.

Our method detects the environmental illumination and adjusts the diagonal matrix based on the environmental information. The unbalanced colors in the raw image have been recovered, but image colors are not yet restored accurately. Fig. 6(c) shows the final result of our method. The color correction matrix is determined dynamically, based on the relation between the current and the reference calibrated illuminations. Therefore image colors are restored excellently and are similar to the restored colors produced by LSA in Fig. 6(d). Although colors from sensors are device-dependent, the transformed colors are compared in the device-independent sRGB color space.
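The RMSE metric of Eq. (16) used in this comparison can be sketched as follows on two synthetic RGB samples; real evaluation would average over the selected ColorChecker blocks.

```python
import numpy as np

# Synthetic stand-ins: each row is one pixel's (R, G, B) in the restored
# image and in the reference chart.
restored = np.array([[120,  80,  60], [200, 150, 100]], dtype=float)
standard = np.array([[118,  84,  63], [205, 148, 102]], dtype=float)

N = restored.shape[0]  # total number of pixels compared
rmse = np.sqrt(np.sum((restored - standard) ** 2) / N)  # Eq. (16)
```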

2. Practical Implementation

To further observe real scenarios, we implemented our method in a prototype camera. This prototype camera has the full functions and capabilities of a digital camera, including a six megapixel image sensor, image signal processing, an optical zoom lens, and so on. It can output uncorrected raw images, intermediate images after white balance, and final restored results. Figs. 7, 8, and 9 demonstrate some scenarios captured by our prototype camera. Figs. 7(a), 8(a), and 9(a) show unprocessed raw images. These raw images are all greenish due to the higher sensitivity of the green channel. Figs. 7(b), 8(b), and 9(b) display the intermediate images after our white balance. Image colors are balanced by diagonal matrix compensation. Because the red, green, and blue channels of digital image sensors are not impulse functions, image colors restored using only

Fig. 5 RMSE sums over blocks 13 to 19 of the ColorChecker for our method, LSA, and the raw images.


Fig. 6 (a) Unprocessed raw image from the digital image sensor; (b) image recovered after white balance by our method; (c) image restored after color correction by our method; (d) image restored by LSA.

Fig. 7 (a) Raw image, (b) intermediate image after our white balance, and (c) final result.

Fig. 8 (a) Raw image, (b) intermediate image after our white balance, and (c) final result.


diagonal matrix compensation still have discrepancies from the true colors. Figs. 7(c), 8(c), and 9(c) show the final results of our method. The restored colors are very similar to the true colors because the color correction matrices are obtained from the relation between the current and the reference calibrated color temperatures.

V. CONCLUSIONS

This paper introduces a dynamic color restoration method for real time image systems with digital image sensors. Our method restores image colors by a two-step procedure and dynamically tracks highly changeable environmental color temperatures. The first step is white balance recovery; color correction is the second step. Our white balance locates the environmental color temperature and compensates for the unbalanced colors based on the detected environmental information. The color correction step refers to the detected environmental color temperature and obtains a corresponding color correction matrix. Image colors are accurately restored by our method according to both objective and subjective experimental results.

ACKNOWLEDGEMENT

The authors would like to thank the reviewers for their insightful comments. This work was supported partially by Syntek Semiconductor Corporation.

Fig. 9 (a) Raw image, (b) intermediate image after our white balance, and (c) final result.

NOMENCLATURE

A: optimal transformation matrix
Ba: accumulated pixel value of blue channel
Bq: pixel value of blue channel of qualified white point
Br: pixel value of blue channel of restored image
Bs: pixel value of blue channel of standard color chart
C: sensor response after white balance
CTD(x0, y0, i): color temperature distance of an arbitrary pixel (x0, y0)
D: diagonal matrix
Ga: accumulated pixel value of green channel
Gq: pixel value of green channel of qualified white point
Gr: pixel value of green channel of restored image
Gs: pixel value of green channel of standard color chart
I: illumination
j, j + 1: two consecutive nodes in WPCTC
M: color correction matrix
MCTD(x0, y0): minimum color temperature distance of an arbitrary pixel (x0, y0)
m: number of qualified white points
mi: slope between coordinates (xi, yi) and (xi+1, yi+1)
N: total number of pixels
O: target vector
Õ: result of the original vector multiplied by the optimal transformation matrix
P: sensor response
R: surface reflectance
RMSE: root mean squared error
Ra: accumulated pixel value of red channel
Rq: pixel value of red channel of qualified white point
Rr: pixel value of red channel of restored image
Rs: pixel value of red channel of standard color chart
S: sensor sensitivity
T: compound matrix
V: original vector
X: human vision response

Greek Symbols

α: distance weight
λ: wavelength


REFERENCES

Barnard, K., Funt, B., and Cardei, V., 2002, "A Comparison of Computational Color Constancy Algorithms; Part One: Methodology and Experiments with Synthesized Data," IEEE Transactions on Image Processing, Vol. 11, No. 9, pp. 972-984.

Chang, Y. C., and Reid, J. F., 1996, "RGB Calibration for Color Image Analysis in Machine Vision," IEEE Transactions on Image Processing, Vol. 5, No. 10, pp. 1414-1422.

Finlayson, G. D., Drew, M. S., and Funt, B. V., 1994, "Spectral Sharpening: Sensor Transformations for Improved Color Constancy," Journal of the Optical Society of America A, Vol. 11, No. 5, pp. 1553-1563.

Finlayson, G. D., Funt, B. V., and Barnard, K., 1995, "Color Constancy under a Varying Illumination," Proceedings of International Conference on Computer Vision, Boston, MA, USA, pp. 720-725.

Finlayson, G. D., and Funt, B. V., 1996, "Coefficient Channels: Derivation and Relationship to Other Theoretical Studies," Color Research and Application, Vol. 21, No. 2, pp. 87-96.

Forsyth, D. A., 1990, "A Novel Algorithm for Color Constancy," International Journal of Computer Vision, Vol. 5, No. 1, pp. 5-35.

Funt, B. V., Cardei, V., and Barnard, K., 1996, "Learning Color Constancy," Proceedings of IS&T and SID Conference on Color Imaging, Scottsdale, USA, pp. 58-60.

Horn, B. K. P., 1984, "Exact Reproduction of Colored Images," Computer Vision, Graphics, and Image Processing, Vol. 26, No. 1, pp. 135-167.

Hu, B., Lin, Q., Chen, Q. M., and Chang, L. M., 2001, "Automatic White Balance Based on Priori Information," Journal of Circuits and Systems, Vol. 6, No. 2, pp. 25-28.

Viggiano, J. A. S., 2001, "Minimal-Knowledge Assumptions in Digital Still Camera Characterization. I: Uniform Distribution, Toeplitz Correlation," Proceedings of IS&T/SID Conference on Color Imaging, Scottsdale, USA, pp. 332-336.

Vrhel, M. J., and Trussell, H. J., 1999, "Color Device Calibration: A Mathematical Formulation," IEEE Transactions on Image Processing, Vol. 8, No. 12, pp. 1796-1806.

Wandell, B. A., 1987, "The Synthesis and Analysis of Color Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 9, No. 1, pp. 2-13.

Wolf, S., 2003, "Color Correction Matrix for Digital Still and Video Imaging Systems," TM-04-406, NTIA Technical Memorandum, Washington, D.C., USA.

Worthey, J. A., and Brill, M. H., 1986, "Heuristic Analysis of Von Kries Color Constancy," Journal of the Optical Society of America A, Vol. 3, No. 10, pp. 1708-1712.

Yin, J., and Cooperstock, J. R., 2004, "Color Correction Methods with Applications to Digital Projection Environments," Proceedings of Conference on Computer Graphics, Visualization and Computer Vision, Vol. 12, No. 1, pp. 102-120.

Zhang, X., and Brainard, D. H., 2004, "Bayesian Color Correction Method for Non-Colorimetric Digital Image Sensors," Proceedings of IS&T/SID Conference on Color Imaging, Scottsdale, USA, pp. 308-314.

Manuscript Received: Oct. 21, 2008 Revision Received: Mar. 07, 2009 and Accepted: Apr. 07, 2009
