
CHAPTER 2 PROPOSED NON-FACE ENHANCEMENT METHOD

2.5 Color Reconstruction

After the previous steps, we obtain the modified gray values Y′. Then, color reconstruction is performed using the following formulation [13] to prevent hue shift and color desaturation:

C′ = C · (Y′ / Y),  C ∈ {R, G, B}  (2.5)

where R, G, and B are the input color values and Y is the original luminance.
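This ratio-based reconstruction can be sketched as follows; we assume the scaling form C′ = C · (Y′/Y) for each channel, which preserves the inter-channel ratios and hence the hue. Function and variable names are illustrative.

```python
import numpy as np

def reconstruct_color(rgb, y_old, y_new, eps=1e-6):
    """Scale each channel by the luminance ratio Y'/Y so that the
    channel ratios (and thus the hue) are preserved."""
    ratio = (y_new + eps) / (y_old + eps)            # per-pixel gain
    return np.clip(rgb * ratio[..., None], 0, 255)   # keep values in range
```

For example, a pixel (100, 50, 25) whose luminance is raised from 50 to 100 becomes roughly (200, 100, 50), so its hue is unchanged.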

Finally, Fig. 2.5 gives an example illustrating the whole proposed method. First, there is a dark scene in Fig. 2.5(a). Through the examination function, its corresponding histogram is not qualified, so the adjustment method is applied. By increasing the shift times, we find the smallest shift times (n = 4.6) such that the adjusted histogram is qualified. Figs. 2.5(b) through (h) show the results treated by different shift times. It can be seen that the noise in the background decreases and the over-exposed areas are improved. Fig. 2.5(i) shows the original histogram curve and the curve adjusted by our proposed approach.

(a) Original image

(b) n = 0 (c) n = 1

(d) n = 2 (e) n = 3

Fig. 2.5 An example to illustrate the proposed method. (a) Original image.

(b)-(e) The enlargement of part of the results treated by 0-3 shift times.

(continues)

(f) n = 4 (g) n = 5

(h) n = 4, α=0.6 (i)

Fig. 2.5 An example to illustrate the proposed method (continued). (f)-(g) The enlargement of part of the results treated by 4 and 5 shift times. (h) The enlargement of part of the result treated by 4.6 shift times for avoiding over-adjustment. (i) The original and the final adjusted histogram curves.

CHAPTER 3

THE PROPOSED FACE ENHANCEMENT METHOD

It is straightforward that skin regions, especially faces, are the most visually interesting areas in real-life images. Most conventional image enhancement techniques [3-8, 10-11] give no special treatment to improper lighting conditions in skin regions. Hence, these techniques either fail to correct the unsuitable illumination in skin regions or fail to offer sufficient contrast there. If the contrast in skin regions is insufficient, the skin regions may appear washed-out and unnatural (see Figs. 3.1(b) through (e)). On the other hand, some techniques [9, 14-15] were designed to enhance the face part of an image but cannot provide satisfactory results for the other parts.

Battiato et al. [9] proposed an exposure correction based on a camera response curve, improved by skin-dependent techniques. The illumination of the whole image is adjusted according to the difference between the average luminance of the skin regions and a pre-defined ideal luminance. This technique can produce satisfactory results in skin regions, but it may fail to produce suitable results in non-skin regions. Fig. 3.2 shows satisfactory illumination in the skin regions, while the background regions are distorted.

(a)

(b)

(c)

(d)

(e)

Fig. 3.1 Some results using different enhancement methods with the images

in the right column being the enlarged parts of the images in the left column.

(a) Original image. (b) HE [3]. (c) Capra’s algorithm [8]. (d) Our proposed non-face enhancement method. (e) Picasa software [2].

(a)

(b)

Fig. 3.2 A result using exposure correction method with the images in the

right column being the enlarged parts of the images in the left column. (a) Original image. (b) Battiato’s algorithm.

Since our proposed non-face enhancement method treats only non-face images well, the skin regions of face images may not be handled properly. In this chapter, we provide a method that treats both face and non-face regions. The proposed method integrates our non-face enhancement method with the exposure correction method proposed by Battiato et al. [9], which uses the mean luminance of skin regions as a reference point. We apply the exposure correction guided by skin content to obtain an image with satisfactory skin regions. After obtaining the result of the non-face enhancement method, denoted Ynon-skin, and that of the exposure correction method, denoted Yskin, a distance map helps us fuse the two results. Fig. 3.3 shows the flowchart of the proposed method for face images.

Fig. 3.3 The flowchart of the proposed face enhancement method.

3.1 Skin recognition by skin locus model

Before applying the exposure correction by skin-dependent techniques, we must recognize skin pixels in advance. We would like to detect all possible skin pixels under multiple light sources or changing illumination conditions; therefore, we choose the skin locus model defined in [16]. To reduce the illumination dependence, this technique works on the (r, g) plane of the normalized RGB space, where

r = R / (R + G + B),  g = G / (R + G + B).

Fig. 3.4 shows the r-g skin histogram under diverse illumination conditions captured by a 1-CCD camera.

Fig. 3.4 Statistics of the skin locus

According to this statistical information, the skin color cluster in the (r, g) plane occupies a shell-shaped region. A membership function for the skin locus is a pair of quadratic functions denoting the upper and lower bounds of the cluster.

Pixels can be labeled as skin pixels using the skin locus constraint:

skin(x, y) = 1 if f_lower(r) < g < f_upper(r) and W > θ; skin(x, y) = 0 otherwise,

where skin = 1 represents a skin pixel and skin = 0 a non-skin pixel, f_upper and f_lower are the quadratic bounds of the cluster, and the whiteness term W (with threshold θ) avoids labeling whitish pixels as skin. We process an image pixel-by-pixel using this constraint and record the result in a skin map Mskin, with Mskin(x, y) = 1 representing a skin pixel and Mskin(x, y) = 0 a non-skin pixel (see Fig. 3.5(b)). The skin map is then refined by morphological operations [3] to eliminate noise and fill holes (see Fig. 3.5(c)). To speed up the procedure described in Section 3.3, we first scale the original image by a factor of 1/4 using bilinear interpolation, obtaining a smaller Mskin.
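A minimal sketch of the skin-map construction, assuming the locus is bounded by two quadratics in r and that the whiteness term is W = (r − 1/3)² + (g − 1/3)²; the bound coefficients and the threshold passed below are placeholders, not the values from [16].

```python
import numpy as np

def skin_locus_map(rgb, upper, lower, w_thresh=0.0004):
    """Label pixels inside the (r, g) skin locus; `upper` and `lower`
    are (a, b, c) coefficients of the quadratic bounds g = a*r^2 + b*r + c."""
    s = rgb.sum(axis=-1) + 1e-6       # R + G + B
    r = rgb[..., 0] / s                # normalized chromaticities
    g = rgb[..., 1] / s
    g_up = upper[0] * r**2 + upper[1] * r + upper[2]
    g_lo = lower[0] * r**2 + lower[1] * r + lower[2]
    # W rejects whitish pixels near the white point (r, g) = (1/3, 1/3)
    W = (r - 1.0 / 3.0) ** 2 + (g - 1.0 / 3.0) ** 2
    return ((g < g_up) & (g > g_lo) & (W > w_thresh)).astype(np.uint8)
```

The resulting binary map would then be cleaned by morphological opening/closing, as described above.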

(a)

(b) (c)

Fig. 3.5 An example of skin recognition. (a) Original image. (b) Recognized

skin map by the skin locus. (c) Refined skin map after morphological processing.

3.2 Exposure correction method

The exposure correction method is accomplished by a simulated camera response curve [17]. This curve describes how a light value q, called the "light quantity," is transformed into the final pixel value I by the camera sensor (see Fig. 3.6). The camera response curve f can be presented by

I = f(q) = 255 / (1 + e^(−Aq))^C  (3.2-1)

where the parameters A and C control the curve shape.

Fig. 3.6 A simulated camera response curve
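The response curve of Eq. (3.2-1) and its inverse can be sketched directly; the default values of A and C below are arbitrary placeholders, since the text only states that they control the curve shape.

```python
import math

def f(q, A=1.0, C=1.0):
    """Simulated camera response: light quantity q -> pixel value I,
    following I = 255 / (1 + exp(-A*q))**C."""
    return 255.0 / (1.0 + math.exp(-A * q)) ** C

def f_inv(I, A=1.0, C=1.0):
    """Inverse response: pixel value I (0 < I < 255) -> light quantity q."""
    return -math.log((255.0 / I) ** (1.0 / C) - 1.0) / A
```

For A = C = 1 the curve is a logistic function stretched to [0, 255], with f(0) = 127.5; f is monotonically increasing, so f_inv is well defined on (0, 255).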

Therefore, the exposure correction technique [9] uses the transformation between light quantity and final luminance to simulate controlling how much light the camera will capture. Based on this transformation, the exposure correction method takes the mean luminance of skin regions as a reference point. First, we extract the luminance Y with Eq. (2.1-1) from the original image. After labeling skin pixels, we compute the average luminance Yavg of the skin pixels. The simulated camera response curve f defined in Eq. (3.2-1) is used to offset the light-quantity difference between Yavg and a pre-defined ideal luminance Yideal:

offset = f⁻¹(Yideal) − f⁻¹(Yavg)  (3.2-2)

The original luminance values are then modified by

Yskin(x, y) = f(offset + f⁻¹(Y(x, y)))  (3.2-3)

Yskin is the result of the exposure correction method, which Fig. 3.7 illustrates with an example.
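Eqs. (3.2-2) and (3.2-3) amount to a single shift in the light-quantity domain, which can be sketched as below; `Y_ideal` and the curve parameters are illustrative values, not the thesis's settings.

```python
import numpy as np

def f(q, A=1.0, C=1.0):
    """Simulated camera response curve (Eq. 3.2-1)."""
    return 255.0 / (1.0 + np.exp(-A * q)) ** C

def f_inv(I, A=1.0, C=1.0):
    """Inverse of the response curve, valid for 0 < I < 255."""
    return -np.log((255.0 / I) ** (1.0 / C) - 1.0) / A

def exposure_correct(Y, skin_mask, Y_ideal=110.0):
    """Shift every pixel's light quantity so that the mean skin
    luminance maps to Y_ideal (Eqs. 3.2-2 and 3.2-3)."""
    Y_avg = Y[skin_mask].mean()
    offset = f_inv(Y_ideal) - f_inv(Y_avg)   # Eq. (3.2-2)
    return f(offset + f_inv(Y))              # Eq. (3.2-3)
```

By construction, if the skin luminance is uniform it maps exactly onto Y_ideal, since f(f⁻¹(Y_ideal)) = Y_ideal.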

(a) (b)

Fig. 3.7 An example of the exposure correction method (a) Original image. (b)

Result of the exposure correction method.

3.3 Measurement of distance map

In this section, we define the measurement of a distance map Mdistance, which records for each pixel the distance to its nearest skin pixel. Yskin and Ynon-skin are fused together using this distance map. Before presenting the fusion method, we first describe how the distance map is measured.

After labeling the skin pixels in Mskin, there are several connected components of skin regions. We use the dilation operation [3] iteratively to estimate Mdistance: a pixel nearer to a connected component is reached by dilation earlier. Because the distance between a pixel and different connected components differs, the smallest distance should be selected; therefore, each connected component is dilated individually, and the smallest distance is recorded for each pixel. A pre-defined threshold t sets the number of times the dilation procedure is performed. When the dilation procedure stops, the pixels not yet reached are too far from skin regions, so they are all assigned the value t + 1, the farthest distance recorded in Mdistance. The method is described as follows, and Fig. 3.8 illustrates the procedure.

Notation and initialization: Let d denote the current number of dilations, initialized to zero. Denote by M^n_distance the distance map for the nth connected component, where 1 ≤ n ≤ C and C is the number of connected components in the skin map. Initialize M^n_distance(x, y) = 0 where (x, y) is a skin pixel belonging to the nth connected component; all other pixels are initialized to infinity (see Figs. 3.8(a) and (b)).

Step 1: Add one to d. Dilate each connected component once by a disk structuring element and record the value d at every newly dilated pixel of M^n_distance.

Step 2: If d < t, go back to Step 1. Otherwise, combine all M^n_distance by taking the smallest value at each position to obtain the distance map Mdistance (see Figs. 3.8(c) through (e)).

Step 3: Replace all infinite values in Mdistance with the value t + 1 (see Fig. 3.8(f)).
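Since taking the per-pixel minimum over the per-component maps equals the distance obtained by growing all components together, a compact sketch can maintain a single frontier. A 3×3 square structuring element stands in for the disk here (yielding the chessboard distance), so this is a simplification of the per-component procedure above.

```python
import numpy as np

def distance_map(skin_mask, t):
    """Distance map by iterative dilation: 0 on skin pixels, d at pixels
    first reached by the d-th dilation, and t + 1 where never reached."""
    INF = np.iinfo(np.int32).max
    dist = np.where(skin_mask, 0, INF).astype(np.int64)
    frontier = skin_mask.astype(bool)
    for d in range(1, t + 1):
        grown = frontier.copy()
        # dilate by a 3x3 structuring element (all 8 neighbors)
        grown[1:, :]  |= frontier[:-1, :]
        grown[:-1, :] |= frontier[1:, :]
        grown[:, 1:]  |= frontier[:, :-1]
        grown[:, :-1] |= frontier[:, 1:]
        grown[1:, 1:]   |= frontier[:-1, :-1]
        grown[1:, :-1]  |= frontier[:-1, 1:]
        grown[:-1, 1:]  |= frontier[1:, :-1]
        grown[:-1, :-1] |= frontier[1:, 1:]
        newly = grown & (dist == INF)   # reached for the first time
        dist[newly] = d
        frontier = grown
    dist[dist == INF] = t + 1           # Step 3: cap unreached pixels
    return dist
```

In practice a library dilation (e.g. with a true disk element) could replace the hand-rolled shifts without changing the overall scheme.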

Note that Mskin is the image scaled by a factor of 1/4. We must up-sample Mdistance to the size of the original image for the later fusion procedure; a bilinear interpolation by a factor of 4 serves this purpose. Fig. 3.9 illustrates an example of a distance map.

Fig. 3.8 An example of the distance map estimation. (a)-(b) Initialization of the distance maps, where the symbol '-' denotes an infinite value. (c)-(d) Each connected component is dilated respectively until the halting condition occurs. (e) The maps are combined by taking the smallest values. (f) The infinite values are replaced with t + 1 to obtain the final Mdistance.

(a) (b)

Fig. 3.9 The skin map (a) and its corresponding distance map (b) with

brighter pixels representing smaller distance values.

3.4 Fusion

After obtaining Mdistance, Yskin, and Ynon-skin, each pixel of the final luminance Yfinal(x, y) is a composition of Yskin and Ynon-skin guided by Mdistance. Therefore, a weight map based on Mdistance drives the interpolation process. It is straightforward that if the distance of a pixel is very small, its value should be closer to Yskin. We use a power-law curve with exponent 0.4 to map the narrow range of smaller distance values into a wider range of weight values. This curve makes the composition in the boundary regions between skin and non-skin sharper without halo effects.

The combination with the power-law function is expressed by

Yfinal(x, y) = (1 − w(x, y)) · Yskin(x, y) + w(x, y) · Ynon-skin(x, y),
w(x, y) = (Mdistance(x, y) / (t + 1))^0.4,

where t is the threshold of dilation times defined in Section 3.3. Finally, we use the same method as in Eq. (2.5) with Yfinal and the original luminance Y to reconstruct the color image. Fig. 3.10 illustrates an example of a weight map and a final result.
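One plausible reading of the fusion step applies the power-law exponent 0.4 to the normalized distance, so the weight of Yskin falls off quickly near the skin boundary; the exact weight formula is an assumption, as only the exponent is stated in the text.

```python
import numpy as np

def fuse(Y_skin, Y_non_skin, M_distance, t, gamma=0.4):
    """Blend the two enhanced luminances with a power-law weight map:
    pixels at distance 0 take Y_skin, pixels at distance t + 1 take
    Y_non_skin, with a fast transition near the boundary (gamma < 1)."""
    w_non = (M_distance / (t + 1.0)) ** gamma   # weight of Y_non_skin
    return (1.0 - w_non) * Y_skin + w_non * Y_non_skin
```

Because gamma < 1 steepens the curve near zero, the blend changes rapidly right at the skin boundary while staying smooth enough to avoid halos.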

(a) (b)

(c)

Fig. 3.10 The distance map (a) and its corresponding weight map (b) by

power-law function, with brighter pixels representing smaller weight values.

The bottommost image (c) is the final result of our proposed face enhancement method.


CHAPTER 4

EXPERIMENTAL RESULTS

In this chapter, we present the experimental results of our proposed non-face enhancement method and face enhancement method. Our database contains about 150 photos, including landscapes and portraits, with resolution around 1280 × 960. These photos include: (1) overexposed and/or underexposed problems; (2) low-contrast problems; (3) normal images with good exposure. We give experimental results and five comparisons with the following techniques:

HE technique [3];

Picasa software [2];

Exposure correction (Battiato’s algorithm [9]);

Local gamma correction (Capra’s algorithm [8]);

Shadow correction (Safonov’s algorithm [10]).

These experimental results and comparisons are shown in Section 4.1 for non-face images and in Section 4.2 for face images.

4.1 Experimental results of non-face images

Four experimental results and comparisons for non-face images are shown in Figs. 4.1 through 4.4. They cover a normal image, a photo with shadow areas, an image with a backlight condition, and a dark scene image. More experimental results are shown in Fig. 4.5.

First, Fig. 4.1(a) shows an image with good exposure, and the other panels are the results treated by different techniques. According to the proposed examination function, the HE technique can suitably be applied to this image directly without artifacts. The result of the HE technique is shown in Fig. 4.1(b), which is the same as the result of our proposed method in Fig. 4.1(g). Figs. 4.1(c) through (f) show the comparisons with Picasa, Battiato's algorithm, Capra's algorithm, and Safonov's algorithm. Fig. 4.1(d) shows a slightly over-exposed look in the regions of lotus leaves. In Fig. 4.1(e), local regions are handled suitably, but the global contrast is decreased. The results in Figs. 4.1(c) and (f) are almost the same as the original image. Comparing all results, the result of our proposed method is more vivid and has higher contrast.

Second, an image with shadow areas is given in Fig. 4.2(a), and the other panels are the results treated by different techniques. According to the proposed examination function, the HE technique can suitably be applied to this image directly; therefore, the results in Fig. 4.2(b), produced by the HE technique, and Fig. 4.2(g), produced by our method, are the same. Comparing the other results in Figs. 4.2(c) through (f), the result of our proposed method performs well in both the shadow areas and the other areas.

Third, Fig. 4.3(a) illustrates an image with over-exposed and under-exposed regions simultaneously. According to the examination function, the HE technique is not suitable for this image. In Fig. 4.3(b), although the dark area is clearly enhanced by the HE technique, there is an obvious false contour in the sky area. In Fig. 4.3(d), produced by Battiato's algorithm, the dark area is also clearly enhanced, but the details in the sky area are lost. The result of Picasa in Fig. 4.3(c) is almost the same as the original image. Figs. 4.3(e) and (f), produced by Capra's method and Safonov's method, show slight improvement in the shadow areas, but the global contrast of the results is reduced. In Fig. 4.3(g), our result not only shows obvious improvement in the shadow areas but also keeps the details in the sky area.

Fourth, there is a dark scene in Fig. 4.4(a). The result in Fig. 4.4(b), produced by the HE technique, shows that background noise has been amplified and there is a halo effect in the light area. In Fig. 4.4(d), produced by Battiato's method, there is also an obvious halo effect in the light area. In Figs. 4.4(c) and (f), treated by Picasa and Safonov's method, there are few differences from the original image. The results of Capra's method and our proposed method in Figs. 4.4(e) and (g) show obvious improvement in the details of the dark area without artifacts. Note that Capra's method is based on pixel-by-pixel gamma correction with non-linear masking, so it is a high-complexity algorithm. In terms of complexity, our proposed algorithm is accomplished by a modified global HE technique, so it can execute efficiently and automatically.

As these experimental results illustrate, our proposed examination function can effectively detect whether the HE technique is suitable for an image. If the image is not qualified, our adjustment approach produces good results without artifacts. Furthermore, more experimental results of our proposed method are shown in Fig. 4.5.

(a) Original image (b) HE

(c) Picasa software (d) Battiato’s algorithm

(e) Capra’s algorithm (f) Safonov’s algorithm

(g) Our method

Fig. 4.1 A normal photo enhanced by different techniques.

(a) Original image (b) HE

(c) Picasa software (d) Battiato’s algorithm

(e) Capra’s algorithm (f) Safonov’s algorithm

(g) Our method

Fig. 4.2 A photo with shadow areas enhanced by different techniques.

(a) Original image (b) HE

(c) Picasa software (d) Battiato’s algorithm

(e) Capra’s algorithm (f) Safonov’s algorithm

(g) Our method

Fig. 4.3 An image with a backlight condition enhanced by different methods.

(a) Original image (b) HE

(c) Picasa software (d) Battiato’s algorithm

(e) Capra’s algorithm (f) Safonov’s algorithm

(g) Our method

Fig. 4.4 A dark scene image enhanced by different methods.

Fig. 4.5 The original images in the left column with their corresponding

results treated by our proposed method in the right column.

4.2 Experimental results of face images

In this section, we show the experimental results and comparisons for two kinds of face images, one taken in a dark scene and one in a backlight condition, in Figs. 4.6 and 4.7 respectively. More experimental results are shown in Fig. 4.8.

First, Fig. 4.6(a) illustrates a face photo taken in a dark scene. The result of HE in Fig. 4.6(b) shows that the noise is amplified and the face regions are over-exposed. The results of Picasa and Battiato's method in Figs. 4.6(c) and (d) show improvement in the skin regions, but the background is still unclear. The result of Capra's method in Fig. 4.6(e) has more distinguishable details in the background, but the face regions seem unnatural owing to insufficient contrast. Fig. 4.6(f) shows slight improvement in both the background and the skin regions by Safonov's method. Comparing all results, the result of our proposed method in Fig. 4.6(g) shows not only a clearer background but also satisfactory illumination in the skin regions.

Second, Fig. 4.7(a) illustrates a face photo taken in a backlight condition. In Fig. 4.7(b), HE still produces a washed-out appearance in the skin regions, and the face regions seem unnatural owing to insufficient contrast. The results in Figs. 4.7(c), (e), and (f), treated by Picasa, Capra's method, and Safonov's method, still have unsatisfactory illumination in the face regions. Fig. 4.7(d) has satisfactory illumination in the face regions, but the details in the background are lost. The result of our method in Fig. 4.7(g) has not only appropriate illumination in the face regions but also a suitable background.

(a) Original image (b) HE

(c) Picasa software (d) Battiato’s algorithm

(e) Capra’s algorithm (f) Safonov’s algorithm

(g) Our method

Fig. 4.6 A face image in a dark scene enhanced by different techniques.

(a) Original image (b) HE

(c) Picasa software (d) Battiato’s algorithm

(e) Capra’s algorithm (f) Safonov’s algorithm

(g) Our method

Fig. 4.7 A face image with a backlight condition enhanced by different

techniques.

Fig. 4.8 The original images in the left column with their corresponding results treated by our proposed method in the right column.

CHAPTER 5

CONCLUSION AND FUTURE WORK

In this thesis, we have proposed an automatic and efficient non-face enhancement method and a face enhancement method. In the non-face enhancement method, we propose a contrast-stretching constraint based on the JND to judge whether the HE technique can suitably be applied to an image. If the image is not qualified, we present an adjustment approach that modifies the histogram curve without destroying the monotonic property. In the face enhancement method, we extend our framework by combining our proposed non-face enhancement method with the exposure correction technique of Battiato et al. [9] to obtain satisfactory contrast in the background and appropriate illumination in the skin regions. In this combining process, a distance map estimated by iterative morphological operations is used for the fusion.

Experimental results show that our proposed method can produce suitable results without artifacts.

Note that our proposed image enhancement method is based on the information of illumination, not color. Therefore, it may be an interesting direction to extend our concept by combining color HE techniques [18] with the JND model.

REFERENCES

[1] Adobe Photoshop: http://www.adobe.com/products/photoshop/family/, Adobe Systems Inc.

[2] Picasa: http://picasa.google.com/, Google, Inc.

[3] R. C. Gonzalez, and R. E. Woods, Digital Image Processing, 2nd Ed., New Jersey: Prentice-Hall, 2002, ISBN: 0-201-18075-8.

[4] Y. T. Kim, “Enhancement using brightness preserving bi-histogram equalization,” IEEE Transactions on Consumer Electronics, vol. 43, no. 1, pp. 1–8, Feb. 1997.

[5] Y. Wang, Q. Chen, and B.-M. Zhang, “Image enhancement based on equal area dualistic sub-image histogram equalization method,” IEEE Transactions on Consumer Electronics, vol. 45, no. 1, pp. 68–75, Feb. 1999.

[6] S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. H. Romeny, J. B. Zimmerman, and K. Zuiderveld, “Adaptive histogram equalization and its variations,” Computer Vision, Graphics, and Image Processing, vol. 39, no. 3, pp. 355–368, Sep. 1987.

[7] N. Moroney, “Local colour correction using non-linear masking,”
