Novel quality-effective zooming algorithm for color filter array

Kuo-Liang Chung and Wei-Jen Yang
National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, No. 43, Section 4, Keelung Road, Taipei, Taiwan 10672
E-mail: wjyang@mail.ntust.edu.tw

Jun-Hong Yu
National Taiwan University, Graduate Institute of Networking and Multimedia, No. 1, Section 4, Roosevelt Road, Taipei, Taiwan 10617

Wen-Ming Yan
National Taiwan University, Department of Computer Science and Information Engineering, No. 1, Section 4, Roosevelt Road, Taipei, Taiwan 10617

Chiou-Shann Fuh
National Taiwan University, Graduate Institute of Networking and Multimedia and Department of Computer Science and Information Engineering, No. 1, Section 4, Roosevelt Road, Taipei, Taiwan 10617

Abstract. Mosaic images are captured by a single charge-coupled device/complementary metal-oxide-semiconductor (CCD/CMOS) sensor with the Bayer color filter array. We present a new quality-effective zooming algorithm for mosaic images. First, based on adaptive heterogeneity projection masks and Sobel- and luminance-estimation-based masks, more accurate gradient information is extracted from the mosaic image directly. According to the extracted gradient information, the mosaic green (G) channel is zoomed first. To reduce color artifacts, instead of directly moving the original red (R) value to its right position and the blue (B) value to its lower position, color difference interpolation is utilized to expand the G-R and G-B color difference values. Finally, the zoomed mosaic R and B channels can be constructed using the zoomed G channel and the two expanded color difference values; afterward, the zoomed mosaic image is obtained. Based on 24 popular test mosaic images, experimental results demonstrate that the proposed zooming algorithm has more than 1.79 dB of quality improvement when compared with two previous zooming algorithms, one by Battiato et al. (2002) and the other by Lukac et al. (2005). © 2010 SPIE and IS&T. [DOI: 10.1117/1.3302126]

1 Introduction

To reduce the manufacturing cost of digital cameras, instead of using three CCD/CMOS sensors, most manufacturers use a single sensor array to capture the color information based on the Bayer color filter array (CFA) structure,1 which is depicted in Fig. 1. As a result, each pixel in the mosaic image has only one color component. Because the G channel is the most important factor in determining the luminance of the color image, half of the pixels in the Bayer CFA structure are assigned to the G channel; the R and B channels share the remaining pixels evenly. Since a full-color image is required for the human visual system, the two missing color components of each pixel in the mosaic image should be recovered as well as possible, and such a recovery is called the demosaicing process.2,3


Previously published demosaicing algorithms can be divided into two categories,4 namely, nonheuristic demosaicing algorithms and heuristic demosaicing algorithms. The first category includes the minimum mean squared error estimator method,5 the projection-onto-convex-set method,6 the linear hyperplane method,7 the Fourier domain-based methods,8–10 the wavelet domain method,11 and the Taylor expansion method.12 The second category includes the bilinear interpolation method,13 the simplest demosaicing method, in which the two missing color components of each pixel are calculated by averaging its proper adjacent pixels; the edge-sensing-based methods,14–21 which can preserve detailed edge information or limit hue transitions; and the color-difference-based hybrid methods,4,22–29 which integrate interpolation estimation, an edge-sensing scheme, and the color difference technique.

Besides the demosaicing issue, designing efficient zooming algorithms for mosaic images has received growing attention. Because the optical hardware zooming approach costs too much, the software zooming approach is preferable. For gray images or full-color images, some efficient zooming algorithms30–32 have been developed; however, they cannot be applied to zoom mosaic images directly. Intuitively, the given mosaic image could be demosaiced first, and then one of these zooming algorithms applied to the demosaiced full-color image; unfortunately, this intuitive approach must store an extra full-color image for the later zooming process, so it is impractical under the limited memory constraint of digital cameras.

Previously, several zooming algorithms for mosaic images were developed. All of them use only one array memory of the same size as the zoomed image. Based on the local adaptive zooming concept, Battiato et al.33 presented the first zooming algorithm. Based on the adaptive edge-sensing mechanism, Lukac and Plataniotis34 and Lukac et al.35 presented two better zooming algorithms that improve the zoomed image quality. Further, Lukac and Plataniotis36 presented a computation-saving interpolation method to meet real-time surveillance requirements. Note that the previous zooming algorithms mentioned here enlarge the original mosaic image of size X×Y to one of size 2X×2Y. In this paper, we follow the same size constraint. Developing a zooming algorithm that enlarges the mosaic image to one of arbitrary size is still a challenging problem.

In this paper, a new quality-effective zooming algorithm for mosaic images is presented. Utilizing the adaptive heterogeneity projection masks and the Sobel- and luminance-estimation-based (SL-based) masks,29 more accurate gradient information can be extracted from the input mosaic image directly. Then, based on the color difference concept and the extracted gradient information, the proposed quality-effective mosaic image zooming algorithm is presented. Based on 24 test mosaic images, the proposed zooming algorithm has more than 1.79 dB of quality improvement when compared with two previous zooming algorithms, one by Battiato et al.33 and the other by Lukac et al.35

The remainder of this paper is organized as follows. Section 2 presents the adaptive heterogeneity projection masks and the SL-based masks used to extract gradient information from the mosaic image. In Sec. 3, combining the extracted gradient information and the color difference concept, the proposed quality-effective zooming algorithm for mosaic images is presented. Section 4 demonstrates some experimental results to show the quality advantage of the proposed zooming algorithm. Finally, some concluding remarks are addressed in Sec. 5.

2 Extracting Gradient Information from Mosaic Images

In this section, we describe the adaptive heterogeneity projection masks and the SL-based masks,29 which are used to extract gradient information from the mosaic image directly. As shown in Fig. 1, the R, G, and B color pixels located at position (i, j) in the input mosaic image are denoted by I_mo^r(i, j), I_mo^g(i, j), and I_mo^b(i, j), respectively.

Based on the concept of adaptive heterogeneity projection,29 Table 1 shows the three possible heterogeneity projection masks with different sizes adopted in this paper. In Table 1, N and M_hp(N) denote the mask size and the corresponding heterogeneity projection mask, respectively. Given a mosaic image I_mo, the horizontal heterogeneity projection map HP_{H-map} and the vertical heterogeneity projection map HP_{V-map} can be obtained by

$$\mathrm{HP}_{H\text{-}map} = \left| I_{mo} \otimes M_{hp}(N) \right|, \qquad \mathrm{HP}_{V\text{-}map} = \left| I_{mo} \otimes M_{hp}(N)^{T} \right|, \qquad (1)$$

where ⊗ denotes the 1-D convolution operator, |·| denotes the absolute value operator, and T denotes the transpose operator.

Fig. 1 Bayer CFA structure.

Table 1 Three possible heterogeneity projection masks.

N    M_hp(N)
5    [1 −2 0 2 −1]
7    [1 −4 5 0 −5 4 −1]
9    [1 −6 14 −14 0 14 −14 6 −1]


According to the statistical analysis, N = 5 is a good choice for gathering accurate horizontal and vertical edge information of the current pixel; it also reduces the computation time needed to calculate Eq. (1). For exposition, the determined N (=5) is called N_H. To normalize masks of different sizes, the normalization factor 1/Q_{N_H} is used to normalize the coefficients of the mask, where Q_{N_H} is defined as the sum of the positive coefficients covered by the mask of size N_H. As a result, the heterogeneity projection mask [1 −2 0 2 −1]^T would be normalized to [1 −2 0 2 −1]^T/3. To reduce the estimation error, we use a low-pass filter to tune the heterogeneity projection maps. For HP_{H-map} and HP_{V-map}, the horizontal and vertical heterogeneity projection values at position (i, j) are denoted by HP_H(i, j) and HP_V(i, j), respectively. The tuned HP_H(i, j) and HP_V(i, j) are computed by the following two low-pass filters:

$$\mathrm{HP}_H(i,j) = \frac{1}{10}\sum_{k=-4}^{4} \omega_k\, \mathrm{HP}_H(i, j+k), \qquad \mathrm{HP}_V(i,j) = \frac{1}{10}\sum_{k=-4}^{4} \omega_k\, \mathrm{HP}_V(i+k, j),$$

where ω_k = 2 if k = 0 and ω_k = 1 otherwise. The two tuned heterogeneity projection values of the current pixel at position (i, j) are used to determine the interpolation direction of the current pixel.
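As an illustration of Eq. (1) and the low-pass tuning just described, the following NumPy/SciPy sketch computes the two heterogeneity projection maps for N = 5; the function and variable names are ours, and the reflective boundary handling is an assumption rather than something specified above.

```python
import numpy as np
from scipy.ndimage import convolve1d

def heterogeneity_projection_maps(mosaic):
    """Sketch of Eq. (1) with N = 5 plus the low-pass tuning step.

    `mosaic` is the 2-D CFA image I_mo.  The mask M_hp(5) = [1, -2, 0, 2, -1]
    is normalized by Q_5 = 3 (the sum of its positive coefficients).
    """
    m_hp = np.array([1.0, -2.0, 0.0, 2.0, -1.0]) / 3.0
    hp_h = np.abs(convolve1d(mosaic.astype(float), m_hp, axis=1, mode='reflect'))
    hp_v = np.abs(convolve1d(mosaic.astype(float), m_hp, axis=0, mode='reflect'))

    # Low-pass tuning: 9-tap window with weight 2 at k = 0 and 1 elsewhere,
    # normalized by 10, applied along j for HP_H and along i for HP_V.
    lp = np.ones(9) / 10.0
    lp[4] = 2.0 / 10.0
    hp_h = convolve1d(hp_h, lp, axis=1, mode='reflect')
    hp_v = convolve1d(hp_v, lp, axis=0, mode='reflect')
    return hp_h, hp_v
```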

To extract gradient information from the mosaic image, we further embed the luminance estimation technique8 into the Sobel operator;37 the two normalized SL-based masks29 are shown in Fig. 2. After running the two SL-based masks on the 5×5 mosaic subimage centered at position (i, j), the horizontal gradient response ΔI_dm^H(i, j) and the vertical gradient response ΔI_dm^V(i, j) are obtained. These gradient responses are used to determine the interpolation weights for the neighboring pixels of the current pixel. In addition, from the hardware viewpoint in camera design, performing one multiplication requires more computational load and greater power consumption than performing one shift operation. We therefore want to decrease the number of multiplications used when running the SL-based masks on the mosaic subimage. Examining the two SL-based masks shown in Fig. 2, we observe that only five numbers, 2, 4, 6, 8, and 12, appear as coefficients. Based on this observation, obtaining one response by running a mask on the 5×5 mosaic subimage requires only two multiplications, three shift operations, 19 additions, and 10 absolute-value operations, rather than 20 multiplications, 19 additions, and 10 absolute-value operations.

3 Proposed Zooming Algorithm for Mosaic Images

The proposed quality-effective zooming algorithm consists of two stages: (1) zooming the mosaic G channel I_mo^g of size X×Y to obtain the zoomed mosaic G channel Z_mo^g of size 2X×2Y; (2) zooming the mosaic R channel I_mo^r and the mosaic B channel I_mo^b to obtain the zoomed mosaic R channel Z_mo^r and the zoomed mosaic B channel Z_mo^b. Finally, the zoomed mosaic image Z_mo is obtained.

3.1 Stage 1: Zooming the Mosaic G Channel

Initially, the original mosaic image is expanded by the following rule:

$$Z_{mo}(2i,2j) = I_{mo}(i,j), \qquad ZHP_d(2i,2j) = HP_d(i,j), \qquad \Delta Z_{dm}^{d}(2i,2j) = \Delta I_{dm}^{d}(i,j), \qquad (2)$$

for all d ∈ {H, V}, i ∈ {0, 1, 2, ..., X−1}, and j ∈ {0, 1, 2, ..., Y−1}, where ZHP_d(i′, j′) and ΔZ_dm^d(i′, j′) denote the tuned adaptive heterogeneity projection value and the gradient response at position (i′, j′) in the zoomed mosaic image Z_mo, respectively. After expanding I_mo, Fig. 3 illustrates the pattern of the obtained Z_mo. The zooming process for the mosaic G channel consists of two steps: (1) estimating the G values of the pixels in Ω_1^g = {(4m, 4n+2), (4m′+2, 4n′) | ∀ m, m′, n, n′ ∈ Z, 0 ≤ 4m, 4m′+2 ≤ 2X−1, 0 ≤ 4n+2, 4n′ ≤ 2Y−1}; (2) estimating the G values of the pixels in Ω_2^g = {(m, n) | ∀ m, n odd, 0 ≤ m ≤ 2X−1, 0 ≤ n ≤ 2Y−1}. For exposition, the pixels in Ω_1^g and Ω_2^g are denoted by the symbols ○ and ▲ in Fig. 4, respectively. The detailed descriptions of the two steps are given in Secs. 3.1.1 and 3.1.2, respectively.

Fig. 2 Two normalized SL-based masks: (a) the horizontal and (b) the vertical SL-based mask.

Fig. 3 Pattern of the obtained Z_mo after expanding I_mo.


3.1.1 Step 1 in stage 1: estimating the G values of the pixels in Ω_1^g

From Fig. 3, it is observed that for each pixel at position (x, y) ∈ Ω_1^g, the G value can be estimated from its four neighboring G pixels with movement Ω_n1^g = {(x′, y′) | (x′, y′) = (x±2, y), (x, y±2)}. Figure 5 is taken as the representative case to explain how to estimate the G value Z_mo^g(x, y). Consider the neighboring G pixel located at (x−2, y): if the vertical G gradient response ΔZ_dm^V(x−2, y) is large, a horizontal edge passes through it. Based on the color difference concept,22,23 in this case the G value of this pixel contributes less to the estimation of the G value of pixel Z_mo^g(x, y); otherwise, it contributes more. Further, to reduce the estimation error, the two vertical G gradient responses ΔZ_dm^V(x, y) and ΔZ_dm^V(x−4, y) are also considered. Combining the preceding analysis, the weight of the pixel at (x−2, y) is given by w_g(V, x−2, y) = 1/{1 + [ΔZ_dm^V(x, y) + 2ΔZ_dm^V(x−2, y) + ΔZ_dm^V(x−4, y)]}. Following a similar discussion, the weights of the four neighboring G pixels can be expressed as

$$w_g(V, x-2, y) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{V}(x-2k, y)}, \qquad w_g(V, x+2, y) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{V}(x+2k, y)},$$

$$w_g(H, x, y-2) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{H}(x, y-2k)}, \qquad w_g(H, x, y+2) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{H}(x, y+2k)},$$

where ω_k = 2 if k = 1 and ω_k = 1 otherwise.

In addition, based on the horizontal and vertical heterogeneity projection values of the current pixel at position (x, y), ZHP_H(x, y) and ZHP_V(x, y), the interpolation estimation scheme for the G channel considers three cases, namely (1) horizontal variation, as shown in Fig. 6(a), (2) vertical variation, as shown in Fig. 6(b), and (3) the other variations, as shown in Fig. 6(c). The arrows in Fig. 6 denote the relevant data dependence. Consequently, the value of Z_mo^g(x, y) can be estimated by

$$Z_{mo}^{g}(x,y) = Z_{mo}^{r}(x,y) + \frac{\sum_{(d,x',y') \in \xi_g} w_g(d,x',y')\, D_g(x',y')}{\sum_{(d,x',y') \in \xi_g} w_g(d,x',y')}, \qquad \xi_g = \begin{cases} \xi_1 & \text{if } ZHP_V(x,y) < ZHP_H(x,y), \\ \xi_2 & \text{if } ZHP_H(x,y) < ZHP_V(x,y), \\ \xi_1 \cup \xi_2 & \text{otherwise}, \end{cases} \qquad (3)$$

where ξ_1 = {(V, x±2, y)} and ξ_2 = {(H, x, y±2)}; if (d, x′, y′) ∈ ξ_1, D_g(x′, y′) = Z_mo^g(x′, y′) − [Z_mo^r(x′+2, y′) + Z_mo^r(x′−2, y′)]/2; if (d, x′, y′) ∈ ξ_2, D_g(x′, y′) = Z_mo^g(x′, y′) − [Z_mo^r(x′, y′+2) + Z_mo^r(x′, y′−2)]/2. Then, the proposed new refinement method, which combines the concept of the local color ratios38 and the proposed weighting scheme, is used to refine the estimated Z_mo^g(x, y) by

$$Z_{mo}^{g}(x,y) = -\beta + \left[ Z_{mo}^{r}(x,y) + \beta \right] \frac{\sum_{(d,x',y') \in \xi_g'} \delta(d,x',y')\, w_g(d,x',y')\, L_g(x',y')}{\sum_{(d,x',y') \in \xi_g'} \delta(d,x',y')\, w_g(d,x',y')}, \qquad (4)$$

where ξ_g′ = {(H, x, y), (V, x, y), (H, x, y±2), (V, x±2, y)}; L_g(x′, y′) = [Z_mo^g(x′, y′) + β]/[Z_mo^r(x′, y′) + β]; δ(d, x′, y′) = 1/2 if (d, x′, y′) ∈ {(H, x, y), (V, x, y)}, and δ(d, x′, y′) = 1 otherwise. Furthermore, the determination of the two parameters α and β is discussed in the appendix.
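The per-pixel estimation of Eq. (3) can be sketched as follows for the R-centred case of Fig. 5 (the B-centred case is analogous); the refinement of Eq. (4) is omitted, names are illustrative, and boundary pixels are assumed to be handled by mirroring as in Sec. 4.

```python
def estimate_g_step1(z, dz_h, dz_v, zhp_h, zhp_v, x, y):
    """Sketch of Eq. (3) at one pixel (x, y) of Omega_1^g whose original sample
    is R (the case of Fig. 5); B-centred pixels are handled the same way with B
    in place of R.  `z` is the expanded mosaic, `dz_h`/`dz_v` the gradient
    responses, `zhp_h`/`zhp_v` the tuned projection values."""
    omega = [1.0, 2.0, 1.0]                      # omega_k: 2 at k = 1, 1 otherwise

    def weight(responses):                       # 1 / (1 + sum_k omega_k * response_k)
        return 1.0 / (1.0 + sum(w * r for w, r in zip(omega, responses)))

    # Weights of the four neighbouring G pixels (x +/- 2, y) and (x, y +/- 2).
    cand = {
        ('V', x - 2, y): weight([dz_v[x - 2 * k, y] for k in range(3)]),
        ('V', x + 2, y): weight([dz_v[x + 2 * k, y] for k in range(3)]),
        ('H', x, y - 2): weight([dz_h[x, y - 2 * k] for k in range(3)]),
        ('H', x, y + 2): weight([dz_h[x, y + 2 * k] for k in range(3)]),
    }

    def color_diff(d, xp, yp):                   # D_g at a neighbouring G pixel
        if d == 'V':
            return z[xp, yp] - 0.5 * (z[xp + 2, yp] + z[xp - 2, yp])
        return z[xp, yp] - 0.5 * (z[xp, yp + 2] + z[xp, yp - 2])

    # xi_g: keep the neighbours along the direction with the smaller tuned
    # projection value; use all four when neither direction dominates.
    if zhp_v[x, y] < zhp_h[x, y]:
        keys = [('V', x - 2, y), ('V', x + 2, y)]
    elif zhp_h[x, y] < zhp_v[x, y]:
        keys = [('H', x, y - 2), ('H', x, y + 2)]
    else:
        keys = list(cand)

    num = sum(cand[k] * color_diff(*k) for k in keys)
    den = sum(cand[k] for k in keys)
    return z[x, y] + num / den                   # R sample plus weighted G-R difference
```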

Fig. 4 Depiction of the pixels in Ω_1^g and Ω_2^g, where ○ ∈ Ω_1^g and ▲ ∈ Ω_2^g.

Fig. 5 Subimage used to explain how to estimate the G values of pixels in Ω_1^g.

Fig. 6 Data dependence of the proposed interpolation estimation for the G channel: (a) horizontal variation (vertical edge), (b) vertical variation (horizontal edge), and (c) other variations.


After estimating the G values in Ω_1^g, Fig. 7 illustrates the current pattern of Z_mo. For this pattern of Z_mo, in order to preserve the Bayer CFA structure, Lukac and Plataniotis34 and Lukac et al.35 suggested moving the R and B values of the pixels in Ω_1^g to the positions corresponding to the Bayer CFA structure. Thus, each R value at position (x′, y′) ∈ Ω_1^g is moved to the right position (x′, y′+1), and each B value at position (x′, y′) ∈ Ω_1^g is moved to the lower position (x′+1, y′). After moving the R and B values, the resultant pattern of Z_mo is illustrated in Fig. 8, and the missing R and B color values can then be estimated from the existing R and B color pixels. Unfortunately, directly moving the R value to its right position and the B value to its lower position would produce acute color artifacts in nonhomogeneous regions and degrade the image quality of the zoomed image.

To overcome this problem, instead of using the preceding approach, we use color difference interpolation to expand the G-R color difference and the G-B color difference. Based on the Bayer CFA structure, the R pixels will be fully populated in Ω_r = {(2m, 2n+1) | m, n ∈ Z, 0 ≤ 2m ≤ 2X−1, 0 ≤ 2n+1 ≤ 2Y−1} after the zooming process. Further, for the pixels in Ω_Dr = {(4m, 4n+2) | m, n ∈ Z, 0 ≤ 4m ≤ 2X−1, 0 ≤ 4n+2 ≤ 2Y−1}, the G-R color difference values can be calculated by D_r(x′, y′) = Z_mo^g(x′, y′) − Z_mo^r(x′, y′), ∀(x′, y′) ∈ Ω_Dr. For exposition, the pixels in Ω_r and Ω_Dr are denoted by the symbols ■ and ○, respectively, in Fig. 9. Then, we can estimate the color difference values of the pixels in Ω_r from the color difference values of the pixels in Ω_Dr by using bilinear interpolation estimation. In Fig. 10, we observe that the G-R color difference values of the four corner pixels are known. Using bilinear interpolation estimation, the color difference values of the pixels at positions {(x+k_1, y+k_2) | k_1 ∈ {0, ±2}, k_2 ∈ {±1}}, which are denoted by gray color, can be estimated by

$$D_r(x+k_1, y+k_2) = \sum_{\delta_1 \in \{\pm 1\}} \sum_{\delta_2 \in \{\pm 1\}} \frac{2+\delta_1 k_2}{4}\, \frac{2+\delta_2 k_1}{4}\, D_r(x+2\delta_2,\, y+2\delta_1), \qquad (5)$$

where δ_1, δ_2 ∈ {±1}, k_1 ∈ {0, ±2}, and k_2 ∈ {±1}.
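A small sketch of Eq. (5): within one 5×5 window of Fig. 10, the six Ω_r color differences are bilinear combinations of the four known corner values. The container `d_known` and the function name are illustrative assumptions.

```python
def expand_color_difference(d_known, x, y):
    """Sketch of Eq. (5): bilinearly interpolate the G-R colour differences of
    the six Omega_r pixels inside one 5x5 window of Fig. 10 from the four known
    corner values D_r(x +/- 2, y +/- 2).  `d_known` maps (row, col) to a known
    difference (a dict or a 2-D array)."""
    out = {}
    for k1 in (-2, 0, 2):                    # row offsets of the target pixels
        for k2 in (-1, 1):                   # column offsets of the target pixels
            val = 0.0
            for d1 in (-1, 1):               # corner column sign (delta_1)
                for d2 in (-1, 1):           # corner row sign (delta_2)
                    w = ((2 + d2 * k1) / 4.0) * ((2 + d1 * k2) / 4.0)
                    val += w * d_known[(x + 2 * d2, y + 2 * d1)]
            out[(x + k1, y + k2)] = val
    return out
```

The four weights for each target pixel sum to one, so the expansion is a true bilinear interpolation of the corner differences.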

Fig. 7 Pattern of Z_mo after performing the estimation of the G values of pixels in Ω_1^g.

Fig. 8 Pattern of Z_mo after moving the R and B values to the positions corresponding to the Bayer CFA structure.

Fig. 9 Depiction of the pixels in Ω_r, Ω_Dr, and Ω_b, where ■ ∈ Ω_r, ○ ∈ Ω_Dr, and □ ∈ Ω_b.

Fig. 10 Subimage used to explain how to estimate the color difference values of the pixels in Ω_r.


By the same arguments, the G-B color difference values of the pixels in Ω_b = {(2m+1, 2n) | m, n ∈ Z, 0 ≤ 2m+1 ≤ 2X−1, 0 ≤ 2n ≤ 2Y−1}, i.e., the pixels denoted by □ in Fig. 9, can be estimated. Figure 11 illustrates the current pattern of Z_mo after estimating the color difference values of the pixels in Ω_r and Ω_b. In Fig. 11, the pixels in Ω_r and Ω_b are denoted by gray and black colors, respectively.

3.1.2 Step 2 in stage 1: estimating the G values of the pixels in Ω_2^g

Having described how to estimate the G values of the pixels in Ω_1^g, we now describe how to estimate the G values of the pixels in Ω_2^g. Comparing the arrangement of the G channel in Fig. 12 with that of the R (or B) channel in the mosaic image (see Fig. 1), it is not hard to see that the arrangements of the two channels are the same except for the number of pixels in each channel. From Ref. 29, we know that the four SI-quad-masks, which are derived by combining the Sobel operator and bilinear interpolation, can be used to extract the horizontal, vertical, π/4-diagonal, and −π/4-diagonal gradient information of the R (or B) channel in the mosaic image shown in Fig. 1. The four SI-quad-masks are shown in Figs. 13–16. Because the arrangement of the G channel in Fig. 12 is similar to that of the R (or B) channel in Fig. 1, we can directly use the four SI-quad-masks shown in Figs. 13–16 to extract the G gradient information of all pixels in Z_mo.

From Fig. 12, it is observed that for each pixel at position (x, y) ∈ Ω_2^g, the G value can be estimated from its four neighbors with movement Ω_n2^g = {(x′, y′) | (x′, y′) = (x±1, y±1)}. Similar to the G value estimation for pixels in Ω_1^g, to estimate Z_mo^g(x, y) in Fig. 12 more accurately, four diagonal gradients are considered to determine the four proper weights.

Fig. 11 Current pattern of Z_mo after estimating the color difference values of the pixels in Ω_r and Ω_b.

Fig. 12 Subimage used to explain how to estimate the G values of pixels in Ω_2^g.

Fig. 13 For all pixels at positions (x′, y′) ∈ {(x±2m, y±2n)} in Fig. 12, the four SI-based masks for the G channel: (a) the horizontal SI-based mask, (b) the vertical SI-based mask, (c) the π/4-diagonal SI-based mask, and (d) the −π/4-diagonal SI-based mask.

Fig. 14 For all pixels at positions (x′, y′) ∈ {(x±2m+1, y±2n)} in Fig. 12, the four SI-based masks for the G channel: (a) the horizontal SI-based mask, (b) the vertical SI-based mask, (c) the π/4-diagonal SI-based mask, and (d) the −π/4-diagonal SI-based mask.


Consider the neighboring G pixel located at (x−1, y−1): if there is a π/4-diagonal edge passing through it, i.e., the −π/4-diagonal G gradient response ΔZ_dm^{−π/4,g}(x−1, y−1) is large, the G value of this pixel contributes less to the estimation of Z_mo^g(x, y); otherwise, it contributes more. Further, to reduce the estimation error, the two −π/4-diagonal G gradient responses ΔZ_dm^{−π/4,g}(x, y) and ΔZ_dm^{−π/4,g}(x−2, y−2) are also considered. Consequently, the weight of the pixel at (x−1, y−1) is given by w_g(−π/4, x−1, y−1) = 1/{1 + [ΔZ_dm^{−π/4,g}(x, y) + 2ΔZ_dm^{−π/4,g}(x−1, y−1) + ΔZ_dm^{−π/4,g}(x−2, y−2)]}. Similarly, the weights of the four neighboring G pixels can be expressed as

$$w_g(-\pi/4, x-1, y-1) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{-\pi/4,g}(x-k, y-k)}, \qquad w_g(\pi/4, x-1, y+1) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{\pi/4,g}(x-k, y+k)},$$

$$w_g(\pi/4, x+1, y-1) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{\pi/4,g}(x+k, y-k)}, \qquad w_g(-\pi/4, x+1, y+1) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{-\pi/4,g}(x+k, y+k)},$$

where ω_k = 2 if k = 1 and ω_k = 1 otherwise. According to the preceding description, the G value Z_mo^g(x, y) in Fig. 12 can be estimated by

$$Z_{mo}^{g}(x,y) = \frac{\sum_{(d,x',y') \in \xi} w_g(d,x',y')\, Z_{mo}^{g}(x',y')}{\sum_{(d,x',y') \in \xi} w_g(d,x',y')}, \qquad (6)$$

where ξ = {(−π/4, x−1, y−1), (π/4, x−1, y+1), (π/4, x+1, y−1), (−π/4, x+1, y+1)}.
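The diagonal-weighted estimation of Eq. (6) for one pixel in Ω_2^g can be sketched as follows; `dz_p` and `dz_n` stand for the π/4- and −π/4-diagonal gradient responses, the names are ours, and boundary handling is omitted.

```python
def estimate_g_step2(z, dz_p, dz_n, x, y):
    """Sketch of Eq. (6) at one pixel (x, y) of Omega_2^g.  `dz_p` and `dz_n`
    hold the pi/4- and -pi/4-diagonal G gradient responses obtained with the
    SI-quad-masks."""
    omega = [1.0, 2.0, 1.0]                      # omega_k: 2 at k = 1, 1 otherwise

    def weight(responses):
        return 1.0 / (1.0 + sum(w * r for w, r in zip(omega, responses)))

    # The four diagonal neighbours and their gradient-based weights.
    neigh = {
        (x - 1, y - 1): weight([dz_n[x - k, y - k] for k in range(3)]),
        (x - 1, y + 1): weight([dz_p[x - k, y + k] for k in range(3)]),
        (x + 1, y - 1): weight([dz_p[x + k, y - k] for k in range(3)]),
        (x + 1, y + 1): weight([dz_n[x + k, y + k] for k in range(3)]),
    }
    num = sum(w * z[p] for p, w in neigh.items())
    return num / sum(neigh.values())
```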

After performing the G value estimation for the pixels in Ω_2^g, the zooming process for the G channel of Z_mo is complete. The resulting pattern of Z_mo is illustrated in Fig. 17. In Sec. 3.2, the zooming process for the mosaic R and B channels is presented.

3.2 Stage 2: Zooming the Mosaic R and B Channels

In this subsection, the second stage of the proposed zooming algorithm, i.e., the zooming approach for the R and B channels, is presented. Since the zooming approach for the mosaic R channel is the same as that for the mosaic B channel, in what follows we present it only for the mosaic R channel. For easy exposition, Fig. 18 is taken as the representative case to explain how to estimate the R values of the pixels in Ω_r.

Fig. 15 For all pixels at positions (x′, y′) ∈ {(x±2m, y±2n+1)} in Fig. 12, the four SI-based masks for the G channel: (a) the horizontal SI-based mask, (b) the vertical SI-based mask, (c) the π/4-diagonal SI-based mask, and (d) the −π/4-diagonal SI-based mask.

Fig. 16 For all pixels at positions (x′, y′) ∈ {(x±2m+1, y±2n+1)} in Fig. 12, the four SI-based masks for the G channel: (a) the horizontal SI-based mask, (b) the vertical SI-based mask, (c) the π/4-diagonal SI-based mask, and (d) the −π/4-diagonal SI-based mask.

Fig. 17 Current pattern of Z_mo after completing the zooming process for the mosaic G channel.


Based on the color difference concept, the R value of the current pixel at position (x, y) can be estimated by

$$Z_{mo}^{r}(x,y) = \frac{\sum_{(d,x',y') \in \xi} w_g(d,x',y')\, Z_{mo}^{g}(x',y')}{\sum_{(d,x',y') \in \xi} w_g(d,x',y')} - D_r(x,y), \qquad (7)$$

where ξ = {(V, x±1, y), (H, x, y±1)}, and the four proper weights are

$$w_g(V, x-1, y) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{V,g}(x-k, y)}, \qquad w_g(V, x+1, y) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{V,g}(x+k, y)},$$

$$w_g(H, x, y-1) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{H,g}(x, y-k)}, \qquad w_g(H, x, y+1) = \frac{1}{1 + \sum_{k=0}^{2} \omega_k\, \Delta Z_{dm}^{H,g}(x, y+k)},$$

where ω_k = 2 if k = 1 and ω_k = 1 otherwise.
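Equation (7) for one pixel in Ω_r can be sketched as follows; the B-channel case over Ω_b is identical in form. Names are illustrative and boundary handling is omitted.

```python
def estimate_r(z_g, dz_hg, dz_vg, d_r, x, y):
    """Sketch of Eq. (7) at one pixel (x, y) of Omega_r: the gradient-weighted
    average of the four fully populated neighbouring G values minus the
    expanded colour difference D_r(x, y).  `dz_hg`/`dz_vg` are the horizontal
    and vertical G gradient responses."""
    omega = [1.0, 2.0, 1.0]

    def weight(responses):
        return 1.0 / (1.0 + sum(w * r for w, r in zip(omega, responses)))

    neigh = {
        (x - 1, y): weight([dz_vg[x - k, y] for k in range(3)]),
        (x + 1, y): weight([dz_vg[x + k, y] for k in range(3)]),
        (x, y - 1): weight([dz_hg[x, y - k] for k in range(3)]),
        (x, y + 1): weight([dz_hg[x, y + k] for k in range(3)]),
    }
    g_hat = sum(w * z_g[p] for p, w in neigh.items()) / sum(neigh.values())
    return g_hat - d_r[x, y]
```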

Finally, the B values of the pixels in Ω_b can be estimated in the same way, and the fully populated zoomed mosaic image Z_mo, as shown in Fig. 19, is obtained.

4 Experimental Results

In this section, based on 24 test mosaic images, some experimental results illustrate the zoomed image quality advantage of the proposed mosaic zooming algorithm when compared with two previous zooming algorithms, one by Battiato et al.,33 called the locally adaptive zooming (LAZ) algorithm, and the other by Lukac et al.,35 called the Bayer pattern zooming (BPZ) algorithm. The three concerned algorithms are implemented on an IBM-compatible computer with an Intel Core 2 Duo 1.6-GHz CPU and 1 Gbyte of RAM. The operating system is MS Windows XP, and the development environment is Borland C++ Builder 6.0. The programs of the three concerned algorithms have been uploaded in Ref. 39.

Figure 20 illustrates the 24 test images from the Kodak PhotoCD.40 Like the test images used in Refs. 33 and 35, in our experiments the 24 512×728 color test images are first shrunk and down-sampled by Eqs. (8) and (9), respectively, to obtain the 48 256×364 shrunk mosaic images.

$$I_{mo}(i,j) = \begin{cases} O_{fc}^{r}(2i,2j) & \text{if } i \in \text{even and } j \in \text{odd}, \\ O_{fc}^{b}(2i,2j) & \text{if } i \in \text{odd and } j \in \text{even}, \\ O_{fc}^{g}(2i,2j) & \text{otherwise}, \end{cases} \qquad (8)$$

$$I_{mo}(i,j) = \begin{cases} \frac{1}{4}\sum_{k_1=0}^{1}\sum_{k_2=0}^{1} O_{fc}^{r}(2i+k_1, 2j+k_2) & \text{if } i \in \text{even and } j \in \text{odd}, \\ \frac{1}{4}\sum_{k_1=0}^{1}\sum_{k_2=0}^{1} O_{fc}^{b}(2i+k_1, 2j+k_2) & \text{if } i \in \text{odd and } j \in \text{even}, \\ \frac{1}{4}\sum_{k_1=0}^{1}\sum_{k_2=0}^{1} O_{fc}^{g}(2i+k_1, 2j+k_2) & \text{otherwise}, \end{cases} \qquad (9)$$

where O_fc^r(x, y), O_fc^g(x, y), and O_fc^b(x, y) denote the three color components of the color pixel at position (x, y) in the original full-color image, and I_mo(i, j) denotes the color value of the pixel at position (i, j) in the shrunk mosaic image. For convenience, the mosaic images shrunk by Eqs. (8) and (9) are called the shrunk-sampling mosaic image and the shrunk-averaging mosaic image, respectively; the zoomed mosaic images obtained from the shrunk-sampling mosaic image and the shrunk-averaging mosaic image are called the zoomed-sampling mosaic image and the zoomed-averaging mosaic image, respectively. Furthermore, the boundaries of each image are handled using the mirroring method.

We adopt the peak signal-to-noise ratio (PSNR) to justify the advantage of the proposed zooming algorithm. The PSNR of an M×N mosaic image is defined by

$$\mathrm{PSNR} = 10 \log_{10} \frac{255^{2}}{\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[ O_{mo}(i,j) - Z_{mo}(i,j) \right]^{2}}, \qquad (10)$$

where O_mo(i, j) denotes the color value of the pixel at position (i, j) in the 512×728 mosaic image generated by mosaicing the original full-color image, and Z_mo(i, j) denotes the color value of the pixel at position (i, j) in the zoomed mosaic image obtained by applying the zooming algorithm to I_mo.
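Equation (10) is the usual PSNR computed over the mosaic domain; a one-function sketch with illustrative names:

```python
import numpy as np

def mosaic_psnr(o_mo, z_mo):
    """Sketch of Eq. (10): PSNR between the reference mosaic O_mo and the
    zoomed mosaic Z_mo (both M x N arrays with an 8-bit value range)."""
    mse = np.mean((o_mo.astype(float) - z_mo.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```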

Fig. 18 Subimage used to explain how to estimate the R values of the pixels in Ω_r.

Fig. 19 Fully populated zoomed mosaic image Z_mo.


Table 2 demonstrates the zoomed mosaic image quality comparison in terms of PSNR for the three concerned algorithms. In Table 2, the second to fourth columns and the fifth to seventh columns give the comparisons for the zoomed-sampling mosaic image and the zoomed-averaging mosaic image, respectively, and the entry with the best PSNR is highlighted in boldface. Table 3 gives the average PSNR comparison for the zoomed-sampling mosaic image and the zoomed-averaging mosaic image. On average, our proposed zooming algorithm has more than 1.79 dB of quality improvement when compared with the two previous zooming algorithms.

Next, we adopt a subjective visual measure to demonstrate the visual quality advantage of the proposed zooming algorithm. For simplicity, seven magnified subimages cut from test image No. 7 are used to compare the visual effect. Figures 21(a)–21(g) illustrate, respectively, the magnified subimage cut from the mosaic image obtained by mosaicing the original test image No. 7 directly; the ones cut from the zoomed-sampling mosaic images obtained by the LAZ algorithm, the BPZ algorithm, and the proposed mosaic zooming algorithm; and the ones cut from the zoomed-averaging mosaic images obtained by the same three algorithms. To show the mosaic images more clearly, the color value of each pixel is represented by its gray value. Comparing the visual effect of the magnified subimage in Fig. 21(a) with the corresponding ones in Figs. 21(b)–21(g), we observe that the shutters in the two zoomed mosaic images obtained by the proposed mosaic zooming algorithm look clearer and have fewer artifacts than those in the other four zoomed mosaic images obtained by the previous zooming algorithms. Similar to the visual comparison for test image No. 7, we further take magnified subimages cut from test image No. 23 for visual comparison. Figures 22(a)–22(g) illustrate the seven magnified subimages cut from the mosaic image obtained by mosaicing the original color test image No. 23, from the zoomed-sampling mosaic images, and from the zoomed-averaging mosaic images. From the visual comparison, we observe that the face textures of the birds in the two zoomed mosaic images obtained by our proposed mosaic zooming algorithm look clearest and have the fewest artifacts, i.e., the best visual effect.

Besides evaluating the zoomed image quality under the mosaic image domain, we further evaluate the image quality under the demosaiced full-color image domain. Here, three demosaicing algorithms, proposed by Pei and Tam,22 Lukac and Plataniotis,21 and Chung and Chan,4 respectively, are adopted to demosaic the zoomed mosaic images. For convenience, the three demosaicing algorithms proposed in Refs. 22, 21, and 4 are called the signal correlation demosaicing (SCD) algorithm, the normalized color-ratio modeling demosaicing (NCMD) algorithm, and the variance of color differences demosaicing (VCDD) algorithm, respectively.

To fit the demosaiced full-color domain, we adopt three objective color image quality measures, the color PSNR (CPSNR), the S-CIELAB ΔE*_ab metric,23,41 and the mean structural similarity42 (MSSIM), and one subjective color image quality measure, color artifacts, to justify the better quality performance of the proposed zooming algorithm in the demosaiced full-color domain. The CPSNR of an M×N color image is defined by

$$\mathrm{CPSNR} = 10 \log_{10} \frac{255^{2}}{\frac{1}{3MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\sum_{c \in C}\left[ O_{fc}^{c}(i,j) - Z_{dm}^{c}(i,j) \right]^{2}}, \qquad C = \{r, g, b\}, \qquad (11)$$

Fig. 20 Twenty-four test images from Kodak PhotoCD.40


where O_fc^r(i, j), O_fc^g(i, j), and O_fc^b(i, j) denote the three color components of the color pixel at position (i, j) in the original full-color image, and Z_dm^r(i, j), Z_dm^g(i, j), and Z_dm^b(i, j) denote the three color components of the color pixel at position (i, j) in the zoomed and demosaiced full-color image. The greater the CPSNR is, the better the image quality. The S-CIELAB ΔE*_ab of an M×N color image is defined by

$$\Delta E_{ab}^{*} = \frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left\{ \sum_{c \in \Psi}\left[ EO_{fc}^{c}(i,j) - EZ_{dm}^{c}(i,j) \right]^{2} \right\}^{1/2}, \qquad \Psi = \{L, a, b\}, \qquad (12)$$

where EO_fc^L(i, j), EO_fc^a(i, j), and EO_fc^b(i, j) denote the three CIELAB color components of the color pixel at position (i, j) in the original full-color image, and EZ_dm^L(i, j), EZ_dm^a(i, j), and EZ_dm^b(i, j) denote the three CIELAB color components of the color pixel at position (i, j) in the zoomed and demosaiced full-color image. The smaller the S-CIELAB ΔE*_ab is, the better the image quality.

Table 2 PSNR comparison for the three concerned algorithms based on the mosaic image domain.

            Zoomed-Sampling Mosaic Images            Zoomed-Averaging Mosaic Images
            LAZ (Ref. 33)  BPZ (Ref. 35)  Ours       LAZ (Ref. 33)  BPZ (Ref. 35)  Ours

Image 01 21.0309 22.0561 24.7832 21.2226 23.0226 23.7134

Image 02 28.2049 28.8876 31.1006 28.4293 29.9341 30.4789

Image 03 29.0091 29.7613 32.4967 29.0248 30.6286 31.5853

Image 04 28.0358 28.4953 31.4473 27.6440 29.5121 29.9524

Image 05 20.9758 21.4357 25.0237 20.8922 22.4231 23.5210

Image 06 22.9187 23.6897 26.3666 23.1881 24.8081 25.4316

Image 07 26.3770 27.6181 31.5838 25.6686 28.4169 29.0751

Image 08 18.7949 19.5576 22.1909 18.7016 20.6395 21.0174

Image 09 26.2863 27.7866 30.8980 25.9515 28.5187 29.1765

Image 10 26.9833 27.6467 30.9190 26.7956 28.6003 29.5745

Image 11 24.3368 24.9773 27.6679 24.3862 26.0407 26.6855

Image 12 28.2548 29.0543 31.9817 28.0030 29.9528 30.5984

Image 13 19.4352 19.7921 22.5522 19.7506 21.0273 21.8124

Image 14 23.7403 24.3755 26.7303 23.6557 25.2574 25.6734

Image 15 27.6946 28.2269 30.8834 27.0980 29.0145 29.3562

Image 16 26.5490 27.3799 29.9867 26.8854 28.5203 29.1772

Image 17 26.8729 27.4734 30.7729 26.5713 28.4128 29.1937

Image 18 23.0155 23.4576 26.3295 23.0911 24.5974 25.2960

Image 19 22.5968 23.8859 26.7228 22.6393 24.8886 25.4410

Image 20 26.2403 27.2153 30.3149 25.9125 28.1746 28.7152

Image 21 23.1969 24.0252 27.1012 23.2785 25.1467 25.8918

Image 22 25.6045 26.0330 28.6118 25.6056 27.1060 27.6788

Image 23 28.3455 29.2668 32.7028 28.0118 30.0933 31.0779

Image 24 22.0551 22.2335 25.0977 22.2721 23.4373 24.3001

Average 24.8564 25.5971 28.5111 24.7783 26.5906 27.2677


Further, the MSSIM, which is based on the human visual system and the comparison between local patterns, is used to measure the image quality performance. The MSSIM of an M×N color image is defined by

$$\mathrm{MSSIM} = \frac{1}{3}\sum_{c \in \{r,g,b\}} \frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} \frac{\left[ 2\prod_{\rho \in \Phi}\mu_{\rho}^{c}(i,j) + k_1 \right]\left[ 2\sigma_{oz}^{c}(i,j) + k_2 \right]}{\left[ \sum_{\rho \in \Phi}\left(\mu_{\rho}^{c}(i,j)\right)^{2} + k_1 \right]\left[ \sum_{\rho \in \Phi}\left(\sigma_{\rho}^{c}(i,j)\right)^{2} + k_2 \right]},$$

$$\mu_{\rho}^{c}(i,j) = \sum_{x=-5}^{5}\sum_{y=-5}^{5} w_{xy}\, I_{\rho}^{c}(i+x, j+y),$$

$$\sigma_{\rho}^{c}(i,j) = \left\{ \sum_{x=-5}^{5}\sum_{y=-5}^{5} w_{xy}\left[ I_{\rho}^{c}(i+x, j+y) - \mu_{\rho}^{c}(i,j) \right]^{2} \right\}^{1/2},$$

$$\sigma_{oz}^{c}(i,j) = \sum_{x=-5}^{5}\sum_{y=-5}^{5} w_{xy}\prod_{\rho \in \Phi}\left[ I_{\rho}^{c}(i+x, j+y) - \mu_{\rho}^{c}(i,j) \right], \qquad (13)$$

where Φ = {o, z}; I_ρ^c(i, j) = O_fc^c(i, j) if ρ = o and I_ρ^c(i, j) = Z_dm^c(i, j) otherwise, with O_fc^c(i, j) and Z_dm^c(i, j) defined as in Eq. (11); W = {w_{x,y} | −5 ≤ x, y ≤ 5} are the coefficients of the 11×11 circular-symmetric Gaussian mask. The greater the MSSIM is, the better the image quality.

Based on the same test images, among the nine schemes that combine one of the three concerned mosaic zooming algorithms with one of the three existing demosaicing algorithms, Tables 4–6 demonstrate the image quality comparison in terms of the average CPSNR, the average S-CIELAB ΔE*_ab, and the average MSSIM, respectively. In Tables 4–6, the second to fourth columns and the fifth to seventh columns give the quality comparison for the demosaicing results based on the zoomed-sampling mosaic image and the zoomed-averaging mosaic image, respectively, and the entries with the largest CPSNR and MSSIM and the smallest S-CIELAB ΔE*_ab are highlighted in boldface. From Tables 4–6, we observe that the proposed zooming algorithm produces the best zoomed and demosaiced image quality in terms of CPSNR, S-CIELAB ΔE*_ab, and MSSIM.

Finally, we adopt the subjective visual quality measure, color artifacts, to demonstrate the visual quality advantage of the proposed zooming algorithm under the demosaiced full-color domain. After demosaicing the zoomed mosaic image, some color artifacts may appear in nonsmooth regions of the demosaiced image. To evaluate the color artifacts among the concerned algorithms, magnified subimages containing nonsmooth content are taken from the demosaiced images.

Table 3 Average PSNR comparison for the zoomed-sampling mosaic image and the zoomed-averaging mosaic image.

                                   LAZ (Ref. 33)   BPZ (Ref. 35)   Ours
Zoomed-sampling mosaic images      24.8564         25.5971         28.5111
Zoomed-averaging mosaic images     24.7783         26.5906         27.2677
Average                            24.8174         26.0939         27.8894
Quality improvement                3.0720          1.7955

Fig. 21 Seven magnified subimages cut from (a) the mosaic image obtained by mosaicing the original test image No. 7 directly; the ones cut from the zoomed-sampling mosaic images obtained from (b) the LAZ algorithm, (c) the BPZ algorithm, and (d) our proposed mosaic zooming algorithm; and the ones cut from the zoomed-averaging mosaic images obtained from (e) the LAZ algorithm, (f) the BPZ algorithm, and (g) our proposed mosaic zooming algorithm.


First, Figs. 23(a)–23(s) illustrate 19 magnified subimages: one cut from the original test image No. 19, and the others cut from the images produced by running the nine zooming and demosaicing schemes, which combine one of the three concerned mosaic zooming algorithms with one of the three demosaicing algorithms already mentioned, on the zoomed-sampling mosaic image and the zoomed-averaging mosaic image. From a visual comparison, we observe that, based on the same demosaicing algorithm, the demosaiced images that use the zoomed mosaic images created by our proposed zooming algorithm as the input images have fewer color artifacts on the plank walls than those that use the zoomed mosaic images created by the other two zooming algorithms. Further, we take the magnified subimages cut from test image No. 23 for visual comparison. Figures 24(a)–24(s) are the magnified subimages cut from the original full-color test image No. 23 and the 18 zoomed and demosaiced images. Similar to the visual comparison for test image No. 19, the experimental results for test image No. 23 also reveal that the face textures of the birds in the demosaiced images that use the zoomed mosaic images created by our proposed zooming algorithm as the input images have the fewest color artifacts and the best visual effect. More visual results of the concerned algorithms are available in Ref. 39.

5 Conclusion

A new quality-effective zooming algorithm for mosaic images was presented. Utilizing the adaptive heterogeneity projection masks and the Sobel- and luminance-estimation-based masks, gradient information can be extracted from the input mosaic image directly. The extracted gradient information and the color difference concept are then combined to assist the design of the proposed quality-effective zooming algorithm. Based on 24 test images, experimental results demonstrated that the proposed zooming algorithm has more than 1.79 dB of quality improvement when compared with two previous zooming algorithms, one by Battiato et al.33 and the other by Lukac et al.35 In addition, in the demosaiced full-color domain, the proposed zooming algorithm delivers the best image quality in terms of CPSNR, S-CIELAB ΔE*_ab, and MSSIM.

Table 4 Average CPSNR comparison for the three concerned demosaicing algorithms based on the demosaiced full-color domain.

                  Zoomed-Sampling Mosaic Images            Zoomed-Averaging Mosaic Images
                  LAZ (Ref. 33)  BPZ (Ref. 35)  Ours       LAZ (Ref. 33)  BPZ (Ref. 35)  Ours
SCD (Ref. 22)     25.2014        25.9048        28.3469    24.9344        26.8063        27.2044
NCMD (Ref. 21)    25.2456        25.9615        28.3609    24.9787        26.8715        27.2044
VCDD (Ref. 4)     25.1836        25.8916        28.3311    24.9233        26.800         27.1771
Average           25.2102        25.9193        28.3463    24.9454        26.8259        27.1916

Table 5 Average S-CIELAB ΔE*_ab comparison for the three concerned demosaicing algorithms based on the demosaiced full-color domain.

                  Zoomed-Sampling Mosaic Images            Zoomed-Averaging Mosaic Images
                  LAZ (Ref. 33)  BPZ (Ref. 35)  Ours       LAZ (Ref. 33)  BPZ (Ref. 35)  Ours
SCD (Ref. 22)     4.87254        4.40272        3.13853    4.40765        3.87297        3.28052
NCMD (Ref. 21)    4.80572        4.22144        3.08994    4.34464        3.67143        3.20609
VCDD (Ref. 4)     4.82128        4.30877        3.14600    4.38060        3.81125        3.29513
Average           4.83318        4.31098        3.12482    4.37763        3.78522        3.26058

Fig. 22 Seven magnified subimages cut from (a) the mosaic image obtained by mosaicing the original test image No. 23 directly; the ones cut from the zoomed-sampling mosaic images obtained from (b) the LAZ algorithm, (c) the BPZ algorithm, and (d) our proposed mosaic zooming algorithm; and the ones cut from the zoomed-averaging mosaic images obtained from (e) the LAZ algorithm, (f) the BPZ algorithm, and (g) our proposed mosaic zooming algorithm.



Table 6 Average MSSIM comparison for the three concerned demosaicing algorithms based on the demosaiced full-color domain.

                  Zoomed-Sampling Mosaic Images            Zoomed-Averaging Mosaic Images
                  LAZ (Ref. 33)  BPZ (Ref. 35)  Ours       LAZ (Ref. 33)  BPZ (Ref. 35)  Ours
SCD (Ref. 22)     0.38018        0.47815        0.60142    0.32424        0.50118        0.51600
NCMD (Ref. 21)    0.38290        0.48475        0.60543    0.32682        0.50724        0.51839
VCDD (Ref. 4)     0.32424        0.47788        0.60111    0.32259        0.50103        0.51470
Average           0.36244        0.48026        0.60265    0.32455        0.50315        0.51636

Fig. 23 Nineteen magnified subimages cut from (a) the original test image No. 19; based on the zoomed-sampling mosaic image, the demosaiced full-color images obtained by (b) LAZ+SCD, (c) LAZ+NCMD, (d) LAZ+VCDD, (e) BPZ+SCD, (f) BPZ+NCMD, (g) BPZ+VCDD, (h) ours+SCD, (i) ours+NCMD, and (j) ours+VCDD; based on the zoomed-averaging mosaic image, the demosaiced full-color images obtained by (k) LAZ+SCD, (l) LAZ+NCMD, (m) LAZ+VCDD, (n) BPZ+SCD, (o) BPZ+NCMD, (p) BPZ+VCDD, (q) ours+SCD, (r) ours+NCMD, and (s) ours+VCDD.

Fig. 24 Nineteen magnified subimages cut from (a) the original test image No. 23; based on the zoomed-sampling mosaic image, the demosaiced full-color images obtained by (b) LAZ+SCD, (c) LAZ+NCMD, (d) LAZ+VCDD, (e) BPZ+SCD, (f) BPZ+NCMD, (g) BPZ+VCDD, (h) ours+SCD, (i) ours+NCMD, and (j) ours+VCDD; based on the zoomed-averaging mosaic image, the demosaiced full-color images obtained by (k) LAZ+SCD, (l) LAZ+NCMD, (m) LAZ+VCDD, (n) BPZ+SCD, (o) BPZ+NCMD, (p) BPZ+VCDD, (q) ours+SCD, (r) ours+NCMD, and (s) ours+VCDD.
