
User-friendly sharing of images: progressive approach based on modulus operations

Kun-Yuan Chao and Ja-Chen Lin
National Chiao Tung University
Department of Computer and Information Science
1001 Ta Hsueh Road
Hsinchu, Taiwan 300
E-mail: kychao@cis.nctu.edu.tw

Abstract. Image sharing is a popular technology to secure important images against damage. The technology decomposes and transforms an important image to produce several other images called shadows or shares. To decode, the shared important image can be reconstructed by combining the collected shadows, as long as the number of collected shadows reaches a specified threshold value. A few sharing methods produce user-friendly (i.e., visually recognizable) shadows; in other words, each shadow looks like a reduced-visual-quality replica of a given image, rather than completely meaningless random noise. This facilitates visual management of shadows. (For example, if there are 100 important images and each creates 2 to 17 shadows of its own, then it is easy to visually recognize that a stored shadow is from, say, a House image, rather than from the other 99 images.) In addition to visually recognizable shadows, progressive decoding is also a convenient feature: it provides a convenient manner to view a moderately sensitive image at gradually improving quality. Recently, Fang combined both conveniences of visually recognizable shadows and progressive decoding [W. P. Fang, Pattern Recogn. 41, 1410–1414 (2008)]. But that method was memory-expensive because its shadows were too big. In order to save memory space, we propose a novel method based on modulus operations. It still keeps both conveniences, but shadows are two to four times smaller than Fang's, and the visual quality of each shadow can be controlled by a simple expression. © 2009 SPIE and IS&T. [DOI: 10.1117/1.3206950]

1 Introduction

Sharing can be utilized to secure an image for storage and transmission. Usually, a sharing method shares an important image among several extremely noisy images called shadows or shares. By combining these shadows, one can reconstruct the image later. Several works have extended this fundamental concept.1,2 Examples include reduction of memory cost for shadows,3 fast decoding and a small pixel expansion rate,4 and extension of binary visual cryptography (VC) to grayscale images.5

Some other extensions are "application-oriented," such as user-friendly shadows6,7 for easier management of shadows, and progressive decoding7,8 of an image that is moderately sensitive but needs to be processed frequently. Among these methods, Thien and Lin6 first introduced the idea of using user-friendly (visually recognizable) shadows, Jin et al.8 developed a progressive technique for grayscale/color images with three types of decryptions to enable recovery in varying qualities, and Fang7 utilized user-friendly shadows and progressive decoding simultaneously.

From the viewpoint of shadow management, to classify or locate a shadow, attaching a name tag to each shadow in advance is needed if each shadow looks like random noise. (Most reported methods have this kind of shadows.) Another option is to use visually identifiable shadows. These are also called user-friendly shadows (first mentioned in Ref. 6 and then in Ref. 7), because their visually identifiable features (each shadow looks like a visual-quality-reduced version of a given image) make the job of managing shadows easier for the database manager.

Although Thien and Lin6 first introduced the idea of using user-friendly (visually recognizable) shadows, their method is not progressive, and the reconstruction by all shadows is not lossless. These two weaknesses are avoided by our method. So far, only Fang's method7 (which is lossless when all shadows are collected) simultaneously provides two application-convenient features: user-friendly shadows and progressive decoding. Unfortunately, its shadows are four times larger than the input image and thus are not economical in memory. To improve this, we propose here a novel progressive and user-friendly approach based on modulus operations. Beyond Fang's method,7 our method possesses extra advantages: nonexpansion of the shadow size and controllable quality of shadow images. Meanwhile, like Fang's method, our method has lossless recovery when all n shadows are used, and the decoding complexity is O(k) for the reconstruction using k shadows (k ≤ n).

The remaining portion of this paper is organized as follows. Section 2 briefly describes Fang's user-friendly progressive sharing method.7 Section 3 presents the proposed method. Experimental results and some comparisons are shown in Sec. 4. Last, conclusions are given in Sec. 5.

2 Brief Review of Fang's Method

This section briefly reviews Fang's progressive and user-friendly method.7

Sharing phase. See the flowchart in Fig. 1 for Fang’s sharing phase:

Paper 09003R received Jan. 4, 2009; revised manuscript received Jun. 2, 2009; accepted for publication Jul. 1, 2009; published online Aug. 21, 2009.


Step 1. According to the two leftmost columns in Table 1, expand every pixel O(x,y) of the input binary image O to a 2×2 block at the corresponding position of the expanded image O′. [If O(x,y) is white, then the corresponding 2×2 block is randomly selected from the six possibilities listed in the lower part of column O′.]

Step 2. For each 2×2 block of the expanded image O′, by checking the pixel value at the corresponding position of a given stego-image T, Fang randomly selected one of the corresponding patterns listed in the rightmost column of Table 1 to create the 2×2 sharing block at the corresponding position of the first shadow S1. Similar arguments created each of the remaining n − 1 shadows.

Recovering phase. Assume that k shadows are collected. Then, each pixel j of the black-or-white image is reconstructed using the k sharing pixels at the same position j of the k shadows. The reconstruction rule is an OR-like operation: the reconstructed pixel is black iff at least one of the k sharing pixels is black.

Fang's method has two disadvantages: (1) the size of each shadow Si is four times larger than the input image O; and (2) the image quality [such as peak signal-to-noise ratio (PSNR)] of shadows is not easy to control. We will improve these aspects.

3 Proposed Method

This section presents our user-friendly progressive sharing method based on modulus operations. The method generates n user-friendly shadows whose image quality (such as PSNR) is lower than the input image's quality; later, the input image can be reconstructed with progressively improved image quality after gathering k (2 ≤ k ≤ n) shadows. The description of the method is divided into three subsections. First, a fundamental (n,n) sharing version based on modulus operations is introduced in Sec. 3.1. This simple version is neither user-friendly nor progressive. Then, the fundamental version is extended in Sec. 3.2 to an intermediate version with friendly shadows, although the intermediate version is still nonprogressive. Last, Sec. 3.3 presents the final version by extending the intermediate (user-friendly) version further to include both progressive decoding and user-friendly features. A comparison between our progressive and user-friendly method (Sec. 3.3) and Fang's (Sec. 2) is given in Sec. 3.4.1, while a stego version of our method is given in Sec. 3.4.2.

3.1 An (n,n) Fundamental Sharing Version Based on Modulus Operations

This section illustrates a fundamental (n,n) sharing version for grayscale images based on modulus operations. This version splits a grayscale image A among n extremely noisy shadows B1, B2, ..., Bn whose sizes are all the same as A. The n noisy shadows together can reconstruct each pixel of A by using one modulus operation and n − 1 additions. (In this paper, + and Mod denote addition and modulus operations, respectively.) The sharing and recovering phases of the fundamental version are listed in the following.

Sharing phase.

Step 1. Input a grayscale secret image A.

Step 2. Generate n − 1 random images B1, B2, ..., Bn−1 as shadows. Each is as large as A.

Step 3. Create the n'th shadow Bn by

Bn = (A + {256 − [(B1 + B2 + ... + Bn−1) Mod 256]}) Mod 256.  (1)

Step 4. Output the n noisy (nonfriendly) shadows B1, B2, ..., Bn.

Recovering phase. Retrieve A using the formula

A = (B1 + B2 + ... + Bn) Mod 256.  (2)

Notably, both + and Mod are pixel-by-pixel operations. This sharing scheme can also work for binary or color images by using 2 (=2^1) and 16777216 (=2^24), respectively, to replace the constant 256 in the two preceding formulas. An experiment using the grayscale image Lena as image A is shown in Fig. 2, with (n,n) = (4,4).
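To make the arithmetic concrete, the two phases above can be sketched in a few lines of NumPy; the function names `share_fundamental` and `recover_fundamental` are ours, not the paper's:

```python
import numpy as np

def share_fundamental(A, n, rng=None):
    """Sec. 3.1 sharing phase: n - 1 random shadows plus one computed shadow."""
    rng = np.random.default_rng() if rng is None else rng
    A = A.astype(np.int64)
    shadows = [rng.integers(0, 256, size=A.shape, dtype=np.int64) for _ in range(n - 1)]
    # Eq. (1): force (B1 + ... + Bn) mod 256 == A.
    shadows.append((A + (256 - sum(shadows) % 256)) % 256)
    return shadows

def recover_fundamental(shadows):
    """Eq. (2): A = (B1 + B2 + ... + Bn) Mod 256."""
    return sum(shadows) % 256

A = (np.arange(16).reshape(4, 4) * 13) % 256
Bs = share_fundamental(A, 4)
assert np.array_equal(recover_fundamental(Bs), A)  # lossless with all n shadows
```

With fewer than all n shadows, the partial sum modulo 256 is uniformly distributed, so each shadow, and any incomplete combination of shadows, looks like pure noise.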

3.2 A User-Friendly but Nonprogressive (n,n) Version

This section describes how to extend the (n,n) fundamental version in Sec. 3.1 to an intermediate version whose n shadows are all user-friendly. What we do is to use a smaller value m to replace the value 256 in the modulus operations in Sec. 3.1.

Sharing phase.

Step 1. Input an integer parameter m (2 ≤ m ≤ 256) and an 8-bit grayscale image A. Generate a smaller-range image

A′ = (A) Mod m,  (3)

whose size is identical to A, but with pixel values less than m (rather than 256).

Step 2. Generate n − 1 "random" images B′1, ..., B′n−1 whose sizes are all as large as A, but each pixel is a random value chosen from [0, 1, ..., (m − 1)]. Then create

B′n = (A′ + {m − [(B′1 + B′2 + ... + B′n−1) Mod m]}) Mod m,  (4)

which implies that (B′1 + B′2 + ... + B′n) Mod m = A′.

Step 3. Output the n friendly shadows {B1, ..., Bn} defined by

Bi = (A − A′) + B′i for i = 1, ..., n.  (5)

Recovering phase. Retrieve A by

A = [Bi − (Bi) Mod m] + [(B1 + B2 + ... + Bn) Mod m].  (6)

In Eq. (6), it does not matter which one of B1, B2, ..., Bn is used as Bi; the result is the same. Also, if m = 256 is used in Eqs. (3)–(6), then this intermediate version is identical to the (n,n) sharing one in Sec. 3.1.
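The intermediate version can be sketched the same way (again NumPy; helper names are ours). Note how Eq. (5) adds the coarse image A − A′ to every small-range share, which is what makes the shadows visually recognizable:

```python
import numpy as np

def share_friendly(A, n, m, rng=None):
    """Sec. 3.2 sharing phase: friendly (n,n) shadows with modulus base m."""
    rng = np.random.default_rng() if rng is None else rng
    A = A.astype(np.int64)
    Ap = A % m                                    # Eq. (3): small-range image A'
    Bp = [rng.integers(0, m, size=A.shape, dtype=np.int64) for _ in range(n - 1)]
    Bp.append((Ap + (m - sum(Bp) % m)) % m)       # Eq. (4): sum(B') mod m == A'
    return [(A - Ap) + b for b in Bp]             # Eq. (5): A - A' carries the look of A

def recover_friendly(shadows, m):
    """Eq. (6); any shadow may serve as Bi in the first bracket."""
    Bi = shadows[0]
    return (Bi - Bi % m) + sum(shadows) % m

A = (np.arange(64).reshape(8, 8) * 7) % 256
Bs = share_friendly(A, 3, 16)
assert np.array_equal(recover_friendly(Bs, 16), A)  # lossless with all n shadows
```

Each shadow equals A quantized to multiples of m plus a noise term below m, so every shadow differs from A by at most m − 1 gray levels while the low-order information stays hidden.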

3.3 The User-Friendly and Progressive Version

The intermediate version (Sec. 3.2) is still nonprogressive, although user-friendly. Section 3.3 extends the intermediate

Table 1 Fang's selection of sharing patterns (Ref. 7).

Secret pixel O(x,y) = B^a: expanded secret O′ = (B,B,B,B).
  If cover pixel T(x,y) = B, choose the 2×2 block of a share Si (1 ≤ i ≤ n) from: (B,B,W,W), (B,W,B,W), (B,W,W,B), (W,B,B,W), (W,B,W,B), (W,W,B,B).
  If T(x,y) = W, choose from: (W,W,W,W), (B,W,W,W), (W,B,W,W), (W,W,B,W), (W,W,W,B).

Secret pixel O(x,y) = W: expanded secret O′ is one of the six patterns below.^b
  O′ = (B,B,W,W): if T(x,y) = B, use (B,B,W,W); if W, choose from (W,W,W,W), (B,W,W,W), (W,B,W,W).
  O′ = (B,W,B,W): if T(x,y) = B, use (B,W,B,W); if W, choose from (W,W,W,W), (B,W,W,W), (W,W,B,W).
  O′ = (B,W,W,B): if T(x,y) = B, use (B,W,W,B); if W, choose from (W,W,W,W), (B,W,W,W), (W,W,W,B).
  O′ = (W,B,B,W): if T(x,y) = B, use (W,B,B,W); if W, choose from (W,W,W,W), (W,B,W,W), (W,W,B,W).
  O′ = (W,B,W,B): if T(x,y) = B, use (W,B,W,B); if W, choose from (W,W,W,W), (W,B,W,W), (W,W,W,B).
  O′ = (W,W,B,B): if T(x,y) = B, use (W,W,B,B); if W, choose from (W,W,W,W), (W,W,B,W), (W,W,W,B).

Note: see Fig. 1 for definitions of O, O′, and T.
^a B represents a black pixel; W represents a white pixel.
^b Each 2×2 block in the expanded image O′ (or in each shadow Si) is represented as (left-top pixel, right-top pixel, left-bottom pixel, right-bottom pixel).

Fig. 2 An example of the (n,n) fundamental sharing version introduced in Sec. 3.1. Here, (n,n) = (4,4); (a) is the given grayscale image Lena A; (b) to (e) are the four generated "nonfriendly" shadows B1, B2, B3, B4; and (f) is the recovered error-free Lena using the formula A = (B1 + B2 + B3 + B4) Mod 256.


version to a progressive one. Because it is an extension of Sec. 3.2, the modulus-base notation m (2 ≤ m ≤ 256) is still used in this section.

Sharing phase.

Step 1. Input an integer parameter m (2 ≤ m ≤ 256) and an 8-bit grayscale secret image A. (A can also be one of the three 8-bit color components of a 24-bit color image.)

Step 2. In a pixel-by-pixel manner, generate a smaller-range image

A′ = (A) Mod m,  (7)

whose size is identical to A, but whose pixel values are at most m − 1, rather than 255.

Step 3. Generate n − 1 random images R1, R2, ..., Rn−1. (Each image Ri is as large as A, and each pixel of Ri is 8-bit.)

Step 4. Create n images B′1, B′2, ..., B′n in a pixel-by-pixel manner:

If A′ = 0, then B′1 = 0; else B′1 = (R1) Mod (A′ + 1). Anyway, define A′1 = A′ − B′1.

If A′1 = 0, then B′2 = 0; else B′2 = (R2) Mod (A′1 + 1). Anyway, define A′2 = A′1 − B′2.

...

If A′n−2 = 0, then B′n−1 = 0; else B′n−1 = (Rn−1) Mod (A′n−2 + 1). Anyway, define A′n−1 = A′n−2 − B′n−1. Last, let B′n = A′n−1.

Here, B′1 = (R1) Mod (A′ + 1) means that B′1(t) = [R1(t)] Mod [A′(t) + 1] at pixel t. Then, after creating B′1(t), we create A′1(t) by the formula A′1(t) = A′(t) − B′1(t). The explanation for the remaining operations in step 4 is the same. Also, as t changes, for random effect, we randomly switch the order of assigning these values to [B′1(t), B′2(t), ..., B′n(t)]. For example, when t = 0, assign the computed values to B′1(t), B′2(t), ..., B′n(t) as earlier, respectively; then, when t = 1, assign the computed values to B′n(t), B′n−1(t), ..., B′1(t), respectively; then, when t = 2, .... Here, we may use a random number generator to create the permutation order for this.

Step 5. Output n final shadows B1, B2, ..., Bn defined by

Bi = (A − A′) + B′i for i = 1, ..., n.  (8)

Recovering phase. After gathering any k (2 ≤ k ≤ n) shadows

Bi(1), Bi(2), ..., Bi(k)  (1 ≤ i(j) ≤ n for 1 ≤ j ≤ k),

which are a subset of the n shadows {B1, B2, ..., Bn}, retrieve the reconstructed image Ã using the formula

Ã = [Bi(j) − (Bi(j)) Mod m] + [(Bi(1) + Bi(2) + ... + Bi(k)) Mod m].  (9)

Here, Bi(j) can be any one of Bi(1), Bi(2), ..., Bi(k).
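A sketch of the full progressive version, with the per-pixel loop of step 4 written out literally for clarity rather than speed (function names are ours):

```python
import numpy as np

def share_progressive(A, n, m, rng=None):
    """Sec. 3.3 sharing phase (steps 1-5), written per pixel for clarity."""
    rng = np.random.default_rng() if rng is None else rng
    A = A.astype(np.int64)
    Ap = A % m                                   # Eq. (7)
    Bp = np.zeros((n,) + A.shape, dtype=np.int64)
    for t in np.ndindex(A.shape):                # step 4: split A'(t) into n pieces
        rest, pieces = Ap[t], []
        for _ in range(n - 1):
            b = 0 if rest == 0 else int(rng.integers(0, 256)) % (rest + 1)
            pieces.append(b)
            rest -= b
        pieces.append(rest)                      # B'_n(t) = A'_{n-1}(t)
        for slot, b in zip(rng.permutation(n), pieces):
            Bp[slot][t] = b                      # randomized assignment order
    return [(A - Ap) + Bp[i] for i in range(n)]  # Eq. (8)

def recover_progressive(shadows, m):
    """Eq. (9): approximate for k < n shadows, exact for k = n (Lemma 3)."""
    Bi = shadows[0]
    return (Bi - Bi % m) + sum(shadows) % m

rng = np.random.default_rng(7)
A = (np.arange(36).reshape(6, 6) * 11) % 256
Bs = share_progressive(A, 4, 16, rng)
assert np.array_equal(recover_progressive(Bs, 16), A)          # k = n: lossless
assert np.abs(recover_progressive(Bs[:2], 16) - A).max() < 16  # k = 2: error < m
```

Any k shadows reconstruct every pixel to within m − 1 gray levels, and all n shadows reconstruct A exactly, which is the progressive behavior established in the lemmas that follow.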

Lemma 1. In Eq. (9), any one of Bi(1), Bi(2), ..., Bi(k) can be used as Bi(j).

Proof. Equation (8) implies that

Bi − B′i = A − A′ for all i = 1, ..., n,  (10)

so we have (Bi(j) − B′i(j)) Mod m = (A − A′) Mod m for all 1 ≤ i(j) ≤ n and 1 ≤ j ≤ k. However, (A − A′) Mod m = 0 because A′ = (A) Mod m by Eq. (7). Hence, (Bi(j) − B′i(j)) Mod m = 0. So

(Bi(j)) Mod m = (B′i(j)) Mod m = B′i(j),  (11)

where the last identity is due to the fact that B′i(j) < (A′ + 1) by step 4 earlier, and the range of A′ is {0, ..., m − 1} by Eq. (7). We may thus say that

Bi(j) − (Bi(j)) Mod m = Bi(j) − B′i(j) = A − A′ (here, 1 ≤ i(j) ≤ n for 1 ≤ j ≤ k).  (12)

End of proof.

Lemma 2. In step 4 of the preceding sharing phase,

A′ = B′1 + B′2 + ... + B′n−1 + B′n.  (13)

Proof. Because A′1 = A′ − B′1, we have A′ = B′1 + A′1. Because A′2 = A′1 − B′2, we have A′ = B′1 + A′1 = B′1 + B′2 + A′2. Because A′3 = A′2 − B′3, we have A′ = B′1 + B′2 + B′3 + A′3. ... Because A′n−1 = A′n−2 − B′n−1, we have A′ = B′1 + B′2 + ... + B′n−1 + A′n−1. Last, because B′n = A′n−1, we have A′ = B′1 + B′2 + ... + B′n−1 + B′n.

End of proof.

Lemma 3. When all n shadows are received (i.e., when k = n), A can be recovered losslessly by Eq. (9). In other words,

A = [Bi − (Bi) Mod m] + [(B1 + B2 + ... + Bn) Mod m].  (14)

(Again, it does not matter which one of {B1, B2, ..., Bn} is used as Bi.)

Proof. Here, we show why the recovered image Ã becomes the original image A when k = n. Since k = n, Eqs. (9), (12), and (13) imply that

Ã = [Bi(j) − (Bi(j)) Mod m] + [(Bi(1) + Bi(2) + ... + Bi(n)) Mod m]
  = [Bi − (Bi) Mod m] + [(B1 + B2 + ... + Bn) Mod m]
  = [A − A′] + [((B1) Mod m + (B2) Mod m + ... + (Bn) Mod m) Mod m]
  = [A − A′] + [(B′1 + B′2 + ... + B′n) Mod m]
  = [A − A′] + [A′] = A.

End of proof.


Step 4 implies that each pixel of B′1, B′2, ..., B′n is nonnegative because each pixel is created by a modulus function. Moreover, in step 4, the pixel values of A′ are distributed randomly among B′1, B′2, ..., B′n, and Eq. (13) reads

B′1 + B′2 + ... + B′n−1 + B′n = A′,

in which all pixel values are nonnegative. So to estimate the image quality (PSNR) of shadows B1, B2, ..., Bn, we may start from the rough estimation

B′i ≈ A′/n.  (15)

Now, the root-mean-square error (RMSE) for each Bi (1 ≤ i ≤ n), as compared with the input image A, is defined as

RMSE(Bi) = {Σ_all t [A(t) − Bi(t)]^2 / Count(t)}^{1/2}.  (16)

Here, A(t) is a pixel value in A, and Bi(t) is in Bi. By Eq. (8), RMSE(Bi) is evaluated as

{Σ_all t {A(t) − [A(t) − A′(t) + B′i(t)]}^2 / Count(t)}^{1/2},

which can be reduced to

{Σ_all t [A′(t) × (n − 1)/n]^2 / Count(t)}^{1/2}

by Eq. (15). Because (n − 1)/n is a given constant due to the known value of n, the preceding rough estimation of RMSE(Bi) can be rewritten as

{Σ_all t A′(t)^2 / Count(t)}^{1/2} × (n − 1)/n.

Although the actual value of {Σ_all t A′(t)^2 / Count(t)}^{1/2} depends on the histogram of the image A′, we may roughly estimate {Σ_all t A′(t)^2 / Count(t)}^{1/2} as

{[∫_0^{m−1} t^2 dt] / (m − 1)}^{1/2} = (m − 1)/1.73,  (17)

which is the probabilistic average value considering the fact that A′(t) ∈ {0, 1, ..., (m − 1)}. Therefore, we have

RMSE(Bi) ≈ (m − 1) × (n − 1) / (1.73 × n).  (18)
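For completeness, the constant 1.73 here is an approximation of √3: treating A′(t) as uniformly distributed over [0, m − 1],

```latex
\left[\frac{\int_{0}^{m-1} t^{2}\,dt}{m-1}\right]^{1/2}
  = \left[\frac{(m-1)^{3}}{3\,(m-1)}\right]^{1/2}
  = \frac{m-1}{\sqrt{3}} \approx \frac{m-1}{1.73}.
```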

Then, we can get the rough estimation

PSNR(Bi) = 10 × log10 {255^2 / [RMSE(Bi)]^2} ≈ 10 × log10 {255^2 / [(m − 1) × (n − 1) / (1.73 × n)]^2}.  (19)
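Equation (19) is easy to evaluate directly. A small helper (our naming) reproduces the estimate column of Table 2:

```python
import math

def psnr_estimate(m, n):
    """Rough shadow quality from Eq. (19)."""
    rmse = (m - 1) * (n - 1) / (1.73 * n)    # Eq. (18)
    return 10 * math.log10(255**2 / rmse**2)

# For n = 4: m = 256 -> 7.26 dB, m = 64 -> 19.40 dB, m = 16 -> 31.87 dB,
# matching the "PSNR(Bi) in Eq. (19)" column of Table 2.
```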

Some experimental results of PSNR(Bi) are shown in Table 2, which uses the five images in Fig. 2(a) and Fig. 3. From this table, we can see that the experimental values of PSNR are close to the estimation given by Eq. (19).

Table 2 The PSNR of shadows when n = 4 shadows were generated for each image.

m's value | PSNR(Bi) in Eq. (19) | Lena's shadows | Jet's shadows | Monkey's shadows | Pepper's shadows | Boat's shadows
m = 256   | 7.26                 | 7.37           | 7.40          | 7.16             | 7.45             | 7.34
m = 128   | 13.31                | 13.41          | 13.74         | 13.12            | 13.09            | 13.50
m = 64    | 19.40                | 19.02          | 20.09         | 18.46            | 18.75            | 19.89
m = 32    | 25.56                | 25.45          | 25.13         | 25.36            | 25.46            | 25.85
m = 16    | 31.87                | 31.21          | 31.21         | 31.15            | 31.11            | 31.97

Fig. 3 The other four images {Jet, Monkey, Pepper, Boat} used in Table 2.

In our experiments, for the same value k, all reconstructed images have similar PSNR values. For example, in each of the three experimental results of Figs. 4–6, the four images respectively reconstructed by shadows {B1, B2, B3} (or by {B1, B2, B4}, or by {B1, B3, B4}, or by {B2, B3, B4}) all have very similar PSNR values. Likewise, the six images reconstructed by any two shadows of {B1, B2, B3, B4} also have similar PSNR values. In the recovering phase, when more shadows are gathered (k becomes larger), the reconstructed image then has higher image quality. In particular, when all n shadows are gathered, then k = n, and the reconstructed image A is error-free due to Lemma 3. In summary, the proposed version has a progressive decoding feature, and it uses only one subtraction, two modulus operations, and k additions to reconstruct a gray value from pixels of k available shadows.

The +, −, and Mod in this section are all byte-by-byte operations among gray values. Hence, if the input image is color (24 bits per pixel), then A must first be decomposed into three components (AR, AG, and AB) of 8 bits each. Then the preceding sharing process is implemented for each component to generate n shadows. Then, for each index i = 1, ..., n, the three corresponding shadows BiR, BiG, and BiB are combined to get the final shadow Bi.

3.4 Comparison with Fang’s Method and a Stego Version of Our Method

3.4.1 Comparison with Fang’s method

Compared to Fang's method7 reviewed in Sec. 2, which is also user-friendly and progressive, our method in Sec. 3.3 has two more advantages:

• The size of each of our shadows B1, B2, ..., Bn is the same as A (not expanded).

• Our shadows' image quality PSNR(Bi) can be roughly controlled by the base parameter m of the modulus operations (m is an integer with 2 ≤ m ≤ 256). Just estimate m by

m ≈ 441 × n / [(10^{PSNR(Bi)/10})^{1/2} × (n − 1)] + 1.  (20)

Equation (20) is derived from Eq. (19), an estimation tool whose validity is checked in Table 2.
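A one-line helper (our naming) shows how Eq. (20) would be used to pick m for a target shadow quality:

```python
import math

def base_for_psnr(target_db, n):
    """Eq. (20): pick the modulus base m for a desired PSNR(Bi)."""
    m = 441 * n / (math.sqrt(10 ** (target_db / 10)) * (n - 1)) + 1
    return max(2, min(256, round(m)))    # m must stay an integer in [2, 256]
```

For n = 4, asking for roughly 25.5 dB shadows yields m ≈ 32, consistent with the m = 32 row of Table 2.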

3.4.2 Stego version of our method

In Fang's method,7 each shadow is hidden using a cover image T (also known as a host image) so that all shadows (called stego-shadows) look like T. Our method in Sec. 3.3 can also be modified to have a stego version by using stego-shadows smaller in size than Fang's. Our stego version is as follows.

Sharing phase.

Step 1. Input an integer parameter m (2 ≤ m ≤ 64 in the stego version, but 16 ≤ m ≤ 64 is suggested to avoid a large per); input an 8-bit grayscale cover image T whose size (w × h) is also the size of the 8-bit grayscale secret image A.

Fig. 4 An example of the (n=4) case using m = 256 in the non-stego version (Sec. 3.3). Here, (a) to (d) are the final shadows B1, B2, B3, B4 [RMSE = 109.13 and PSNR = 7.37 for (a) to (d)]; (e) to (g) are the recovered Lena images [RMSE = 80.22 and PSNR = 10.04 for (e); RMSE = 49.75 and PSNR = 14.20 for (f); lossless for (g)] using (respectively) any two, any three, and all four final shadows.

Fig. 5 An example of the (n=4) case using m = 64 in the non-stego version (Sec. 3.3). Here, (a) to (d) are the final shadows B1, B2, B3, B4 [RMSE = 28.54 and PSNR = 19.02 for (a) to (d)]; (e) to (g) are the recovered Lena images [RMSE = 21.07 and PSNR = 21.66 for (e); RMSE = 13.10 and PSNR = 25.79 for (f); lossless for (g)] using (respectively) any two, any three, and all four final shadows.

Fig. 6 An example of the (n=4) case using m = 16 in the non-stego version (Sec. 3.3). Here, (a) to (d) are the final shadows B1, B2, B3, B4 [RMSE = 7.01 and PSNR = 31.21 for (a) to (d)]; (e) to (g) are the recovered Lena images [RMSE = 5.21 and PSNR = 33.79 for (e); RMSE = 3.28 and PSNR = 37.81 for (f); lossless for (g)] using (respectively) any two, any three, and all four final shadows.


Step 2. Let sz = 8/log2 m. Use pixel duplication to expand T to a larger image T′ whose size is

(√sz × w) × (√sz × h).  (21)

Step 3. Generate n − 1 random images R1, R2, ..., Rn−1. (Each image Ri is as large as A, and each pixel of Ri is 8-bit.)

Step 4. Create n images B′1, B′2, ..., B′n according to step 4 of the sharing phase in Sec. 3.3, except that here we use A to replace the role of A′ in all formulas there.

Step 5. Use a random key r to create an order to permute all pixels in B′1. Each of the remaining n − 1 images B′2, ..., B′n is also permuted using the random key r. Then use Shamir's (2,n)-threshold sharing method1 to share the key r among n created numbers r1, r2, ..., rn. Then, store ri in B′i for each i = 1, ..., n.

Step 6. Treat each grayscale image B′i (1 ≤ i ≤ n) as a bit stream (i.e., a very big binary integer); then partition each B′i into (√sz × w) × (√sz × h) smaller-range numbers B′i(t). [Here, 0 ≤ B′i(t) < m and 0 ≤ t < (sz × w × h).] Then hide each number B′i(t) in T′(t) to get a pixel value Bi(t) by the formula

Bi(t) = round{[T′(t) − B′i(t)]/m} × m + B′i(t),  (22)

where the round operator rounds its argument to the nearest integer. Add (subtract) m to (from) the result of Eq. (22) if Bi(t) < 0 or > 255.

Step 7. Output n stego-shadows B1, B2, ..., Bn whose sizes are all identical to T′.

Recovering phase.

Step 1. After gathering any k (2 ≤ k ≤ n) shadows {Bi(1), Bi(2), ..., Bi(k)} ⊂ {B1, B2, ..., Bn}, where 1 ≤ i(j) ≤ n for each j = 1, ..., k, retrieve all (sz × w × h) smaller-range numbers B′i(j)(t) in each stego-image Bi(j) by the dehiding formula

B′i(j)(t) = [Bi(j)(t)] Mod m for t = 0, ..., (sz × w × h) − 1.  (23)

Step 2. Combine the (sz × w × h) smaller-range numbers B′i(j)(t) to retrieve each B′i(j) as an 8-bit grayscale image of w × h pixels.

Step 3. Recover the random key r by inverse sharing. Then use the key r to restore the original pixels' order in images B′i(1), B′i(2), ..., B′i(k).

Step 4. Last, retrieve Ã in a pixel-by-pixel manner by the formula

Ã = B′i(1) + B′i(2) + ... + B′i(k).  (24)

In our preceding stego version, the final stego-shadows B1, B2, ..., Bn are sz = 8/log2 m times larger than the input secret image A. So the pixel expansion rate is per = 8/log2 m ≤ 8/4 = 2 if we set the parameter m ≥ 16. An example using m = 32 is shown in Fig. 7, where the Jet images are stego-shadows utilized to cover (and progressively recover) the important image Lena. In this example (m = 32), our stego version's pixel expansion rate is per = 8/5 = 1.6, better than Fang's per = 4 [shown in Fig. 8(c)]. Moreover, our shadows' image quality is also better than Fang's. For example, as shown in Figs. 7(a)–7(d) or Fig. 8(a), our Jet shadows have image quality of PSNR = 26.66 dB. [PSNR would be 31.26 dB, as shown in Fig. 8(b), if we used m = 16 to get shadows whose sizes are all two times larger than the original Jet image.] On the contrary, after implementing Fang's method in each bit-plane of the same grayscale important image A (Lena) and the same cover image T (Jet), each of Fang's n = 4 quadruple-size stego-shadows has PSNR = 10.02 dB only [see Fig. 8(c)]. Our stego version is still progressive in decoding, lossless when all n shadows are collected, and has small decoding complexity O(k) when k of the n shadows are used in decoding.

Fig. 7 An example of the (n=4) case using m = 32 in the stego version (Sec. 3.4.2). Here, (a) to (d) are the final stego-shadows B1, B2, B3, and B4; (e) to (g) are the progressively recovered Lena images using, respectively, "any" two, "any" three, and all four final shadows. PSNR = 26.66 for (a) to (d); PSNR = 10.04 for (e); PSNR = 14.21 for (f); and (g) is lossless.

Fig. 8 Comparing the stego-shadows in two stego methods for the (n=4) case. The hidden image is Lena [Fig. 2(a)], and the host image is Jet [Fig. 3(a)]. Here, (a) is one of the four stego-shadows with PSNR = 26.66 dB in our stego version (when m = 32); (b) is one of the four stego-shadows with PSNR = 31.26 dB in our stego version (when m = 16); (c) is one of the four stego-shadows with PSNR = 10.02 in Fang's method (Sec. 2). Note that our stego size is only 1.6 times [in (a)] or 2 times [in (b)] larger than the original Jet image's size, whereas Fang's stego size is 4 times larger than the original Jet.
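The hiding and dehiding formulas, Eqs. (22) and (23), can be sketched as follows (NumPy, vectorized over all positions t; function names are ours):

```python
import numpy as np

def hide(Tp, bp, m):
    """Eq. (22): embed numbers 0 <= b' < m into cover pixels T' (int arrays)."""
    s = np.round((Tp - bp) / m).astype(np.int64) * m + bp
    s = np.where(s < 0, s + m, s)        # the +/- m adjustment described in the text
    s = np.where(s > 255, s - m, s)
    return s

def dehide(stego, m):
    """Eq. (23): the hidden number is simply the stego pixel mod m."""
    return stego % m

Tp = np.array([0, 10, 127, 200, 255])
bp = np.array([5, 0, 31, 17, 1])
stego = hide(Tp, bp, 32)
assert np.array_equal(dehide(stego, 32), bp)     # hidden data survives exactly
assert stego.min() >= 0 and stego.max() <= 255   # valid 8-bit pixels
```

Because each stego pixel stays close to its cover pixel (within about m/2 before the boundary adjustment), a smaller m yields a higher stego-shadow PSNR, consistent with the 26.66 dB (m = 32) versus 31.26 dB (m = 16) figures reported above.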

4 Experimental Results and Some Comparisons

4.1 Experimental Results

In the proposed method in Sec. 3.3, the input image A is the grayscale image Lena in Fig. 2(a). Figure 4 shows the experimental result for the (n=4) case when m = 256. The image A can be roughly seen in any of the four generated user-friendly shadows shown in Figs. 4(a)–4(d). In Figs. 4(e)–4(g), when more shadows are available in retrieval, the recovered image has better quality.

Other experiments using m = 64 and m = 16 for the (n=4) case are shown in Figs. 5 and 6, respectively. The shadows in Fig. 6 have higher PSNR than those in Figs. 4 and 5 due to the use of a smaller m value. This is according to Eq. (19), where we have

PSNR(Bi) ≈ 10 × log10 {(441 × n)^2 / [(m − 1) × (n − 1)]^2} = 10 × log10 {(441 × 4)^2 / [(m − 1) × 3]^2}

because n = 4. Notably, when m = 256, 64, and 16, respectively, the PSNR(Bi) values estimated by Eq. (19) are 7.26 dB, 19.40 dB, and 31.87 dB. These are all very close to the actual PSNR values of the shadows (7.37 dB, 19.02 dB, and 31.21 dB, respectively) shown in Figs. 4–6.

Table 3 Comparisons with reported image-sharing methods (Refs. 3–8).

Methods | Computational complexity^a | Memory size^b for each shadow | Recovered quality^c
Wang and Su (Ref. 3) | O(log2 k) (math operations) | per ≈ (1/k) × 60% ≥ (1/n) × 60% | Lossless
Wang et al. (Ref. 4) | k − 1 (XOR operations) | per = 1 | Lossless
Lin and Tsai (Ref. 5) | O(k × per) (OR-like operations) | per ≥ 2 | Lossless
Thien and Lin (Ref. 6) (visually recognizable shadows) | O(log2 k) (math operations) | per ≈ 1/k ≥ 1/n | Lena's PSNR = 37.98; Jet's PSNR = 39.93; Monkey's PSNR = 35.33
Fang (Ref. 7) (visually recognizable shadows and progressive) | 4 × (k − 1) (OR-like operations) | per = 4 | Lossless
Jin et al. (Ref. 8) (progressive) | 4 × (k − 1) (XOR operations) | per = 4 | Lossless
Section 3.3 (visually recognizable shadows and progressive) | k additions; 2 Mod operations; 1 subtraction | per = 1 | Lossless
Our stego version, Sec. 3.4.2 (visually recognizable shadows and progressive) | (k − 1) additions; 2 Mod operations; 1 attaching of a short binary number to the other to get an 8-bit number | 1.33 ≤ per = 8/log2 m ≤ 2 when 64 ≥ m ≥ 16 | Lossless

^a Operations needed to recover one secret pixel by k shadows in a (k,n) system.
^b The pixel expansion rate (per) of each shadow as compared to the input secret image.
^c The secret image recovered by all shadows.

4.2 Comparisons

Our method provides at least two convenient features: it is user-friendly and progressive. We can compare our method with other image-sharing research.3–8 Table 3 compares three aspects: computational complexity to reconstruct a pixel; memory space of a shadow (represented by the pixel expansion rate [per], as compared to the size of the input image); and image quality of the image recovered by all n shadows. Table 3 shows that in our method: (1) each pixel can be reconstructed by k shadows using about k operations; (2) the size of each shadow is not expanded (per = 1) for the non-stego version; and (3) the recovery by all n shadows is lossless. Although our per or computational complexity is in the middle rank rather than the best, note that in Table 3, only Refs. 6 and 7 and ours are user-friendly (provide visually recognizable shadows). Among these three user-friendly approaches, Ref. 7 is four times expanded in shadow size, whereas Ref. 6 is neither progressive nor lossless in recovery. As for Refs. 3–5, they are neither progressive nor user-friendly.

To compare with Fang's further, we provide the stego version in Sec. 3.4.2, in which the pixel expansion rate (per) is 1.33 ≤ per = 8/log2 m ≤ 2 when 64 ≥ m ≥ 16. For example, per = 1.6 when m = 32. These per values are still better than Fang's per = 4. (Hence, regardless of whether the stego version is used, our per is better than Fang's.) Moreover, our stego-shadow's image quality is also better than Fang's. (See Fig. 8; our Jet stego-shadows have PSNR = 26.66 dB for m = 32 and 31.26 dB for m = 16, both better than Fang's 10.02 dB.)

5 Conclusion

In this paper, based on modulus operations, we successfully designed a novel image-sharing method with user-friendly shadows and progressive decoding. According to the experimental results and comparisons in Sec. 4, in addition to being user-friendly and progressive, each pixel is reconstructed by k shadows quickly with about k operations, and the recovery is lossless after collecting all n shadows. The proposed method also provides the following features: the non-stego shadows' image quality can be controlled by the parameter value m using Eq. (20); each shadow is not expanded in the non-stego version (Sec. 3.3) and is only 1.33 ≤ per = 8/log2 m ≤ 1.6 times larger than the original secret image if we restrict 64 ≥ m ≥ 32 in the stego version (Sec. 3.4.2); and the stego-shadows have quality much better than Fang's shadows (Fig. 8).

Acknowledgment

This work is supported by the National Science Council, Taiwan, R.O.C., under Grant NSC 97-2221-E-009-120-MY3.

References

1. A. Shamir, "How to share a secret," Commun. ACM 22(11), 612–613 (1979).
2. M. Naor and A. Shamir, "Visual cryptography," in Advances in Cryptology: EUROCRYPT '94, A. De Santis, Ed., Lect. Notes Comput. Sci. 950, 1–12 (1995).
3. R. Z. Wang and C. H. Su, "Secret image sharing with smaller shadow images," Pattern Recogn. Lett. 27, 551–555 (2006).
4. D. Wang, L. Zhang, N. Ma, and X. Li, "Two secret sharing schemes based on Boolean operations," Pattern Recogn. 40, 2776–2785 (2007).
5. C. C. Lin and W. H. Tsai, "Visual cryptography for gray-level images by dithering techniques," Pattern Recogn. Lett. 24, 349–358 (2003).
6. C. C. Thien and J. C. Lin, "An image-sharing method with user-friendly shadow images," IEEE Trans. Circuits Syst. Video Technol. 13(12), 1161–1169 (2003).
7. W. P. Fang, "Friendly progressive visual secret sharing," Pattern Recogn. 41, 1410–1414 (2008).
8. D. Jin, W. Q. Yan, and M. S. Kankanhalli, "Progressive color visual cryptography," J. Electron. Imaging 14(3), 033019 (2005).

Kun-Yuan Chao received his BS degree in computer and information science in 1996 from National Chiao Tung University, Taiwan. He received his MS degree in computer science and information engineering in 1999 from National Taiwan University, Taiwan. He is currently a PhD candidate in computer and information science at National Chiao Tung University, Taiwan. His recent research interests include secret image sharing, visual cryptography, and image processing.

Ja-Chen Lin received his BS degree in computer science and his MS degree in applied mathematics, both from National Chiao Tung University, Taiwan. He received his PhD degree in mathematics from Purdue University, West Lafayette, Indiana. He joined the Department of Computer and Information Science at National Chiao Tung University in 1988, and then became a professor there. His research interests include pattern recognition and image processing.

Fig. 1 The sharing flowchart of Fang’s method.