

CHAPTER 4 ANISOMETRIC SYNTHESIS PROCESS

4.2 Anisometric Solid Texture Synthesis

4.2.1 Pyramid Upsampling

The goal of the upsampling step in anisometric synthesis is the same as in isometric synthesis: it lets us synthesize from the coarse level to the fine level, and we upsample the coordinate values of the parent voxels for the next level.

The difference is that the child-dependent offset in the anisometric upsampling step depends on the anisometric field A. We use the anisometric field to compute the distance for the spacing:

$S_l[2p + \Delta] := S_{l-1}[p] + h_l\,A\,\Delta$,

where $h_l$ denotes the regular output spacing of exemplar coordinates, $\Delta \in \{0,1\}^3$ denotes the relative locations of the 8 children, and $A$ is the anisometric field evaluated at the parent voxel.
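As an illustration of this rule, the following minimal Python/NumPy sketch upsamples one pyramid level of synthesized exemplar coordinates (our implementation is in MATLAB; the function name, the array layout, and the per-parent 3×3 matrix encoding of A are assumptions made for this sketch). An identity field A reduces it to the isometric upsampling rule.

```python
import numpy as np

def upsample_anisometric(S_parent, A, h_l):
    """One pyramid upsampling step: S_l[2p + delta] = S_{l-1}[p] + h_l * A * delta.

    S_parent : (n, n, n, 3) exemplar coordinates at the parent level l-1.
    A        : (n, n, n, 3, 3) anisometric field sampled at the parent voxels
               (identity matrices reproduce plain isometric upsampling).
    h_l      : regular output spacing of exemplar coordinates at level l.
    """
    n = S_parent.shape[0]
    S_child = np.empty((2 * n, 2 * n, 2 * n, 3))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                delta = np.array([dx, dy, dz], dtype=float)   # one of the 8 children
                # child-dependent offset warped by the anisometric field
                offset = h_l * np.einsum('...ij,j->...i', A, delta)
                S_child[dx::2, dy::2, dz::2] = S_parent + offset
    return S_child
```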

4.2.2 Voxel Correction

The goal of the correction step is to make the synthesized coordinates similar to those in the exemplar V. For every voxel p, we collect the feature values of the warped neighbors, using the anisometric field and the inverse anisometric field, to obtain the warped neighborhood vector Ñ_{S_l}(p), and then search for the most similar voxel in the transformed exemplar V′.

The method presented by Lefebvre and Hoppe [12] for anisometric synthesis can reproduce arbitrary affine deformations, including shears and non-uniform scales. They access only the immediate neighbors of a pixel p to construct the neighborhood vector, use the Jacobian field and the inverse Jacobian field to infer which pixel neighbors to access, and transform the result by the inverse Jacobian field at the current point.

We apply this scheme to 3D space. First, we construct the warped neighborhood vector Ñ_{S_l}(p) for every voxel p.

Fig. 4.3 shows the 8 warped neighbors of every voxel p. Their locations are obtained by warping the diagonal locations with the inverse anisometric field A_l^{-1}.

Figure 4.3 Eight warped neighbors for Ñ_{S_l}(p)

Second, we find the 4 synthesized voxels near each warped neighbor of voxel p. We use the inverse anisometric field to infer these 4 synthesized voxels, and compute their averaged feature value as the new feature value at the warped neighbor location p + φ̃Δ (where φ̃Δ denotes the offset Δ warped by the inverse anisometric field). Fig. 4.4 shows the locations of the 4 warped synthesized voxels for each warped neighbor.

Figure 4.4 Four warped sub-neighbors for each warped neighbor of voxel p
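The neighborhood gathering described above can be sketched as follows (Python/NumPy, purely illustrative: the feature volume F, the choice of the 4 lattice voxels around each warped location, and the boundary clamping are assumptions, not our exact MATLAB code).

```python
import numpy as np
from itertools import product

def build_warped_neighborhood(F, A_inv, p):
    """Gather the warped neighborhood vector N~(p) for one output voxel p.

    F     : (n, n, n, d) feature values of the current synthesized volume.
    A_inv : (3, 3) inverse anisometric field at voxel p.
    p     : integer voxel coordinate, e.g. np.array([x, y, z]).
    """
    n = F.shape[0]
    parts = []
    for delta in product((-1, 1), repeat=3):               # 8 diagonal neighbors
        warped = p + A_inv @ np.array(delta, dtype=float)   # warped neighbor location
        base = np.floor(warped).astype(int)
        # average the features of 4 synthesized voxels around the warped location
        # (here: the 4 lattice voxels sharing the floored x coordinate)
        feats = []
        for dy, dz in product((0, 1), repeat=2):
            q = np.clip(base + np.array([0, dy, dz]), 0, n - 1)
            feats.append(F[q[0], q[1], q[2]])
        parts.append(np.mean(feats, axis=0))                # averaged feature value
    return np.concatenate(parts)                            # the vector N~(p)
```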

We search for the voxel u′ that is most similar to voxel p by comparing the neighborhood vectors Ñ_{S_l}(p) and Ñ_{S_l}(u′). We utilize the 8 warped voxels near voxel p and the anisometric fields to infer where voxel u′ is.

For example, for each warped neighbor voxel i′ (i′ = 1~8), we can get its 3 most similar voxels (i′_1, i′_2, i′_3) from the similarity set, and then use the warped relationship given by the anisometric field A between voxel i′ and voxel p to infer the candidate voxels (i′_{1,p}, i′_{2,p}, i′_{3,p}) for voxel p, as shown in Fig. 4.5. With these candidates for voxel p, we can compute the warped neighborhood vectors Ñ_{S_l}(i′_{1,p}), Ñ_{S_l}(i′_{2,p}), and Ñ_{S_l}(i′_{3,p}) with the inverse anisometric field A_l^{-1}.

Figure 4.5 Process for inferring warped candidates for voxel p: using the three most similar voxels of a warped neighbor voxel i′ to infer candidates for voxel p

In the same way, we infer 24 candidates for voxel p in total and compute the 24 warped neighborhood vectors Ñ_{S_l}(u′), where u′ is a candidate. By comparing these Ñ_{S_l}(u′) with Ñ_{S_l}(p) for neighborhood matching, we find the most similar voxel u′ to replace voxel p.
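Putting the candidate inference together, a simplified sketch of the per-voxel correction looks like this (Python; the data structures, the way the warped offset is subtracted to map a neighbor's candidate back to a candidate for p, and the callable exemplar_nbhd are illustrative assumptions rather than our exact procedure).

```python
import numpy as np

def best_candidate(p, S, sim_set, exemplar_nbhd, warped_offsets, N_p):
    """Choose the replacement exemplar voxel u' for output voxel p.

    S              : (n, n, n, 3) synthesized exemplar coordinates (integers).
    sim_set        : dict mapping an exemplar voxel (x, y, z) to its 3 most
                     similar exemplar voxels (the similarity set).
    exemplar_nbhd  : callable u -> warped neighborhood vector N~(u) in the
                     transformed exemplar.
    warped_offsets : (8, 3) integer offsets A^{-1} * delta of the warped neighbors.
    N_p            : warped neighborhood vector N~(p) of the output voxel.
    """
    n = S.shape[0]
    best_u, best_dist = None, np.inf
    for off in warped_offsets:                             # the 8 warped neighbors i'
        q = np.clip(np.asarray(p) + off, 0, n - 1)
        u_neighbor = tuple(S[q[0], q[1], q[2]])            # exemplar voxel stored there
        for cand in sim_set[u_neighbor]:                   # its 3 most similar voxels
            # map the neighbor's candidate back to a candidate for p by undoing
            # the warped offset (8 x 3 = 24 candidates in total)
            u = tuple(np.asarray(cand) - np.asarray(off))
            dist = np.sum((exemplar_nbhd(u) - N_p) ** 2)   # neighborhood matching
            if dist < best_dist:
                best_u, best_dist = u, dist
    return best_u                                          # most similar voxel u'
```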


Chapter 5

       

Implementation and Results

We implemented our system on a PC with 2.67 GHz and 2.66 GHz Core 2 Quad CPUs and 4.0 GB of system memory. Our method is implemented in MATLAB.

For a 32×32×32 volume data V, it takes about 3 minutes to construct the transformed exemplar V′ from feature vectors and about 2 hours to construct the similarity set. For a 64×64×64 volume data V, it takes about 90~120 minutes for the transformed exemplar V′ and about 85~95 hours for the similarity set. The transformed exemplar V′ built from feature vectors and the similarity set can be reused across synthesis runs: once the feature vectors and similarity sets are constructed, we can use them for other syntheses with different target result sizes and different vector fields.

Synthesizing a 64×64×64 result takes about 6 hours, and a 128×128×128 result takes about 7~10 hours. In this chapter we show results with 64×64×64 input volume data and 128×128×128 result data. The detailed computation times for different textures are shown in Table 5.1. Section 5.1 shows some isometric synthesis results, and Section 5.2 shows anisometric results under different vector field controls.

We use a 5×5×5 grid for the feature vector at each voxel and a 7×7×7 exclusion grid for the similarity set, meaning the voxels inside this region cannot be candidates for the center voxel; the jitter parameter is set to 0.7.
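For reference, a brute-force construction of such a similarity set can be sketched as follows (Python/NumPy; the function name and exact distance measure are assumptions). It also makes clear why this step dominates the preprocessing time: every voxel's feature vector is compared against every other voxel's.

```python
import numpy as np

def build_similarity_set(features, k=3, exclude=7):
    """Brute-force similarity set: for every exemplar voxel, store the k voxels
    with the most similar feature vectors, excluding an exclude^3 window
    around the voxel itself (Chebyshev distance).

    features : (n, n, n, d) float feature vectors, e.g. from 5x5x5 neighborhoods.
    Returns a dict mapping (x, y, z) -> list of k most similar voxel coordinates.
    """
    n = features.shape[0]
    flat = features.reshape(-1, features.shape[-1]).astype(float)
    coords = np.stack(np.meshgrid(np.arange(n), np.arange(n), np.arange(n),
                                  indexing='ij'), axis=-1).reshape(-1, 3)
    half = exclude // 2
    sim_set = {}
    for c, f in zip(coords, flat):
        dists = np.sum((flat - f) ** 2, axis=1)
        # forbid candidates inside the exclusion window around the centre voxel
        dists[np.max(np.abs(coords - c), axis=1) <= half] = np.inf
        best = np.argsort(dists)[:k]
        sim_set[tuple(c)] = [tuple(coords[j]) for j in best]
    return sim_set
```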

5.1 Isometric Results

The input data in Fig. 5.1(b) (case_1) is a stochastic, marble-like texture. It contains only two colors and is vivid. It is information-rich, so only a small amount of data is needed to represent the whole texture; this means we can synthesize larger results (more than twice the input data size) from this kind of texture. Fig. 5.1(c)~(f) show the result. As we can see, the result is continuous and is not a duplication of the input data.

The input data in Fig. 5.2(b) (case_2) is a particle-like texture. It contains few colors, and the particles contrast strongly with the background. The particles in case_2 are all of the same kind. As long as there are a few complete particle patterns in the input data, we can synthesize a good result, as shown in Fig. 5.2(c)~(f). Because a few particle patterns can represent the whole texture, we can synthesize it from 32×32×32 to 128×128×128, and even from 64×64×64 to 256×256×256 volume data (a result size four times the input size).

The input data in Fig. 5.3(b) (case_3) is another type of particle texture. It contains particles of different sizes and colors, and most particles are nearly the same color as the background. Fig. 5.3(c)~(f) show the result synthesized from a 64×64×64 input to a 128×128×128 volume. Because the input data is not information-rich, the distribution of particles in the result is sparse, unlike that in the input data; the input data is simply not large enough to contain sufficient information for synthesis.

The input data in Fig. 5.4(b) (case_4) depicts sea water. It is a homogeneous texture because almost the whole volume has the same color. The main feature in the input data is the highlight area. As the result in Fig. 5.4(c)~(f) shows, there are only a few highlight areas in the result volume data.

The input data in Fig. 5.5(b) (case_5) is a structural texture. The patterns in the input data are small and compact, so the texture is information-rich: even a small input volume contains enough patterns for synthesis, and good results can be obtained from a small amount of input data. The result is shown in Fig. 5.5(c)~(f).

The input data in Fig. 5.6(b) (case_6) is structural and continuous along two directions and broken along the other direction. It consists of thin black and white strokes. The result in Fig. 5.6(c)~(f) is good along the two continuous directions, where it is continuous without duplication, but broken along the other direction because the input data is information-poor there.

The input data in Fig. 5.7(b) (case_7) is structural with larger patterns. The features in the input data are too large, so the input data cannot represent the whole volume when the input size is small. As we can see in Fig. 5.7(c)~(f), the information in the 64×64×64 input data is regular rather than varied, so the result is not as we expect.

Table 5.1 Computation time for different textures

Case     Feature Vector Construction   Similarity Set Construction   Synthesis Process
Case_1   47.8 seconds                  85 hours 37 minutes           7 hours 55 minutes
Case_2   31.1 seconds                  85 hours 28 minutes           8 hours 17 minutes
Case_3   35.4 seconds                  86 hours 5 minutes            8 hours 22 minutes
Case_4   48.6 seconds                  84 hours 42 minutes           8 hours 5 minutes
Case_5   34.3 seconds                  83 hours 46 minutes           7 hours 28 minutes
Case_6   38.2 seconds                  86 hours 21 minutes           8 hours 41 minutes
Case_7   35.7 seconds                  86 hours 51 minutes           8 hours 43 minutes
Case_8   37.3 seconds                  93 hours 30 minutes           6 hours 52 minutes
Case_9   34.9 seconds                  85 hours 4 minutes            8 hours 43 minutes

Figure 5.1 Input and result data for case_1
(a) cross sections at X=32, Y=32, and Z=32 for input data
(b) input volume data for case_1
(c) cross section at X=126, Y=126, and Z=126 for result data
(d) cross section at X=80, Y=80, and Z=80 for result data
(e) cross section at X=64, Y=64, and Z=64 for result data
(f) result volume data for case_1

Figure 5.2 Input and result data for case_2
(a) cross sections at X=32, Y=32, and Z=32 for input data
(b) input volume data for case_2
(c) cross section at X=126, Y=126, and Z=126 for result data
(d) cross section at X=80, Y=80, and Z=80 for result data
(e) cross section at X=64, Y=64, and Z=64 for result data
(f) result volume data for case_2

Figure 5.3 Input and result data for case_3
(a) cross sections at X=32, Y=32, and Z=32 for input data
(b) input volume data for case_3
(c) cross section at X=126, Y=126, and Z=126 for result data
(d) cross section at X=80, Y=80, and Z=80 for result data
(e) cross section at X=64, Y=64, and Z=64 for result data
(f) result volume data for case_3

Figure 5.4 Input and result data for case_4
(a) cross sections at X=32, Y=32, and Z=32 for input data
(b) input volume data for case_4
(c) cross section at X=126, Y=126, and Z=126 for result data
(d) cross section at X=80, Y=80, and Z=80 for result data
(e) cross section at X=64, Y=64, and Z=64 for result data
(f) result volume data for case_4

Figure 5.5 Input and result data for case_5
(a) cross sections at X=32, Y=32, and Z=32 for input data
(b) input volume data for case_5
(c) cross section at X=126, Y=126, and Z=126 for result data
(d) cross section at X=80, Y=80, and Z=80 for result data
(e) cross section at X=64, Y=64, and Z=64 for result data
(f) result volume data for case_5

Figure 5.6 Input and result data for case_6
(a) cross sections at X=32, Y=32, and Z=32 for input data
(b) input volume data for case_6
(c) cross section at X=126, Y=126, and Z=126 for result data
(d) cross section at X=80, Y=80, and Z=80 for result data
(e) cross section at X=64, Y=64, and Z=64 for result data
(f) result volume data for case_6

Figure 5.7 Input and result data for case_7
(a) cross sections at X=32, Y=32, and Z=32 for input data
(b) input volume data for case_7
(c) cross section at X=126, Y=126, and Z=126 for result data
(d) cross section at X=80, Y=80, and Z=80 for result data
(e) cross section at X=64, Y=64, and Z=64 for result data
(f) result volume data for case_7

5.2 Anisometric Results

We show anisometric results with different vector field controls: a circular pattern on the XY plane, an oval pattern on the XY plane, a slant pattern on the XY plane, a zigzag pattern on the XY plane, and slant control in 3D space. In order to emphasize the control effect, we show results for structural textures such as brick walls and wood.

The vector field for circular control is shown in Fig. 5.8. The results with circular pattern control on the XY plane are shown in Fig. 5.9 and Fig. 5.10. Fig. 5.9 shows the result for case_5; the result is good because of its small and compact pattern. The input data in Fig. 5.10(b) (case_8) is a wood texture, continuous along two directions and broken along the other direction. We can see the patterns change with the circular control on the XY plane.
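As an example of how such a control field can be built, the sketch below generates a circular control on the XY plane as per-voxel orthonormal frames (Python/NumPy; the frame convention of tangent/radial/Z columns and the 5×5×5 grid size are assumptions chosen to mirror Fig. 5.8, not our exact MATLAB field generator).

```python
import numpy as np

def circular_anisometric_field(n=5):
    """Circular control on the XY plane: at each voxel, three orthogonal axes
    (tangent, radial, Z) form a 3x3 anisometric matrix.

    Returns A of shape (n, n, n, 3, 3); the grid size n=5 mirrors the 5x5x5
    vector fields shown in Fig. 5.8.
    """
    c = (n - 1) / 2.0                                     # rotation centre on XY
    A = np.zeros((n, n, n, 3, 3))
    for x in range(n):
        for y in range(n):
            dx, dy = x - c, y - c
            r = np.hypot(dx, dy)
            if r < 1e-6:
                radial = np.array([1.0, 0.0, 0.0])        # arbitrary at the centre
            else:
                radial = np.array([dx / r, dy / r, 0.0])
            tangent = np.array([-radial[1], radial[0], 0.0])   # circular direction
            up = np.array([0.0, 0.0, 1.0])
            frame = np.stack([tangent, radial, up], axis=1)    # columns = axes
            A[x, y, :, :, :] = frame                       # same frame along Z
    return A
```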

Figure 5.8 5×5×5 3D vector field for circular control
(a) XY plane
(b) three orthogonal axes at every point

Figure 5.9 Anisometric result with circular control for case_5
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with circular control for case_5

Figure 5.10 Input data and anisometric result with circular control for case_8
(a) cross sections at X=32, Y=32, and Z=32 for input data
(b) input volume data for case_8
(c) cross section at X=126, Y=126, and Z=126 for result data
(d) cross section at X=80, Y=80, and Z=80 for result data
(e) cross section at X=64, Y=64, and Z=64 for result data
(f) anisometric result with circular control for case_8

The vector field for oval control is shown in Fig. 5.11. The results with oval pattern control on the XY plane are shown in Fig. 5.12 and Fig. 5.13. Fig. 5.12 shows the result for case_5, which handles this kind of control well. Fig. 5.13 shows the result for case_8, which remains continuous under the oval control.

Figure 5.11 5×5×5 3D vector field for oval control
(a) XY plane
(b) three orthogonal axes at every point

Figure 5.12 Anisometric result with oval control for case_5
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with oval control for case_5

Figure 5.13 Anisometric result with oval control for case_8
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with oval control for case_8

The vector field for slant control is shown in Fig. 5.14. The results with slant control on the XY plane are shown in Fig. 5.15 and Fig. 5.16. As we can see, there is slant control along the XY direction and no change from the isometric results along the XZ and YZ directions. The input data in Fig. 5.15(b) (case_9) is a brick wall, a kind of structural texture. The result in Fig. 5.15(c)~(f) is continuous with the slant control throughout the whole volume and looks good at the cross sections of different planes. Fig. 5.16 shows the result for case_5; there is a little discontinuity on the XY plane and no change on the other planes.

Figure 5.14 5×5×5 3D vector field for slant control
(a) one axis on the XY plane
(b) one axis at every point
(c) three axes on the XY plane
(d) three orthogonal axes at every point

Figure 5.15 Input data and anisometric result with slant control for case_9
(a) cross sections at X=32, Y=32, and Z=32 for input data
(b) input volume data for case_9
(c) cross section at X=126, Y=126, and Z=126 for result data
(d) cross section at X=80, Y=80, and Z=80 for result data
(e) cross section at X=64, Y=64, and Z=64 for result data
(f) anisometric result with slant control for case_9

Figure 5.16 Anisometric result with slant control for case_5
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with slant control for case_5

The vector field for zigzag control is shown in Fig. 5.17; it is formed by alternating between two different slant directions. We make the texture results change on the XY plane with the zigzag control, as shown in Fig. 5.18~Fig. 5.20. As Fig. 5.15(b) shows, there is almost no information on the XZ plane of the input data for case_9. We reconstruct the texture, and some information appears in the result volume data (Fig. 5.18); the zigzag control behaves as we expect for case_9. Fig. 5.19 shows the result for case_5 with zigzag control on the XY plane, and it is better than the slant control (Fig. 5.16). Fig. 5.20 shows the result for case_8; the patterns on the XY plane are changed by the zigzag control, and continuity is kept between different planes (XY planes with XZ planes, and XY planes with YZ planes).

Figure 5.17 5×5×5 3D vector field for zigzag control
(a) one axis on the XY plane
(b) one axis at every point
(c) three axes on the XY plane
(d) three orthogonal axes at every point

Figure 5.18 Anisometric result with zigzag control for case_9
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with zigzag control for case_9

Figure 5.19 Anisometric result with zigzag control for case_5
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with zigzag control for case_5

Figure 5.20 Anisometric result with zigzag control for case_8
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with zigzag control for case_8

The vector field for 3D slant control is shown in Fig. 5.21. We make the results change in 3D space with the slant control, as shown in Fig. 5.22~Fig. 5.24. There are slant patterns on all planes and inside the volume results, and they are continuous between different planes. Fig. 5.22 shows the anisometric result for case_9. There is no information on the XZ plane of the input data for case_9, yet the result in Fig. 5.22 is continuous on all planes and inside the volume. The result in Fig. 5.23 is good because of the compact information in the input data. The anisometric result for case_8 is shown in Fig. 5.24. After the 3D slant control, the information on the XZ plane shows up, even though the result is not very continuous.

Figure 5.21 5×5×5 3D vector field for 3D slant control
(a) one axis on the XY plane
(b) one axis at every point
(c) three axes on the XY plane
(d) three orthogonal axes at every point

Figure 5.22 Anisometric result with 3D slant control for case_9
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with 3D slant control for case_9

Figure 5.23 Anisometric result with 3D slant control for case_5
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with 3D slant control for case_5

Figure 5.24 Anisometric result with 3D slant control for case_8
(a) cross section at X=126, Y=126, and Z=126 for result data
(b) cross section at X=80, Y=80, and Z=80 for result data
(c) cross section at X=64, Y=64, and Z=64 for result data
(d) anisometric result with 3D slant control for case_8

 

Chapter 6

Conclusions and Future Work

We have presented an exemplar-based system for solid texture synthesis with anisometric control, extending a 2D texture synthesis algorithm to 3D space.

In the preprocessing, we construct the feature vectors and a similarity set for an input volume. We use the feature vectors, rather than traditional RGB values, to construct neighborhood vectors for more accurate neighborhood matching. The similarity set, which records 3 candidates for each voxel, makes neighborhood matching more efficient. In the synthesis process, we use the pyramid synthesis method to synthesize textures from the coarse level to the fine level, from one voxel to an m×m×m result, and only 8 locations around each voxel are needed for neighborhood matching. In the anisometric synthesis process, we generate vector fields and make the result textures change according to them.

Compared with other methods for solid synthesis, which only consider the information on three orthogonal 2D slices, cannot capture information in 3D space, and can only control the textures on the slices rather than in 3D space, we present a system for more accurate, efficient, and varied solid texture synthesis.

In the future, we may control the anisometric textures with flow fields that make the results change over time. Kwatra et al. [9] presented a method for 3D surface texture synthesis with a flow field; we may apply their method to synthesize anisometric textures that change over time in 3D space. Besides, we may reduce the time spent on similarity set construction. Most of the time in our system is currently spent constructing the similarity set, so we may try another algorithm for similarity set construction to make the system more efficient.

References

[1] Ashikhmin, M., "Synthesizing Natural Textures", ACM SIGGRAPH Symposium on Interactive 3D Graphics, pp. 217-226, 2001.

[2] Chiou, J. W., and Yang, C. K., "Automatic 3D Solid Texture Synthesis from a 2D Image", Master's Thesis, Department of Information Management, National Taiwan University of Science and Technology, 2007.

[3] Dischler, J. M., Ghazanfarpour, D., and Freydier, R., "Anisotropic Solid Texture Synthesis Using Orthogonal 2D Views", EUROGRAPHICS 1998, vol. 17, no. 3, pp. 87-95, 1998.

[4] Ebert, D. S., Musgrave, F. K., Peachey, D., Perlin, K., and Worley, S., Texturing & Modeling: A Procedural Approach, third ed., Academic Press, 2002.

[5] Heeger, D. J., and Bergen, J. R., "Pyramid-Based Texture Analysis/Synthesis", ACM SIGGRAPH 1995, vol. 14, no. 3, pp. 229-238, 1995.

[6] Jagnow, R., Dorsey, J., and Rushmeier, H., "Stereological Techniques for Solid Textures", ACM SIGGRAPH 2004, vol. 23, no. 3, pp. 329-335, 2004.

[7] Kraevoy, V., Sheffer, A., and Gotsman, C., "Matchmaker: Constructing Constrained Texture Maps", ACM SIGGRAPH 2003, vol. 22, no. 3, pp. 326-333, 2003.

[8] Kwatra, V., Essa, I., Bobick, A., and Kwatra, N., "Texture Optimization for Example-based Synthesis", ACM SIGGRAPH 2005, vol. 24, no. 3, 2005.

[9] Kwatra, V., Adalsteinsson, D., Kim, T., Kwatra, N., Carlson, M., and Lin, M., "Texturing Fluids", IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 5, pp. 939-952, 2007.

[10] Kopf, J., Fu, C. W., Cohen-Or, D., Deussen, O., Lischinski, D., and Wong, T. T., "Solid Texture Synthesis from 2D Exemplars", ACM SIGGRAPH 2007, vol. 26, no. 3, 2007.
