

CHAPTER 2 RELATED WORKS

2.2 Solid Texture Synthesis

Now we review different methods for solid texture synthesis.

Jagnow et al. [6] presented a stereological technique for solid textures. This approach uses traditional stereological methods to synthesize 3D solid textures from 2D images. They synthesized solid textures for spherical particles and then extended the technique to particles of arbitrary shapes. Their approach needs cross-section images to record the distribution of circle sizes on 2D slices, and builds the relationship between 2D profile density and 3D particle density. Users can use the particle density to reconstruct the volume data by adding one particle at a time, which makes this step manual. The method uses many 2D profiles to construct the 3D density for the volume result. Their results are good for marble textures, but the system is not automatic and works only for particle textures.

Chiou and Yang [2] improved this method into an automatic process, but it still works only for particle textures.

Qin et al. [15] presented an image-based solid texturing method based on the basic gray-level aura matrices (BGLAMs) framework. They used BGLAMs rather than traditional gray-level histograms for neighborhood matching: they created aura matrices from the input exemplars and then generated a solid texture from multiple view directions. For every voxel in the volume result, only the pixels on the three orthogonal slices are considered for neighborhood matching. Their system is fully automatic and requires no user interaction. Furthermore, it can generate faithful results for both stochastic and structural textures. However, the large matrices need large storage, and the results are not good for color textures. Because they used only the information on three slices to create the aura matrices, they could not control the texture result in 3D space.

Kopf et al. [10] introduced a solid texture synthesis method from 2D exemplars. They extended 2D texture optimization techniques to synthesize 3D solid textures, and used the optimization approach with histogram matching to preserve global statistical properties. They only considered the neighborhood coherence on three orthogonal slices for each voxel, and iteratively increased the similarity between the solid texture and the exemplar. Their approach can generate good results for a wide range of textures. However, because they synthesized the texture with the information on the slices, it is difficult to control the result in 3D space.

Takayama et al. [16] presented a method for filling a model with anisotropic textures. They prepared several volume textures and specified how to map them onto 3D objects, pasting solid texture exemplars repeatedly on the 3D object. Users can design volumetric tensor fields over the mesh, and the texture patches are placed according to these fields.

Chapter 3

Solid Synthesis Process

In this chapter, we present our approach for synthesizing solid textures from volume textures. In Section 3.1, we describe the feature vector in appearance space and how we obtain the feature vectors. Then we use the similarity set to accelerate neighborhood matching in Section 3.2. In Section 3.3, we introduce how to apply 2D pyramid texture synthesis to solid texture synthesis. The upsampling step increases the texture size between levels: each voxel in the parent level generates eight voxels in the child level. The jitter step perturbs the textures to achieve deterministic randomness. The last step in pyramid solid synthesis is voxel correction, which uses neighborhood matching to make the results more similar to the exemplar.

3.1 Feature Vector Generation

Solid texture synthesis using RGB color values for neighborhood matching needs a larger neighborhood size and a large amount of data. Appearance vectors have been shown to be more continuous and lower-dimensional than RGB color values for neighborhood matching. Therefore, we decide to transform the volume data values from color space into feature vectors in appearance space. As shown in Fig. 3.1, we transform the volume data V into appearance-space volume data V′. We use the information-rich feature vectors at every voxel to obtain high-quality and efficient solid texture results.

Figure 3.1 Overview of volume data transformation

Lefebvre and Hoppe [12] introduced appearance vectors in 2D space, and we apply them in 3D space. After obtaining the RGB color values of the input volume data V (Fig. 3.2(a)), we take the values in 5×5×5 grids (Fig. 3.2(b)) to construct a feature vector for every voxel in the input volume exemplar. The exemplar V′ consists of the feature vectors at every voxel. Each voxel in V′ has 375 dimensions (125 grid voxels times 3 RGB channels), and we then perform PCA to reduce the dimensionality, obtaining the transformed exemplar Ṽ′ (Fig. 3.2(c)). That is, we project the exemplar V′ using PCA to obtain the transformed exemplar Ṽ′.

We suppose that the data on each face of the volume is connected with the data on the opposite face. For the voxels on the border, we treat the voxels on the opposite border as their neighbors, and take their RGB values to construct the feature vectors. However, for some input exemplars the data is not continuous at the borders. In order to avoid border-effect problems, we discard the data of 2 voxels (about half of 5) on each border, and compute the feature vectors only for the (n−4)×(n−4)×(n−4) interior voxels of the exemplar V.

Figure 3.2 The process for feature vector generation: (a) input volume data V (b) 5×5×5 grid structure for feature vectors (c) transformed exemplar Ṽ′
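To make this concrete, the following Python sketch (ours; the thesis implementation is in MATLAB) gathers the 5×5×5 neighborhoods with wrap-around borders and projects them with PCA. The reduced dimensionality n_components is a hypothetical choice, since the thesis does not state it.

```python
import numpy as np

def build_feature_vectors(volume, grid=5, n_components=8):
    """Build appearance-space feature vectors (a sketch, not the thesis code).

    volume is an (n, n, n, 3) RGB array.  Every voxel's 5x5x5 neighborhood
    (375 values) is gathered with wrap-around borders, then the vectors
    are projected by PCA.
    """
    n = volume.shape[0]
    r = grid // 2                               # radius: 2 for a 5x5x5 grid
    offsets = [(dx, dy, dz)
               for dx in range(-r, r + 1)
               for dy in range(-r, r + 1)
               for dz in range(-r, r + 1)]
    feats = np.empty((n, n, n, grid ** 3 * 3))
    for i, (dx, dy, dz) in enumerate(offsets):
        # np.roll implements the toroidal (wrap-around) neighbor lookup
        shifted = np.roll(volume, shift=(-dx, -dy, -dz), axis=(0, 1, 2))
        feats[..., 3 * i:3 * i + 3] = shifted
    flat = feats.reshape(-1, grid ** 3 * 3)
    flat -= flat.mean(axis=0)                   # center before PCA
    _, _, vt = np.linalg.svd(flat, full_matrices=False)  # PCA via SVD
    reduced = flat @ vt[:n_components].T
    return reduced.reshape(n, n, n, n_components)
```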

3.2 Similarity Set Generation

With the k-coherence search method [21], we can find the k most similar candidates in the exemplar V for a pixel p, and then search only among those candidates during neighborhood matching. This accelerates neighborhood matching because we do not have to search every pixel in the exemplar. Therefore, we construct a similarity set that records the candidates similar to each voxel.

We apply the 2D k-coherence search method [21] to 3D space. In 3D space, we find the k most similar voxels among all voxels in the transformed exemplar Ṽ′ for each voxel p, and construct the candidate set C(p) to record the candidates similar to every voxel p.

By the principle of coherent synthesis [1], searching for candidates among the neighbors of pixel p in the exemplar V can accelerate the synthesis process. Following this principle, we could find the most similar voxels within the n×n×n neighbors of voxel p, using the parameter n to control the window size for coherent synthesis. In our experiments, n is set to 7.

However, following the principle of coherent synthesis [1] directly suffers from a local minimum problem, because we would only consider the n×n×n neighbors of voxel p as candidate voxels for p, without considering the global optimum. Therefore, we reform the method for the similarity set. We still find the k most similar voxels for voxel p in the whole transformed exemplar Ṽ′, but in order to avoid the local minimum problem, we restrict that the voxels in the n×n×n neighborhood of p cannot enter the candidate set C(p), and we search among the other voxels instead. We construct this similarity set for the transformed exemplar Ṽ′ once, and then use the similarity set in the synthesis process.
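A minimal sketch of this construction, assuming a brute-force distance computation and the 7×7×7 exclusion window described above:

```python
import numpy as np

def build_similarity_set(feats, k=3, window=7):
    """Build the modified k-coherence similarity set (a sketch).

    feats is the (n, n, n, d) transformed exemplar.  For every voxel we
    keep the k voxels with the most similar feature vectors, but voxels
    inside the window^3 neighborhood of the center voxel are excluded, so
    the candidates cannot collapse onto the trivially similar voxels
    right next to the center.
    """
    n, d = feats.shape[0], feats.shape[-1]
    flat = feats.reshape(-1, d)
    coords = np.indices((n, n, n)).reshape(3, -1).T     # (n^3, 3)
    half = window // 2
    sim = np.empty((n ** 3, k, 3), dtype=np.int32)
    for v in range(n ** 3):                 # brute force; a k-d tree would be faster
        dist = np.sum((flat - flat[v]) ** 2, axis=1)
        delta = np.abs(coords - coords[v])
        delta = np.minimum(delta, n - delta)            # toroidal distance
        dist[np.all(delta <= half, axis=1)] = np.inf    # mask the exclusion window
        sim[v] = coords[np.argsort(dist)[:k]]
    return sim.reshape(n, n, n, k, 3)
```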

3.3 Pyramid Solid Texture Synthesis

3.3.1 Pyramid Upsampling

The pyramid synthesis method [5] synthesizes textures from the coarse level to the fine level. There are L+1 levels in the synthesis process, l = 0~L, where L = log₂m and m is the size of the target texture. They synthesized an image through the sequence S₀ to S_L. We apply this 2D pyramid synthesis method to 3D space.

Figure 3.3 Synthesis from one voxel to an m×m×m solid texture: S₀ (l=0), S₁ (l=1), S₂ (l=2)

We synthesize from one voxel to an m×m×m solid texture, from S₀ to S_L, as shown in Fig. 3.3. We synthesize volume data in which each voxel stores the coordinate of an exemplar voxel. At first, we build one voxel and assign the value (1,1,1) to it as its coordinate; then we upsample the coordinate values of the parent voxels for the next level, assigning each of the eight children the parent coordinates plus a child-dependent offset:

S_l[2p + Δ] := S_{l−1}[p] + h_l Δ

where h_l denotes the regular output spacing of exemplar coordinates, and Δ ∈ {0,1}³ gives the relative locations of the 8 children.
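The following sketch illustrates the upsampling rule; the spacing h_l = 2^(L−l) follows Lefebvre and Hoppe [12] and is our assumption, since the thesis does not give h_l explicitly.

```python
import numpy as np

def upsample(S_parent, level, L, exemplar_size):
    """Coordinate upsampling, S_l[2p + delta] = S_{l-1}[p] + h_l * delta
    (a sketch; h_l = 2**(L - level) is assumed).
    """
    s = S_parent.shape[0]
    h = 2 ** (L - level)                        # spacing of exemplar coords
    child = np.empty((2 * s, 2 * s, 2 * s, 3), dtype=np.int64)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                delta = np.array([dx, dy, dz])
                # each parent spawns 8 children, offset by h_l * delta
                child[dx::2, dy::2, dz::2] = (S_parent + h * delta) % exemplar_size
    return child
```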

3.3.2 Jitter Method

After upsampling the coordinate values, we jitter the texture to achieve deterministic randomness: we add a jitter value to the upsampled coordinates at each level to perturb them. The jitter value is produced by a hash function of the output voxel location, scaled by a jitter strength parameter (set to 0.7 in Chapter 5), so the perturbation is random in appearance yet fully repeatable.
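A sketch of the jitter step follows; the particular hash function and the rounding scheme are our assumptions, as the thesis only states that a hash drives the perturbation and that the strength parameter is 0.7.

```python
import numpy as np

def jitter(S, level, L, exemplar_size, strength=0.7, seed=0):
    """Deterministic jitter (a sketch; hash and rounding assumed).

    Each output voxel's location is hashed to a repeatable pseudo-random
    value, turned into an offset in {-1, 0, 1}^3 scaled by the spacing
    h_l.  Hashing the coordinates instead of drawing fresh random numbers
    is what makes the randomness deterministic.
    """
    s = S.shape[0]
    h = 2 ** (L - level)
    coords = np.indices((s, s, s)).transpose(1, 2, 3, 0).astype(np.int64)
    # A simple coordinate hash; the thesis does not specify the function.
    key = ((coords[..., 0] * 73856093) ^ (coords[..., 1] * 19349663)
           ^ (coords[..., 2] * 83492791) ^ seed)
    r = ((key[..., None] >> np.array([0, 8, 16])) & 255) / 255.0  # in [0, 1]
    offset = np.floor(r * 2 * strength - strength + 0.5).astype(np.int64)
    return (S + h * offset) % exemplar_size
```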

3.3.3 Voxel Correction

In order to make the coordinates similar to those in the exemplar V, we take the jittered results and correct them by neighborhood matching. After constructing the feature vectors, there is a feature value for every voxel. For every voxel p, we collect the feature values of its neighbors to obtain the neighborhood vector N_sl(p), and then search for the most similar voxel in the transformed exemplar Ṽ′ to make the result similar to the exemplar V.

In neighborhood matching, we take the 8 diagonal locations around voxel p to construct the neighborhood vector N_sl(p). Fig. 3.4 shows the 8 diagonal locations for every voxel p.

Figure 3.4 Eight neighbors for N_sl(p)

In [12], Lefebvre and Hoppe averaged the pixels near pixel p+Δ to improve convergence without increasing the size of the neighborhood vector. They averaged the appearance values of 3 synthesized pixels near p+Δ as the new feature value at pixel p+Δ, and then used the new feature values at the 4 diagonal pixels to construct the neighborhood vector N_sl(p).

We apply this approach to perform 3D coordinate correction. First, we average the feature values of 4 synthesized voxels near each neighborhood voxel p+Δ of p, taking the average as the new feature value at voxel p+Δ; this averaging uses 4 synthesized voxels for every neighbor. Then we use the new feature values from the 8 diagonal voxels to construct the neighborhood vector N_sl(p).

We search for the voxel u that is most similar to voxel p by comparing the neighborhood vectors N_sl(p) and N_sl(u). We use the similarity sets and the coherence synthesis method in the searching process, utilizing the 8 voxels near voxel p. For each neighbor voxel i (i = 1~8), we get its 3 most similar voxels (i₁, i₂, i₃) from the similarity set, and then use the relative position between voxel i and voxel p to infer the candidate voxels (i₁ₚ, i₂ₚ, i₃ₚ) for voxel p, as shown in Fig. 3.6. We set these 3 voxels (i₁ₚ, i₂ₚ, i₃ₚ) as candidates for voxel p. We compute the neighborhood vectors from the averaged feature values of the 8 voxels near each of the 3 candidates, obtaining N_sl(i₁ₚ), N_sl(i₂ₚ), and N_sl(i₃ₚ).

Figure 3.6 Process for inferring candidates for voxel p: using the three similar voxels of neighbor voxel i to infer candidates for voxel p

In the same way, there are 8 neighbor voxels near voxel p, and each of them has 3 similar voxels, so we infer 24 candidates for voxel p. We then compute the 24 neighborhood vectors N_sl(u), where u is a candidate, compare these N_sl(u) with N_sl(p) for neighborhood matching, and find the most similar voxel u to replace voxel p.
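The sketch below shows one correction step for a single voxel under these assumptions; the sub-neighbor averaging described above is omitted for brevity, so the neighborhood vectors are built directly from the 8 diagonal feature values.

```python
import numpy as np

# The 8 diagonal neighbor offsets used to build N_sl(p).
DIAG = np.array([[dx, dy, dz] for dx in (-1, 1)
                              for dy in (-1, 1)
                              for dz in (-1, 1)])

def correct_voxel(p, S, feats, sim, exemplar_size):
    """One correction step for voxel p (a sketch under stated assumptions).

    S holds synthesized exemplar coordinates, feats the transformed
    exemplar's feature vectors, sim the k=3 similarity set.  Each of the
    8 diagonal neighbors contributes its 3 similar voxels, shifted back
    by the neighbor's offset, giving 24 candidates; the candidate whose
    exemplar neighborhood best matches p's synthesized neighborhood wins.
    """
    size = S.shape[0]
    # Neighborhood vector of p, read through the synthesized coordinates.
    n_p = np.stack([feats[tuple(S[tuple((p + d) % size)])] for d in DIAG])
    best, best_cost = None, np.inf
    for d in DIAG:
        src = S[tuple((p + d) % size)]          # exemplar coord of neighbor
        for cand in sim[tuple(src)]:            # its 3 similar voxels
            u = (cand - d) % exemplar_size      # shift back to p's position
            n_u = np.stack([feats[tuple((u + d2) % exemplar_size)]
                            for d2 in DIAG])    # candidate's exemplar neighborhood
            cost = np.sum((n_p - n_u) ** 2)
            if cost < best_cost:
                best, best_cost = u, cost
    return best                                 # new exemplar coordinate for p
```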

We can synthesize results of any size if the information in the exemplar V is sufficient; that is, the size of the input data must be large enough.

Chapter 4

Anisometric Synthesis Process

In this chapter, we present the proposed anisometric synthesis approach for solid textures with vector field control. In Section 4.1, we introduce 3D vector fields for texture control and how we generate anisometric fields and inverse anisometric fields from the 3D vector fields. Then we introduce the differences between solid synthesis and anisometric synthesis in Section 4.2; these differences concern upsampling and voxel correction. The jitter step is the same as in the solid synthesis process.

4.1 3D Vector Field

We need the user-defined 3D vector fields to implement anisometric solid texture synthesis. We use the vector fields to control the result.

We first design a 3D space that contains three orthogonal axes at every point, and then use mathematical formulas to control the three axes. Fig. 4.1 shows a 3D vector field with orthogonal axes at every point; the space size is 5×5×5.

We vary the three axes and expect the texture results to change with the fields. For example, if we design a circular field, there will be a circular pattern in the texture. Fig. 4.2 shows the 3D vector field with a circular pattern on the XY plane. The vector field should be the same size as the texture result.

Figure 4.1 5×5×5 3D vector field with orthogonal axes: (a) XY plane (b) XZ plane (c) three orthogonal axes at every point

Figure 4.2 5×5×5 3D vector field with a circular pattern on the XY plane: (a) XY plane (b), (c) three axes at every point

We have to construct the anisometric field A and the inverse anisometric field A⁻¹ for each level from the user-defined 3D vector field. The anisometric field A is made by downsampling the 3D vector field, giving A_l for each level l. Then we invert A_l to get the inverse anisometric field A_l⁻¹ for each level. In the anisometric synthesis process, the steps for upsampling and voxel correction differ from those in isometric synthesis, as described in the following sections.
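A sketch of how the per-level fields might be built, assuming the downsampling is a simple 2×2×2 block average (the thesis does not specify the filter) and the inversion is a per-voxel 3×3 matrix inverse:

```python
import numpy as np

def build_aniso_fields(vector_field, levels):
    """Construct A_l and its inverse per level (a sketch; filter assumed).

    vector_field has shape (m, m, m, 3, 3): a 3x3 frame (three axes) at
    every point.  Coarser levels are built by 2x2x2 block averaging, and
    the inverse field is the per-voxel matrix inverse.
    """
    A = [vector_field]
    for _ in range(levels - 1):
        f = A[-1]
        coarse = (f[0::2, 0::2, 0::2] + f[1::2, 0::2, 0::2]
                + f[0::2, 1::2, 0::2] + f[0::2, 0::2, 1::2]
                + f[1::2, 1::2, 0::2] + f[1::2, 0::2, 1::2]
                + f[0::2, 1::2, 1::2] + f[1::2, 1::2, 1::2]) / 8.0
        A.append(coarse)
    A = A[::-1]                               # A[l]: field at level l (0 = coarsest)
    A_inv = [np.linalg.inv(f) for f in A]     # batched 3x3 inverses per voxel
    return A, A_inv
```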

4.2 Anisometric Solid Texture Synthesis

4.2.1 Pyramid Upsampling

The goal of the upsampling step in anisometric synthesis is the same as in isometric synthesis: it helps synthesize from the coarse level to the fine level, and we have to upsample the coordinate values of the parent voxels for the next level.

The difference is that the child-dependent offset in the anisometric upsampling step depends on the anisometric field A. We use the anisometric field to transform the offset that spaces the children:

S_l[2p + Δ] := S_{l−1}[p] + h_l A_l(p) Δ

where h_l denotes the regular output spacing of exemplar coordinates, and Δ ∈ {0,1}³ gives the relative locations of the 8 children.
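A sketch of this anisometric upsampling rule follows; transforming the child offset by A_l at the child's location is our reading of the equation, and rounding the resulting coordinates is an implementation assumption.

```python
import numpy as np

def aniso_upsample(S_parent, A_l, level, L, exemplar_size):
    """Anisometric coordinate upsampling (a sketch; form assumed).

    Identical to isometric upsampling except that the child offset
    h_l * delta is transformed by the anisometric field at the child's
    location, so the spacing follows the user's vector field.
    """
    s = S_parent.shape[0]
    h = 2 ** (L - level)
    child = np.empty((2 * s, 2 * s, 2 * s, 3))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                delta = np.array([dx, dy, dz], dtype=float)
                # transform the offset by the per-voxel field A_l
                warped = A_l[dx::2, dy::2, dz::2] @ delta
                # round to the nearest exemplar coordinate
                child[dx::2, dy::2, dz::2] = np.rint(S_parent + h * warped) % exemplar_size
    return child
```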

4.2.2 Voxel Correction

The goal of the correction step is to make the coordinates similar to those in the exemplar V. For every voxel p, we collect the feature values of warped neighbors, determined by the anisometric field and the inverse anisometric field, to obtain the warped neighborhood vector Ñ_sl(p), and then search for the most similar voxel in the transformed exemplar Ṽ′.

The method presented by Lefebvre and Hoppe [12] for anisometric synthesis can reproduce arbitrary affine deformations, including shears and non-uniform scales. They only accessed the immediate neighbors of pixel p to construct the neighborhood vector, used the Jacobian field and the inverse Jacobian field to infer which pixel neighbors to access, and transformed the results by the inverse Jacobian field at the current point. We apply this method to 3D space.

In neighborhood matching, we take 8 warped locations around voxel p to construct the warped neighborhood vector Ñ_sl(p). Fig. 4.3 shows the 8 warped neighbors for every voxel p; their locations are obtained from the diagonal locations by the inverse anisometric field A_l⁻¹.

Figure 4.3 Eight warped neighbors for Ñ_sl(p)

Second, we have to find the 4 synthesized voxels near each warped neighborhood voxel of voxel p. We use the inverse anisometric field A_l⁻¹ to infer the 4 synthesized voxels for each warped neighbor p + φ̃(Δ), and compute the averaged feature value as the new feature value at p + φ̃(Δ). Fig. 4.4 shows the locations of the 4 warped synthesized voxels for each warped neighbor.

Figure 4.4 Four warped sub-neighbors for warped neighbors of voxel p

We search for the voxel u′ that is most similar to voxel p by comparing the warped neighborhood vectors Ñ_sl(p) and Ñ_sl(u′). We utilize the 8 warped voxels near voxel p and the anisometric fields A to infer where the voxel u′ is. For each warped neighbor voxel i′ (i′ = 1~8), we get its 3 most similar voxels (i′₁, i′₂, i′₃) from the similarity set, and then use the warped relationship, given by the anisometric fields A, between voxel i′ and voxel p to infer the candidate voxels (i′₁ₚ, i′₂ₚ, i′₃ₚ) for voxel p, as shown in Fig. 4.5. With the candidates for voxel p, we can compute the warped neighborhood vectors Ñ_sl(i′₁ₚ), Ñ_sl(i′₂ₚ), and Ñ_sl(i′₃ₚ) with the inverse anisometric field A_l⁻¹.

Figure 4.5 Process for inferring warped candidates for voxel p: using the three similar voxels of warped neighbor voxel i′ to infer candidates for voxel p

In the same way, we infer 24 candidates for voxel p and compute the 24 warped neighborhood vectors Ñ_sl(u′), where u′ is a candidate. Comparing these Ñ_sl(u′) with Ñ_sl(p) for neighborhood matching, we can find the most similar voxel u′ to replace voxel p.
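A minimal sketch of computing the warped neighbor locations, assuming the warp is the per-voxel inverse field applied to the diagonal offsets followed by rounding to the nearest voxel:

```python
import numpy as np

# Diagonal offsets used in the isometric correction step.
DIAG = np.array([[dx, dy, dz] for dx in (-1, 1)
                              for dy in (-1, 1)
                              for dz in (-1, 1)], dtype=float)

def warped_neighbor_offsets(A_inv_at_p):
    """Warp the 8 diagonal offsets by the inverse field at p (a sketch)."""
    warped = DIAG @ A_inv_at_p.T              # apply A_l^-1 to each offset
    return np.rint(warped).astype(int)        # snap to the nearest voxel

# Usage: offsets = warped_neighbor_offsets(A_inv[l][x, y, z])
```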

Chapter 5

Implementation and Results

We implemented our system on PCs with 2.67GHz and 2.66GHz Core 2 Quad CPUs and 4.0GB of system memory, using MATLAB to implement our method.

For 32×32×32 volume data V, it takes about 3 minutes to construct the transformed exemplar Ṽ′ from feature vectors and about 2 hours to construct a similarity set. For 64×64×64 volume data V, it takes about 90~120 minutes for the transformed exemplar Ṽ′ and about 85~95 hours for a similarity set. The transformed exemplar and the similarity set can be reused across synthesis runs: once the feature vectors and similarity sets are constructed, we can use them for other syntheses with different target result sizes and different vector fields.

For a 64×64×64 result, it takes about 6 hours to synthesize the solid texture. For a 128×128×128 result, the synthesis process takes about 7~10 hours. In this chapter we show our results with 64×64×64 input volume data and 128×128×128 result data. The detailed computation times for different textures are shown in Table 5.1. In Section 5.1 we show some isometric synthesis results, and in Section 5.2 we show anisometric results controlled by different vector fields.

We use 5×5×5 grids for the feature vectors at each voxel and a 7×7×7 window for the similarity set (voxels in this window cannot be candidates of the center voxel), and the parameter for the jitter step is set to 0.7.

5.1 Isometric Results

The input data in Fig. 5.1(b) (case_1) is a stochastic, marble-like texture. It contains only two colors and is vivid. It is information-rich, so only a small amount of data is needed to represent the whole texture; this means we can synthesize larger results (more than twice the input data size) from this kind of texture. Fig. 5.1(c)~(f) show the result. As we can see, the result is continuous and not a duplication of the input data.

The input data in Fig. 5.2(b) (case_2) is a particle-like texture. It contains few colors, and the particles contrast strongly with the background. The particles in case_2 are all of the same kind. As long as there are a few complete particle patterns in the input data, we can synthesize a good result, as shown in Fig. 5.2(c)~(f). Because a few particle patterns can represent the whole texture, we can synthesize it from 32×32×32 to 128×128×128, and even from 64×64×64 to 256×256×256 volume data (a result size four times the input size).

The input data in Fig. 5.3(b) (case_3) is another type of particle texture. It contains particles of different sizes and colors, and most of the particles are similar in color to the background. Fig. 5.3(c)~(f) show the result synthesized from a 64×64×64 input to a 128×128×128 volume. Because the input data is not information-rich, the distribution of particles in the result is sparser than in the input data; that is, the input data is not large enough to contain sufficient information for synthesis.

The input data in Fig. 5.4(b) (case_4) is sea water. It is a kind of homogeneous texture because it is almost a single color throughout the volume. The main feature of the input data is the highlight areas. As the result in Fig. 5.4(c)~(f) shows, there are few highlight areas in the result volume data.

The input data in Fig. 5.5(b) (case_5) is a kind of structural texture. The patterns in the input data are small and compact, so the texture is information-rich: even a small input volume contains enough patterns for synthesis, and we obtain good results from a small amount of input data. The result is shown in Fig. 5.5(c)~(f).

The input data in Fig. 5.6(b) (case_6) is structural, continuous in two directions and broken in the other direction. It consists of thin black and white strokes. The result in Fig. 5.6(c)~(f) is good in the two continuous directions, where it is continuous and not duplicated, but broken in the other direction because the input data is information-poor.

The input data in Fig. 5.7(b) (case_7) is structural with larger patterns. The features in the input data are too large, so the input data cannot represent the whole volume when the input size is too small, as we can see in Fig. 5.7(c)~(f).

