
We show the results for real objects in Figure 6.2. The first three objects are plastic balls wrapped with different pieces of cloth. There are two reasons for choosing these source objects. One is that the sphere shape provides enough coverage of normals for texture

Figure 6.1: Texture transfer and relighting for synthetic objects. The images in the left column are rendered in OpenGL as our source objects. The images in the middle and right columns are generated by our method and rendered with the Lambertian model and the Phong model, respectively.

Figure 6.2: Texture transfer and relighting for real objects. The images in the left column are the source objects. The images in the middle and right columns are generated by our method and rendered with the Lambertian model and the Phong model, respectively.

transfer, and the other is that such objects are easy to obtain. In the last example, we use a melon as our source object. The melon has an irregular net-like surface structure that is very different from the others.

6.3 Environment mapping

Besides relighting with a single point light source, we can also apply environment mapping to our target objects, as shown in Figure 6.4. This produces more vivid results under the complex lighting conditions of the surrounding environment.
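As a concrete illustration, the environment can be approximated by a small set of directional lights sampled from the environment map, with each pixel's diffuse shading computed as a cosine-weighted sum over those samples. The following is a minimal sketch under that discretization; the function name and array layout are our own, not part of the thesis pipeline:

```python
import numpy as np

def diffuse_env_shading(normals, light_dirs, light_colors):
    """Approximate environment lighting as a sum of directional lights.

    normals:      (H, W, 3) unit surface normals
    light_dirs:   (K, 3) unit directions toward each environment sample
    light_colors: (K, 3) RGB radiance of each sample (assumed already
                  weighted by its solid angle on the environment map)
    Returns an (H, W, 3) diffuse shading image (Lambertian, albedo = 1).
    """
    # Clamped cosine term for every pixel/light pair: (H, W, K)
    cos = np.clip(np.einsum('hwc,kc->hwk', normals, light_dirs), 0.0, None)
    # Accumulate the contribution of every environment sample: (H, W, 3)
    return np.einsum('hwk,kc->hwc', cos, light_colors)

# Toy usage: a flat patch facing +z, lit by two environment samples;
# the side light contributes nothing because its cosine term is zero.
normals = np.tile([0.0, 0.0, 1.0], (2, 2, 1))
dirs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
colors = np.array([[1.0, 1.0, 1.0], [0.5, 0.0, 0.0]])
shading = diffuse_env_shading(normals, dirs, colors)
```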

Figure 6.3: Texture transfer and relighting for real objects. The images in the left column are the source objects. The images in the middle and right columns are generated by our method and rendered with the Lambertian model and the Phong model, respectively.

Figure 6.4: Rendering with environment mapping.

Chapter 7

Conclusion and future work

7.1 Conclusion

In this thesis, we present a texture synthesis algorithm for image relighting and texture transfer. The main contribution of our work is that we keep the texture consistent and relightable during synthesis. We perform photometric stereo to recover the normal map and the unshaded image from the input photographs. Texel clustering then estimates the underlying distribution of texture points. With the knowledge of these texels, we can alter the illumination and achieve real-time rendering through the construction of shading maps.
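For reference, the classical Lambertian photometric stereo step can be written as a per-pixel least-squares problem. This is a minimal sketch assuming K calibrated directional lights and grayscale images; the function and variable names are illustrative, not the thesis implementation:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel normals and albedo (the unshaded image) from
    photographs taken under known directional lights, assuming a
    Lambertian surface: I_k = albedo * (n . l_k).

    images:     (K, H, W) grayscale photographs
    light_dirs: (K, 3) unit light directions
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                           # (K, H*W)
    # Solve L @ g = I in the least-squares sense; g = albedo * n per pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)

# Toy check: a flat patch (n = +z, albedo 0.8) under three known lights.
lights = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
imgs = 0.8 * np.array([1.0, 0.8, 0.8])[:, None, None] * np.ones((3, 4, 4))
normals, albedo = photometric_stereo(imgs, lights)
```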

Besides, we can also replace the texture on other surfaces with these texels. Although we make some assumptions about the input photographs, and the recovered reflectance properties are not as physically accurate as those in the real world, the artifacts in the results are hardly noticeable to the human eye.

Our work provides a new point of view for image-based rendering, and the texel clustering method also offers another approach to texture generation. Instead of interpolating a complete BTF reconstruction as in previous work, we take a synthesis approach to analyse the reflectance behaviour, and are able to perform an approximate but fast rendering under varying illumination.
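The shading-map idea behind this fast rendering can be sketched as a lookup table indexed by quantized normal direction: the table is rebuilt once per light, and every pixel is then relit with a single lookup into it. This is an illustrative reconstruction under our own assumptions (front-facing normals indexed by their x and y components), not the exact construction used in the thesis:

```python
import numpy as np

def build_shading_map(light_dir, res=64):
    """Tabulate Lambertian shading over quantized normal directions.
    Normals are indexed by their (nx, ny) components on a res x res grid;
    nz is recovered as sqrt(1 - nx^2 - ny^2) (front-facing normals only).
    """
    u = np.linspace(-1.0, 1.0, res)
    nx, ny = np.meshgrid(u, u, indexing='ij')
    nz2 = 1.0 - nx**2 - ny**2
    valid = nz2 > 0
    nz = np.sqrt(np.where(valid, nz2, 0.0))
    n = np.stack([nx, ny, nz], axis=-1)            # (res, res, 3)
    shade = np.clip(n @ light_dir, 0.0, None)      # clamped cosine
    return np.where(valid, shade, 0.0)

def relight(albedo, normals, shading_map):
    """Relight by looking each pixel's normal up in the shading map
    and modulating the unshaded (albedo) image."""
    res = shading_map.shape[0]
    ix = np.clip(((normals[..., 0] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    iy = np.clip(((normals[..., 1] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    return albedo * shading_map[ix, iy]

# Toy usage: one pixel with albedo 0.5 facing the light head-on.
smap = build_shading_map(np.array([0.0, 0.0, 1.0]))
out = relight(np.array([[0.5]]), np.array([[[0.0, 0.0, 1.0]]]), smap)
```

Changing the light only rebuilds the small table, not the per-pixel data, which is what makes the relighting real-time.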

7.2 Future work

There are still many interesting problems to address in making this work more practical.

We are interested in using this approach to produce more vivid results and to transfer other appearance details from real objects. Although we can replace the texture of the source object on the target one, we would like to transfer more shading detail, such as bump mapping or displacement mapping, to make the target more realistic. This may require richer feature vectors for matching, and the synthesis algorithm we currently use will probably have to be modified to support accurate normal map propagation. Besides, in preprocessing we fit only a simple illumination model to the captured photographs. Specular highlights and self-shadowing cannot be handled well, which may produce incorrect results. More complex models for analysing the source object would yield more accurate normal maps and unshaded images, and hence better results.
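One common remedy from the photometric stereo literature (not the method used in this thesis) is to capture more than the minimum three lights and, per pixel, discard the darkest observations as likely self-shadow and the brightest as likely specular before the Lambertian fit. A sketch under those assumptions, with illustrative names:

```python
import numpy as np

def trimmed_photometric_stereo(images, light_dirs, drop_low=1, drop_high=1):
    """Lambertian photometric stereo that, per pixel, discards the darkest
    drop_low observations (likely self-shadow) and the brightest drop_high
    (likely specular) before the least-squares fit.

    images: (K, H, W); light_dirs: (K, 3); needs K - drop_low - drop_high >= 3.
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)
    order = np.argsort(I, axis=0)             # per-pixel brightness ranking
    keep = order[drop_low:K - drop_high]      # retained sample indices
    G = np.empty((3, I.shape[1]))
    for p in range(I.shape[1]):               # trimmed solve, pixel by pixel
        idx = keep[:, p]
        G[:, p], *_ = np.linalg.lstsq(light_dirs[idx], I[idx, p], rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)

# Toy check: one observation corrupted by a specular spike out of five.
lights = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.8, 0.6],
                   [0.0, 0.28, 0.96], [0.0, -0.6, 0.8]])
clean = np.array([1.0, 0.8, 0.6, 0.96, 0.8])  # flat patch, n = +z, albedo 1
imgs = clean[:, None, None] * np.ones((5, 1, 1))
imgs[4] += 0.5                                # specular spike on one image
normals, albedo = trimmed_photometric_stereo(imgs, lights)
```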

As shown in the previous results, our method works well for stochastic textures. For regular textures, however, a better synthesis algorithm may be needed to maintain the structured pattern. On the other hand, like most texture synthesis methods, our work focuses on stitching 2D images. Although the input and output of our system are 2D images, we would like to apply our work to real 3D surfaces.

The idea comes from mesh parameterization, for example geometry images. If we can project the object onto texture space, we can process its whole surface, as in texture coordinate generation, and we can also use more traditional texture synthesis methods for better results. After re-projecting back to the 3D domain, we could then rotate the object and view it from different viewpoints instead of only from the front.

