
5.4 Discussion and Limitations

In general, the results of histogram adjustment [19] exhibit more detail but lower contrast.

Optimization tone mapping [22] produces sharper images but preserves fewer details than our approach. Because our tone mapping function is similar to photographic tone reproduction [30], our results resemble those of photographic tone reproduction in most parts of the tested images;

however, our approach preserves details better and maintains higher contrast in most of the tested images. A limitation of our approach is that our tone mapping function does not handle whitish, overcast skies well.
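For reference, the global operator of photographic tone reproduction [30], to which our tone mapping function is similar, can be summarized as follows (notation follows [30]: L_w is the world luminance, \bar{L}_w its log-average, a the key value, and \delta a small offset that avoids the singularity of the logarithm); our operator additionally accounts for attention and adaptation and is not restated here:

\[
\bar{L}_w = \exp\!\left(\frac{1}{N}\sum_{x,y}\log\bigl(\delta + L_w(x,y)\bigr)\right), \qquad
L(x,y) = \frac{a}{\bar{L}_w}\,L_w(x,y), \qquad
L_d(x,y) = \frac{L(x,y)}{1 + L(x,y)} .
\]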


Figure 5.4: Memorial Church at Stanford University. Radiance map courtesy of Paul Debevec. (a) Histogram adjustment. (b) Optimization tone mapping. (c) Photographic tone reproduction. (d) Our result.


Figure 5.5: Bathroom. Radiance map courtesy of Paul Debevec. (a) Histogram adjustment. (b) Optimization tone mapping. (c) Photographic tone reproduction. (d) Our result.


Figure 5.6: MtTamWest.hdr. Radiance map courtesy of ILM. (a) Histogram adjustment. (b) Optimization tone mapping. (c) Photographic tone reproduction. (d) Our result.


Figure 5.7: dani belgium.hdr. Radiance map courtesy of Karol Myszkowski. (a) Histogram adjustment. (b) Optimization tone mapping. (c) Photographic tone reproduction. (d) Our result.


Figure 5.8: Scene captured by Grzegorz Krawczyk. (a)-(c) and (g)-(i) are produced by our algorithm; (d)-(f) and (j)-(l) are produced by photographic tone reproduction.

Chapter 6

Conclusion and Future Work

We present a tone mapping method that considers the attention and adaptation effects of the human visual system. We implicitly model the effect of sustained attention in adaptation and propose two models for transient attention and adaptation based on studies in psychophysics and neuroscience. We adopt the HDR saliency map [15][16], which is based on Itti et al. [3], to model the bottom-up process of attention. We also demonstrate that our results preserve contrast and details better than those produced by three state-of-the-art tone mapping methods: histogram adjustment [19], the photographic tone reproduction operator [30], and the optimization tone mapping operator [22].
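As a rough illustration of the bottom-up process, the sketch below computes only the intensity channel of an Itti-style center-surround saliency map; the full model [3] additionally uses dyadic Gaussian pyramids and color and orientation channels, and the function name and parameters here are illustrative rather than the implementation used in this thesis:

import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_conspicuity(luminance, center_sigmas=(2, 4, 8), surround_ratio=4.0):
    # Sketch of the intensity channel of an Itti-style saliency map [3]:
    # center-surround differences between a fine and a coarse Gaussian blur
    # of the log luminance. Names and parameters are illustrative only.
    log_lum = np.log1p(np.maximum(luminance, 0.0))      # compress the HDR range first
    conspicuity = np.zeros_like(log_lum, dtype=np.float64)
    for sigma in center_sigmas:
        center = gaussian_filter(log_lum, sigma)
        surround = gaussian_filter(log_lum, sigma * surround_ratio)
        conspicuity += np.abs(center - surround)        # center-surround contrast
    return conspicuity / (conspicuity.max() + 1e-12)    # normalize to [0, 1]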

In the future, as researchers learn more about brain function, we plan to replace the saliency map with a more sophisticated computational model of attention. We also plan to conduct experiments that measure the visual quality of our results against the corresponding real-world scenes [47]. Finally, we would like to apply our approach to HDR compression [7][8][9].


Bibliography

[1] F. Pestilli, G. Viera, and M. Carrasco, “How do attention and adaptation affect contrast sensitivity?,” Journal of vision, vol. 7, no. 7, 2007.

[2] S. Ling and M. Carrasco, “When sustained attention impairs perception,” Nature Neuroscience, vol. 9, pp. 1243–1245, September 2006.

[3] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 11, pp. 1254–1259, 1998.

[4] J. A. Ferwerda, S. N. Pattanaik, P. Shirley, and D. P. Greenberg, “A model of visual adaptation for realistic image synthesis,” in SIGGRAPH ’96: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pp. 249–258, 1996.

[5] E. Reinhard, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, November 2005.

[6] H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A. Ghosh, and A. Vorozcovs, “High dynamic range display systems,” ACM Trans. Graph., vol. 23, no. 3, pp. 760–768, 2004.

[7] G. Ward and M. Simmons, “Subband encoding of high dynamic range imagery,” in SIGGRAPH ’06: ACM SIGGRAPH 2006 Courses, ACM, 2006.


[8] G. Ward and M. Simmons, “JPEG-HDR: a backwards-compatible, high dynamic range extension to JPEG,” in SIGGRAPH ’05: ACM SIGGRAPH 2005 Courses, ACM, 2005.

[9] R. Xu, S. N. Pattanaik, and C. E. Hughes, “High-dynamic-range still-image encoding in JPEG 2000,” IEEE Comput. Graph. Appl., vol. 25, no. 6, pp. 57–64, 2005.

[10] S. N. Pattanaik, J. Tumblin, H. Yee, and D. P. Greenberg, “Time-dependent visual adaptation for fast realistic image display,” in SIGGRAPH ’00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 47–54, 2000.

[11] E. L. Cameron, J. C. Tai, and M. Carrasco, “Covert attention affects the psychometric function of contrast sensitivity,” Vision Research, vol. 42, no. 8, pp. 949–967, 2002.

[12] M. Carrasco, S. Ling, and S. Read, “Attention alters appearance,” Nature Neuroscience, vol. 7, pp. 308–313, March 2004.

[13] J. C. Martínez-Trujillo and S. Treue, “Attentional modulation strength in cortical area MT depends on stimulus contrast,” Neuron, vol. 35, no. 2, pp. 365–370, 2002.

[14] M. Carrasco, C. Penpeci-Talgar, and M. Eckstein, “Spatial covert attention increases contrast sensitivity across the CSF: support for signal enhancement,” Vision Research, vol. 40, no. 10-12, pp. 1203–1215, 2000.

[15] J. Petit, R. Brémond, and J.-P. Tarel, “Saliency maps of high dynamic range images,” in APGV ’09: Proceedings of the 6th Symposium on Applied Perception in Graphics and Visualization, pp. 134–134, 2009.

[16] R. Brémond, J. Petit, and J.-P. Tarel, “Saliency maps of high dynamic range images,” in Media Retargeting Workshop in conjunction with ECCV’10, 2010.

[17] J. Tumblin and H. Rushmeier, “Tone reproduction for realistic images,” IEEE Comput. Graph. Appl., vol. 13, no. 6, pp. 42–48, 1993.


[18] G. Ward, “A contrast-based scalefactor for luminance display,” in Graphics gems IV, pp. 415–421, 1994.

[19] G. W. Larson, H. Rushmeier, and C. Piatko, “A visibility matching tone reproduction operator for high dynamic range scenes,” IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 4, pp. 291–306, 1997.

[20] J. M. Henderson, “Object identification in context: the visual processing of natural scenes,” Canadian Journal of Psychology, vol. 46, pp. 319–341, September 1992.

[21] R. W. G. Hunt, The reproduction of colour. 2004.

[22] R. Mantiuk, S. Daly, and L. Kerofsky, “Display adaptive tone mapping,” ACM Trans. Graph., vol. 27, no. 3, pp. 1–10, 2008.

[23] J. H. van Hateren, “Encoding of high dynamic range video with a model of human cones,” ACM Trans. Graph., vol. 25, no. 4, pp. 1380–1399, 2006.

[24] K. Chiu, M. Herf, P. Shirley, S. Swamy, C. Wang, and K. Zimmerman, “Spatially nonuniform scaling functions for high contrast images,” in Proceedings of Graphics Interface ’93, pp. 245–253, 1993.

[25] J. Tumblin and G. Turk, “LCIS: a boundary hierarchy for detail-preserving contrast reduction,” in SIGGRAPH ’99: Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pp. 83–90, 1999.

[26] P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, pp. 629–639, 1990.

[27] R. Fattal, D. Lischinski, and M. Werman, “Gradient domain high dynamic range compression,” in SIGGRAPH ’02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques, (New York, NY, USA), pp. 249–256, ACM, 2002.


[28] F. Durand and J. Dorsey, “Fast bilateral filtering for the display of high-dynamic-range images,” in SIGGRAPH ’02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques, (New York, NY, USA), pp. 257–266, ACM, 2002.

[29] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in ICCV ’98: Proceedings of the Sixth International Conference on Computer Vision, (Washington, DC, USA), p. 839, IEEE Computer Society, 1998.

[30] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, “Photographic tone reproduction for digital images,” in Proceedings of SIGGRAPH 2002, pp. 267–276, 2002.

[31] A. Adams, The Camera. The Ansel Adams Photography Series. Little, Brown and Company, 1980.

[32] A. Adams, The Negative. The Ansel Adams Photography Series. Little, Brown and Company, 1981.

[33] A. Adams, The Print. The Ansel Adams Photography Series. Little, Brown and Company, 1983.

[34] H.-T. Chen, T.-L. Liu, and T.-L. Chang, “Tone reproduction: A perspective from luminance-driven perceptual grouping,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, pp. 369–376, 2005.

[35] Y. Rubner, C. Tomasi, and L. J. Guibas, “A metric for distributions with applications to image databases,” in ICCV ’98: Proceedings of the Sixth International Conference on Computer Vision, (Washington, DC, USA), p. 59, IEEE Computer Society, 1998.

[36] D. Lischinski, Z. Farbman, M. Uyttendaele, and R. Szeliski, “Interactive local adjustment of tonal values,” ACM Trans. Graph., vol. 25, no. 3, pp. 646–653, 2006.


[37] C.-K. Liang, W.-C. Chen, and N. Gelfand, “TouchTone: Interactive local image adjustment using point-and-swipe,” in Computer Graphics Forum (Proc. Eurographics), p. to appear, 2010.

[38] C. Hickey, W. van Zoest, and J. Theeuwes, “The time course of exogenous and endogenous control of covert attention,” Experimental Brain Research, November 2009.

[39] J. Nachmias and R. V. Sansbury, “Letter: Grating contrast: discrimination may be better than detection,” Vision Research, vol. 14, pp. 1039–1042, October 1974.

[40] F. Pestilli, S. Ling, and M. Carrasco, “A population-coding model of attention’s influence on contrast response: Estimating neural effects from psychophysical data,” Vision Research, vol. 49, no. 10, pp. 1144–1153, 2009.

[41] A. M. Treisman and G. Gelade, “A feature-integration theory of attention,” Cognitive Psychology, vol. 12, no. 1, pp. 97–136, 1980.

[42] L. Itti, “Automatic foveation for video compression using a neurobiological model of visual attention,” IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1304–1318, 2004.

[43] L. Itti and N. Dhavale, “Realistic avatar eye and head animation using a neurobiological model of visual attention,” in Proc. SPIE, pp. 64–78, SPIE Press, 2003.

[44] W. Reichardt, “Evaluation of optical motion information by movement detectors,” J Comp Physiol A, vol. 161, pp. 533–547, Sep 1987.

[45] “Contrast (vision),” Wikipedia. http://en.wikipedia.org/wiki/Contrast_(vision).

[46] L. Meylan, D. Alleysson, and S. Süsstrunk, “Model of retinal local adaptation for the tone mapping of color filter array images,” Journal of the Optical Society of America A, vol. 24, pp. 2807–2816, 2007.


[47] R. Mantiuk, S. Daly, K. Myszkowski, and H.-P. Seidel, “Predicting visible differences in high dynamic range images - model and its calibration,” in Human Vision and Electronic Imaging X, IS&T/SPIE’s 17th Annual Symposium on Electronic Imaging (2005), vol. 5666, pp. 204–214, 2005.
