B.3 Two-Image Alignment

Image alignment has many applications for photographic images, for example, object tracking, image stitching, and panoramic photography [64]. Fig. B.1 shows an example of stitching two images.

Given an image I and a transformation function T with a parameter vector u, the transformed image I′ is derived by the following equation.

I′(x) = I(T(x)). (B.14)
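For concreteness, the following sketch (not part of the original text) illustrates Eq. B.14 for the simplest case, a pure integer translation T(x) = x + u; the function name warp_translate and the nearest-neighbor, clamped sampling are illustrative assumptions.

import numpy as np

def warp_translate(I, u):
    # Compute I'(x) = I(T(x)) with T(x) = x + u, where u = (ty, tx) in whole pixels.
    ty, tx = u
    H, W = I.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(ys + ty, 0, H - 1)  # clamp T(x) to the image domain (an illustrative choice)
    src_x = np.clip(xs + tx, 0, W - 1)
    return I[src_y, src_x]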

Given two images IA and IB, the two-image alignment problem is to search for a u that transforms the locations of all pixels in IA such that the similarity between the transformed IA and IB is maximized. The simplest solution to this problem is the brute-force method, which tests all possible u. However, this method is impracticable when the images are large or the dimension of u is greater than two, for example, when the transformation includes shift, rotation, and scaling. The level-of-detail (LOD) method is more efficient than the brute-force method and is easy to implement, but it may converge to a false solution. The most popular alignment method is the Lucas-Kanade method (LKM) [67]. The core idea of LKM is to iteratively compute the increment of u with the Levenberg-Marquardt algorithm such that the sum of squared errors (SSE) between the two images is minimized.

Figure B.1: Image alignment. (a) IA. (b) IB. (c) The alignment result of IA and IB.
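As a concrete illustration of the brute-force method in the simplest case, the sketch below (an assumption-laden example, not taken from the text) tests every integer shift u = (ty, tx) inside a small window and keeps the one with the minimum SSE; the window size and the wrap-around shifting via np.roll are illustrative simplifications.

import numpy as np

def brute_force_align(I_A, I_B, max_shift=10):
    # Exhaustively test all integer shifts in [-max_shift, max_shift]^2 and
    # return the u that minimizes the SSE between the shifted I_A and I_B.
    best_u, best_sse = (0, 0), np.inf
    for ty in range(-max_shift, max_shift + 1):
        for tx in range(-max_shift, max_shift + 1):
            candidate = np.roll(I_A, (ty, tx), axis=(0, 1))  # wrap-around shift (simplification)
            sse = np.sum((candidate.astype(np.float64) - I_B) ** 2)
            if sse < best_sse:
                best_u, best_sse = (ty, tx), sse
    return best_u

Even in this two-parameter case, the search costs (2·max_shift + 1)² SSE evaluations, which indicates why the brute-force method becomes impracticable once rotation and scaling are added to u.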

However, because LKM is based on the SSE, its alignment result can fail when the two images are taken under different imaging conditions.

Mutual information can also be used to solve the two-image alignment problem. Viola [51] proposed an alignment method that maximizes the mutual information of the two images. Dowson and Bowden [53] combined mutual information with LKM to devise an efficient alignment algorithm.
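To make the similarity measure concrete, the sketch below (an illustrative example, not the exact procedure of the cited works) estimates the mutual information of two overlapping images from their joint intensity histogram, which is the quantity maximized in [51] and [53]; the bin count is an arbitrary assumption.

import numpy as np

def mutual_information(I_A, I_B, bins=32):
    # Joint intensity histogram of the two images, normalized to a joint distribution.
    joint, _, _ = np.histogram2d(I_A.ravel(), I_B.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal over I_A intensities
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal over I_B intensities
    nz = p_ab > 0                           # skip zero-probability bins to avoid log(0)
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz]))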

Another class of alignment methods is based on analysis in the frequency domain. Frank used the cross-correlation (Eq. B.5) to estimate the transformation from the frequency-domain representations of the images [38]. Like LKM, this method cannot align images taken with different imaging modalities.
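The frequency-domain idea can be sketched as follows for a pure translation (again an illustrative example under simplifying assumptions, not the exact procedure of [38]): the circular cross-correlation of the two images is computed with the FFT, and the location of its peak gives the relative shift.

import numpy as np

def estimate_shift_fft(I_A, I_B):
    # Circular cross-correlation via the FFT; the peak index k satisfies
    # I_A(x) ~ I_B(x - k) up to wrap-around and noise.
    corr = np.fft.ifft2(np.fft.fft2(I_A) * np.conj(np.fft.fft2(I_B))).real
    ty, tx = np.unravel_index(np.argmax(corr), corr.shape)
    H, W = I_A.shape
    if ty > H // 2:  # map wrapped indices to signed shifts
        ty -= H
    if tx > W // 2:
        tx -= W
    return ty, tx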

Bibliography

[1] R. Meuli, Y. Hwu, J. H. Je, and G. Margaritondo. Synchrotron radiation in radiology: radiology techniques based on synchrotron sources. European Radiology, 14(9):1550–1560, 2004.

[2] G. Shenoy. Basic characteristics of synchrotron radiation. Structural Chemistry, 14(1):3–14, 2003.

[3] F. Zernike. How I discovered phase contrast. Science, 121:345–349, 1955.

[4] F. Natterer. The Mathematics of Computerized Tomography. SIAM, 2001.

[5] M. Ikits, J. Kniss, A. Lefohn, and C. Hansen. Volume rendering techniques, chapter 39, pages 667–692. GPU Gems: Programming Techniques, Tips, and Tricks for Real-Time Graphics. Addison Wesley, 2004.

[6] S. Dunne, S. Napel, and B. Rutt. Fast reprojection of volume data. In Conference on Visualization in Biomedical Computing, pages 11–18, 1990.

[7] T. Malzbender. Fourier volume rendering. ACM Transactions on Graphics, 12(3):233–250, 1993.

[8] J. Radon. On the determination of functions from their integral values along certain manifolds. IEEE Transactions on Medical Imaging, 5(4):170–176, 1986.

[9] L. A. Shepp and B. F. Logan. The Fourier reconstruction of a head section. IEEE Transactions on Nuclear Science, 21(3):21–43, 1974.

[10] R. N. Bracewell. The Fourier transform and its applications, 3rd edition. McGraw-Hill, 1999.

[11] J. W. Cooley and J. W. Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297–301, 1965.

[12] F. Xu and K. Mueller. Accelerating popular tomographic reconstruction algorithms on commodity PC graphics hardware. IEEE Transactions on Nuclear Science, 52(3):654–663, 2005.

[13] A. C. Kak and M. Slaney. Principles of computerized tomographic imaging. Society for Industrial and Applied Mathematics, 2001.

[14] J. I. Agulleiro and J. J. Fernandez. Fast tomographic reconstruction on multicore computers. Bioinformatics, 27(4):582–583, 2011.

[15] J. I. Agulleiro and J. J. Fernandez. Evaluation of a multicore-optimized implementation for tomographic reconstruction. PLoS ONE, 7(11), 2012.

[16] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In Proceedings of ACM SIGGRAPH Computer Graphics '87, volume 21, pages 163–169, 1987.

[17] N. Max. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2):99–108, 1995.

[18] B. T. Phong. Illumination for computer generated pictures. Communications of the ACM, 18(6):311–317, June 1975.

[19] J. Kniss, C. Hansen, P. Shirley, and A. McPherson. A model for volume lighting and modeling. IEEE Transactions on Visualization and Computer Graphics, 9:150–162, 2003.

[20] K. Engel, M. Kraus, and T. Ertl. High-quality pre-integrated volume rendering using hardware-accelerated pixel shading. In Proceedings of the ACM SIGGRAPH/Eurographics Workshop on Graphics Hardware, pages 9–16, 2001.

[21] G. Kindlmann, R. Whitaker, T. Tasdizen, and T. Möller. Curvature-based transfer functions for direct volume rendering: Methods and applications. In Proceedings IEEE Visualization 2003, pages 513–520, 2003.

[22] E. B. Lum and K. L. Ma. Lighting transfer functions using gradient aligned sampling. In Proceedings IEEE Visualization 2004, pages 289–296, 2004.

[23] J. J. Caban and P. Rheingans. Texture-based transfer functions for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 14(6):1364–1371, 2008.

[24] C. D. Correa and K. L. Ma. Size-based transfer functions: A new volume exploration technique. IEEE Transactions on Visualization and Computer Graphics, 14(6):1380–1387, 2008.

[25] C. D. Correa and K. L. Ma. The occlusion spectrum for volume classification and visualization. IEEE Transactions on Visualization and Computer Graphics, 15(6):1465–1472, 2009.

[26] C. D. Correa and K. L. Ma. Visibility histograms and visibility-driven transfer functions. IEEE Transactions on Visualization and Computer Graphics, 17(2):192–204, 2011.

[27] Y. Wu and H. Qu. Interactive transfer function design based on editing direct volume rendered images. IEEE Transactions on Visualization and Computer Graphics, 13(5):1027–1040, 2007.

[28] J. Zhou and M. Takatsuka. Automatic transfer function generation using contour tree controlled residue flow model and color harmonics. IEEE Transactions on Visualization and Computer Graphics, 15(6):1481–1488, 2009.

[29] E. LaMar, B. Hamann, and K. I. Joy. Multiresolution techniques for interactive texture-based volume visualization. In Proceedings of the Conference on Visualization '99: Celebrating Ten Years, VIS '99, pages 355–361, 1999.

[30] C. Wang and H. W. Shen. LOD map: A visual interface for navigating multiresolution volume visualization. IEEE Transactions on Visualization and Computer Graphics, 12:1029–1036, 2006.

[31] M. Levoy. Volume rendering using the Fourier projection-slice theorem. In Proceedings of Graphics Interface, pages 61–69, 1992.

[32] I. Viola, A. Kanitsar, and M. E. Gröller. GPU-based frequency domain volume rendering. In Proceedings of the 20th Spring Conference on Computer Graphics, pages 55–64, 2004.

[33] A. Entezari, R. Scoggins, T. Möller, and R. Machiraju. Shading for Fourier volume rendering. In Proceedings of the 2002 IEEE Symposium on Volume Visualization and Graphics, pages 131–138, 2002.

[34] Z. Nagy, G. Müller, and R. Klein. Classification for Fourier volume rendering. In Proceedings of Computer Graphics and Applications, pages 51–58, 2004.

[35] C. C. Cheng and Y. T. Ching. Transfer function design for Fourier volume rendering and implementation using GPU. In Proceedings of SPIE Medical Imaging, 2008.

[36] C. C. Cheng and Y. T. Ching. Real-time adjustment of transfer function for Fourier volume rendering. Journal of Electronic Imaging, 20(4), 2011.

[37] L. Piegl and W. Tiller. The NURBS book, 2nd edition. Springer-Verlag, 1997.

[38] J. Frank. Three-dimensional electron microscopy of macromolecular assemblies: visualization of biological molecules in their native state. Oxford University Press, 2006.

[39] T. R. Shaikh, H. Gao, W. T. Baxter, F. J. Asturias, N. Boisset, A. Leith, and J. Frank. SPIDER image processing for single-particle reconstruction of biological macromolecules from electron micrographs. Nature Protocols, 3(12):1941–1974, 2008.

[40] C. C. Cheng, C. C. Chien, H. H. Chen, Y. Hwu, and Y. T. Ching. Image alignment for tomography reconstruction from synchrotron x-ray microscopic images. PLoS ONE, 9(1), 2014.

[41] H. H. Chen, C. C. Chien, C. Petibois, C. L. Wang, Y. S. Chu, S. F. Lai, T. E. Hua, Y. Y. Chen, X. Cai, I. M. Kempson, Y. Hwu, and G. Margaritondo. Quantitative analysis of nanoparticle internalization in mammalian cells by high resolution x-ray microscopy. Journal of Nanobiotechnology, 9(14), Apr. 2011.

[42] H. H. Chen, C. C. Chien, C. Petibois, C. L. Wang, Y. S. Chu, S. F. Lai, T. E. Hua, Y. Y. Chen, X. Cai, I. M. Kempson, Y. Hwu, and G. Margaritondo. Quantitative analysis of nanoparticle internalization in mammalian cells by high resolution x-ray microscopy. Journal of Nanobiotechnology, 9(1):14, 2011.

[43] R. C. Gonzalez and R. E. Woods. Digital image processing, 3rd edition. Prentice Hall, 2007.

[44] R. Szeliski. Computer vision: algorithms and applications. Springer, Nov. 2010.

[45] C. Harris and M. Stephens. A combined corner and edge detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, Aug. 1988.

[46] T. Kadir and M. Brady. Saliency, scale and image description. International Journal of Computer Vision, 45(2):83–105, Nov. 2001.

[47] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, Nov. 2004.

[48] H. Bay, T. Tuytelaars, and L. V. Gool. SURF: Speeded up robust features. In 9th European Conference on Computer Vision, Graz, Austria, May 2006.

[49] S. Rady, A. Wagner, and E. Badreddin. Entropy-based features for robust place recognition. In IEEE International Conference on Systems, Man and Cybernetics, Singapore, 2008.

[50] S. Suri, P. Schwind, P. Reinartz, and J. Uhl. Combining mutual information and scale invariant feature transform for fast and robust multisensor SAR image registration. In 75th Annual ASPRS Conference, American Society for Photogrammetry and Remote Sensing, Baltimore, MD, USA, 2009.

[51] P. Viola. Alignment by maximization of mutual information. International Journal of Computer Vision, 24(2):137–154, 1997.

[52] J. P. W. Pluim, J. B. A. Maintz, and M. A. Viergever. Mutual information based registration of medical images: a survey. IEEE Transactions on Medical Imaging, 22(8):986–1004, Jul. 2003.

[53] N. Dowson and R. Bowden. Mutual information for Lucas-Kanade tracking: an inverse compositional formulation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(1):180–185, Jan. 2008.

[54] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.

[55] K. Shafique and M. Shah. A noniterative greedy algorithm for multiframe point correspondence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(1):51–65, Jan. 2005.

[56] F. Tang and H. Tao. Probabilistic object tracking with dynamic attributed relational feature graph. IEEE Transactions on Circuits and Systems for Video Technology, 18(8):1064–1074, Aug. 2008.

[57] R. Szeliski and H. Y. Shum. Creating full view panoramic image mosaics and environment maps. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pages 251–258, New York, NY, USA, 1997.

[58] M. Brown and D. G. Lowe. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision, 74(1):59–73, Aug. 2007.

[59] J. Kopf, M. Uyttendaele, O. Deussen, and M. F. Cohen. Capturing and viewing gigapixel images. ACM Transactions on Graphics (TOG), 26(3):93:1–93:10, July 2007.

[60] P. J. Burt and E. H. Adelson. A multiresolution spline with application to image mosaics. ACM Transactions on Graphics, 2:217–236, October 1983.

[61] S. Lin, J. Gu, S. Yamazaki, and H. Y. Shum. Radiometric calibration from a single image. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), volume 2, pages 938–945, 2004.

[62] M. D. Grossberg and S. K. Nayar. Modeling the space of camera response functions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(10):1272–1282, 2004.

[63] P. Goebel, N. Belbachir, and M. Truppe. Blind background subtraction in dental panoramic x-ray images: An application approach. Lecture Notes in Computer Science, 3663:434–441, 2005.

[64] R. Szeliski. Image alignment and stitching: A tutorial. Foundations and Trends in Computer Graphics and Vision, 2(1):1–104, Dec. 2006.

[65] A. V. Oppenheim and R. W. Schafer. Discrete-time signal processing. Prentice Hall Press, 3rd edition, 2009.

[66] C. E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27(3):379–423, Jul. 1948.

[67] B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of Imaging Understanding Workshop 1981, pages 121–131, 1981.