# Image Registration of Two-dimensional Computed Tomography between Different Patients


## 摘要 (Abstract)

Advisor: Dr. Tang-Kai Yin, Institute of Computer Science and Information Engineering, National University of Kaohsiung
Advisor: Dr. Jong-Jy Shyu, Institute of Electrical Engineering, National University of Kaohsiung
Student: Hung-Chi Lu, Institute of Electrical Engineering, National University of Kaohsiung

With the progress of medical technology and the rapid development of electronic products, researchers have devised many methods to help medical personnel diagnose conditions more easily and treat them more accurately. In the field of medical imaging, registration is a very important topic: through suitable registration methods, medical images can be brought to a common basis before any further processing or data analysis is performed. This thesis analyzes and studies computed tomography (CT) images from different patients. First, the unwanted part of each CT image (the platform the patient lies on) is removed by erosion. The processed images are then rotated and translated with the ICP (iterative closest point) algorithm. Finally, control points are selected in the two images being compared and used for fine adjustment, so that both the external and the internal contours of the two CT images become more similar.

Keywords: image registration, ICP algorithm, erosion, dilation, control points, affine transformation (also known as the six-parameter transformation).

## ABSTRACT

Image Registration of Two-dimensional Computed Tomography between Different Patients

Advisor: Dr. Tang-Kai Yin, Institute of Computer Science and Information Engineering, National University of Kaohsiung
Advisor: Dr. Jong-Jy Shyu, Institute of Electrical Engineering, National University of Kaohsiung
Student: Hung-Chi Lu, Institute of Electrical Engineering, National University of Kaohsiung

With advances in medical technology and the rapid development of electronic products, researchers have recently offered various methods that help medical personnel diagnose specific diseases more easily, so that treatment can be provided to patients more accurately. In the field of medical imaging, registration is a very important issue: through suitable registration methods, medical images can be normalized to a common reference before medical personnel make decisions or further analyses. In this research the differences among CT images of different patients are studied. First, the unwanted parts of the CT images (the platform that patients lie on) are removed by the erosion method. Then, the modified CT images are rotated and translated using the ICP algorithm. Finally, significant control points are selected as matching points, and all other pixels are adjusted based on these control points. These steps lead to the registration of the external and internal contours of different images.

Keywords: image registration, ICP algorithm, erosion, dilation, control point selection, affine transformation.

## 致謝 (Acknowledgments)

For the successful completion of this thesis, I am most grateful to my advisor, Dr. Tang-Kai Yin. Under his patient and thorough guidance I went from not knowing how to do research to completing this work, and throughout that process he gave me tremendous help. Especially during the final writing stage, he repeatedly took time out of his busy schedule to point out where the thesis should be improved and strengthened. During my time studying under him, I received his care not only in research and in dealing with people, but in daily life as well; he always guided me kindly, so that I could concentrate on my research without feeling too much pressure. I would also like to give special thanks to Dr. Jong-Jy Shyu, who always encouraged me at the right moments when I felt lost and helpless, reminded me to take care of my health, and shared his research methods with me.

I also thank my oral examination committee members, Dr. 黃文楨 and Dr. 柯正雯, for taking the time to attend my defense and for their valuable advice and encouragement regarding my future research.

I thank all my classmates, seniors, and juniors in the graduate institute: 科瑋, 長隆, 正賢, 亮嘉, and 宗益, who encouraged one another and helped each other with coursework. Special thanks go to my junior 祥任, who gave me great help with my programs, and to 岳洋 and 懋忠 for their help during this period; I wish them all the best in the future.

Of course, this thesis also owes much to my dear family: because of your consideration and encouragement, I was able to concentrate on finishing this thesis and realizing my ideals. Finally, I offer my deepest gratitude to all the friends who helped and encouraged me during this time.

Hung-Chi Lu
Institute of Electrical Engineering, National University of Kaohsiung
January 2008

## Contents

- 摘要
- ABSTRACT
- 致謝
- Contents
- List of Figures
- List of Tables
- Chapter 1 Introduction
  - 1.1 Motivation
  - 1.2 Contribution
  - 1.3 Organization
- Chapter 2 Background and Related Work
  - 2.1 Mathematical Morphology for Erosion and Dilation
    - 2.1.1 Dilation
    - 2.1.2 Erosion
    - 2.1.3 Example
  - 2.2 ICP Algorithm
    - 2.2.1 Set Data Shape and Model Shape
    - 2.2.2 Search for Correspondence
    - 2.2.3 Computing Geometric Transformation
    - 2.2.4 Update the Location Coordinates
    - 2.2.5 Mean Square Error and Stop Condition
  - 2.3 Control Points Selection and Affine Transformation
    - 2.3.1 Control Points Selection
    - 2.3.2 Affine Transformation
- Chapter 3 Hybrid ICP Algorithm and Affine Transformation (ICPAT)
  - 3.1 Introduction
  - 3.2 The Experiment Steps
    - 3.2.1 Erosion and Dilation
    - 3.2.2 ICP Algorithm
    - 3.2.3 Control Point Selection
    - 3.2.4 Affine Transformation
    - 3.2.5 Example
- Chapter 4 Experiment and Discussion
  - 4.1 The Outcomes of the ICP, AT and ICPAT Methods between Patient 1 and Patient 3
    - 4.1.1 The Results Using ICP
    - 4.1.2 The Results Using AT
    - 4.1.3 The Results Using ICPAT
  - 4.2 The Results of the ICP, AT, and ICPAT among Patient 1, Patient 2 and Patient 3
    - 4.2.1 The Results Using ICP
    - 4.2.2 The Results Using AT
    - 4.2.3 The Results Using ICPAT
  - 4.3 Conclusion
- Chapter 5 Conclusions and Future Works
  - 5.1 Conclusions
  - 5.2 Future Work
- References

## List of Figures

- Figure 2.1 A binary image requiring careful definition of object and background connectivity
- Figure 2.2 A binary image containing two object sets A and B. The three pixels in B are "color-coded", as is their effect in the result; (a) and (b) show dilation and erosion respectively
- Figure 2.3 The standard structuring elements N4 and N8
- Figure 2.4 The rotation of the point A(x, y) by an angle α gives the point A'(x', y')
- Figure 2.5 Scaling is a transformation that enlarges or diminishes objects
- Figure 2.6 A shear leaves fixed all points on one axis; other points are shifted parallel to the axis by a distance proportional to their perpendicular distance from the axis
- Figure 3.1 The flow chart for ICPAT
- Figure 3.2 The standard structuring element (SE) N4
- Figure 3.3 The erosion of a binary image
- Figure 3.4 The process for a grayscale image
- Figure 3.5 From top to bottom: the original image, the eroded image with denoising, and the dilated image. The images on the left are plans and those on the right are with pixels
- Figure 3.6 The procedure for erosion
- Figure 3.7 The procedure for dilation
- Figure 3.8 The procedure for denoising
- Figure 3.9 The procedure for dilation
- Figure 3.10 The flow chart for the ICP algorithm
- Figure 3.11 The procedure for the ICP algorithm
- Figure 3.12 The procedure for control point selection
- Figure 3.13 The procedure for control point selection
- Figure 3.14 The procedure for affine transformation
- Figure 3.15 An example with all steps of the experiment
- Figure 4.1 The best case of the whole body
- Figure 4.2 The worst case of the whole body
- Figure 4.3 The best case of the head
- Figure 4.4 The worst case of the body
- Figure 4.5 The best case of the whole body
- Figure 4.6 The worst case of the whole body
- Figure 4.7 The best case of the body
- Figure 4.8 The best case of the head
- Figure 4.9 The best case of the whole body
- Figure 4.10 The worst case of the whole body
- Figure 4.11 The worst case of the head
- Figure 4.12 The best case of the body
- Figure 4.13 The pictures with ICP between patient 1 and patient 2
- Figure 4.14 The pictures with ICP between patient 3 and patient 2
- Figure 4.15 The pictures with AT between patient 1 and patient 2
- Figure 4.16 The pictures with AT between patient 3 and patient 2
- Figure 4.17 The pictures with ICPAT between patient 1 and patient 2
- Figure 4.18 The pictures with ICPAT between patient 3 and patient 2

## List of Tables

- Table 3.1 Simple rules for dilation and erosion
- Table 4.1 A comparison of three methods (ICP, AT, ICPAT) among three patients using the overlap measure (OM)
- Table 4.2 The mean values with max and min for the original, ICP, AT and ICPAT images between patient 1 and patient 3

## Chapter 1 Introduction

### 1.1 Motivation

With advances in medical technology and the rapid development of electronic products, researchers have recently offered various methods to help medical personnel diagnose specific diseases more easily, so that treatment can be provided to patients more accurately. In the field of medical imaging, registration is a very important issue. Through suitable registration methods, medical images can be normalized to a common reference before medical personnel make decisions or further analyses.

### 1.2 Contribution

The ICP (iterative closest point) algorithm [4] is a composition of rotations and translations, and is suitable for registering large-scale (global) differences between images. On the other hand, the affine transformation (AT) method [21] is a composition of rotations, translations, dilations, and shears, and is suitable for registering small-scale (local) differences. We propose a hybrid of the ICP algorithm and the AT method (ICPAT). Experiments show that this hybrid method performs better in image registration than the ICP algorithm or the AT method individually, and that it is suitable for both large-scale and small-scale differences.

### 1.3 Organization

The remainder of this thesis is organized as follows. The background and related work are described in Chapter 2. Chapter 3 describes the main method of this thesis. The experimental results and discussions are presented in Chapter 4. Finally, the conclusions and future work are given in Chapter 5.

## Chapter 2 Background and Related Work

In this chapter, we describe some background knowledge and review research related to our study. Our overview focuses on three major topics: (1) erosion and dilation [2, 3, 13]; (2) the ICP (iterative closest point) algorithm [1, 4]; and (3) control point selection [16, 17] and affine transformations [20, 21, 22].

### 2.1 Mathematical Morphology for Erosion and Dilation

Mathematical morphology (MM) was born in 1964 from the collaborative work of Georges Matheron and Jean Serra at the École des Mines de Paris, France. Matheron supervised the PhD thesis of Serra, devoted to the quantification of mineral characteristics from thin cross sections; this work resulted in a novel practical approach, as well as theoretical advancements in integral geometry and topology. In 1968, the Centre de Morphologie Mathématique was founded by the École des Mines de Paris in Fontainebleau, France, led by Matheron and Serra. During the rest of the 1960s and most of the 1970s, MM dealt essentially with binary images, treated as sets, and generated a large number of binary operators and techniques: hit-or-miss transform, dilation, erosion, opening, closing, granulometry, thinning, skeletonization, ultimate erosion, conditional bisector, and others. A random approach was also developed, based on novel image models. Most of the work in that period was done in Fontainebleau. From the mid-1970s to the mid-1980s, MM was generalized to grayscale functions and images as well. Besides extending the main concepts (such as dilation and erosion) to functions, this generalization yielded new operators, such as morphological gradients,

the top-hat transform, and the watershed (MM's main segmentation approach).

We now define the notation. $f(x,y)$ and $b(x,y)$ are digital image functions, where $f(x,y)$ is the input image and $b(x,y)$ is a structuring element. If $Z$ denotes the set of integers, the assumption is that $(x,y)$ are integers from $Z \times Z$ and that $f$ and $b$ are functions that assign a gray-level value (a real number from the set of real numbers $R$) to each distinct pair of coordinates $(x,y)$. If the gray levels are also integers, $Z$ replaces $R$.

### 2.1.1 Dilation

The dilation method [2, 3] was introduced by Georges Matheron and Jean Serra; it is a basic operation in morphological image processing. N. Desikachari and Robert M. Haralick [14] propose recursive binary dilation and erosion using digital line structuring elements in arbitrary orientations, and Su Chen and Robert M. Haralick [15] propose recursive erosion and dilation transforms. A detailed description follows.

Gray-scale dilation of $f$ by $b$, denoted $f \oplus b$, is defined as

$$[f \oplus b](s,t) = \max_{(x,y) \in D_b} \{ f(s-x,\, t-y) + b(x,y) \} \tag{2.1.1-1}$$

where $D_b$ is the domain of $b$. Keep in mind that $f$ and $b$ are functions, rather than sets as in binary morphology. The condition that $(s-x)$ and $(t-y)$ have to be in the domain of $f$, and $x$ and $y$ in the domain of $b$, is analogous to the condition in the binary definition of dilation, where the two sets have to overlap by at least one element. Note

also that the form of Eq. (2.1.1-1) is similar to 2-D convolution, with the max operation replacing the sums of convolution and addition replacing the products of convolution.

We illustrate the notation and mechanics of Eq. (2.1.1-1) by means of simple 1-D functions. For functions of one variable, Eq. (2.1.1-1) reduces to

$$[f \oplus b](s) = \max_{x \in D_b} \{ f(s-x) + b(x) \} \tag{2.1.1-2}$$

Recall from the discussion of convolution that $f(-x)$ is simply $f(x)$ mirrored with respect to the origin of the $x$ axis. As in convolution, the function $f(s-x)$ moves to the right for positive $s$ and to the left for negative $s$. The requirements that the value of $(s-x)$ be in the domain of $f$ and that the value of $x$ be in the domain of $b$ imply that $f$ and $b$ overlap. Eq. (2.1.1-1) could be written so that $b$ undergoes translation instead of $f$. However, if the domain of $b$ is smaller than the domain of $f$ (a condition almost always found in practice), the form given in Eq. (2.1.1-1) is simpler in terms of indexing and achieves the same result; conceptually, $f$ sliding past $b$ is no different from $b$ sliding past $f$.

The general effect of performing dilation on a gray-scale image is twofold:

1. If all the values of the structuring element are positive, the output image tends to be brighter than the input.
2. Dark details are either reduced or eliminated, depending on how their values and shapes relate to the structuring element used for dilation.
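To make Eq. (2.1.1-2) concrete, here is a minimal pure-Python sketch of 1-D gray-scale dilation (the function name and the toy signal are our own illustration, not code from this thesis; the structuring element is given as a map from offset to height):

```python
def grayscale_dilate_1d(f, b):
    """1-D gray-scale dilation, Eq. (2.1.1-2):
    [f (+) b](s) = max{ f(s - x) + b(x) } over offsets x in the domain of b.
    f is a list of gray levels; b maps offset x -> structuring-element height b(x)."""
    n = len(f)
    out = []
    for s in range(n):
        # only offsets for which s - x falls inside the domain of f contribute
        vals = [f[s - x] + h for x, h in b.items() if 0 <= s - x < n]
        out.append(max(vals))
    return out

signal = [0, 1, 3, 1, 0, 0, 2, 0]
se = {-1: 0, 0: 1, 1: 0}   # 3-point element, centre raised by 1
dilated = grayscale_dilate_1d(signal, se)
print(dilated)   # every output value >= the input: the signal gets brighter
```

Because every value of this structuring element is non-negative, each output sample is at least as large as the corresponding input sample, illustrating effect (1) above.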

### 2.1.2 Erosion

The erosion method [2, 3] is defined analogously to dilation; the difference is that it shrinks image features. Here we use the basic form, described in detail below. Gray-scale erosion, denoted $f \ominus b$, is defined as

$$[f \ominus b](s,t) = \min_{(x,y) \in D_b} \{ f(s+x,\, t+y) - b(x,y) \} \tag{2.1.2-1}$$

The condition that $(s+x)$ and $(t+y)$ have to be in the domain of $f$, and $x$ and $y$ in the domain of $b$, is analogous to the condition in the binary definition of erosion, where the structuring element has to be completely contained in the set being eroded. Note that the form of Eq. (2.1.2-1) is similar to 2-D correlation, with the min operation replacing the sums of correlation and subtraction replacing the products of correlation.

We illustrate the mechanics of Eq. (2.1.2-1) by eroding a simple 1-D function. For functions of one variable, the expression for erosion reduces to

$$[f \ominus b](s) = \min_{x \in D_b} \{ f(s+x) - b(x) \} \tag{2.1.2-2}$$

As in correlation, the function $f(s+x)$ moves to the left for positive $s$ and to the right for negative $s$. The requirements that $(s+x)$ be in the domain of $f$ and that $x$ be in the domain of $b$ imply that the range of $b$ is completely contained within the range of the displaced $f$. Finally, unlike the binary definition of erosion, $f$, rather than the structuring element $b$, is shifted. Eq. (2.1.2-1) could be written so that $b$ is the function translated, resulting in a more complicated expression in terms of indexing; conceptually, $f$ sliding past $b$ is the same as $b$ sliding past $f$.
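The 1-D erosion of Eq. (2.1.2-2) can be sketched the same way (again our own illustration, not the thesis code):

```python
def grayscale_erode_1d(f, b):
    """1-D gray-scale erosion, Eq. (2.1.2-2):
    [f (-) b](s) = min{ f(s + x) - b(x) } over offsets x in the domain of b."""
    n = len(f)
    out = []
    for s in range(n):
        # only offsets for which s + x falls inside the domain of f contribute
        vals = [f[s + x] - h for x, h in b.items() if 0 <= s + x < n]
        out.append(min(vals))
    return out

signal = [0, 1, 3, 1, 0, 0, 2, 0]
se = {-1: 0, 0: 1, 1: 0}
eroded = grayscale_erode_1d(signal, se)
print(eroded)   # every output value <= the input: the signal gets darker
```

Each output sample is at most the corresponding input sample, matching the darkening effect described below; negating the result and dilating the negated signal with the reflected element reproduces the duality of Eq. (2.1.2-3).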

The general effect of performing erosion on a gray-scale image is twofold:

1. If all the elements of the structuring element are positive, the output image tends to be darker than the input image.
2. Bright details in the input image that are smaller in area than the structuring element are reduced, with the degree of reduction determined by the gray-level values surrounding the bright detail and by the shape and amplitude values of the structuring element itself.

Gray-scale dilation and erosion are duals with respect to function complementation and reflection. That is,

$$[f \ominus b]^c(s,t) = (f^c \oplus \hat{b})(s,t) \tag{2.1.2-3}$$

where $f^c = -f(x,y)$ and $\hat{b} = b(-x,-y)$.

### 2.1.3 Example

Figure 2.1 A binary image requiring careful definition of object and background connectivity.

(a) Dilation D(A, B)   (b) Erosion E(A, B)

Figure 2.2 A binary image containing two object sets A and B. The three pixels in B are "color-coded", as is their effect in the result; (a) and (b) show dilation and erosion respectively.

While either set A or B can be thought of as an "image", A is usually considered the image and B is called a structuring element (SE). The structuring element is to mathematical morphology what the convolution kernel is to linear filter theory. Dilation, in general, causes objects to grow in size; erosion causes objects to shrink. The amount and the way that they grow or shrink depend upon the choice of the structuring element: dilating or eroding without specifying the structuring element makes no more sense than trying to lowpass filter an image without specifying the filter. The two most common structuring elements (given a Cartesian grid) are the 4-connected and 8-connected sets, N4 and N8.
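The binary case can be sketched directly from these rules. A small pure-Python illustration with the 4-connected element N4 follows (our own conventions: images are 0/1 lists of lists, and since N4 is symmetric the reflection of the SE can be ignored; this is not the thesis implementation):

```python
# Binary dilation and erosion with the 4-connected structuring element N4.
N4 = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def binary_dilate(img, se=N4):
    rows, cols = len(img), len(img[0])
    # a pixel turns on if the SE, centred on it, touches any object pixel
    return [[int(any(0 <= r + dr < rows and 0 <= c + dc < cols and img[r + dr][c + dc]
                     for dr, dc in se))
             for c in range(cols)] for r in range(rows)]

def binary_erode(img, se=N4):
    rows, cols = len(img), len(img[0])
    # a pixel survives only if the SE, centred on it, fits entirely in the object
    return [[int(all(0 <= r + dr < rows and 0 <= c + dc < cols and img[r + dr][c + dc]
                     for dr, dc in se))
             for c in range(cols)] for r in range(rows)]

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
grown = binary_dilate(img)    # the 3x3 square grows by one 4-connected layer
shrunk = binary_erode(img)    # only the centre pixel survives
```

Dilation grows the 3×3 square by one 4-connected layer, while erosion leaves only the pixel whose entire N4 neighbourhood lies inside the object, illustrating how the choice of SE governs the amount of growth or shrinkage.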

Figure 2.3 The standard structuring elements N4 and N8.

### 2.2 ICP Algorithm

The ICP (iterative closest point) algorithm [4] was developed by Besl and McKay and is usually used to register two given point sets in a common coordinate system. The algorithm calculates the registration iteratively: in each iteration step, it selects the closest points as correspondences and calculates the transformation (rotation and translation). In each ICP iteration, the transformation can be calculated by any of four methods: an SVD-based method of Arun et al. [30], a quaternion method of Horn [31], an algorithm using orthonormal matrices of Horn et al. [32], and a calculation based on dual quaternions of Walker et al. [33]. D. Arthur and S. Vassilvitskii [5] propose an application to the k-means method, and Du et al. [6] propose an extension of the ICP algorithm considering a scale factor.

The basic idea of the ICP algorithm is as follows. The two data models to be registered are composed of geometric structures, such as points, lines, triangles, curves, and surfaces. The goal is to find an appropriate relationship that minimizes the distance between the data models, calculating a least-squares geometric transformation matrix. With several repeated calculations toward a best result, a rotation

matrix and a translation matrix are further obtained. The two data models can then be aligned by these geometric transformations, thereby achieving the purpose of registration.

### 2.2.1 Set Data Shape and Model Shape

The ICP algorithm must initially set the data shape (denoted D) and the model shape (denoted M); the shape that the data are aligned to is called the model shape. The data shape must be given as a set of points: if the original data model is not a set of points, we need to sample it so that it becomes one. Because there is no restriction on the model shape, we must also set a threshold value as additional information, and this threshold terminates the ICP algorithm. The threshold is compared against the value calculated in each iteration: if the value is greater than the threshold, the procedure repeats and runs the next iteration; otherwise (if the value is less than the threshold), the procedure stops and the ICP algorithm terminates.

At the beginning (iteration 0), the data shape is initialized as $D_k (k = 0) = D_0$, where $D_0$ is the original data shape that we want to align. The model shape is denoted M.

### 2.2.2 Search for Correspondence

Next, we find a corresponding relationship between the two shapes, namely the shortest distance between corresponding points:

$$d(\mathbf{r}_1, \mathbf{r}_2) = \|\mathbf{r}_1 - \mathbf{r}_2\| = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2} \tag{2.2.2-1}$$

where $\mathbf{r}_1 = (x_1, y_1, z_1)$ and $\mathbf{r}_2 = (x_2, y_2, z_2)$.

The model shape may be any of the following: (1) a set of points, (2) a set of line segments, (3) a set of parametric curves, (4) a set of implicit curves, (5) a set of triangles, (6) a set of parametric surfaces, or (7) a set of implicit surfaces.

If the model shape is a set of points, the distance between the point $p$ and the point set $M$ is

$$d(p, M) = \min_{q_i \in M} d(p, q_i) \tag{2.2.2-2}$$

The closest point $q_i$ of $M$ satisfies the equality $d(p, q_i) = d(p, M)$.

If the model shape $M$ is a set of line segments, let $l$ be the line segment connecting the two points $\mathbf{r}_1$ and $\mathbf{r}_2$. The distance between the point $p$ and the line segment $l$ is

$$d(p, l) = \min_{u+v=1} \| u\mathbf{r}_1 + v\mathbf{r}_2 - p \| \tag{2.2.2-3}$$

where $u \in [0,1]$ and $v \in [0,1]$. Let $L$ be the set of $N_l$ line segments denoted $l_i$, and let $M = \{l_i\}$ for $i = 1, \ldots, N_l$. The distance between the point $p$ and the line segment set $M$ is

$$d(p, M) = \min_{i \in \{1, \ldots, N_l\}} d(p, l_i) \tag{2.2.2-4}$$
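The nearest-neighbour search of Eq. (2.2.2-2) and the segment distance of Eq. (2.2.2-3) might be sketched as follows (2-D points for brevity; the helper names are our own, not from this thesis):

```python
import math

def closest_point(p, model):
    """d(p, M) = min over q in M of d(p, q), Eq. (2.2.2-2); returns (q, distance)."""
    q = min(model, key=lambda m: math.dist(p, m))
    return q, math.dist(p, q)

def point_segment_dist(p, r1, r2):
    """d(p, l) = min over u + v = 1, u, v in [0, 1] of ||u*r1 + v*r2 - p||,
    Eq. (2.2.2-3), computed by projecting p onto the line and clamping."""
    vx, vy = r2[0] - r1[0], r2[1] - r1[1]
    wx, wy = p[0] - r1[0], p[1] - r1[1]
    len2 = vx * vx + vy * vy
    t = 0.0 if len2 == 0 else max(0.0, min(1.0, (wx * vx + wy * vy) / len2))
    cx, cy = r1[0] + t * vx, r1[1] + t * vy   # closest point on the segment
    return math.hypot(p[0] - cx, p[1] - cy)

model = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
q, d = closest_point((3.0, 0.0), model)          # q = (0.0, 0.0), d = 3.0
print(q, d)
print(point_segment_dist((1.0, 1.0), (0.0, 0.0), (2.0, 0.0)))   # 1.0
```

The projection-and-clamp step is equivalent to minimizing over $u + v = 1$ with $u, v \in [0,1]$: an interior minimum corresponds to the orthogonal projection, and a clamped parameter corresponds to one of the segment endpoints.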

The closest point $y_j$ on the line segment set $M$ satisfies the equality $d(p, y_j) = d(p, M)$.

If the model shape $M$ is a set of triangles, let $t$ be the triangle defined by the three points $\mathbf{r}_1 = (x_1, y_1, z_1)$, $\mathbf{r}_2 = (x_2, y_2, z_2)$, and $\mathbf{r}_3 = (x_3, y_3, z_3)$. The distance between the point $p$ and the triangle $t$ is

$$d(p, t) = \min_{u+v+w=1} \| u\mathbf{r}_1 + v\mathbf{r}_2 + w\mathbf{r}_3 - p \| \tag{2.2.2-5}$$

where $u \in [0,1]$, $v \in [0,1]$, and $w \in [0,1]$. Let $T$ be the set of $N_t$ triangles denoted $t_i$, and let $M = \{t_i\}$ for $i = 1, \ldots, N_t$. The distance between the point $p$ and the triangle set $M$ is

$$d(p, M) = \min_{i \in \{1, \ldots, N_t\}} d(p, t_i) \tag{2.2.2-6}$$

The closest point $y_j$ on the triangle set $M$ satisfies the equality $d(p, y_j) = d(p, M)$.

### 2.2.3 Computing Geometric Transformation

The geometric transformation of the ICP algorithm is a rigid-body transformation, which consists of a rotation matrix and a translation matrix. The information for this transformation is obtained from the corresponding points found above, so that the transformation can be applied to the original data shape to align it with the model shape. The correspondence is not taken against this iteration's updated data directly; instead, the set of closest points found in this iteration is used to form a

corresponding relationship with the original data shape, $(d_{i,0} \leftrightarrow q_{i,k})$. Suppose the corresponding points $d_{i,0}$ and $q_{i,k}$ form the sets $D$ and $Q$. First we compute the centers of mass of $D$ and $Q$ respectively:

$$\mu_d = \frac{1}{N_d} \sum_{i=1}^{N_d} d_{i,0}, \qquad \mu_q = \frac{1}{N_q} \sum_{i=1}^{N_q} q_{i,k} \tag{2.2.3-1}$$

where $N_d$ and $N_q$ are the numbers of points in $D$ and $Q$ respectively; $N_d = N_q$ because $D$ and $Q$ are in one-to-one correspondence. With the centers of mass known, we can calculate the cross-covariance matrix $\Sigma_{dq}$ between $D$ and $Q$:

$$\Sigma_{dq} = \frac{1}{N_d} \sum_{i=1}^{N_d} \left[ d_i\, q_i^t \right] - \mu_d\, \mu_q^t \tag{2.2.3-2}$$

From the antisymmetric part of $\Sigma_{dq}$,

$$A_{ij} = (\Sigma_{dq} - \Sigma_{dq}^T)_{ij} \tag{2.2.3-3}$$

we construct the $4 \times 4$ symmetric matrix $Q(\Sigma_{dq})$:

$$Q(\Sigma_{dq}) = \begin{bmatrix} \mathrm{tr}(\Sigma_{dq}) & \Delta^T \\ \Delta & \Sigma_{dq} + \Sigma_{dq}^T - \mathrm{tr}(\Sigma_{dq})\, I_3 \end{bmatrix}, \qquad \Delta = [A_{23} \ \ A_{31} \ \ A_{12}]^T \tag{2.2.3-4}$$

where $I_3$ is the $3 \times 3$ identity matrix. The unit eigenvector $q_R = [q_0 \ q_1 \ q_2 \ q_3]^t$

corresponding to the maximum eigenvalue of the matrix $Q(\Sigma_{dq})$ is selected as the optimal rotation. The rotation matrix is obtained from the following formula:

$$R(q_R) = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2+q_2^2-q_1^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2+q_3^2-q_1^2-q_2^2 \end{bmatrix} \tag{2.2.3-5}$$

The translation consists of the translation vector $q_T = [q_4 \ q_5 \ q_6]^t$:

$$T = q_T = \mu_q - R(q_R)\, \mu_d \tag{2.2.3-6}$$

### 2.2.4 Update the Location Coordinates

After computing the geometric transformation, we can update the location coordinates of the original data shape, recording the updated result as the input of the next iteration:

$$D_{k+1} = R \cdot D_0 + T \tag{2.2.4-1}$$

Note that the coordinates of the data shape updated in the previous iteration are used only to find the correspondence between the two shapes, whereas the geometric transformation itself is computed from the original coordinates of the data shape; the two uses are definitely different.
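The quaternion-to-rotation formula of Eq. (2.2.3-5) can be exercised with a small sketch (the helper name is our own, not from this thesis):

```python
import math

def quat_to_rot(q0, q1, q2, q3):
    """Rotation matrix R(q_R) of Eq. (2.2.3-5) for a unit quaternion (q0, q1, q2, q3)."""
    return [
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 + q2*q2 - q1*q1 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 + q3*q3 - q1*q1 - q2*q2],
    ]

# a rotation of 90 degrees about the z-axis: q = (cos 45, 0, 0, sin 45)
c = math.cos(math.pi / 4)
R = quat_to_rot(c, 0.0, 0.0, c)
# the first column of R is the image of the x-axis
x_image = [R[0][0], R[1][0], R[2][0]]
print(x_image)
```

For this quaternion the first column of $R$ is $(0, 1, 0)$: the $x$-axis is mapped onto the $y$-axis, as a 90° rotation about $z$ should do.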

### 2.2.5 Mean Square Error and Stop Condition

Finally, we compute the mean square error between the two models:

$$d_{ms}^{\,k} = f(R, T) = \frac{1}{N_d} \sum_{i=1}^{N_d} \left\| q_i - (R(q_R) \cdot d_i + q_T) \right\|^2 \tag{2.2.5-1}$$

We subtract the current $d_{ms}$ from the $d_{ms}$ of the previous iteration and observe whether the change in error is less than the given threshold value. If the change is greater than the threshold, the procedure repeats and continues to the next iteration: the geometric transformation updates the coordinates of the original data shape, and the updated coordinates are used in the next iteration to find the correspondence of the data shape, with the procedure repeated. Conversely, if the change is less than the threshold, the result is acceptable; processing then stops, and the geometric transformation calculated in the last iteration is the best transformation for registration.

### 2.3 Control Points Selection and Affine Transformation

Control point selection is the most important element of this thesis. In this section we introduce two methods commonly used for image registration in image processing: (1) control points selection and (2) affine transformations. Section 2.3.1 discusses control point selection, and Section 2.3.2 discusses affine transformations.
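Putting Sections 2.2.1–2.2.5 together, a minimal 2-D sketch of the whole ICP loop might look as follows. This is our own illustration, not the thesis code: in 2-D the optimal rotation reduces to a single angle recovered with atan2, replacing the quaternion eigen-decomposition of Section 2.2.3; correspondences are still paired with the original data shape D0, and the stop condition is the change in mean square error:

```python
import math

def icp_2d(data, model, max_iter=50, tol=1e-12):
    """Minimal 2-D ICP: nearest-neighbour correspondences (Sec. 2.2.2),
    rigid transform fitted against the ORIGINAL data shape D0 (Sec. 2.2.3),
    update D_{k+1} = R * D0 + T (Eq. 2.2.4-1), and stop when the mean
    square error stops changing (Sec. 2.2.5)."""
    cur = list(data)
    theta, tx, ty = 0.0, 0.0, 0.0
    prev_err = float('inf')
    for _ in range(max_iter):
        # 1. closest model point for every current data point, paired with D0
        pairs = [(d0, min(model, key=lambda q: math.dist(c, q)))
                 for d0, c in zip(data, cur)]
        n = len(pairs)
        # 2. centres of mass, then the optimal rotation angle and translation
        mdx = sum(d[0] for d, _ in pairs) / n
        mdy = sum(d[1] for d, _ in pairs) / n
        mqx = sum(q[0] for _, q in pairs) / n
        mqy = sum(q[1] for _, q in pairs) / n
        s_cross = sum((d[0] - mdx) * (q[1] - mqy) - (d[1] - mdy) * (q[0] - mqx)
                      for d, q in pairs)
        s_dot = sum((d[0] - mdx) * (q[0] - mqx) + (d[1] - mdy) * (q[1] - mqy)
                    for d, q in pairs)
        theta = math.atan2(s_cross, s_dot)
        ct, st = math.cos(theta), math.sin(theta)
        tx = mqx - (ct * mdx - st * mdy)
        ty = mqy - (st * mdx + ct * mdy)
        # 3. update the coordinates: D_{k+1} = R . D0 + T
        cur = [(ct * x - st * y + tx, st * x + ct * y + ty) for x, y in data]
        # 4. mean square error and stop condition
        err = sum(math.dist(c, q) ** 2 for c, (_, q) in zip(cur, pairs)) / n
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return theta, (tx, ty)

# recover a known rigid motion: rotate the data by 10 degrees, then translate
th_true = math.radians(10)
data = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
model = [(math.cos(th_true) * x - math.sin(th_true) * y + 0.3,
          math.sin(th_true) * x + math.cos(th_true) * y + 0.1) for x, y in data]
theta, (tx, ty) = icp_2d(data, model)
print(theta, tx, ty)
```

With exactly corresponding point sets and a small initial misalignment, the nearest-neighbour pairing is already correct in the first iteration, so the sketch recovers the simulated rotation and translation; real CT contours first need the erosion-based preprocessing described in Chapter 3.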

2.3.1 Control Point Selection

Control point selection has recently been studied by W. E. Hart and M. H. Goldbaum [18] and by L. M. G. Fonseca and C. S. Kenney [17]; their main methods are described below.

In the field of image processing, image registration plays an important role in many applications: high dynamic range image synthesis, panoramic photographs, video compression, object tracking, and the construction of virtual environments all require this technique. Image registration is the process of overlaying two or more images of the same scene, taken at different times or from different viewpoints, into a common coordinate system.

Image registration methods fall into two main categories: area-based methods and feature-based methods. An area-based method computes the correlation coefficient between the pixels of two image areas; in other words, it uses the intensities of the pixels in a small window to evaluate the similarity between the two images. Consequently, if the brightness of an image changes, or the images come from different devices, an area-based method will fail. A feature-based method does not suffer from this problem, because well-chosen feature points are not affected by such disturbances. If correctly matching feature points can be found in two or more images, a feature-based method is faster and more efficient than an area-based one.

A good feature point should meet several conditions. First, it must be highly distinctive, that is, specific and unique. Second, it must be easy to match; in other words, the matching rate must be high. Third, it must be invariant to various kinds of distortion. Generally speaking, a complete feature-point

algorithm can be divided into two parts: one detects the feature points, and the other describes them. In a feature-point matching algorithm, the first task is to find the feature points in the image to be registered; generally, these points are unique.

2.3.2 Affine Transformation

In many imaging systems, the detected images suffer geometric distortion introduced by perspective irregularities: the position of the camera with respect to the scene alters the apparent dimensions of the scene geometry. Applying an affine transformation to a uniformly distorted image can correct a range of perspective distortions by transforming the measurements from the ideal coordinates to those actually used. (For example, this is useful in satellite imaging, where geometrically correct ground maps are desired.)

An affine transformation is an important class of linear 2-D geometric transformations that maps variables (e.g., the pixel located at position $(x_1, y_1)$ in an input image) into new variables (e.g., $(x_2, y_2)$ in an output image) by applying a linear combination of translation, rotation, scaling, and/or shearing (i.e., non-uniform scaling in some directions).

The earliest research on affine transformations is by Y. T. Lo and S. W. Lee [28]; Lyubomir Zagorchev and Ardeshir Goshtasby [19] compared transformation functions for nonrigid image registration, and Yao Zhao and Baozong Yuan [21] proposed a new affine transformation with its theory and an application to image coding. The main method is described below.

An affine transformation is any transformation that preserves collinearity (i.e., all points lying on a line still lie on a line after the transformation) and ratios of

distances (e.g., the midpoint of a line segment remains the midpoint after transformation). In this sense, "affine" indicates a special class of projective transformations that do not move any objects from the affine space to the plane at infinity or conversely. An affine transformation is also called an affinity.

Geometric contraction, expansion, dilation, reflection, rotation, shear, similarity transformations, spiral similarities, and translation are all affine transformations, as are their combinations. In general, an affine transformation is a composition of rotations, translations, dilations, and shears. While an affine transformation preserves proportions on lines, it does not necessarily preserve angles or lengths. Any triangle can be transformed into any other by an affine transformation, so all triangles are affine and, in this sense, "affine" is a generalization of "congruent" and "similar".

A particular example combining rotation and expansion is the rotation-enlargement transformation

$$
\begin{bmatrix} x' \\ y' \end{bmatrix}
= s \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix}
\begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} \quad \text{(2.3.2-1)}
$$

$$
= s \begin{bmatrix} \cos\alpha\,(x - x_0) + \sin\alpha\,(y - y_0) \\ -\sin\alpha\,(x - x_0) + \cos\alpha\,(y - y_0) \end{bmatrix} \quad \text{(2.3.2-2)}
$$

Separating the equations,

$$
\begin{cases} x' = (s\cos\alpha)x + (s\sin\alpha)y - s(x_0\cos\alpha + y_0\sin\alpha) \\ y' = (-s\sin\alpha)x + (s\cos\alpha)y + s(x_0\sin\alpha - y_0\cos\alpha) \end{cases} \quad \text{(2.3.2-3)}
$$

This can also be written as

$$
\begin{cases} x' = ax - by + c \\ y' = bx + ay + d \end{cases} \quad \text{(2.3.2-4)}
$$

where $a = s\cos\alpha$ and $b = -s\sin\alpha$. The scale factor is then defined by

$$
s \equiv \sqrt{a^2 + b^2} \quad \text{(2.3.2-5)}
$$

and the rotation angle by

$$
\alpha = \tan^{-1}\!\left(-\frac{b}{a}\right) \quad \text{(2.3.2-6)}
$$

An affine transformation of $\mathbb{R}^n$ is a map $F : \mathbb{R}^n \to \mathbb{R}^n$ of the form

$$
F(p) = Ap + q \quad \text{(2.3.2-7)}
$$

for all $p \in \mathbb{R}^n$, where $A$ is a linear transformation of $\mathbb{R}^n$. If $\det(A) > 0$, the transformation is orientation-preserving; if $\det(A) < 0$, it is orientation-reversing. For example, in homogeneous row-vector form:

$$
[x\ y\ 1] \begin{bmatrix} a & b & 0 \\ c & d & 0 \\ e & f & 1 \end{bmatrix} = [x'\ y'\ 1]
$$

where $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ represents the linear transformation and $[e\ f]$ the translation.

Example 2.3.2-1: The following example applies an affine transformation to the point (2, 1):
1. a rotation of $-90^\circ$;
2. a translation by the vector (3, 4).

We first determine the values of the matrix $\begin{bmatrix} a & b & 0 \\ c & d & 0 \\ e & f & 1 \end{bmatrix}$. The linear transformation is

$$
\begin{bmatrix} a & b \\ c & d \end{bmatrix}
= \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix}
= \begin{bmatrix} \cos(-90^\circ) & \sin(-90^\circ) \\ -\sin(-90^\circ) & \cos(-90^\circ) \end{bmatrix}
= \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}
$$

and the translation is $[e\ f] = [3\ 4]$. Thus we obtain

$$
\begin{bmatrix} a & b & 0 \\ c & d & 0 \\ e & f & 1 \end{bmatrix}
= \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 3 & 4 & 1 \end{bmatrix}
$$

and

$$
[2\ 1\ 1] \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 3 & 4 & 1 \end{bmatrix} = [4\ 2\ 1]
$$

The point (4, 2) is therefore the image of the point (2, 1) under a rotation of $-90^\circ$ followed by a translation by (3, 4).

The following figures illustrate the basic linear transformations: a rotation (Figure 2.4), a scaling (Figure 2.5), and a shear (Figure 2.6).

Figure 2.4 The rotation of the point $A(x, y)$ by an angle $\alpha$ gives the point $A'(x', y')$.

Figure 2.5 Scaling is a transformation that enlarges or diminishes objects.
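Example 2.3.2-1 and the parameter recovery of Eqs. (2.3.2-5) and (2.3.2-6) can be checked numerically; the sketch below is illustrative only (the function name is ours).

```python
import math
import numpy as np

# Example 2.3.2-1 in row-vector form [x y 1] @ M: the -90 degree rotation
# sits in the upper-left 2x2 block, the translation (3, 4) in the last row.
M = np.array([[0, -1, 0],
              [1,  0, 0],
              [3,  4, 1]])
result = np.array([2, 1, 1]) @ M   # -> [4, 2, 1]

def scale_and_angle(a, b):
    """Recover s and alpha from a = s*cos(alpha), b = -s*sin(alpha),
    following Eqs. (2.3.2-5) and (2.3.2-6)."""
    s = math.hypot(a, b)
    alpha = math.atan2(-b, a)   # atan2 resolves the quadrant ambiguity of tan^-1
    return s, alpha

# for the -90 degree rotation above: a = cos(-90) = 0, b = -sin(-90) = 1
s, alpha = scale_and_angle(0.0, 1.0)
```

Mapping (2, 1) through M indeed yields (4, 2), and the recovered parameters are $s = 1$ and $\alpha = -90^\circ$, matching the example.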

Figure 2.6 A shear leaves all points on one axis fixed; the other points are shifted parallel to that axis by a distance proportional to their perpendicular distance from the axis.

Chapter 3
Hybrid ICP Algorithm and Affine Transformation (ICPAT)

There are many kinds of medical imaging, including radiography, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and single photon emission computed tomography (SPECT). In this thesis we study computed tomography (CT).

3.1 Introduction

In this chapter we introduce the main method of this thesis. We hybridize the ICP algorithm [4] with the affine transformation [21] and name the combination ICPAT. First, we use erosion and dilation [2, 3] to eliminate the platform from the images. Then we use the ICP algorithm to rotate and translate the images. Finally, we use the affine transformation and compare the result with the original image. The flow chart below shows how all of these methods are integrated.

Figure 3.1 The flow chart for ICPAT.

3.2 The Experiment Steps

We perform registrations among the three patients using the ICP algorithm alone, the affine transformation alone, and the ICP algorithm coupled with the affine transformation. The steps are described in detail below.

3.2.1 Erosion and Dilation

When we obtain the CT images, we first have to deal with the parts of the images we do not want. In particular, we must remove the platform on which the patient lies before performing the CT image registration. The method we use here is erosion and dilation.

We use the standard 4-connected structuring element (SE) N4, a 3 × 3 matrix that scans the 512 × 512 image and recomputes each pixel to produce a new image. The structuring element is shown below. As a result,

we obtained the best effect after 10 selection trials.

Figure 3.2 The standard structuring element (SE) N4.

The basic rules for dilation and erosion are shown below.

Table 3.1 The basic rules for dilation and erosion.

Dilation: The value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to 1, the output pixel is set to 1.

Erosion: The value of the output pixel is the minimum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to 0, the output pixel is set to 0.

Figure 3.3 The dilation process for a binary image.
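Following the rules of Table 3.1, single-step binary erosion and dilation with the N4 cross can be sketched as follows; this is a minimal NumPy illustration with zero padding at the border, and the function names are ours.

```python
import numpy as np

def erode_n4(img):
    """Binary erosion with the 3x3 cross (N4) structuring element:
    each output pixel is the minimum over the pixel and its four
    neighbours (Table 3.1), with zero padding at the border."""
    p = np.pad(img, 1, constant_values=0)
    return np.minimum.reduce([p[1:-1, 1:-1],                # centre
                              p[:-2, 1:-1], p[2:, 1:-1],    # up, down
                              p[1:-1, :-2], p[1:-1, 2:]])   # left, right

def dilate_n4(img):
    """Binary dilation: the maximum over the N4 neighbourhood."""
    p = np.pad(img, 1, constant_values=0)
    return np.maximum.reduce([p[1:-1, 1:-1],
                              p[:-2, 1:-1], p[2:, 1:-1],
                              p[1:-1, :-2], p[1:-1, 2:]])
```

Dilating a single foreground pixel grows it into a 5-pixel cross, and eroding that cross recovers the single pixel, which illustrates both rules of Table 3.1.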

Figure 3.4 The dilation process for a grayscale image.

Figure 3.5 The erosion process for a binary image.

Figure 3.6 The erosion process for a grayscale image.

Dilation is the contrary operation. A processed image is shown below.

Figure 3.7 From top to bottom: the original image, the eroded image after denoising, and the dilated image.

We divide the algorithm description into three parts, shown in Figure 3.8 to Figure 3.10.

Input:  A set of images;
Output: A set of images with erosion.

Steps:
1.  for each image (g1, g2, g3, ...)
2.      Read the image file name;
3.      Define the standard structuring element (SE) N4;
4.      Define an empty matrix EX;
5.      Put the data of the image into EX;
6.      for i = 1 to 13
7.          EX = erode(EX, SE);
8.      end
9.  end

Figure 3.8 The procedure for erosion.

Input:  A set of images with denoise;
Output: A set of images with dilation.

Steps:
1.  for each of LX
2.      Define the standard structuring element (SE) N4;
3.      for i = 1 to 13
4.          LX = dilate(LX, SE);
5.      end
6.  end

Figure 3.9 The procedure for dilation.

Input:  A set of images with erosion;
Output: A set of images with denoise.

Steps:
1.  for each of EX
2.      for y = 1 to 512
3.          for x = 1 to 512
4.              if EX(y, x) < 150
5.                  EX(y, x) = 0;
6.              end
7.          end
8.      end
9.      Define an array LX;
10.     LX = X ∩ EX;
11. end

Figure 3.10 The procedure for denoise.

Input:  A set of images with denoise;
Output: A set of images with dilation.

Steps:
1.  for each of LX
2.      Define the standard structuring element (SE) N4;
3.      for i = 1 to 13
4.          LX = dilate(LX, SE);
5.      end
6.  end

Figure 3.11 The procedure for dilation.
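The whole platform-removal pipeline of Figures 3.8 to 3.11 (13 erosions, thresholding at 150, masking the original image, then 13 dilations) can be sketched as follows. This assumes SciPy's grey-scale morphology stands in for the erode/dilate steps; the function and parameter names are ours.

```python
import numpy as np
from scipy import ndimage

def remove_platform(img, iters=13, thresh=150):
    """Sketch of the platform-removal pipeline of Figs. 3.8-3.11:
    erode repeatedly, zero out weak responses, mask the original
    image (LX = X intersect EX), then dilate repeatedly."""
    se = ndimage.generate_binary_structure(2, 1)        # N4 cross footprint
    ex = img.copy()
    for _ in range(iters):
        ex = ndimage.grey_erosion(ex, footprint=se)     # Fig. 3.8
    ex[ex < thresh] = 0                                 # Fig. 3.10 (denoise)
    lx = np.where(ex > 0, img, 0)                       # LX = X ∩ EX
    for _ in range(iters):
        lx = ndimage.grey_dilation(lx, footprint=se)    # Figs. 3.9 / 3.11
    return lx
```

A thin, weakly connected structure such as the platform shrinks away under the repeated erosions and is never restored by the dilations, while the large body region survives.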

3.2.2 ICP Algorithm

Figure 3.12 The flow chart for the ICP algorithm.

The flow chart for the ICP algorithm is shown above. This step rotates and translates the images. The ICP algorithm is described in Figure 3.13.

Input:  A set of images of two patients;
Output: A set of images after the ICP process.

Steps:
1.  for each data image (d1, d2, d3, ...)
2.      for each model image (m1, m2, m3, ...)
3.          define the threshold (t);
4.          compute the closest points;
5.          compute the registration;
6.          apply the registration;
7.          if the change in mean square error < t
8.              stop and exit;
9.          else
10.             run the next iteration;
11.         end
12.     end
13. end

Figure 3.13 The procedure for the ICP algorithm.

3.2.3 Control Point Selection

In the control point selection method, we use centers of gravity as the main features. The first step is to find the center of gravity of the image. Then we divide the image into four parts based on this center of gravity and find the center of gravity of each of the four parts. We repeat this step once more, so that in the end we obtain twenty-one centers of gravity (1 + 4 + 16). The algorithm description is divided into two parts, shown in Figure 3.14 and Figure 3.16.
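A minimal 2-D point-set version of the ICP loop of Figure 3.13 can be sketched as follows. This is an illustrative sketch only: the closed-form rigid registration inside the loop uses the SVD method of Arun et al. [29] rather than the quaternion method of Chapter 2, and the names are ours.

```python
import numpy as np

def icp_2d(data, model, max_iter=50, tol=1e-6):
    """Minimal 2-D ICP: match closest points, solve for R, t in
    closed form, apply, and stop when the change in mean-square
    error falls below the threshold (Fig. 3.13)."""
    src = data.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        # closest model point for every data point (brute force)
        d2 = ((src[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matched = model[d2.argmin(axis=1)]
        # closed-form rigid registration via SVD (Arun et al. [29])
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        err = np.mean(((src - matched) ** 2).sum(-1))
        if abs(prev_err - err) < tol:     # stop condition of Sec. 2.2.5
            break
        prev_err = err
    return src
```

For a small rigid perturbation of a well-separated point set, the first closest-point matching is already correct, so the loop recovers the alignment exactly.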

Input:  A set of images of two patients;
Output: A set of images with the barycenters marked.

Steps:
1.  Define ComputeCentroid(M, x0, y0, x1, y1)
2.  for each of the images
3.      Set Xc = 0, Yc = 0, N = sum(M(x0 : x1, y0 : y1));
4.      for i = x0 to x1
5.          for j = y0 to y1
6.              Xc = Xc + i * M(i, j) / N;
7.              Yc = Yc + j * M(i, j) / N;
8.          end
9.      end
10. end

Figure 3.14 The procedure for control point selection.

An example is shown in the figure below.

Figure 3.15 An example of control point selection.
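The recursive centroid computation of Figures 3.14 and 3.15 can be sketched as follows. This is an illustrative NumPy version assuming every subregion has nonzero mass; the function names are ours.

```python
import numpy as np

def centroid(M, r0, r1, c0, c1):
    """Intensity-weighted centre of gravity of M[r0:r1, c0:c1]
    (the ComputeCentroid routine of Fig. 3.14); assumes the region
    has nonzero total intensity."""
    sub = M[r0:r1, c0:c1]
    total = sub.sum()
    rows, cols = np.mgrid[r0:r1, c0:c1]
    return (rows * sub).sum() / total, (cols * sub).sum() / total

def control_points(M, levels=2):
    """Recursively split the image into quadrants around each centroid:
    two levels give 1 + 4 + 16 = 21 control points (Sec. 3.2.3)."""
    points = []
    regions = [(0, M.shape[0], 0, M.shape[1])]
    for _ in range(levels + 1):
        next_regions = []
        for (r0, r1, c0, c1) in regions:
            cy, cx = centroid(M, r0, r1, c0, c1)
            points.append((cy, cx))
            ry, rx = int(round(cy)), int(round(cx))
            next_regions += [(r0, ry, c0, rx), (r0, ry, rx, c1),
                             (ry, r1, c0, rx), (ry, r1, rx, c1)]
        regions = next_regions
    return points
```

On a uniform image the first centroid is simply the geometric centre, and the recursion yields the expected twenty-one points.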

3.2.4 Affine Transformation

Input:  Twenty-one control points of the images of two patients;
Output: A set of images with the barycenters marked.

Steps:
1.  for all of the control points
2.      run the affine transformation function;
3.  end

Figure 3.16 The procedure for the affine transformation.

3.2.5 Example

We give a simple graphic example of all the methods from Section 3.2.1 to Section 3.2.4, taking a head image as the example. The left-hand image is Patient 1, used as the control, and the right-hand image is Patient 3, whom we want to compare. The first row shows the original images. The second row shows the images after the erosion step. The third row shows the images after noise elimination. The fourth row shows the images after dilating the eroded result, which restores the original image but without the platform. The fifth row shows the ICP step, which makes figure A more similar to figure B. The final row shows the control point selection performed on the two images respectively; based on the selected control points, the AT method produces the final result.
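One standard way to realize the AT step from matched control points is a least-squares fit of the six affine parameters. The sketch below is our formulation, not necessarily the solver used in the thesis, and follows the row-vector convention $[x\ y\ 1]\,A = [x'\ y']$ of Section 2.3.2.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping control points `src`
    onto `dst` (the AT step of Fig. 3.16): solves [x y 1] A = [x' y']
    for the 3x2 parameter matrix A."""
    src_h = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A    # rows hold the a,b / c,d / e,f terms of Sec. 2.3.2

def apply_affine(A, pts):
    """Apply the fitted affine transform to a set of 2-D points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ A
```

With three or more non-collinear control points the system is determined, and additional points are reconciled in the least-squares sense, which is what makes the 21 centers of gravity useful.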


Figure 3.17 An example with all steps of the experiment.

Chapter 4
Experiment and Discussion

In this chapter, we evaluate the performance of the proposed algorithms for image registration of two-dimensional computed tomography between different patients, using the ICP algorithm, the affine transformation, and ICP plus affine transformation (ICPAT). All experiments were performed on an Intel® Core™ 2 Duo CPU E8300 @ 2.83 GHz personal computer with 2 GB of main memory and a 250 GB hard disk, running Windows XP Service Pack 3. We are immensely grateful to National Cheng Kung University Hospital for providing us with a large number of CT images of three patients.

In Section 4.1, we compare and show the images produced by the ICP, AT, and ICPAT methods between Patient 1 and Patient 3. Section 4.2 shows the same methods among Patient 1, Patient 2, and Patient 3.

4.1 The Outcomes of the ICP, AT and ICPAT Methods between Patient 1 and Patient 3

4.1.1 The Results Using ICP

We divide all the result images into three parts: the whole body, the head only, and the body only. The measure used to estimate the accuracy of the image registration is described below:

$$
OM = \frac{|T \cap F|}{|T \cup F|} \quad \text{(4.1.1-1)}
$$

where OM is the Jaccard overlap measure [22], and T and F are the sets of voxels corresponding to the target and the registered source structures, respectively.

The experimental images are shown below. In order, the figures show the best case of the whole body (which is also the best case of the body), the worst case of the whole body (which is also the worst case of the head), the best case of the head, and the worst case of the body. In each of the next four pictures, the top left is the original image of Patient 1, the top right is the original image of Patient 3, and the bottom is the image obtained using ICP. The "best case" is the image pair with the highest OM value between the processed image and the original image; conversely, the "worst case" is the pair with the lowest OM value.

Figure 4.1 The best case of the whole body.
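The OM of Eq. (4.1.1-1) can be computed on binary masks as follows; this is an illustrative sketch and `overlap_measure` is our name for it.

```python
import numpy as np

def overlap_measure(target, registered):
    """Jaccard overlap measure OM = |T ∩ F| / |T ∪ F| of Eq. (4.1.1-1)
    on binary masks; OM = 1 means perfect overlap, 0 means none."""
    t = np.asarray(target, dtype=bool)
    f = np.asarray(registered, dtype=bool)
    union = np.logical_or(t, f).sum()
    return float(np.logical_and(t, f).sum() / union) if union else 1.0
```

For two 2 × 4 strips that share one row, the intersection has 4 pixels and the union 12, so OM = 1/3.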

Figure 4.2 The worst case of the whole body.

Figure 4.3 The best case of the head.

Figure 4.4 is similar to Figure 4.2 because we numbered the head slices No. 1 to No. 78 and assigned the remaining slices to the body: Figure 4.2 is the No. 78 image, and Figure 4.4 is the No. 79 image.

Figure 4.4 The worst case of the body.

From the four figures above, we conclude that ICP is useful for large images.

4.1.2 The Results Using AT

The experimental images are shown below. In order, they are the best case of the whole body (also the best case of the head), the worst case of the whole body (also the worst case of the body), the best case of the body, and the worst case of the head.

Figure 4.5 The best case of the whole body.

Figure 4.6 The worst case of the whole body.

Figure 4.7 The best case of the body.

Figure 4.8 The best case of the head.

Figure 4.6 is the worst case because the images of Patient 1 contain an extra part of the shoulder. The original image is therefore enlarged, which degrades the result when computing the OM value. In the four pictures above, the top left is the original image of Patient 1, the top right is the original image of Patient 3, and the bottom is the image after the AT process.

From the four figures above, we conclude that AT is useful for small images.

4.1.3 The Results Using ICPAT

The experimental images are shown below. In order, they are the best case of the whole body (also the best case of the head), the worst case of the whole body (also the worst case of the body and the head), and the best case of the body.

Figure 4.9 The best case of the whole body.

(Figure labels: the original picture, the picture after ICP, and the picture after ICPAT.)

Figure 4.10 The worst case of the whole body.

Figure 4.11 The worst case of the head.

Figure 4.12 The best case of the body.

In the four pictures above, the top left is the original image of Patient 1, the top right is the original image of Patient 3, and the bottom is the image obtained using ICPAT.

From the four figures above, we conclude that ICPAT is useful for all kinds of images.

4.2 The Results of the ICP, AT, and ICPAT among Patient 1, Patient 2 and Patient 3

4.2.1 The Results Using ICP

The two groups of pictures are described below. In the first group, the top left and top right are the original images of Patient 1 and Patient 2, respectively. In the second group, the top left is the original image of Patient 3, and the top right is the same image of Patient 2 as in the first group. The images processed with ICP are shown at the bottom of the two groups.

Figure 4.13 The pictures with ICP between Patient 1 and Patient 2.

Figure 4.14 The pictures with ICP between Patient 3 and Patient 2.

The images of Patient 2 differ from those of Patient 1 and Patient 3. The arms of Patient 1 and Patient 3 were placed at their sides when the CT images were taken, but Patient 2 lifted his arms. This would cause misleading information when interpreting the images, so we removed the arms manually to eliminate the problem caused by this extra part. This factor may still introduce errors during ICP and affect the OM value.

4.2.2 The Results Using AT

Figure 4.15 The pictures with AT between Patient 1 and Patient 2.

Figure 4.16 The pictures with AT between Patient 3 and Patient 2.

For the two groups of pictures, in the first group the top left is the original image of Patient 1 and the top right is the original image of Patient 2; in the second group the top left is the original image of Patient 3 and the top right is the original image of Patient 2. The bottoms of the two groups are the images obtained using AT.

The two groups of images shown in Section 4.2.3 are arranged in the same way as those in Sections 4.2.1 and 4.2.2; the only difference is that the ICPAT method is used.

4.2.3 The Results Using ICPAT

Figure 4.17 The pictures with ICPAT between Patient 1 and Patient 2.

In the first group, the top left is the original image of Patient 1 and the top right is the

original image of Patient 2; in the second group, the top left is the original image of Patient 3 and the top right is again the original image of Patient 2. The bottoms of the two groups are the images obtained using ICPAT.

Figure 4.18 The pictures with ICPAT between Patient 3 and Patient 2.

4.3 Conclusion

From Sections 4.1 and 4.2, we have seen the image transformations and understood their advantages and disadvantages. The results are arranged in Table 4.1 and Table 4.2; the values in red bold are the best and the values in blue italics are the worst. In addition, since the patients may be male or female, we distinguish between the head and body parts.

Table 4.1 A comparison of the three methods (ICP, AT, ICPAT) among the three patients, using the overlap measure (OM).

                   patient 1 vs. patient 3   patient 1 vs. patient 2   patient 2 vs. patient 3
original           0.7211 ± 0.0889           0.4229 ± 0.0111           0.4882 ± 0.0127
ICP                0.6894 ± 0.0953           0.1769 ± 0.0060           0.1712 ± 0.0065
AT                 0.7554 ± 0.1017           0.7566 ± 0.0092           0.7444 ± 0.0051
ICPAT              0.8010 ± 0.0708           0.7698 ± 0.0105           0.7440 ± 0.0059
original (head)    0.7128 ± 0.0767
original (body)    0.7293 ± 0.0923
ICP (head)         0.6374 ± 0.0973
ICP (body)         0.7068 ± 0.0882
AT (head)          0.8774 ± 0.0628
AT (body)          0.7147 ± 0.0766
ICPAT (head)       0.7789 ± 0.0814
ICPAT (body)       0.8084 ± 0.0653

Table 4.2 The mean values with max and min for original, ICP, AT, and ICPAT between patient 1 and patient 3.

patient 1 vs. patient 3 (total)
Original (max)    0.8489
Original (min)    0.4052
ICP (max)         0.8702
ICP (min)         0.3324
AT (max)          0.9513
AT (min)          0.4251
ICPAT (max)       0.9473
ICPAT (min)       0.8201

patient 1 vs. patient 3 (max)
Original (head)   0.8309
Original (body)   0.8489
ICP (head)        0.8676
ICP (body)        0.8702
AT (head)         0.9513
AT (body)         0.8642
ICPAT (head)      0.9473
ICPAT (body)      0.9438

patient 1 vs. patient 3 (min)
Original (head)   0.4052
Original (body)   0.4052
ICP (head)        0.3324
ICP (body)        0.3430
AT (head)         0.7414
AT (body)         0.4251
ICPAT (head)      0.8201
ICPAT (body)      0.8201

Chapter 5
Conclusions and Future Work

5.1 Conclusions

With advances in medical technology and the rapid development of electronic products, researchers have recently offered various ways to help medical personnel diagnose specific diseases more easily, so that treatment can be provided to patients more accurately. In the field of medical imaging, calibration is a very important issue: through suitable calibration methods, medical images can be normalized to a common reference template before medical personnel interpret or further analyze them.

In this thesis, we focused on the registration of CT images. Image registration lets us easily compare the differences between the processed images and the target images. We used ICP and AT to increase the degree of image registration. In conclusion, the experiments show that the accuracy of the combined method is better than that of either ICP or AT alone.

5.2 Future Work

The result accomplished in this thesis is only a preliminary study of image registration. Much work remains to be done in the future.

(1) Using Nonlinear Methods
The algorithms used in this thesis are all linear functions, which have many limitations. Thus, we can move toward nonlinear methods in the future (e.g., bilinear

interpolation [23, 24, 25]).

(2) Towards 3D Images
In this thesis we used 2D images, but a person's body and head are 3D. Therefore, we can first learn how to convert 2D into 3D [26]. We can also start by studying the brain [1, 27], because the brain is a rigid structure, which makes 3D image registration easier.

(3) Computational Speed
We spent a lot of time running the experimental data when doing the image registration. Therefore, we must devise a new method to minimize the time spent.

References
[1] 蔡明倫, "三度空間腦部結構校準" (Three-Dimensional Brain Structure Registration), Master's thesis, Institute of Computer and Information Science, National Chiao Tung University, 2002.
[2] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition. New Jersey: Prentice Hall, 2002.
[3] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Third Edition. New Jersey: Prentice Hall, 2008.
[4] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Trans. Patt. Anal. Machine Intell., vol. 14, no. 2, pp. 239-256, Feb. 1992.
[5] D. Arthur and S. Vassilvitskii, "Worst-case and smoothed analysis of the ICP algorithm, with an application to the k-means method," FOCS, pp. 153-164, 2006.
[6] S. Y. Du, N. N. Zheng, S. H. Ying, Q. B. You, and Y. Wu, "An extension of the ICP algorithm considering scale factor," IEEE International Conference on Image Processing (ICIP), pp. 193-196, 2007.
[7] T. Jost and H. Hugli, "A multi-resolution scheme ICP algorithm for fast shape registration," Proc. 1st Int. Conf. 3D Data Processing, Visualization and Transmission, pp. 540-543, 2002.
[8] T. Zinsser, J. Schmidt, and H. Niemann, "A refined ICP algorithm for robust 3-D correspondence estimation," IEEE International Conference on Image Processing (ICIP), Sept. 2003.
[9] C. V. Stewart, C. L. Tsai, and B. Roysam, "The dual-bootstrap iterative closest point algorithm with application to retinal image registration," IEEE Trans. Med. Imag., Nov. 2003.
[10] N. Gelfand, L. Ikemoto, S. Rusinkiewicz, and M. Levoy, "3-D digital imaging and modeling," Proc. 4th Int. Conf. 3DIM, 2003.

[11] R. F. Hashimoto, "An extension of an algorithm for finding sequential decomposition of erosions and dilations," Comput. Vision Graphics Image Processing, 1998.
[12] J. Gil and R. Kimmel, "Efficient dilation, erosion, opening, and closing algorithms," IEEE Trans. Patt. Anal. Machine Intell., vol. 24, no. 12, Dec. 2002.
[13] N. Desikachari and R. M. Haralick, "Recursive binary dilation and erosion using digital line structuring elements in arbitrary orientations," IEEE Trans. Image Processing, vol. 9, no. 5, May 2000.
[14] S. Chen and R. M. Haralick, "Recursive erosion, dilation, opening, and closing transforms," IEEE Trans. Image Processing, vol. 4, no. 3, March 1995.
[15] A. M. Siddiqi, M. Saleem, and A. Masud, "A local transformation function for images," International Conference on Electrical Engineering (ICEE), April 2007.
[16] N. Y. Lee, "Automatic generation of 3D vessels model using vessels image matching based on adaptive control points," Proc. 6th International Conf. Advanced Language Processing and Web Information Technology, 2007.
[17] L. M. G. Fonseca and C. S. Kenney, "Control point assessment for image registration," Comput. Vision Graphics Image Processing, pp. 125-132, 1999.
[18] W. E. Hart and M. H. Goldbaum, "Registering retinal images using automatically selected control point pairs," Proc. IEEE Int. Conf. Image Processing, pp. 576-580, 1994.
[19] L. Zagorchev and A. Goshtasby, "A comparative study of transformation functions for nonrigid image registration," IEEE Trans. Image Processing, vol. 15, no. 3, March 2006.
[20] X. Chen, J. Yang, J. Zhang, and A. Waibel, "Automatic detection of signs with affine transformation," Proc. 6th IEEE Workshop on Applications of Computer Vision, pp. 32-36, Dec. 2002.

[21] Y. Zhao and B. Yuan, "A new affine transformation: Its theory and application to image coding," IEEE Trans. Circuits Syst., vol. 8, no. 3, 1998.
[22] O. Camara, G. Delso, O. Colliot, A. Moreno-Ingelmo, and I. Bloch, "Explicit incorporation of prior anatomical information into a nonrigid registration of thoracic and abdominal CT and 18-FDG whole-body emission PET images," IEEE Trans. Med. Imag., vol. 26, no. 2, pp. 164-178, Feb. 2007.
[23] K. T. Gribbon and D. G. Bailey, "A novel approach to real-time bilinear interpolation," Proc. 2nd IEEE Workshop DELTA 2004, pp. 126-131, Jan. 2004.
[24] D. G. Bailey, A. Gilman, and R. Browne, "Bias characteristics of bilinear interpolation based registration," Proc. IEEE Region 10 Conf. TENCON 2005, pp. 1-6, Nov. 2005.
[25] M. C. Tsai and P. Y. Huang, "Design of scan converter using the locally 2-D bilinear interpolation," Proc. IEEE Int. Conf. SMC '06, vol. 5, pp. 3961-3966, Oct. 2006.
[26] S. Gefen, L. Bertrand, N. Kiryati, and J. Nissanov, "Localization of sections within the brain via 2D to 3D image registration," Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 2, pp. 733-736, March 2005.
[27] G. Li, T. Liu, G. Young, L. Guo, and S. T. C. Wong, "Deformation invariant attribute vector for 3D image registration: method and validation," 3rd IEEE Int. Symposium on Biomedical Imaging: Nano to Macro, pp. 442-445, April 2006.
[28] Y. Lo and S. Lee, "Affine transformation and its application to antenna arrays," IEEE Trans. Antennas and Propagation, vol. 13, no. 6, Nov. 1965.
[29] K. S. Arun, T. S. Huang, and S. D. Blostein, "Least square fitting of two 3-D point sets," IEEE Trans. Patt. Anal. Machine Intell., pp. 698-700, 1987.
[30] B. K. P. Horn, "Closed-form solution of absolute orientation using unit

quaternions," Journal of the Optical Society of America A, pp. 629-642, April 1987.
[31] B. K. P. Horn, H. M. Hilden, and S. H. Negahdaripour, "Closed-form solution of absolute orientation using orthonormal matrices," Journal of the Optical Society of America A, pp. 1127-1135, July 1988.
[32] M. W. Walker, L. Shao, and R. A. Volz, "Estimating 3-D location parameters using dual number quaternions," CVGIP: Image Understanding, pp. 358-367, November 1991.
