Automatically Generate a Simplified Chest Atlas from the Chest Computed Tomography Using Morphology, Image Segmentation and K-means

N/A
N/A
Protected

Academic year: 2021

Share "使用形態學、影像分割與K-means方法由胸部斷層掃描圖自動產生胸部精簡地圖"

Copied!
70
0
0

加載中.... (立即查看全文)

全文

National University of Kaohsiung
Department of Computer Science and Information Engineering
Master's Thesis

使用形態學、影像分割與 K-means 方法由胸部斷層掃描圖自動產生胸部精簡地圖
Automatically Generate a Simplified Chest Atlas from the Chest Computed Tomography Using Morphology, Image Segmentation and K-means

Student: Yue-Yang Tsai (蔡岳洋)
Advisor: Dr. Tang-Kai Yin (殷堂凱)

July 2009

使用形態學、影像分割與 K-means 方法由胸部斷層掃描圖自動產生胸部精簡地圖

Advisor: Dr. Tang-Kai Yin, Institute of Computer Science and Information Engineering, National University of Kaohsiung
Student: Yue-Yang Tsai, Institute of Computer Science and Information Engineering, National University of Kaohsiung

摘要

With the advance of technology, computers can do more and more for us, bringing convenience to daily life, entertainment, and work. Even in health care, software has been developed for different needs to assist doctors in interpretation.

This thesis labels anatomical locations on chest computed tomography (CT) images taken from different patients. Because every scan includes the scanner platform, which we do not want in the processed image, we first apply a technique similar to opening, that is, successive erosions followed by successive dilations, to remove the platform and some noise. Sobel edge detection then extracts the main contours of the image. For each scan, Otsu's method automatically sets a gray-level threshold that removes the darker muscle and organ regions, leaving the contours of some cavities and the bones. The K-means algorithm then clusters the remaining points, and the centroid of each cluster is labeled with the anatomical part at its location.

In the final results, over all patients in our experiments, the best case achieves an overall mean Euclidean distance of 10.5582 pixels between the labeled points and the intended positions.

Keywords: erosion, dilation, opening, K-means, Sobel edge detection, Otsu.

Automatically Generate a Simplified Chest Atlas from the Chest Computed Tomography Using Morphology, Image Segmentation and K-means

Advisor: Dr. Tang-Kai Yin, Institute of Computer Science and Information Engineering, National University of Kaohsiung
Student: Yue-Yang Tsai, National University of Kaohsiung

Abstract

With advances in technology, computers have improved many aspects of our lives, including lifestyle and entertainment. In the medical field in particular, they help doctors reach more accurate diagnoses of patients.

This thesis deals with chest CT (computerized tomography) images from different patients and labels them according to human anatomy. When CTs are taken, outside objects are photographed as well, for example the scanner platform. A method similar to morphological opening, namely successive erosions followed by successive dilations, removes these unwanted objects. After this, Sobel edge detection extracts the main outlines of the image. Otsu's method then automatically selects a gray-level threshold and eliminates the muscles and organs, which have lower gray levels, leaving the cavities and bones visible in the CT. Finally, K-means clusters the remaining points, and each cluster centroid is labeled with the anatomical part at its location; the centroids thus give a clear view of the locations of all parts. In conclusion, the mean Euclidean distance between the labeled body parts and the ideal body positions is 10.5582 pixels in the best case.

Keywords: erosion, dilation, opening, K-means, Sobel, Otsu.

致謝

First and foremost, I thank my advisor, Dr. 殷堂凱, whose patient guidance allowed this thesis to be completed smoothly within two years. During these two years, Professor Yin never put excessive pressure on his students or burdened us with chores, so we could concentrate on research.

I also thank the two oral examination committee members, Professors 陳佳妍 and 黃文禎, for taking the time to attend my defense; their valuable comments and advice contributed greatly to this thesis.

Of course, the support of family and friends was an essential piece of the puzzle. I am grateful for my parents' upbringing and for the constant support of my elder brother 岳洲 and my sister 可鈺. My good friend 蘇銘傑 and his family, and my girlfriend 許爵蘭, always cheered me up whenever I fell behind schedule and felt down; I am deeply grateful to them.

蔡岳洋
Institute of Computer Science and Information Engineering, National University of Kaohsiung
July 2009

Contents

摘要
Abstract
致謝
Contents
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Motivation
  1.2 Contribution
  1.3 Organization
Chapter 2 Background and Related Work
  2.1 Mathematical Morphology for Erosion, Dilation and Opening
    2.1.1 Dilation
    2.1.2 Erosion
    2.1.3 Opening
  2.2 Sobel Edge Detection
  2.3 Thresholding of Image Segmentation
  2.4 K-means Clustering of Data Mining
Chapter 3 Automatically Generate a Simplified Chest Atlas from the Chest Computed Tomography Using Morphology, Image Segmentation, and K-means
  3.1 Introduction
  3.2 The Experiment Steps
    3.2.1 Similarities to the Opening Method
    3.2.2 Sobel Edge Detection
    3.2.3 Thresholding
    3.2.4 K-means Clustering
    3.2.5 Ordering the Centroid from K-means
    3.2.6 Proposed Algorithm
Chapter 4 Experiment and Discussion
  4.1 The Results Using the Proposed Method on Ten Patients
    4.1.1 The Results Using the Proposed Method
    4.1.2 The Results Using the Proposed Method in Choosing Six Body Parts
    4.1.3 The Results Using the Proposed Method in Choosing Five Body Parts
  4.2 Special Case: The Invasion of Metallic Items
  4.3 Conclusion
Chapter 5 Conclusion and Future Work
  5.1 Conclusions
  5.2 Future Work
References

List of Figures

Figure 2.1 The process of dilation: (a) A dilated by B (b) SE movement path (c) Dilation range (d) Result of dilation
Figure 2.2 The process of erosion: (a) A eroded by B (b) SE movement path (c) Erosion range (d) Result of erosion
Figure 2.3 The standard structuring elements (a) N4 and (b) N8
Figure 2.4 Sobel edge detection: (a) a 3x3 region of an image (the p's are gray-level values) (b) Gx mask (c) Gy mask
Figure 2.5 The flowchart of the K-means algorithm
Figure 3.1 The flowchart of the proposed method
Figure 3.2 (a) Typical chest CT (b) Platform used in image capture
Figure 3.3 The standard structuring element (SE) N4
Figure 3.4 The dilation process for a binary image
Figure 3.5 The dilation process for a grayscale image
Figure 3.6 The erosion process for a binary image
Figure 3.7 The erosion process for a grayscale image
Figure 3.8 Deleting the platform: (a) Original chest CT of patient 1 (b) Result of our method
Figure 3.9 The flowchart of the Sobel algorithm
Figure 3.10 Sobel detection with different thresholds T: (a) T=0 (b) T=10000
Figure 3.11 Reasons for using thresholding: (a) The soft-tissue edges we want to eliminate (b) The result of the proposed method
Figure 3.12 Sobel's result and thresholding's result combined through an AND operation
Figure 3.13 K-means results for different numbers of clusters: (a) K=3 (b) K=4
Figure 3.14 K-means results for different numbers of clusters: (a) K=5 (b) K=6
Figure 3.15 Ordering 5 centroids
Figure 3.16 Ordering 6 centroids
Figure 4.1 Best case in choosing six body parts: (a) Original image (b) Labeled image
Figure 4.2 Worst case in choosing six body parts: (a) Original image (b) Image after labeling
Figure 4.3 Best case in choosing five body parts: (a) Original image (b) Image after labeling
Figure 4.4 Worst case in choosing five body parts: (a) Original image (b) Image after labeling
Figure 4.5 An unknown item in Patient 8: (a) Original image (b) Result of choosing 5 body parts (c) Result of choosing 6 body parts
Figure 4.6 Euclidean distance histogram for Patient 1, choosing six body parts
Figure 4.7 Euclidean distance histogram for Patient 1, choosing five body parts

List of Tables

Table 3.1 Simple dilation and erosion rules using N4 as the SE
Table 3.2 The body parts assigned according to the ordered points
Table 4.1 Obtaining five body parts, using the pixel as the unit of Euclidean distance
Table 4.2 Obtaining six body parts, using the pixel as the unit of Euclidean distance
Table 4.3 Obtaining five body parts, eliminating the special case
Table 4.4 Obtaining six body parts, eliminating the special case

Chapter 1 Introduction

1.1 Motivation

In recent years, medical technology has improved along with technology in general. Information technology is also used in the medical field [21] but has not yet matured into widespread use. Computerized tomography is abbreviated as CT. In the medical field, CT images are usually stored in DICOM (Digital Imaging and Communications in Medicine) files, because this format is easy to store and transfer. Nuclear medicine physicians make a diagnosis by inspecting these images and searching for abnormal lesion activity. Our method would cut down on the time and energy doctors spend judging large numbers of CT images. This paper presents a simple system that assists doctors by automatically labeling the locations of meaningful parts.

1.2 Contribution

Computer-aided diagnosis (CAD) systems for nuclear medicine whole-body bone scans are quite limited in number. Huang proposed such a CAD application: in his thesis, he used fuzzy theory to separate soft tissue from bone, then used the relative locations within the whole-body bone scan to divide the body into 23 parts, applying different methods to different parts to find bone lesions. Yin and Chiu [30] employed a characteristic-point-based fuzzy inference system (CPFIS) to locate bone lesions. In this paper, we provide a method that can finish several CAD preparations:

- Find the important body parts.
- From the chosen body parts, separate the image into sections or use the chosen body parts to find extra location points.
- Efficiently separate the soft tissue from the bone, so the remaining bones can be processed further.
- According to the different sections and settings, provide a method to find bone lesions.

We can also use the results for further image processing:

- Our final labeled parts can be taken as the control points for ICP (Iterative Closest Point) image registration between two different patients' chest CTs.

1.3 Organization

This paper is organized as follows. Chapter 2 describes the background and related work. Chapter 3 describes the main method of this paper. The experimental results and discussion are in Chapter 4. Finally, Chapter 5 describes the conclusion and future work.

Chapter 2 Background and Related Work

ICPAT (Iterative Closest Point and Affine Transformation) is an image registration method [5]. Lu obtained control points and then used the ICP algorithm and an affine method for image registration. However, in his work the control points are the centers of gravity of all pixels in the two images. Although the center of gravity is simple to obtain and easy to use, it is not a meaningful body part, and because it is computed from the pixel-value distribution, it varies with that distribution. The points we wish to obtain are meaningful ones, such as organs or body cavities; we should get points that are useful and important to us.

Lin and Huang provided another kind of image registration method [10, 16]. Lin used 3D brain images; Huang used 2D or 3D medical images without limiting herself to the brain. Their methods apply drastic geometric transformations, for example rotation and displacement, and after each transformation they calculate the MI (mutual information) value between the transformed image and the reference image. The transformation that maximizes MI gives the desired result. Although the results are acceptable, the drawback is the huge amount of computation: the MI value has to be recalculated after every transformation, and if no convergence condition is set, it is time-consuming to wait until MI reaches its maximum.

Our proposed method efficiently finds the important locations in the chest area. Using the same organ location points in two different medical images, one can calculate the correspondence between the two sets of locations, which achieves registration.

All the methods used in this paper are a combination of mathematical morphology, image segmentation, and K-means clustering from data mining. The system produces a chest CT with clearly labeled location points; in other words, the CT gains clear locations and descriptions and becomes a simple atlas. In this chapter, we introduce the methods and background information, classified into four main parts: (1) erosion, dilation, and opening: mathematical morphology; (2) Sobel edge detection: edge detection technology; (3) Otsu's method: a technique for finding an appropriate threshold; (4) the K-means method: clustering technology.

2.1 Mathematical Morphology for Erosion, Dilation and Opening

Before introducing erosion and dilation, the history of mathematical morphology should be discussed. Mathematical morphology [23, 32], abbreviated MM, is both a technique and a theory for analyzing and processing geometrical structures. Its basis is set theory, lattice theory, topology, and random functions. It is most commonly applied to digital images, but it can also be used on graphs, surface meshes, solids, and other spatial structures.

Topological and geometrical continuous-space concepts such as size, shape, convexity, connectivity, and geodesic distance can all be expressed in mathematical morphology. It is also the basis of morphological image processing, which consists of operators that transform images according to the concepts mentioned above.

Mathematical morphology was originally developed for binary images; only years later was it extended to grayscale functions and images. Complete lattices are today widely accepted as its theoretical foundation.

Mathematical morphology was invented in 1964 through the joint work of Georges Matheron and Jean Serra in France. Matheron supervised Serra's doctoral work, which was devoted to the quantification of mineral characteristics from thin cross sections; this work led to a novel practical approach as well as theoretical advances in integral geometry and topology. During the 1960s and 70s, mathematical morphology dealt with binary images, treated as sets, and generated a large number of binary operators and techniques: the hit-or-miss transform, dilation, erosion, opening, closing, granulometry, thinning, skeletonization, ultimate erosion, conditional bisector, and others. A random approach based on image models was also developed during this time. All this work was carried out at the Fontainebleau research center.

During the 1980s and 1990s, mathematical morphology gained wider recognition as more countries began using it and applying it to a large number of image problems and applications.

In 1986, Jean Serra further generalized mathematical morphology with a theoretical framework based on complete lattices. This gave the theory the flexibility to be applied to many more structures, including color images, video, graphs, meshes, etc.

We now define the notation: f(x, y) and g(x, y) are digital image functions, where f(x, y) is the input image and g(x, y) is a structuring element. If Z denotes the set of integers, the assumption is that (x, y) are integers from Z × Z and that f and g are functions that assign a gray-level value (a real number from the set of real numbers R) to each distinct pair of coordinates (x, y). If the gray levels are also integers, Z replaces R.

2.1.1 Dilation

Dilation expands the size of an object in a binary image [19], changing how thick and how big the object is. The size of the change depends on the SE (structuring element). Mathematically, dilation is defined as a set operation; the dilation of A by B is denoted A ⊕ B:

A ⊕ B = { c | (B̂)c ∩ A ≠ ∅ }  (2-1)

where ∅ is the empty set, B̂ is the reflection of the structuring element B, and (B̂)c is B̂ translated by c. In other words, the dilation of A by B is the set of all displacements c such that the reflected B, shifted by c, overlaps A in at least one element.
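As a concrete illustration, the set definition of Eq. (2-1) can be sketched in a few lines of Python, representing a binary image as a set of foreground coordinates. This is an illustrative sketch, not code from the thesis; the names dilate and N4 are our own, and because the N4 cross is symmetric, its reflection B̂ equals B itself:

```python
# Binary dilation, Eq. (2-1): the set of displacements c for which the
# reflected structuring element, shifted by c, overlaps A. For a symmetric
# SE such as the N4 cross, the reflection equals the SE itself, so the
# dilation is simply every translate of the SE centered on a point of A.

N4 = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}  # 4-connected cross

def dilate(A, B=N4):
    """Return the dilation of the foreground set A by the (symmetric) SE B."""
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}
```

Dilating a single foreground pixel by N4, for instance, yields the five-pixel cross centered on it, matching the expansion behavior described above.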

Figure 2.1 The process of dilation (a) A is dilated by B (b) SE movement path (c) Dilation range (d) Results of dilation

For example, consider Figure 2.1(a): A is dilated by B, where B is the structuring element. Figure 2.1(b) shows B moving along A's outline; the dark green portion marks the locations of B's center as it moves around the area just outside A, and the red section shows the overlap with A. The red portion of Figure 2.1(c) describes the dilation, and the final result is shown in Figure 2.1(d).

Next, dilation is extended to gray levels, where the mathematical definition is different [31, 32]. The gray-scale dilation of f by g, denoted f ⊕ g, is defined as

[f ⊕ g](i, j) = max over (x, y) ∈ g of { f(i − x, j − y) + g(x, y) }  (2-2)

Keep in mind that f and g are functions, rather than sets, as is the case in binary morphology. The condition that (i − x) and (j − y) have to be in the domain of f, and x and y in the domain of g, is analogous to the condition in the binary definition of dilation, where the two sets have to overlap by at least one element. Note also that the form of Eq. (2-2) is similar to 2-D convolution, with the max operation replacing the sums of convolution and the addition replacing the products of convolution.

We illustrate the notation and mechanics of Eq. (2-2) by means of simple 1-D functions. For functions of one variable, Eq. (2-2) reduces to the expression

[f ⊕ g](i) = max over x ∈ g of { f(i − x) + g(x) }  (2-3)

The structuring element in mathematical morphology plays the role that the convolution kernel plays in linear filter theory. Recall from the discussion of convolution that f(−x) is simply f(x) mirrored with respect to the origin of the x axis. As in convolution, the function f(i − x) moves to the right for positive i and to the left for negative i. The requirements that (i − x) be in the domain of f and x in the domain of g imply that f and g overlap. Eq. (2-2) could be written so that g undergoes translation instead of f; however, if the domain of g is smaller than the domain of f (a condition almost always found in practice), the form given in Eq. (2-2) is simpler in terms of indexing and achieves the same result. Conceptually, f sliding past g is really no different from g sliding past f.

The general effect of performing dilation on a gray-scale image is twofold: (1) if all the values of the structuring element are positive, the output image tends to be brighter than the input; (2) dark details are either reduced or eliminated, depending on how their values and shapes relate to the structuring element used for dilation.

2.1.2 Erosion

Erosion shrinks or thins an object in a binary image [19]; it is the counterpart of dilation, and the structuring element controls how much erosion takes place. For sets A and B in Z², the erosion of A by B, denoted A ⊖ B, is defined as

A ⊖ B = { c | (B)c ⊆ A }  (2-4)

In other words, the erosion of A by B is the set of all displacements c such that B, translated by c, is completely contained in A.
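Eq. (2-4) translates directly into a set comprehension. As before, this is an illustrative Python sketch of our own, not code from the thesis; because the origin (0, 0) belongs to N4, any valid displacement c must itself lie in A, so only points of A need to be tested:

```python
N4 = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}  # 4-connected cross

def erode(A, B=N4):
    """Binary erosion, Eq. (2-4): keep the displacements c for which B,
    translated by c, lies entirely inside the foreground set A."""
    return {(cx, cy) for (cx, cy) in A
            if all((cx + bx, cy + by) in A for (bx, by) in B)}
```

Eroding a solid 3x3 block by N4, for example, leaves only the center pixel, illustrating the shrinking effect described above.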

Figure 2.2 The process of erosion (a) A is erosion by B (b) SE movement path (c) Erosion range (d) Results of erosion

For example, consider Figure 2.2(a): A is eroded by B, where B is the structuring element (SE). Figure 2.2(b) shows B moving along the outline of A; B's center, marked green, moves inside region A, and the section marked red is where B overlaps A. In Figure 2.2(c), the section marked red is the erosion of A by B, and the result is shown in Figure 2.2(d).

Next, as with dilation, erosion is extended to gray-level images with a different definition [31, 32]. The mechanics are the same as for dilation; the difference is that erosion narrows the objects in the image. Gray-scale erosion, denoted f ⊖ g, is defined as

[f ⊖ g](i, j) = min over (x, y) ∈ g of { f(i + x, j + y) − g(x, y) }  (2-5)

The condition that (i + x) and (j + y) have to be in the domain of f, and x and y in the domain of g, is analogous to the condition in the binary definition of erosion, where the structuring element has to be completely contained by the set being eroded. Note that the form of Eq. (2-5) is similar to 2-D correlation, with the min operation replacing the sums of correlation and subtraction replacing the products of correlation. We illustrate the mechanics of Eq. (2-5) by eroding a simple 1-D function. For functions of one variable, the expression for erosion reduces to

[f ⊖ g](i) = min over x ∈ g of { f(i + x) − g(x) }  (2-6)

As in correlation, the function f(i + x) moves to the left for positive i and to the right for negative i. The requirements that (i + x) be in the domain of f and x in the domain of g imply that the range of g is completely contained within the range of the displaced f.

Finally, unlike the binary definition of erosion, f rather than the structuring element g is shifted. Eq. (2-5) could be written so that g is the translated function, but this results in a more complicated expression in terms of indexing; f sliding past g is conceptually the same as g sliding past f.

The general effect of performing erosion on a gray-scale image is twofold: (1) if all the elements of the structuring element are positive, the output image tends to be darker than the input image; (2) the effect of bright details in the input image that are smaller in area than the structuring element is reduced, with the degree of reduction determined by the gray-level values surrounding the bright detail and by the shape and amplitude values of the structuring element itself.

2.1.3 Opening

As previously discussed, dilation expands an object in the image, while erosion shrinks it. The opening of A by B, denoted A ∘ B, is defined in mathematical morphology as [19]:

A ∘ B = (A ⊖ B) ⊕ B  (2-7)

That is, A is first eroded by B, and the result is then dilated by B. Although dilation generally expands an image and erosion shrinks it, it is important to note that opening generally smooths the contour of an object and eliminates thin protrusions.

The two most common structuring elements (given a Cartesian grid) are the 4-connected and 8-connected sets, N4 and N8, shown in Figure 2.3: N4 is the 3×3 cross consisting of a center pixel and its four edge neighbors, and N8 is the full 3×3 square. Of course, there are several other kinds of structuring elements, and different situations call for different ones.

Figure 2.3 The standard structuring elements (a) N4 and (b) N8.

2.2 Sobel Edge Detection

Image segmentation divides an image into a number of sub-images, areas, and objects. This is one of the most important steps in image processing. Image segmentation algorithms are based on two main properties: discontinuity and similarity [19, 31, 32]. The first partitions an image based on abrupt changes in intensity, as seen at edges in an image. The second partitions an image into regions that satisfy a set of predefined criteria; thresholding, region growing, and region splitting and merging are examples of such methods.

Next we deal with the Sobel edge detection method. Edge detection uses the difference between adjoining pixels to find edges [2]: the larger the difference, the clearer the edge; the smaller the difference, the less clear the edge.
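Returning briefly to the morphological operators of Section 2.1, the opening of Eq. (2-7) is just erosion followed by dilation, and composing the two set operations shows the smoothing behavior described there. The sketch below is illustrative Python of our own (N4 is symmetric, so the reflection in dilation is omitted):

```python
N4 = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}  # 4-connected cross

def dilate(A, B=N4):
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

def erode(A, B=N4):
    return {(cx, cy) for (cx, cy) in A
            if all((cx + bx, cy + by) in A for (bx, by) in B)}

def opening(A, B=N4):
    # Eq. (2-7): the erosion of A by B, followed by dilation by B.
    return dilate(erode(A, B), B)

# An isolated pixel far from the main object cannot contain a translate of
# N4, so erosion deletes it and the following dilation cannot restore it.
block = {(x, y) for x in range(3) for y in range(3)}
noisy = block | {(10, 10)}
print((10, 10) in opening(noisy))  # False: the isolated speck is removed
```

This is the same reason the thesis uses an opening-like sequence of erosions and dilations to remove the scanner platform and noise from the CT images.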

Edge detection is by far the most common approach for detecting meaningful discontinuities in gray level. In this section we discuss approaches for implementing first- and second-order digital derivatives for the detection of edges in an image [19]. In image processing, computing the first-order derivatives amounts to computing the gradient. The gradient of an image f(x, y) at location (x, y) is defined as the vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ  (2-8)

It is well known from vector analysis that the gradient vector points in the direction of the maximum rate of change of f at coordinates (x, y). An important quantity in edge detection is the magnitude of this vector, denoted ∇fm, where

∇fm = mag(∇f) = [Gx² + Gy²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)  (2-9)

To simplify the calculation, the following approximations are sometimes used as replacements:

∇fm ≈ Gx² + Gy²  (2-10)

∇fm ≈ |Gx| + |Gy|  (2-11)

According to R. C. Gonzalez and R. E. Woods's book "Digital Image Processing" [31, 32], a meaningful edge is one where the change in gray level is larger than in the background. Therefore, we define an edge point of an image as a point whose two-dimensional first-order derivative magnitude is larger than a specified threshold.

The Sobel edge detection introduced here belongs to image segmentation technology. Simply put, in Figure 2.4, (a) is a 3×3 region of an image, where the p's are the gray-level values, and (b) and (c) are the masks that approximate the first-order derivatives Gx and Gy:

  p1 p2 p3        -1 -2 -1        -1  0  1
  p4 p5 p6         0  0  0        -2  0  2
  p7 p8 p9         1  2  1        -1  0  1
  (a) region      (b) Gx mask    (c) Gy mask

The gradient magnitude can be computed as

G = [Gx² + Gy²]^(1/2) = {[(p7 + 2p8 + p9) − (p1 + 2p2 + p3)]² + [(p3 + 2p6 + p9) − (p1 + 2p4 + p7)]²}^(1/2)  (2-12)

If the pixel at coordinate (x, y) satisfies G ≥ T, where T is the threshold, we take the pixel as an edge point.

Figure 2.4 Sobel edge detection (a) A 3x3 region of an image (p's are gray-level values) (b) Gx mask (c) Gy mask

2.3 Thresholding of Image Segmentation

Owing to its intuitive properties and simplicity of implementation, image thresholding has a central position in image segmentation applications. We have to extract the object from the background and discard the rest. Extracting the object from the background means selecting a threshold T that separates object points from background points: any point with 2-D coordinate (x, y) for which f(x, y) > T is an object point; otherwise, the point is called a background point.

In this paper, we use Nobuyuki Otsu's method to calculate the threshold [9, 11, 25]. It is a non-parametric and unsupervised method of automatic threshold selection for picture segmentation. An optimal threshold is selected by a discriminant criterion, namely one that maximizes the separability of the resultant classes in gray levels. The procedure is very simple, utilizing only the zeroth- and first-order cumulative moments of the gray-level histogram. For this method, the gray-level histogram can be seen as a probability density function [19]:

p_r(r_q) = n_q / n,  q = 0, 1, 2, …, L − 1  (2-13)

where n is the total number of pixels, n_q is the number of pixels with gray level r_q, and L is the total number of gray levels. Suppose a threshold value k is chosen, splitting the pixels into two classes: C0, covering gray levels [0, 1, …, k − 1], and C1, covering [k, k + 1, …, L − 1]. Otsu's method finds the threshold k that maximizes the between-class variance σB² of the two classes:

σB² = ω0(μ0 − μT)² + ω1(μ1 − μT)²  (2-14)

where

ω0 = Σ from q=0 to k−1 of p_r(r_q),  ω1 = Σ from q=k to L−1 of p_r(r_q),
μ0 = Σ from q=0 to k−1 of q·p_r(r_q) / ω0,  μ1 = Σ from q=k to L−1 of q·p_r(r_q) / ω1,
μT = Σ from q=0 to L−1 of q·p_r(r_q)

2.4 K-means Clustering of Data Mining

Data mining is also called Knowledge Discovery in Databases (KDD) and is defined as extracting unknown, valuable information hidden in data [14]. "Data mining is also obtained by automatic or semi-automatic data collection and massive data analysis for the establishment of a valid model" [13]. Data mining commonly involves four classes of tasks: 1. classification, 2. clustering, 3. regression, and 4. association rule learning. In this paper we use the clustering technique, introduced below as the K-means algorithm.

The K-means algorithm assigns each point to the cluster whose center (also called the centroid) is nearest. The center is the average of all the points in the cluster; that is, its coordinates are the arithmetic mean over each dimension separately of all the points in the cluster. The algorithm steps are [1]:

Step1: Specify the number of clusters, k.

(28) Step2: Randomly generate k clusters and determine the cluster centers .Assign each point to the nearest cluster center.. Step3: Use the leftover to assign each point to nearest cluster center to classify into a cluster.. Step4: With every present cluster information, recomputed to find the new cluster center. Step5: If the convergence situation is not reached repeat step 3 to step 5, until convergence situation is met. (Convergence situation is met when the k group does not change.). Figure 2.5 The flowchart for K-means algorithm 18.
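The loop of Steps 1 to 5 can be sketched in pure Python. This is only an illustration, not the thesis's MATLAB implementation; the function name is ours, and for reproducibility the sketch seeds the centers with the first k points instead of the random initialisation of Step 2.

```python
def kmeans(points, k):
    """Cluster 2-D points with the K-means loop of Steps 1-5.

    Returns (centroids, labels). Deterministic initialisation is used here
    so the example is reproducible; Step 2 of the algorithm is random.
    """
    centroids = list(points[:k])                      # Step 2 (deterministic here)
    labels = [None] * len(points)
    while True:
        changed = False
        for i, (x, y) in enumerate(points):           # Step 3: nearest center
            d = [(x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centroids]
            j = d.index(min(d))
            if labels[i] != j:
                labels[i], changed = j, True
        if not changed:                               # Step 5: convergence reached
            return centroids, labels
        for j in range(k):                            # Step 4: recompute centers
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
```

On two well-separated groups of points the loop converges in a couple of iterations, with each centroid landing on the mean of its group.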

Chapter 3
Automatically Generate a Simplified Chest Atlas from the Chest Computed Tomography Using Morphology, Image Segmentation and K-means

Medical images are used for medical research and treatment. Imaging allows medical staff to view the interior of the human body without opening it. In 1895 a German physicist, Wilhelm Roentgen, discovered X-rays and opened a new era in the medical field [15]. From then on, doctors were able to view the interior of the body without operating; before X-rays, touching and operating on the patient was common and the risks were high. Medical imaging has kept developing to this day: besides angiography, cardiac angiography, computerized tomography, dental radiography, fluoroscopy, mammography and radiography, there are also positron emission tomography and single photon emission tomography. The research in this paper is conducted on CT DICOM files provided by cooperating doctors.

3.1 Introduction

In this chapter we combine several image processing methods with the K-means clustering method from data mining to gradually build a chest CT atlas. First, a method similar to morphological opening, i.e. repeated erosion followed by repeated dilation, eliminates the platform in the CT. Two steps follow: Sobel edge detection extracts the outline, and Otsu's method supplies a gray-level threshold that eliminates the unwanted body parts and keeps the parts we need. The two resulting binary images are combined with the AND operation of mathematical morphology; in other words, we keep only the edges belonging to the bone portion. The coordinates of the remaining non-zero pixels are the input to the K-means algorithm, which clusters them; each cluster centroid is then marked at its location on the image. Finally the centroids are ordered and, based on anatomy references [8, 12, 17], each location is labeled with its anatomical description. The result is a chest CT image annotated as an atlas.

Figure 3.1 The flow chart of the proposed method

3.2 The Experiment Steps

Following the research steps above, we used the chest CTs provided by the doctors: ten patients in total, six male and four female. The maximum gray level is 2^16 − 1. Taking into account the clarity of the images and the symmetry of organ locations in the body, we decided how many clusters we needed: six and five. MATLAB was used to simulate and display the results.

3.2.1 A Method Similar to Opening

In the CTs provided by the doctors, the patients lie on a platform, as seen in Figure 3.2(a). The platform would distort the results, so it must be eliminated before an accurate result can be obtained. Since not all images can be shown, only one patient's image is used to illustrate the result. We used a technique similar to opening [28]: several erosions followed by the same number of dilations to eliminate the platform. Before applying this technique, a threshold of 150 was used to set every pixel with a smaller value to background, so that low-value pixels and other outlier pollution are ignored; this also speeds up the proposed technique. This paper uses N4 as the structuring element.

(a) (b)

Figure 3.2 (a) Typical chest CT (b) Platform used in image capture (Source: http://en.wikipedia.org/wiki/File:64_slice_scanner.JPG)

  1
1 1 1
  1

Figure 3.3 The standard structuring element (SE) N4

Table 3.1 Simple dilation and erosion rules using N4 as the SE

Operation | Rule
Dilation  | Simply put, the output pixel is the maximum over the input pixel and its neighbours (up, down, left, right and itself). For a binary image, if any neighbouring point is not background (not zero), the operated pixel is replaced by 1.
Erosion   | Simply put, the output pixel is the minimum over the input pixel and its neighbours (up, down, left, right and itself). For a binary image, if any neighbouring point is background (zero), the operated pixel is replaced by 0.
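The rules of Table 3.1 can be sketched in pure Python as a single N4 min/max filter; this is an illustration with our own function names, not the thesis's MATLAB code.

```python
def _n4_filter(img, op):
    """Apply op (min or max) over each pixel's N4 neighbourhood
    (itself plus up, down, left, right), per the rules of Table 3.1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y][x]]
            if y > 0: vals.append(img[y - 1][x])
            if y + 1 < h: vals.append(img[y + 1][x])
            if x > 0: vals.append(img[y][x - 1])
            if x + 1 < w: vals.append(img[y][x + 1])
            out[y][x] = op(vals)
    return out

def n4_dilate(img):
    """Dilation: maximum over the N4 neighbourhood."""
    return _n4_filter(img, max)

def n4_erode(img):
    """Erosion: minimum over the N4 neighbourhood."""
    return _n4_filter(img, min)
```

On a binary image, dilating a single foreground pixel grows it into a plus shape, and eroding that plus shape shrinks it back to the single pixel, matching Figures 3.4 and 3.6.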

Table 3.1 gives a brief description of dilation on a binary image; Figures 3.4 to 3.7 illustrate the operations pixel by pixel. The three panels of Figure 3.4 show the dilation process for a binary image. We discuss the pixel marked by the arrow head: in Figure 3.4(a) its neighbours are marked in green, and since one of them is a foreground pixel, the treated pixel (marked in red) is transformed into a foreground pixel. After treating every pixel in the image as in Figure 3.4(b), the result is shown in Figure 3.4(c). For grayscale dilation we use Figure 3.5: the value of the pixel marked by the black arrow head in Figure 3.5(a) is replaced by the biggest value among its neighbours, as seen in Figure 3.5(b); Figure 3.5(c) shows the result after treating all pixels. Next we discuss erosion. Figure 3.6 shows erosion on a binary image: the neighbours of the marked pixel include background pixels, so the marked pixel becomes a background pixel, as seen in Figure 3.6(b); the result is shown in Figure 3.6(c). For erosion on a grayscale image, the marked pixel is replaced by the smallest value among its neighbours, as seen in Figure 3.7(b); the result is shown in Figure 3.7(c).

(a) (b) (c)

Figure 3.4 The dilation process for a binary image: (a) the original binary image, with the pixel under the arrow head being dilated; (b) since a neighbouring pixel is foreground, the value 0 is replaced by 1; (c) result.

(a) (b) (c)

Figure 3.5 The dilation process for a grayscale image: (a) the original grayscale image, with the pixel under the arrow head being dilated; (b) the largest neighbouring value replaces the marked pixel; (c) result.

(a) (b) (c)

Figure 3.6 The erosion process for a binary image: (a) the original binary image, with the pixel under the arrow head being eroded; (b) since a neighbouring pixel is background, 1 is replaced by 0; (c) result.

(a) (b) (c)

Figure 3.7 The erosion process for a grayscale image: (a) the original grayscale image, with the pixel under the arrow head being eroded; (b) the smallest neighbouring value replaces the marked pixel; (c) result.

If too few erosions are performed, the platform in the image cannot be deleted; if too many are performed, detail in the image is still lost even after the dilations. How many times to repeat the process is a topic for further discussion. After numerous tests, the optimal number was found to be 13: perform 13 erosions followed immediately by 13 dilations. The results can be seen in Figure 3.8.

(a) (b)

Figure 3.8 Deleting the platform: (a) original chest CT of patient 1 (b) result of our method

Figure 3.8 compares patient 1's original CT with the result of the method similar to opening. Unless noted otherwise, the experimental images in this chapter are all from patient 1.
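The whole clean-up step, background suppression at 150 followed by n erosions and n dilations, can be sketched as below. This is a small-scale illustration under our own names and parameters; the thesis found n = 13 suitable for its CT slices, and as the text notes, the shape is not perfectly restored (some detail is lost).

```python
def n4_pass(img, op):
    """One pass of N4 erosion (op=min) or dilation (op=max), per Table 3.1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y][x]]
            if y > 0: vals.append(img[y - 1][x])
            if y + 1 < h: vals.append(img[y + 1][x])
            if x > 0: vals.append(img[y][x - 1])
            if x + 1 < w: vals.append(img[y][x + 1])
            out[y][x] = op(vals)
    return out

def remove_platform(img, n=13, bg_thresh=150):
    """Opening-like clean-up: set pixels below bg_thresh to background,
    then apply n erosions followed by n dilations (n=13 in the thesis)."""
    img = [[v if v >= bg_thresh else 0 for v in row] for row in img]
    for _ in range(n):
        img = n4_pass(img, min)
    for _ in range(n):
        img = n4_pass(img, max)
    return img
```

On a toy 9x9 image with n = 2, a one-pixel-wide column (a stand-in for the thin platform) disappears after the first erosion and never comes back, while the interior of a large 5x5 block survives the erosions and is regrown by the dilations.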

3.2.2 Sobel Edge Detection

g ≥ T?

Figure 3.9 The flow chart of the Sobel algorithm

With the platform eliminated, only the parts to be processed, i.e. the body, remain. The flow chart of the Sobel algorithm is shown above. It is an edge detection method based on computing a gradient; a pixel is taken as an edge point if its gradient is bigger than a chosen threshold. We therefore need a threshold value T to determine the result of the Sobel process. After numerous experiments with different T values, shown in Figure 3.10, we decided on T = 100000. This may seem a large threshold; however, we do not need every detailed edge in the image, so the Sobel process should not be made too sensitive.
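The per-pixel test of Figure 3.9, computing the gradient of Eq. (2-12) and comparing it against T, can be sketched in pure Python (function name ours; T is a parameter, and border pixels are skipped since the 3x3 neighbourhood is undefined there):

```python
def sobel_edges(img, T):
    """Mark pixels whose Sobel gradient magnitude (Eq. 2-12) is >= T.

    img is a 2-D list of gray levels; returns a binary edge mask.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # p1..p9 is the 3x3 neighbourhood of Figure 2.4(a), row-major.
            p = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            p1, p2, p3, p4, _, p6, p7, p8, p9 = p
            gx = (p7 + 2 * p8 + p9) - (p1 + 2 * p2 + p3)
            gy = (p3 + 2 * p6 + p9) - (p1 + 2 * p4 + p7)
            if (gx * gx + gy * gy) ** 0.5 >= T:
                edges[y][x] = 1
    return edges
```

On a toy image with a sharp vertical step from 0 to 255, only the pixels straddling the step are marked, while the flat regions stay background.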

(a) (b) (c)

Figure 3.10 Sobel detection with different thresholds T: (a) T = 0 (b) T = 10000 (c) T = 100000

3.2.3 Thresholding

In the section above we obtained the edge-detected image, produced from the image whose platform had already been eliminated. We used this image in the steps that follow; however, the results were not acceptable. In Figure 3.11(a) the arrow head points at the outer outline of the body: a very visible edge, but useless to us, because this outline varies greatly between patients (taller, shorter, fatter or thinner) and is therefore unsuitable for registration. In Figure 3.11(b) the numbers mark the regions we want to label. These regions all share a special feature: they lie on or near bone, so the pixel values around them are large and the gray level changes over a wide range. Accordingly, besides being an edge point, a pixel we keep must also have a gray level above some value, and we designed a method on that basis: a point kept after Sobel edge detection must also have a high pixel value in the original image. We therefore choose a threshold that selects high-gray-level pixels, and combine the Sobel edge detection result with the thresholded original image by the morphological AND operation [31, 32]. We expected this to give the result we wanted.

(a) (b)

Figure 3.11 Reasons for using thresholding: (a) the soft-tissue edge we want to eliminate (b) the result the proposed method should give

For the thresholding we use Otsu's method, which searches for a suitable threshold value. After the threshold is calculated, every pixel above it is kept as an object point with its gray level unchanged, while every pixel whose gray level is lower than the threshold becomes a background point with gray level zero. The first step, the method similar to opening, is followed by this thresholding, and the result is saved. The Otsu-thresholded image and the Sobel edge detection image are then combined with the AND operation, which takes the intersection of the two images. The result is the binary image seen in Figure 3.12.
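The two parts of this step, Otsu's threshold selection (Section 2.3) and the AND with the edge mask, can be sketched in pure Python. Function names and the (x, y) coordinate convention are ours; a real implementation would use library routines.

```python
def otsu_threshold(img, L=256):
    """Otsu's method: return the k maximizing sigma_B^2 of Eq. (2-14)."""
    pixels = [v for row in img for v in row]
    n = len(pixels)
    hist = [0] * L
    for v in pixels:
        hist[v] += 1
    p = [h / n for h in hist]                       # p_r(r_q) of Eq. (2-13)
    mu_T = sum(q * p[q] for q in range(L))          # global mean
    best_k, best_var, w0, mu0_sum = 0, -1.0, 0.0, 0.0
    for k in range(1, L):
        w0 += p[k - 1]                              # omega_0 up to level k-1
        mu0_sum += (k - 1) * p[k - 1]
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        var = (w0 * (mu0_sum / w0 - mu_T) ** 2
               + w1 * ((mu_T - mu0_sum) / w1 - mu_T) ** 2)
        if var > best_var:
            best_var, best_k = var, k
    return best_k

def bone_edge_coords(img, edge_mask):
    """AND the Otsu bright-pixel mask with the Sobel edge mask, and return
    the (x, y) coordinates of the surviving pixels: the K-means input."""
    k = otsu_threshold(img)
    return [(x, y)
            for y, row in enumerate(img)
            for x, v in enumerate(row)
            if v >= k and edge_mask[y][x]]
```

On a toy image with a dark background and a bright region, the selected threshold separates the two populations, and only edge pixels that are also bright survive the AND.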

Figure 3.12 The result of ANDing the Sobel result with the thresholding result.

The difference between Figure 3.11(a) and Figure 3.12 is visible: the outer line of the body has been eliminated, along with some organs and muscles. What remains is bone edge, and these pixels are the input of the K-means algorithm.

3.2.4 K-means Clustering

From the previous AND result we obtain a binary image. The coordinates of every non-zero pixel are taken as the input of the K-means algorithm, and the centroid coordinates of the resulting clusters are the coordinates we need; they are then marked on the image. After several experimental tries with different K values, the results were not satisfactory, as Figure 3.13 shows. Figures 3.13(a) and (b) show K-means with different numbers of clusters: the red arrow heads mark the ideal locations, while the yellow arrow heads pointing to yellow stars mark the K-means results, i.e. the centroid of each cluster.

(a) (b)

Figure 3.13 K-means results for different numbers of clusters: (a) K = 3 (b) K = 4

Finally we used two other K values, K = 5 and K = 6. These clustering results were closer to what we expected, as seen in Figure 3.14.

(a) (b)

Figure 3.14 K-means results for different numbers of clusters: (a) K = 5 (b) K = 6

3.2.5 Ordering the Centroids from K-means

After the K-means algorithm finishes we have all the centroid coordinates, but they must be ordered before the body parts can be labeled from them. If five body parts are chosen, K is set to five in the K-means algorithm and the resulting 2-D centroid coordinates are ordered by their x coordinate, from smallest to largest: the smallest x becomes the first point and the largest x the last, as seen in Figure 3.15.

Figure 3.15 Ordering 5 centroids

If there are six body parts, K is set to six. The x coordinates are again arranged from smallest to largest, but the third and fourth points are then re-ordered by their y coordinates: of the two, the one with the larger y becomes the final third point and the one with the smaller y becomes the final fourth point. This is seen in Figure 3.16.

Figure 3.16 Ordering 6 centroids

Lastly the ordered points are given their anatomical names: the first point is assigned to the left upper limb, and so on. These body parts come from anatomy books [8, 12, 17], as seen in Table 3.2. For example, for the location of the yellow number 1 in Figure 3.15, the table gives 'left upper limb'.

Table 3.2 The body parts assigned to the ordered points

                          1st             2nd                                  3rd               4th          5th                                  6th
k = 6 (six body parts)    Left upper limb Subscapularis muscle or head of rib  Body of vertebra  Spinal canal Subscapularis muscle or head of rib  Right upper limb
k = 5 (five body parts)   Left upper limb Subscapularis muscle or head of rib  Spinal canal      Subscapularis muscle or head of rib  Right upper limb

3.2.6 Proposed Method Algorithm

The proposed method is summarized as follows:
Step 1: Pass the image through recursive erosion and dilation to eliminate the platform from the chest image.
Step 2: Apply Sobel edge detection to the result of Step 1 to obtain a binary image.
Step 3: Apply thresholding to the result of Step 1 to obtain a second image.
Step 4: Combine the results of Steps 2 and 3 with the AND operation of mathematical morphology, obtaining a binary image.
Step 5: Take the coordinates of the non-zero pixels of the Step 4 image as input, specify a k value, and run the K-means algorithm; the output is the set of cluster centroids, which are marked on the image.
Step 6: Sort the centroid coordinates of Step 5 by our specified ordering criterion, assign each ordered centroid its anatomical definition, and label the definitions on the image.
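The ordering criterion of Step 6 can be sketched in pure Python (function name ours; the "larger y third" rule for the middle pair follows the text, where y is the image row coordinate):

```python
def order_centroids(centroids):
    """Order centroids left-to-right by x (Figure 3.15). With six parts,
    the middle pair (3rd and 4th) is re-ordered by y: larger y becomes
    the third point, smaller y the fourth (Figure 3.16)."""
    ordered = sorted(centroids)          # tuples sort by x, then y
    if len(ordered) == 6:
        a, b = ordered[2], ordered[3]
        if a[1] < b[1]:
            ordered[2], ordered[3] = b, a
    return ordered
```

Each position in the returned list is then looked up in Table 3.2, e.g. the first entry is labeled 'left upper limb'.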

Chapter 4
Experiment and Discussion

In this chapter we evaluate how useful the proposed method is. For testing we used chest CT files provided by doctors at NCKU Hospital. As the evaluation criterion, we marked the ideal points according to anatomy references [8, 12, 17] and compared them with the points our method produced, i.e. the centroid coordinates from the K-means algorithm, using the Euclidean distance [7]. We used the CT images of ten patients in total. Because of differences in body shape and position, the numbers of images differ between patients: patient 1 has 31 images, patient 2 has 28, patient 3 has 28, patient 4 has 36, patient 5 has 32, patient 6 has 35, patient 7 has 31, patient 8 has 33, patient 9 has 27 and patient 10 has 28, for 309 images altogether. We organized the data by sex, and also as one group ignoring sex. For these three groups, five or six body parts were extracted from the chest CT images, and the Euclidean distances between our labeled locations and the reference locations were calculated. In some images certain body parts were unclear or not captured; estimating the location of such unclear parts would not give evident results. All experiments were performed on an Intel® Core™ 2 Duo CPU E8300 @ 2.83 GHz personal computer with 2 GB main memory and a 250 GB hard disk running Windows XP Service Pack 3. In Section 4.1 we show and compare the images produced by the proposed method.

4.1 The Results of Applying the Proposed Method to Ten Patients

4.1.1 The Euclidean Distance Criterion

Before going into the results we introduce the Euclidean distance. In mathematics, the Euclidean distance (or Euclidean metric) is the ordinary distance between two points, the one that would be measured with a ruler, and it can be derived by repeated application of the Pythagorean theorem. Using this formula as a distance turns Euclidean space into a metric space; the associated norm is called the Euclidean norm. Older literature refers to this metric as the Pythagorean metric; it has been rediscovered numerous times in history as a logical extension of the Pythagorean theorem. The Euclidean distance between points U = (u1, u2, ..., un) and V = (v1, v2, ..., vn) in Euclidean n-space is defined as

sqrt((u1 − v1)^2 + (u2 − v2)^2 + ... + (un − vn)^2) = sqrt(Σ_{i=1}^{n} (ui − vi)^2)   (4-1)

For example, for two 2-D points U = (u1, u2) and V = (v1, v2), the distance is computed as

sqrt((u1 − v1)^2 + (u2 − v2)^2)   (4-2)

In this paper all Euclidean distances are computed on 2-D data. The K-means algorithm introduced in the previous chapter also computes the distance between sample points and centroids with the Euclidean distance, and it is the main criterion for assessing the results of this experiment.

4.1.2 The Results of the Proposed Method When Choosing Six Body Parts

In this section we use the original CT to automatically obtain six body parts and label them with organ definitions: body of vertebra, spinal canal, right and left upper limb, and left and right subscapularis muscle or head of rib. Because of the enormous number of images, we picked images from patient 1, choosing the best case and the worst case to display and describe.
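The Euclidean-distance criterion of Eq. (4-1), used for all the comparisons that follow, can be sketched as:

```python
from math import sqrt

def euclidean(u, v):
    """Euclidean distance of Eq. (4-1) between two n-dimensional points."""
    return sqrt(sum((ui - vi) ** 2 for ui, vi in zip(u, v)))
```

For the 2-D case of Eq. (4-2), `euclidean((0, 0), (3, 4))` gives the familiar 3-4-5 triangle result of 5.0.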

(a) (b)

Figure 4.1 Best case of choosing six body parts: (a) original (b) labeled image

Figure 4.1 is the most typical kind of CT image we want to process: the bones and organs are clearly visible, so the result for such an image is fair. The difference from our ideal coordinates is small; the mean Euclidean distance was only 4.0181.

(a) (b)

Figure 4.2 Worst case of choosing six body parts: (a) original image (b) image after labeling

Figure 4.2 shows the worst case of choosing six body parts. The picture clearly shows that the chosen points are invalid; this is not what we had expected. The mean Euclidean distance is 42.0539. The cause is simply that the body of vertebra was not captured clearly: its gray level is about the same as that of the surrounding muscle and tissue, as the red arrow head in Figure 4.2(a) indicates.

4.1.3 The Results of the Proposed Method When Choosing Five Body Parts

In this section we use the original CT to automatically obtain five body parts and label them with organ definitions: spinal canal, right and left upper limbs, and right and left subscapularis muscle or head of rib. This can be seen in Figure 4.3.

(a)

(b)

Figure 4.3 Best case of choosing five body parts: (a) original image (b) image after labeling

Figure 4.3 shows the best case of choosing five body parts. The red arrow head in Figure 4.3(a) points at the body of vertebra, which was not captured clearly in this image and is therefore eliminated by the thresholding. This does not affect the result of choosing five body parts, so the five chosen parts are more accurate. The mean Euclidean distance of this image is 3.4748.

(a) (b)

Figure 4.4 Worst case of choosing five body parts: (a) original image (b) image after labeling

Figure 4.4 is the worst case of choosing five body parts. Opposite to the image above, the body of vertebra is captured clearly here, as the yellow arrow head in Figure 4.4(b) shows. The ideal location of the spinal canal is marked by the red arrow head. Because the K-means clustering was influenced by the pixels around the body of vertebra, the chosen spinal canal is deviated upwards, which also deviates the other chosen body parts. The mean Euclidean distance of this image is 37.6147.

4.2 Special Case: The Invasion of Metallic Items

During the experiments on the patients' chest CTs, several special images appeared that influenced the results; these special situations are introduced below. When some numbers were unacceptable, the images were inspected closely to find the problem. We discovered that some CT images contained, besides human tissue, non-human objects such as metallic items or tubes inserted for medical use. This situation appears in patient 8 and patient 9.

(a) (b)

(c)

Figure 4.5 An unknown item inside patient 8: (a) original (b) result of choosing five body parts (c) result of choosing six body parts

The red arrow head in Figure 4.5(a) shows the invasion of an unknown item. Because the item could not be eliminated, it influenced the results greatly: the mean Euclidean distance was 45.0139 when five body parts were chosen and 23.5105 when six were chosen. Some images with such invasions had a mean over 100, which greatly influences the overall results.

Table 4.1 Euclidean distances (in pixels) when obtaining five body parts

              Best case   Worst case   Mean      Standard deviation
Patient 1     3.4748      37.6147      13.1242   9.2494
Patient 2     4.3778      40.4950      23.6917   13.4649
Patient 3     2.7944      42.1344      26.2717   15.1543
Patient 4     6.2081      45.7356      27.0521   16.5821
Patient 5     3.3231      39.2526      15.7397   11.7670
Patient 6     4.6041      47.3880      33.9643   12.2842
Patient 7     6.3187      54.8082      40.6728   9.9252
Patient 8     4.5551      125.0787     41.1615   38.9842
Patient 9     22.7687     43.0277      33.7533   5.3916
Patient 10    27.2089     45.3313      38.5092   3.5560
Male          4.5551      125.0787     35.6390   19.5601
Female        2.7944      42.1344      19.4075   13.4802
All patients  2.7944      125.0787     29.3781   19.1226

Table 4.2 Euclidean distances (in pixels) when obtaining six body parts

              Best case   Worst case   Mean      Standard deviation
Patient 1     4.0181      42.0539      21.9965   11.3501
Patient 2     3.1319      12.2472      6.6272    2.4241
Patient 3     3.7466      16.0136      7.7499    3.3473
Patient 4     4.6648      21.8054      7.8267    2.9325
Patient 5     4.5572      39.3277      15.4894   10.8374
Patient 6     4.8097      14.6193      7.8810    2.6144
Patient 7     3.0581      16.9723      9.9050    5.1280
Patient 8     4.7910      119.4405     35.2707   39.4599
Patient 9     3.9855      46.4590      13.9921   13.4462
Patient 10    3.8667      10.9022      6.5793    1.5851
Male          3.0581      119.4405     13.6347   19.9956
Female        3.1319      42.0539      13.2783   10.3227
All patients  3.0581      119.4405     13.6354   17.1642

Table 4.3 Obtaining five body parts, eliminating the special cases

              Best case   Worst case   Mean      Standard deviation
Male          4.6041      21.8054      35.6390   19.5601
Female        2.7944      42.1344      19.4075   13.4802
All patients  2.7944      42.1344      27.3544   15.2300

Table 4.4 Obtaining six body parts, eliminating the special cases

              Best case   Worst case   Mean      Standard deviation
Male          3.0581      16.9723      8.0682    3.4776
Female        3.1319      42.0539      13.2783   10.3227
All patients  3.0581      42.0539      10.5582   7.9870

4.3 Conclusion

Figure 4.6 Histogram of patient 1's Euclidean distances when choosing six body parts.

Figure 4.7 Histogram of patient 1's Euclidean distances when choosing five body parts.

Finally, the image data are organized into Table 4.1 to Table 4.4. Table 4.1 gives the results and averages for five body parts, and Table 4.2 those for six. Figures 4.6 and 4.7 show patient 1's Euclidean distance histograms for six and five body parts respectively.

The horizontal axis is the image number and the vertical axis the Euclidean distance. For each of the 10 patients the best case, worst case and average are given; the best and worst cases are the images whose mean Euclidean distance between the five or six marked points and the ideal five or six points is smallest or largest. The data are also separated into female and male. The analysis shows that for most chest CT images, choosing six body parts is more accurate. However, in some patients' images certain muscles or organs were not captured clearly, most often the body of vertebra; for those images it is better to choose five body parts. In Figures 4.6 and 4.7 the arrow heads mark image 31: because the body of vertebra was not captured clearly there, choosing five body parts is more accurate than choosing six. Out of the 10 patients, patients 8 and 9 show foreign-object invasions, probably medical intravenous drips or instruments used to aid the patient, which affect the results. Tables 4.3 and 4.4 give the results with patients 8 and 9 eliminated; the averages are then much more acceptable: a mean Euclidean distance of 27.3544 when five body parts are chosen and 10.5582 when six are chosen. It is also worth noting, from Tables 4.3 and 4.4, that the patient's sex influences the results of our method.

Chapter 5
Conclusion and Future Work

5.1 Conclusions

With the steady improvement of technology, human lifestyles keep improving. As industrialization shapes society, more and more diseases become visible: food loses nutrients during processing, and with the mass pollution from industry, the incidence of cancer is growing at a high rate. Although technology is improving, diagnosis in the medical field still relies on X-rays or CT to support the doctor's judgment. In this paper a simple method is introduced to automatically analyze chest CTs and locate each organ to produce a simplified atlas.

5.2 Future Work

The result accomplished in this thesis is only a preliminary step toward image registration. Much work remains for the future, including:

(1) Further atlases
Because of limited time and scope, the CT analysis in this paper concentrates on six locations or organs in the chest. With this method, more research can be conducted on other areas.

(2) Automatic selection of chest images and of the K value
Although this paper produces the CT atlas automatically, some parts are still done

by human power: the chest CT slices to process are chosen with anatomy books as references, and the K value for K-means is specified manually. Determining the K value automatically and mathematically from the images [3] would remove these manual steps.

(3) Three-dimensional images and a whole-body atlas
The images used in this thesis are 2-D, but human organs are 3-D, and their spatial properties are lost in 2-D slices. We hope future students can extend the work from 2-D to 3-D [18, 21, 27, 29] for further research and produce a whole-body atlas that could be given to medical staff to support future diagnoses.

References
[1] 丁一賢 and 陳牧言, Data Mining, Taichung: 滄海書局, 2005.
[2] 王精忠, "A Study of License Plate Recognition Systems," Master's thesis, Institute of Communication Engineering, Tatung University, 2005.
[3] 李賜遠, "Dynamic Data Clustering Combining K-means and a Particle Swarm Algorithm," Master's thesis, Department of Information Management, Tatung University, 2007.
[4] 李明澤, "Lineament Extraction and Analysis from Radar Images of Mountainous Areas," Master's thesis, Institute of Space Science, National Central University, 2007.
[5] 呂竑錤, "Two-Dimensional Image Registration of Computed Tomography between Different Patients," Master's thesis, Department of Electrical Engineering, National University of Kaohsiung, 2009.
[6] 林松柏, "Edge Detection of Wave Images with Traditional Methods, Wavelet Theory and Active Contour Models," Master's thesis, Department of Systems and Naval Mechatronic Engineering, National Cheng Kung University.
[7] 林志安, "Finger-Shape Recognition for Security Systems," Master's thesis, Institute of Communications Engineering, National Chung Cheng University, 2003.
[8] 周明加, 柯妙華, 陳淑姿, 傅毓秀 and 簡基憲 (trans.), T. Weston, Color Illustrated Anatomy (Chinese edition), Taipei: 藝軒圖書出版社, 2000.
[9] 黃柏翰, "Automatic Mesh Segmentation of Three-Dimensional Facial Models Using Intensity Gradients," Master's thesis, Department of Biomedical Engineering, National Cheng Kung University, 2002.
[10] 黃曉玲, "Medical Image Registration by Maximization of Mutual Information," Master's thesis, Department of Electrical Engineering, Chung Yuan Christian University, 2003.
[11] 陳靜怡, "Application of Image Processing and Neural Networks to Automatic Micronucleus Counting," Master's thesis, Department of Information Management, Yuan Ze University, 2005.
[12] 陳金山 and 徐淑媛 (trans.), L. H. Blackbourne, J. Antevil and C. Moore, Anatomy Recall (Chinese edition), pp. 195-199, Taipei: 藝軒圖書出版社, 2001.
[13] 張豫雯, "Applying Data Mining to Discover Potential E-Commerce Customers," Master's thesis, Department of Computer Science and Engineering, Tatung University, 2003.

[14] 張云濤, 龔玲, Principles and Techniques of Data Mining (資料探勘原理與技術), Taipei: 五南圖書出版股份有限公司, 2007.
[15] 張榮華, Improvement of Medical Images Under the X-Ray Beam Hardening Effect, Master's thesis, Department of Biomedical Engineering, Chung Yuan Christian University, 2004.
[16] 廖元麟, Three-Dimensional Registration and Analysis of Functional Brain Images, Master's thesis, Department of Computer Science and Information Engineering, National Cheng Kung University, 2003.
[17] 鄭麗菁, 馬國興, 鄭澄意, 陳建行 (trans.), J. Weir and P. H. Abrahams, Imaging Atlas of Human Anatomy (圖解人體解剖影像學), pp. 92-99, Taipei: 合記圖書出版社, 2005.
[18] 蔡明倫, Three-Dimensional Brain Structure Registration (三度空間腦部結構校準), Master's thesis, Institute of Computer and Information Science, National Chiao Tung University, 2002.
[19] 繆紹綱, Digital Image Processing Using MATLAB (數位影像處理活用 MATLAB), 全華科技股份有限公司, 1999.
[20] A. T. Joseph, "Understanding the Standardized Uptake Value, Its Methods, and Implications for Usage," The Journal of Nuclear Medicine, Vol. 45, No. 9, pp. 1431-1434, 2004.
[21] G. Li, T. Liu, G. Young, L. Guo, and S. T. C. Wong, "Deformation Invariant Attribute Vector for 3D Image Registration: Method and Validation," in Proc. 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, pp. 442-445, 2006.
[22] J. Y. Huang, P. F. Kao, and Y. S. Chen, "A Set of Image Processing Algorithms for Computer-Aided Diagnosis in Nuclear Medicine Whole Body Bone Scan Image," IEEE Trans. on Nuclear Science, Vol. 54, No. 3, 2007.
[23] J. Serra, Image Analysis and Mathematical Morphology, Academic Press, London, 1982.
[24] M. Gudmundsson, E. A. El-Kwae, and M. R. Kabuka, "Edge Detection in Medical Images Using a Genetic Algorithm," IEEE Trans. on Medical Imaging, Vol. 17, No. 3, 1998.

[25] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 9, No. 1, pp. 62-66, 1979.
[26] O. Demirkaya, "Lesion Segmentation in Whole-Body Images of PET," 2003 IEEE Nuclear Science Symposium Conference Record, Vol. 4, pp. 2873-2876, 2003.
[27] P. J. Besl and N. D. McKay, "A Method for Registration of 3-D Shapes," IEEE Trans. Pattern Anal. Machine Intell., Vol. 14, No. 2, pp. 239-256, 1992.
[28] S. Chen and R. M. Haralick, "Recursive Erosion, Dilation, Opening, and Closing Transforms," IEEE Trans. Image Processing, Vol. 4, No. 3, 1995.
[29] S. Gefen, L. Bertrand, N. Kiryati, and J. Nissanov, "Localization of Sections Within the Brain Via 2D to 3D Image Registration," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 2, pp. 733-736, 2005.
[30] T. K. Yin and N. T. Chiu, "A Computer-Aided Diagnosis for Locating Abnormalities in Bone Scintigraphy by a Fuzzy System with a Three-Step Minimization Approach," IEEE Trans. Med. Imag., Vol. 23, pp. 639-654, 2004.
[31] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition, Prentice Hall, 2002.
[32] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Third Edition, Prentice Hall, 2008.
[33] R. M. Haralick, S. R. Sternberg, and X. Zhuang, "Image Analysis Using Mathematical Morphology," IEEE Trans. Pattern Anal. Machine Intell., Vol. 9, No. 4, pp. 532-550, 1987.
