
國立交通大學 資訊工程學系 碩士論文
(National Chiao Tung University, Department of Computer Science and Information Engineering, Master's Thesis)

肺炎胸部X光表徵評估之電腦輔助診斷系統
Computer-Aided Diagnostic Evaluation of Chest X-Ray Manifestations of Pneumonia

Student: Chao-Han Wu (吳昭翰)
Advisor: Dr. Yong-Sheng Chen (陳永昇)

June 2005

Computer-Aided Diagnostic Evaluation of Chest X-Ray Manifestations of Pneumonia

A dissertation presented by Chao-Han Wu to the Department of Computer Science and Information Engineering, in partial fulfillment of the requirements for the degree of Master of Science in the subject of Computer Science and Information Engineering.

National Chiao Tung University
Hsinchu, Taiwan
2005

Computer-Aided Diagnostic Evaluation of Chest X-Ray Manifestations of Pneumonia

Copyright © 2005 by Chao-Han Wu


摘要 (Abstract in Chinese)

Advisor: Dr. Yong-Sheng Chen. Student: Chao-Han Wu. Graduate program, Department of Computer Science and Information Engineering, National Chiao Tung University.

Because chest X-rays can reveal pulmonary manifestations, they play an important role in the clinical diagnosis of pneumonia. In its early stage, pneumonia usually produces only subtle radiographic findings that are easily overlooked by the naked eye, which can affect diagnosis and treatment. In this work, we developed a computer-aided diagnosis system for chest X-rays of pneumonia. The system consists of two parts. One part uses techniques such as edge detection and the 2-D wavelet transform to extract the pneumonic manifestations in chest X-rays. The other part uses 2-D nonlinear image warping to reduce the spatial structural differences between X-ray images caused by different acquisition parameters and patient postures. Aligning chest X-ray images of the same patient taken at different times supports long-term observation, while aligning chest X-ray images across patients makes inter-patient comparison possible. According to our experimental results, the proposed computer-aided diagnosis system can quantify the infiltration phenomenon, and the evolution of the extracted pneumonic manifestations over the course of the disease is mutually consistent with the clinical timing of intubation and extubation.

Abstract

Chest X-ray (CXR) imaging is important for the clinical diagnosis of pneumonia because of its ability to reveal pulmonary manifestations. In the early stage of pneumonia, pneumonic findings are subtle on plain films and can easily escape detection by visual inspection. In this work, we develop a computer-aided diagnosis (CAD) system in chest radiography for pneumonia. There are two major components in this CAD system. The first comprises image processing techniques, such as edge detection and the 2-D Haar wavelet transform, for extracting pneumonic manifestations from CXR images. The second consists of 2-D nonlinear image warping techniques that reduce the structural displacements between CXR images caused by variations in acquisition conditions and patient posture across examinations. Registering CXR images of the same subject is useful for longitudinal study; moreover, registering CXR images across subjects makes inter-subject comparison possible. According to our experiments, the proposed CAD system can extract the infiltration in the lung field of CXR images. Furthermore, the progress of the extracted features is consistent with the diagnostic decisions on the timing of intubation and extubation for SARS patients.

誌謝 (Acknowledgements)

Two years have passed quickly; my master's studies are coming to an end and the results are finally here. First, I thank Prof. Yong-Sheng Chen (陳永昇) and Prof. Li-Fen Chen (陳麗芬) for leading me into the field of medical image processing, for providing a complete research environment and experimental tools, and for their constant encouragement and advice during this research, which allowed my work to be completed smoothly. Under their patient guidance, I have also grown considerably in how I conduct myself. I also thank my oral examination committee member, Dr. 葉子成, whose professional opinions during the defense and provision of experimental data were of great help to my research; to all of them I extend my most sincere gratitude.

During these two years of research life, my labmates helped me greatly in both research and leisure. My senior labmate 嘉修 was always calm and thoughtful and often answered my questions; my classmates offered plenty of help with my research, and the junior students supplied the laughter that kept research life from becoming dreary. Thank you all for giving me so many happy memories during these two years.

Finally, I thank my family for their support and tolerance. The road of life keeps extending forward; family, teachers, and friends come and go, but the nourishment for growth that you have given me during these two years is a great treasure for my whole life. Once again, my heartfelt thanks.

Chao-Han, 2005.9.30

Contents

List of Figures ... iv
List of Tables ... vi

1 Introduction ... 1
  1.1 Background ... 2
  1.2 Thesis Scope ... 5
  1.3 Related Works ... 5
    1.3.1 Computer-Aided Diagnosis ... 5
    1.3.2 Image Features ... 10
    1.3.3 Image Registration ... 10
  1.4 Thesis Organization ... 12

2 Image Features ... 13
  2.1 ROI Selection ... 15
  2.2 Feature Calculation ... 16
  2.3 Feature Selection ... 16
  2.4 Linear Discriminant Analysis ... 18
  2.5 Feature Calculation of Discriminative Features ... 20
  2.6 Discriminant Function ... 20

3 Image Registration ... 23
  3.1 Correspondence Constraints ... 25
  3.2 Basis Function Splines ... 25
  3.3 Coefficients of Basis Function Splines ... 27
  3.4 Basis Functions ... 27

4 Results ... 29
  4.1 Materials ... 30
  4.2 Image Features ... 36
  4.3 Image Registration ... 41

  4.4 Retrospective Analysis ... 42

5 Conclusions ... 61

Bibliography ... 63

List of Figures

1.1 Pneumonia is one of the leading causes of death in Taiwan, and its rank rose from 2000 to 2004. ... 2
1.2 CXR images overlaid with the geometric feature. ... 4
1.3 An example of 2-D image registration. ... 6
1.4 Flow chart of the proposed CAD system. ... 7
1.5 The distributions of four CXR patterns with the texture measures. ... 9
2.1 Flow chart of the image features method. ... 14
2.2 Flow chart of feature training in feature extraction. ... 14
2.3 Flow chart of feature calculation in feature extraction. ... 15
2.4 The discriminability evaluated with the t-statistic. ... 17
2.5 The direction onto which to project from multiple dimensions to one dimension. ... 19
2.6 The discriminant function in the two-category case. ... 21
3.1 Flow chart of the image registration method. ... 24
4.1 Experiment data of normal subjects. ... 30
4.2 Experiment data of subject A. ... 31
4.3 Experiment data of subject B. ... 32
4.4 Experiment data of subject X. ... 33
4.5 Experiment data of subject Y. ... 34
4.6 Experiment data of subject Z. ... 35
4.7 A result of the image features method. ... 36
4.8 ROIs selected from the lower region of the lung field for both the pneumonia and normal groups. ... 37
4.9 The selected features. ... 38
4.10 Distribution of the two groups of ROIs. ... 40
4.11 The selected control points in the target image and the source image. ... 42
4.12 The deformation field transformed from a source image to the stereotaxic space of the target image. ... 43

4.13 The combined result of the image features method and the image registration method. ... 44
4.14 Comparing the trend of the progress for including or excluding LDA. ... 45
4.15 Experiment result of normal subjects. ... 46
4.16 Experiment result of subject A. ... 48
4.16 Experiment result of subject A. (con't) ... 49
4.17 Experiment result of subject B. ... 50
4.17 Experiment result of subject B. (con't) ... 51
4.18 Experiment result of subject X. ... 52
4.18 Experiment result of subject X. (con't) ... 53
4.18 Experiment result of subject X. (con't) ... 54
4.18 Experiment result of subject X. (con't) ... 55
4.19 Experiment result of subject Y. ... 56
4.19 Experiment result of subject Y. (con't) ... 57
4.20 Experiment result of subject Z. ... 58
4.20 Experiment result of subject Z. (con't) ... 59

List of Tables

4.1 The discriminability of the selected features. ... 39
4.2 The discriminability of the selected features for different ROI sizes. ... 41
4.3 The discriminability of the selected features for different locations of the normal-group ROIs. ... 41
4.4 Comparison of the computation time for the calculation of the image features. ... 44

Chapter 1
Introduction

1.1 Background

[Figure 1.1: Pneumonia is one of the leading causes of death in Taiwan, and its rank has been rising. The plot shows the rank (vertical axis, 5 to 9) against the years 2000 to 2004.]

According to Anderson [1], the combination of influenza and pneumonia is the seventh leading cause of death in the United States, and the sixth leading cause for the Asian or Pacific Islander population. In Taiwan, pneumonia is the sixth leading cause of death, and its rank has been rising, as shown in Figure 1.1 [2]. Chest radiography is important for the clinical diagnosis of pulmonary lesions: pneumonic findings are revealed in the chest X-ray (CXR) images of pneumonia patients in clinical diagnoses. Although chest radiography provides lower resolution and less information than high-resolution computed tomography (HRCT), it has the advantages of low cost, convenience, and a lower radiation dose.

The longitudinal changes in chest X-ray patterns are very important for clinical diagnosis and treatment.

The interstitial pattern, alveolar consolidation, and fibrosis are clues to the transitions between stages of pneumonia. In the early stage of pneumonia, pneumonic findings are unfortunately subtle on plain films and can easily escape detection by visual inspection, despite considerable upper respiratory complaints and symptoms, even though some abnormality would already be visible in HRCT. In this work, we aim to retrospectively analyze the pneumonic pattern of plain chest X-ray films by using image processing techniques, and to develop a computer-aided diagnosis (CAD) system in chest radiography that provides more of the information revealed in plain chest X-ray films. As shown in Figure 1.2, the pneumonic pattern is extracted and overlaid on the CXR image.

In computer analysis of chest radiographs, variations in modality, acquisition conditions, and patient posture cause geometric distortions of the body structure in the scanned radiographs. The modality may be a standard frontal chest radiograph or a portable chest X-ray, for which the views, beam distances, and even patient postures differ. The shadow of the heart in an anterior-posterior (AP) film (film behind the patient, beam in front) is larger than in a posterior-anterior (PA) film (film in front of the patient, beam behind). In a supine film, the diaphragm is higher and the lung volume is smaller than in a standing patient. The posture of a patient may affect the shape and position of deformable organs, as well as the projected shadows of structures in the chest. Although the modality and acquisition conditions can be carefully controlled to be identical for each examination, the posture of the patient is hardly ever the same, especially for patients, children, and elders who cannot hold the same posture. For inter-subject analysis, the causes of geometric variation between chest radiographs also include variations in fat and in the size, shape, and position of

[Figure 1.2: CXR images overlaid with the geometric feature. The left part is a CXR image of a normal subject; the right part is a CXR image of a pneumonia subject. Positions colored red have larger intensity differences. The edges are thus highlighted in red in both images, while some non-edge parts are also highlighted in the pneumonia image because the pneumonic pattern is extracted there.]

the heart, skeletal structures, and other structures contained in the chest. Because of the structural variations due to the various conditions mentioned above, comparing chest radiographs at the same position can be difficult in computer analysis. We introduce a 2-D image registration method that reduces these structural displacements and thus enables both longitudinal and inter-subject comparison by normalizing the CXR images into the same stereotaxic space.

1.2 Thesis Scope

In this work, we aim to develop a CAD system in chest radiology for helping radiologists evaluate the CXR manifestations of pneumonia. This CAD system comprises two major parts: an image features method and an image registration method. We develop a feature extraction method for quantitative analysis of the pneumonic pattern of chest X-ray films. An image registration method is applied for registering chest X-ray images in intra-subject and inter-subject analyses, as shown in Figure 1.3. The structural displacements due to variations in acquisition conditions and postures across examinations can then be reduced for intra-subject longitudinal study and inter-subject analyses. By applying both techniques, the extracted feature information can be revealed in the registered chest X-ray images, as illustrated in Figure 1.4.

1.3 Related Works

1.3.1 Computer-Aided Diagnosis

Computer-aided diagnosis has been in development for more than thirty years [3], and ever since it became possible to process images on computers, people have looked forward to the time when computers would output a diagnosis from a set of input images. Among CAD applications, which include mammography, chest radiography, chest CT, neuroradiology, and virtual colonography, the first CAD products approved by the Food and Drug Administration (FDA) were in mammography. The FDA approval process contains a requirement that the systems be effective. It is expected that, with the aid of a CAD system, radiologists will need to review only occasional cases; they can thus focus on the more difficult cases, and this

[Figure 1.3: An example of 2-D image registration. The source image is registered to the stereotaxic space of the target image. The red grids on the images are the deformation fields.]

[Figure 1.4: Flow chart of the proposed CAD system. The system inputs are training images in positive and negative classes, an input image, and a target image. From the training images, discriminative features are extracted in the feature training step. The unique feature of each pixel within the input image is then computed according to the extracted features. By applying the image registration method, the computed unique features of the input image are transformed to the coordinate system of the target image.]

makes for a more accurate and efficient practice of radiology.

Radiograph interpretation in radiology is divided into three parts: detection, description, and various kinds of diagnosis [3]. These essential parts are the goals of CAD in chest radiography. Three techniques are involved in the field of computer analysis in chest radiography: general processing, segmentation, and analysis [4]. General processing contains image enhancement and subtraction techniques. The targets of segmentation are the lung field, rib cage, heart, and other structures. The analysis procedure contains size measurements, lung nodule detection, and texture analysis. For the detection of abnormalities, computerized screening examinations are particularly promising because the examination procedure is tedious and fatiguing for radiologists. Once detected, imaging abnormalities must be characterized by the anatomic extent of a lesion, the size of a structure, and the texture of its density. Quantitative measurements are particularly important for a better description of disease evolution. Diagnosis is the most difficult of the three essential tasks of image interpretation because it involves combining imaging information with medical knowledge.

In the field of diagnostic radiology, the evaluation of interstitial disease in radiographs is difficult for three reasons. First, the involved patterns and variations are numerous and complex. Second, the correlation between pathologic and radiologic findings is lacking. Third, the terms used to describe CXR patterns vary among radiologists. Katsuragawa [5] proposed a Fourier-spectrum-based textural analysis method for the detection and characterization of interstitial disease. The author used the root-mean-square and the first moment of the visual-system-response-filtered 2-D Fourier spectrum as two quantitative measurements to analyze radiographs of abnormal lungs.
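To make these two texture measures concrete, the following is a minimal numpy sketch of this style of computation. It is an illustration only: the exact visual system response filter of [5] is not reproduced here, and the Gaussian band-pass weighting standing in for it is our assumption.

    import numpy as np

    def texture_measures(roi: np.ndarray) -> tuple:
        """RMS and first moment of a filtered 2-D Fourier spectrum (sketch)."""
        roi = roi - roi.mean()                      # remove the DC component
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(roi)))

        h, w = roi.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h / 2, x - w / 2)          # radial spatial frequency

        # Assumed stand-in for the visual system response: a Gaussian band-pass.
        weight = np.exp(-((r - r.max() / 4) ** 2) / (2 * (r.max() / 8) ** 2))
        filtered = spectrum * weight

        rms = np.sqrt(np.mean(filtered ** 2))       # overall texture magnitude
        first_moment = (r * filtered).sum() / filtered.sum()  # coarse vs. fine
        return rms, first_moment

The RMS value grows with the overall magnitude of the texture, while the first moment shifts toward higher radial frequencies for finer patterns; together they separate the pattern groups discussed next.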

[Figure 1.5: The distributions of four groups of CXR patterns under the texture measures. The four groups of CXR patterns are the normal, nodular, reticular, and honeycomb patterns. The texture measures are the root-mean-square and the first moment of the visual-system-response-filtered 2-D Fourier spectrum. (This figure is referenced from [5].)]

Four groups of radiographs are included in the analysis: radiographs of normal lungs, abnormal lungs with a nodular pattern, abnormal lungs with a reticular pattern, and abnormal lungs with a honeycomb pattern. Analyzing the four groups of radiographs with these texture measurements shows a good separation of the four groups, as shown in Figure 1.5. A CAD scheme has been developed by the same group [6] using artificial neural networks (ANNs) on top of the textural analysis method; it classifies input CXR images as normal or abnormal. Instead of classifying the input images, we provide a trend value that indicates the evolution of pneumonia. In other words, in place

of a diagnosis result, we provide indexes of the progress of pneumonia, with quantitative features fused into the CXR images.

1.3.2 Image Features

Texture analysis is an important technique in many fields of image analysis and CAD systems [7] [8] [9] [10]. Approaches for extracting texture features can be divided into three groups: geometric features, statistical features, and spatial/spatial-frequency features. Geometric features are lines, edges, and spots. Statistical features can be entropy, contrast, and correlation. Spatial/spatial-frequency features can be Gabor filtering features, Fourier spectra, and wavelet transforms. In applications to interstitial disease, many methods consist of a procedure for selecting regions of interest (ROIs), a procedure for computing one of the texture features stated above, and a procedure for classifying these texture features [11].

1.3.3 Image Registration

Image registration is the process of registering images taken at different times, from different viewpoints, from different sensor devices, or even from different modalities [12]. The image to be aligned to is the target or reference image, and the images aligned to the target image are the source images. The function aligning the source image to the target image is the mapping function. A registration method is a combination of four major steps: feature detection, feature matching, transform model estimation, and image resampling and transformation [13].

In the feature detection step, salient objects, also called control points when they

are represented as points, such as landmark points, curves, and surfaces, are manually or automatically detected. The correspondence between the detected features in the target image and those in the source image is established by means of feature descriptors and similarity metrics in the feature matching step. In the transform model estimation step, the parameters of the mapping functions are computed directly from the available data or are determined by optimizing some function defined on the parameter space, according to the established feature correspondence. Finally, in the image resampling and transformation step, the source image is transformed by using the estimated mapping functions. Interpolation [14] can be applied after transformation because of the discrete nature of image coordinates.

In medical imaging applications [15] [16] [17], landmarks are not easily identified in images, especially for intrinsic methods based on patient-related image properties. The exact locations of these landmarks are more subjective, and correspondence between them may not exist. This is why registration techniques usually use information from all pixels within the whole images instead of landmarks only. With extrinsic methods based on artificial objects, registration is easier and faster, requires no labor-intensive manual extraction of features, and needs no complex optimization algorithms. Registration results can be visually checked if the artificial objects are designed to be well visible and accurately detectable. But extrinsic methods, compared to intrinsic methods, have the disadvantages that they cannot be used retrospectively and that they are less patient-friendly.

For inter-subject registration, high cross-population variability causes complexity. Even intra-subject registration may be complex because of the various pathologies resulting in abnormal structures and the highly deformable tissues contained in the chest.

1.4 Thesis Organization

This thesis is organized as follows. We describe our image features method in Chapter 2. The 2-D image registration method applied in this work is described in Chapter 3. The experiment results are shown in Chapter 4. Finally, conclusions are stated in Chapter 5.

Chapter 2
Image Features

[Figure 2.1: Flow chart of the image features method. Discriminative features are trained with images of two categories in the training process. The unique feature of each pixel within an input image is then calculated according to the discriminative features.]

[Figure 2.2: Flow chart of feature training. The system takes training images of two categories as input and outputs discriminative features between these two categories. Two groups of ROIs are selected during the ROI selection step. Textural features are calculated during the feature calculation step for all selected ROIs. Discriminative features are finally selected from the calculated features.]

In this chapter we describe the image features method, in which discriminative features are extracted and a desired unique feature is calculated according to the extracted features, as illustrated in Figure 2.1. This feature extraction method has two major blocks: discriminative features are extracted in the first step, and the unique feature of each pixel within an input image is calculated in the second step. Each of these two blocks contains further procedures. The feature training process involves three consecutive steps: ROI selection, feature calculation, and feature selection, as illustrated in Figure 2.2. The second step of feature extraction involves edge masking, feature calculation, feature space reduction, and feature quantization, as illustrated in Figure 2.3.

[Figure 2.3: Flow chart of feature calculation. The system inputs are the discriminative features extracted in the feature training step and an input image; the system output is the unique feature of each pixel within the input image. First, the discriminative features of each pixel within the input image are calculated. The dimension of the discriminative features of each pixel is then reduced to one. Finally, the unique feature of each pixel is calculated. Edge masking is applied to ensure that the discriminative features are not calculated for pixels on edges.]

By applying this feature extraction process, the unique feature calculated for each pixel within an input image can be used for post-processing purposes, including visualization. In the following sections, we describe the major procedures involved in the feature extraction method: ROI selection, feature calculation, feature selection, feature space reduction, and quantization.

2.1 ROI Selection

The first step in the training process is ROI selection. An ROI is a subregion within an image selected for further processing. For convenience, we select square ROIs of size R × R. We select two groups of ROIs, a pneumonia group and a normal group, so that the

difference between the CXR images of pneumonia and normal subjects stands out. The pneumonia group of ROIs is formed by N1 ROIs selected from the lung field in CXR images of pneumonia subjects. The normal group is formed by N2 ROIs selected from the lung field in CXR images of normal subjects.

2.2 Feature Calculation

After the two groups of ROIs are selected, we calculate the features of each ROI. The image features are calculated by using the 2-D Haar wavelet transform, which yields R × R spatial-frequency components for each ROI. The set of spatial-frequency components of each ROI selected from pneumonia subjects is denoted as X_{1j}, (j = 1, . . . , N1), and the collection of these sets is denoted as class 1. Similarly, the set of spatial-frequency components of each ROI selected from normal subjects is denoted as X_{2j}, (j = 1, . . . , N2), and the collection of these sets is denoted as class 2.
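The following is a minimal numpy sketch of one way to obtain the R × R Haar spatial-frequency components of a single ROI by recursively averaging and differencing rows and columns. It is an illustrative implementation of the multilevel 2-D Haar decomposition, not the exact code of this work, and it assumes R is a power of two.

    import numpy as np

    def haar2d(roi: np.ndarray) -> np.ndarray:
        """Full 2-D Haar wavelet decomposition of a square ROI (sketch).

        Returns an R x R array of spatial-frequency components, with the
        low frequencies accumulating in the upper-left corner.
        """
        coeffs = roi.astype(float).copy()
        n = coeffs.shape[0]                 # R, assumed to be a power of two
        while n > 1:
            half = n // 2
            block = coeffs[:n, :n].copy()
            # Row transform: averages into the left half, details into the right.
            avg = (block[:, 0::2] + block[:, 1::2]) / np.sqrt(2)
            det = (block[:, 0::2] - block[:, 1::2]) / np.sqrt(2)
            block[:, :half], block[:, half:n] = avg, det
            # Column transform: averages into the top half, details below.
            avg = (block[0::2, :] + block[1::2, :]) / np.sqrt(2)
            det = (block[0::2, :] - block[1::2, :]) / np.sqrt(2)
            block[:half, :], block[half:n, :] = avg, det
            coeffs[:n, :n] = block
            n = half
        return coeffs

    roi = np.random.rand(32, 32)            # a 32 x 32 ROI, as used in Chapter 4
    components = haar2d(roi)                # 32 x 32 spatial-frequency components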

2.3 Feature Selection

After the spatial-frequency components are calculated for all ROIs, we select discriminative features according to the result of a t-statistic. The selected discriminative spatial-frequency components can thus differentiate the pneumonic pattern of CXR images most accurately, as shown in Figure 2.4.

[Figure 2.4: The discriminability evaluated with the t-statistic, illustrated with three pairs of distributions, (a), (b), and (c). By comparing (a) and (b), we see that the larger the difference between the means of the two distributions, the better the discriminability of the two categories. By comparing (b) and (c), we see that the smaller the variances of the two distributions, the better the discriminability of the two categories.]

Before performing the t-statistic on the two classes of CXR patterns, the highest-order spatial-frequency components are rejected initially. The components with the highest order are always noise or lung markings, which are not actually discriminative components for the two classes of CXR patterns.

Assume X_{i1}, . . . , X_{iN_i} are independent random variables for i = 1 or i = 2. First, we calculate the sample mean \bar{X}_i and sample variance Z_i for class 1 and class 2 separately, where they are defined as

    \bar{X}_i = \frac{X_{i1} + \cdots + X_{iN_i}}{N_i}, \quad i = 1, 2,

    Z_i = \frac{1}{N_i - 1} \sum_{j=1}^{N_i} \left( X_{ij} - \bar{X}_i \right)^2, \quad i = 1, 2.

Then we can calculate the t-value between class 1 and class 2,

    t = \sqrt{\frac{N_1 N_2}{N_1 + N_2}} \; \frac{\left| \bar{X}_1 - \bar{X}_2 \right|}{\sqrt{\dfrac{(N_1 - 1) Z_1 + (N_2 - 1) Z_2}{N_1 + N_2 - 2}}}.
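A compact numpy sketch of this per-component t-value computation, together with the cumulative-t screening rule described in the remainder of this section, is given below. The function and variable names are our own, and the per-component computation is vectorized.

    import numpy as np

    def t_values(class1: np.ndarray, class2: np.ndarray) -> np.ndarray:
        """Pooled two-sample t-value for every spatial-frequency component.

        class1: (N1, R, R) Haar components of the pneumonia ROIs;
        class2: (N2, R, R) Haar components of the normal ROIs.
        Returns an (R, R) array of t-values, one per component.
        """
        n1, n2 = len(class1), len(class2)
        m1, m2 = class1.mean(axis=0), class2.mean(axis=0)   # sample means
        z1 = class1.var(axis=0, ddof=1)                     # sample variances
        z2 = class2.var(axis=0, ddof=1)
        pooled = ((n1 - 1) * z1 + (n2 - 1) * z2) / (n1 + n2 - 2)
        return np.sqrt(n1 * n2 / (n1 + n2)) * np.abs(m1 - m2) / np.sqrt(pooled)

    def select_features(t: np.ndarray, gamma: float = 0.4) -> np.ndarray:
        """Second screening: keep the top components whose sorted t-values
        accumulate to at most a fraction gamma of the total t-value mass."""
        order = np.argsort(t, axis=None)[::-1]      # components by descending t
        cumulative = np.cumsum(t.ravel()[order])
        d = int(np.searchsorted(cumulative, gamma * cumulative[-1], side="right"))
        mask = np.zeros(t.size, dtype=bool)
        mask[order[:d]] = True
        return mask.reshape(t.shape)                # True for selected components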

To select the discriminative features, we first reject the spatial-frequency components whose t-values are smaller than the critical value t_{(\alpha/2, \nu)}, where \alpha is the significance level and \nu is the degree of freedom,

    \nu = \frac{\left( Z_1/N_1 + Z_2/N_2 \right)^2}{\dfrac{(Z_1/N_1)^2}{N_1 - 1} + \dfrac{(Z_2/N_2)^2}{N_2 - 1}}.

After some spatial-frequency components are rejected according to the t-values, a second screening process is performed to ensure that the number of selected features is not too large. Assume m spatial-frequency components are left after the rejection stated above. We sort the spatial-frequency components by their corresponding t-values in descending order. The sorted components are denoted as f_k, k = 1, . . . , m, and the corresponding t-values are denoted as t_k. The set of features selected by the second screening process is then

    f = \left\{ f_l \,\middle|\, \sum_{i=1}^{l} t_i \le \gamma \times \sum_{i=1}^{m} t_i \right\}, \quad 0 < \gamma \le 1.

In the end, the d spatial-frequency components whose t-values are in the top d among all R × R t-values are selected. We denote these selected components as x_{ij} for each X_{ij}, x_{ij} ⊂ X_{ij}. They differentiate the pneumonic pattern of CXR images from the normal pattern most accurately.

2.4 Linear Discriminant Analysis

After the d most discriminant spatial-frequency components are selected, we project the d-dimensional space onto a one-dimensional space by Fisher linear discriminant analysis (LDA) for computational efficiency; the projected unique value is the most separable feature between class 1 and class 2, as illustrated in Figure 2.5.

[Figure 2.5: The direction onto which to project from multiple dimensions to one dimension. Of the two projection lines shown, the line on the right gives a greater separation between the red and black points. (This figure is referenced from [18].)]

Applying LDA, we first calculate the sample mean m_i of each class, the scatter matrices S_i, and the within-class scatter matrix S_W:

    S_i = \sum_{j=1}^{N_i} (x_{ij} - m_i)(x_{ij} - m_i)^t,

    S_W = S_1 + S_2.

Then we calculate the direction w that best separates class 1 and class 2,

    w = S_W^{-1} (m_1 - m_2).

Finally, we calculate the value y obtained by projecting x onto the direction w,

    y = w^t x,

where y is a unique value that easily indicates whether an ROI has the pneumonic CXR pattern or not.
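The Fisher projection above amounts to a few lines of linear algebra. The sketch below, a minimal version of our own, computes w from the two classes of selected components.

    import numpy as np

    def fisher_direction(x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
        """Fisher LDA direction w = S_W^{-1} (m1 - m2).

        x1: (N1, d) selected components of the pneumonia ROIs;
        x2: (N2, d) selected components of the normal ROIs.
        """
        m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
        s1 = (x1 - m1).T @ (x1 - m1)            # class scatter matrices
        s2 = (x2 - m2).T @ (x2 - m2)
        sw = s1 + s2                            # within-class scatter
        return np.linalg.solve(sw, m1 - m2)     # avoids an explicit inverse

    # Projecting a d-dimensional feature vector x gives the unique value y:
    #   y = fisher_direction(x1, x2) @ x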

2.5 Feature Calculation of Discriminative Features

For each pixel X in a CXR image, a square region of size R × R centered on X is obtained, and the spatial-frequency components X are computed by the 2-D Haar wavelet transform with the framework for multilevel structure construction [19]. Using the result of the t-statistic as described in Section 2.3, the d discriminative components x are obtained from X. The unique feature y is then calculated using the direction w calculated in Section 2.4.

Before further processing, we apply Canny edge detection to the input CXR image because we assume the image features are affected by edge components. We therefore mask out edges and only process the pixels that are not masked out by the edge mask.

2.6 Discriminant Function

Assuming that each class has a Gaussian distribution, the posterior probability that y is in class i is defined as

    P(\omega_i \mid y) = \frac{p(y \mid \omega_i) P(\omega_i)}{\sum_{j=1,2} p(y \mid \omega_j) P(\omega_j)},

where \omega_1 is the pneumonic pattern of CXR images and \omega_2 is the normal pattern of CXR images. We can divide P(\omega_1 \mid y) by P(\omega_2 \mid y):

    \frac{P(\omega_1 \mid y)}{P(\omega_2 \mid y)} = \frac{p(y \mid \omega_1) P(\omega_1)}{p(y \mid \omega_2) P(\omega_2)}.

If the ratio is larger than one, we can say that y more probably has the pneumonic pattern of CXR images. Since the ratio may grow without bound, we calculate the natural log of this ratio,

    \ln \frac{P(\omega_1 \mid y)}{P(\omega_2 \mid y)} = \left( \ln p(y \mid \omega_1) + \ln P(\omega_1) \right) - \left( \ln p(y \mid \omega_2) + \ln P(\omega_2) \right).

[Figure 2.6: The discriminant function in the two-category case. A decision boundary for classifying the two categories is formed by applying the discriminant function. We use the value of the discriminant function as the quantitative measurement.]

Let g_i(y) = \ln p(y \mid \omega_i) + \ln P(\omega_i); then

    g(y) \equiv g_1(y) - g_2(y) = \ln \frac{P(\omega_1 \mid y)}{P(\omega_2 \mid y)},

where g(y) is the discriminant function. If the value of g(y) is larger than zero, the probability of y being in class 1 is larger than the probability of y being in class 2, as shown in Figure 2.6.

As we assumed, both classes have Gaussian distributions. That is,

    p(y \mid \omega_i) = \frac{1}{(2\pi)^{1/2} |\Sigma_i|^{1/2}} \exp\left( -\frac{1}{2} (y - \mu_i) \Sigma_i^{-1} (y - \mu_i) \right),

where \mu_i and \Sigma_i are the sample mean and sample variance of the N_i values of y projected from the x_{ij} onto the direction w. Then g_i(y) can be calculated as

    g_i(y) = -\frac{1}{2} (y - \mu_i) \Sigma_i^{-1} (y - \mu_i) - \frac{1}{2} \ln 2\pi - \frac{1}{2} \ln |\Sigma_i| + \ln P(\omega_i).

After computing g(y), we can report, for each point within the image, the probability that the point has the pneumonic CXR pattern (i = 1) and the probability that it has the normal CXR pattern (i = 2). For a CXR image, a quantitative measurement is then calculated as follows. First, a single feature value is computed for each pixel not masked out by the edge mask. The feature values within the lung field of the CXR image are then averaged to give the trend value. The trend value is the quantitative measurement of the image features method.
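Putting the pieces of this chapter together, the following sketch evaluates g(y) for a projected feature value under the two fitted 1-D Gaussians. The class priors are a free choice; setting them equal is an assumption of this sketch, not something the method prescribes.

    import numpy as np

    def discriminant(y: float, mu: tuple, var: tuple, prior=(0.5, 0.5)) -> float:
        """g(y) = g1(y) - g2(y) for 1-D Gaussian class models.

        mu, var: per-class sample means and variances of the projected
        training values; equal priors are an assumption of this sketch.
        """
        def g(y, m, v, p):
            return (-0.5 * (y - m) ** 2 / v
                    - 0.5 * np.log(2 * np.pi) - 0.5 * np.log(v) + np.log(p))
        return g(y, mu[0], var[0], prior[0]) - g(y, mu[1], var[1], prior[1])

    # g(y) > 0 assigns the pixel to the pneumonic pattern (class 1); averaging
    # g(y) over the unmasked lung-field pixels yields the trend value.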

Chapter 3
Image Registration

[Figure 3.1: Flow chart of the image registration method. Control points are first selected in the source image and the target image. The coefficients of the transformation function are calculated from the selected control-point pairs. The deformation field transforms the source image to the stereotaxic space of the target image.]

In this chapter, we introduce an image registration method that produces a deformation field for an image pair, as illustrated in Figure 3.1. The deformation field is needed to map information within one image, the source image, to the stereotaxic space of the other image, the target image. In the feature detection step of the registration process, we select salient control-point pairs in the source and target images as the correspondence constraints. In the next step, we use the correspondence constraints to solve for the coefficients of the basis function splines. Finally, the deformation field is constructed by applying the basis function splines, with the correspondence constraints of the control points, to every pixel within the source image. According to the deformation field, the information content of each pixel within the source image can be mapped to the stereotaxic space of the target image under geometric constraints.

In the following sections, the main parts of the introduced image registration method are described: correspondence constraints, basis function splines, coefficients of the basis function splines, and the basis functions.

3.1 Correspondence Constraints

The thin-plate splines registration technique [20] [21] is based on the assumption that a set of corresponding control points can be identified in the source and target images. At these control points, spline-based transformations either interpolate or approximate the displacements necessary to map the location of a control point in the source image to its corresponding counterpart in the target image. Between control points, they provide a smoothly varying displacement field. The interpolation condition can be written as

    T(\phi_i) = \phi'_i, \quad i = 1, \ldots, n,    (3.1)

where \phi_i = (\phi_{i1}, \ldots, \phi_{id}) denotes the location of a control point in the d-dimensional source image, \phi'_i = (\phi'_{i1}, \ldots, \phi'_{id}) denotes the location of the corresponding control point in the d-dimensional target image, and n denotes the number of control-point pairs.

3.2 Basis Function Splines

Thin-plate splines are based on radial basis functions, and they have been widely used for image registration. Radial basis function splines can be defined as a linear combination of n radial basis functions \theta(r). The general form can be formulated as

    f(s) = \sum_{j=1}^{d+1} a_j g_j(s) + \sum_{j=1}^{n} b_j \theta(|\phi_j - s|).    (3.2)

In Equation (3.2), s = (s_1, \ldots, s_d) denotes the location of an arbitrary point in the d-dimensional source image space, and g_j(s) is defined by

    g_j(s) = 1 if j = 1, and g_j(s) = s_{j-1} if j \ge 2.

The value of f(s) represents the deformation of s with respect to one orientation. The coefficients a characterize the affine part of the spline-based transformation, which controls the global translation and rotation, while the coefficients b characterize the non-affine part of the transformation, which ensures that the deformation is localized. For 2-D image registration, two such linear combinations are needed, guiding the deformations in the X and Y orientations respectively. By extending the a's and b's in (3.2) to d dimensions, the resulting d-dimensional deformation is defined as

    f(s) = \sum_{j=1}^{d+1} a_j g_j(s) + \sum_{j=1}^{n} b_j \theta(|\phi_j - s|),    (3.3)

where the coefficients a_j = (a_{j1}, \ldots, a_{jd}), j = 1, \ldots, d+1, and b_j = (b_{j1}, \ldots, b_{jd}), j = 1, \ldots, n, are extended from a and b in (3.2). The value of f(s), also denoted as (f_1(s), \ldots, f_d(s)), is the location of the point corresponding to s in the resulting stereotaxic space of the registration.

3.3 Coefficients of Basis Function Splines

Following the correspondence constraints stated in (3.1), we get n equations of the same form as (3.3). The other d + 1 equations required for defining the coefficients a and b are given by

    \sum_{i=1}^{n} b_i = 0,    (3.4)

and

    \sum_{i=1}^{n} \phi_{i1} b_i = 0, \; \ldots, \; \sum_{i=1}^{n} \phi_{id} b_i = 0.    (3.5)

These equations guarantee that the sum of the coefficients b_i is zero and that the cross products of the b_i with every coordinate of the points \phi_i are zero. The coefficients a and b are thus defined by solving the matrix equation

    \begin{pmatrix} \Theta & P \\ P^T & 0 \end{pmatrix} \begin{pmatrix} B \\ A \end{pmatrix} = \begin{pmatrix} \Phi \\ 0 \end{pmatrix},    (3.6)

where

    \Phi_{ij} = \phi'_{ij},                      i = 1, \ldots, n,      j = 1, \ldots, d,
    \Theta_{ij} = \theta(|\phi_i - \phi_j|),     i, j = 1, \ldots, n,
    P_{ij} = g_j(\phi_i),                        i = 1, \ldots, n,      j = 1, \ldots, d + 1,    (3.7)
    A_{ij} = a_{ij},                             i = 1, \ldots, d + 1,  j = 1, \ldots, d,
    B_{ij} = b_{ij},                             i = 1, \ldots, n,      j = 1, \ldots, d.

3.4 Basis Functions

There is a wide range of choices for the radial basis function, including multiquadrics and Gaussians [22]. The kriging covariance [23] is another alternative form of basis function, and it is formulated as

    \theta(r) = r^{2\alpha}           if \alpha is not an integer,
    \theta(r) = r^{2\alpha} \log r    if \alpha is an integer,    (3.8)

where \alpha is a smoothing parameter. The thin-plate spline is the special case of the kriging covariance with \alpha = 1 in the 2-D case.
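A compact numpy sketch of this chapter is given below: it assembles the linear system of (3.6) for the 2-D thin-plate spline kernel \theta(r) = r^2 \log r and evaluates the warp f(s) of (3.3). It is a minimal illustration with names of our own choosing, without the approximation or regularization variants a production implementation might add.

    import numpy as np

    def tps_theta(r: np.ndarray) -> np.ndarray:
        """Thin-plate spline kernel r^2 log r (kriging covariance, alpha = 1)."""
        out = np.zeros_like(r)
        nz = r > 0
        out[nz] = r[nz] ** 2 * np.log(r[nz])
        return out

    def tps_fit(src_pts: np.ndarray, dst_pts: np.ndarray):
        """Solve equation (3.6) for coefficients B (non-affine) and A (affine).

        src_pts, dst_pts: (n, 2) control points in the source and target images.
        """
        n = len(src_pts)
        theta = tps_theta(np.linalg.norm(src_pts[:, None] - src_pts[None], axis=2))
        p = np.hstack([np.ones((n, 1)), src_pts])       # g_j: 1, s_1, s_2
        lhs = np.zeros((n + 3, n + 3))
        lhs[:n, :n], lhs[:n, n:], lhs[n:, :n] = theta, p, p.T
        rhs = np.vstack([dst_pts, np.zeros((3, 2))])    # Phi on top, zeros below
        coef = np.linalg.solve(lhs, rhs)
        return coef[:n], coef[n:]                       # B, A

    def tps_map(s: np.ndarray, src_pts: np.ndarray, b, a) -> np.ndarray:
        """Evaluate f(s) of (3.3) at query points s of shape (m, 2)."""
        theta = tps_theta(np.linalg.norm(s[:, None] - src_pts[None], axis=2))
        return np.hstack([np.ones((len(s), 1)), s]) @ a + theta @ b

Evaluating tps_map on every pixel coordinate of the source image yields the deformation field used in the next chapter; by construction, the fitted spline reproduces each control point of the source image exactly at its counterpart in the target image.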

Chapter 4
Results

In this chapter, the experiment results are described. We first describe the materials used as experiment data, then the verification of the image features method and of the image registration method respectively. Finally, we retrospectively analyze the series of chest X-ray images of several pneumonia subjects using the proposed CAD system.

4.1 Materials

[Figure 4.1: Experiment data of normal subjects, labeled N1, N2, N3, N4 from left to right. The ROIs of the negative group are selected in these images.]

The training and testing images are chest X-ray images collected from Taipei Veterans General Hospital. They are divided into two groups, the normal group and the pneumonia group. The normal group contains 4 CXR images from 4 subjects, as shown in Figure 4.1; these images are labeled N1, N2, N3, N4 from left to right. The pneumonia group contains 53 chest X-ray images from 5 severe acute respiratory syndrome (SARS) patients. We label the five subjects A, B, X, Y, and Z, as shown in Figures 4.2, 4.3, 4.4, 4.5, and 4.6 respectively. For each subject, the CXR images are ordered from left to right and top to bottom by acquisition time. The first CXR image of each subject is the

[Figure 4.2: Experiment data of subject A. There are 10 CXR images for this subject.]

[Figure 4.3: Experiment data of subject B. There are 9 CXR images for this subject. The first image is a normal image of this subject, taken before the subject was infected with SARS.]

[Figure 4.4: Experiment data of subject X. There are 17 CXR images for this subject.]

[Figure 4.5: Experiment data of subject Y. There are 8 CXR images for this subject.]

[Figure 4.6: Experiment data of subject Z. There are 10 CXR images for this subject.]

target image for that series of CXR images. We use the CXR images of SARS patients as experiment data because of the rapid changes in the chest X-ray images, which were obtained within just a few days for each subject. These chest X-ray images of SARS patients therefore provide a suitable data set for longitudinal analysis.

4.2 Image Features

[Figure 4.7: A result of the image features method. The left part is the original image and the right part is the image overlaid with image features. The middle part shows the processing steps of the unique feature for each pixel: Haar wavelet transform, discriminative features, g(y), and quantization.]

In this section, the image features method is verified by first training discriminative features and then classifying the training data according to the unique feature computed with the trained discriminative features. The sensitivity and specificity of the trained discriminative features are thus computed, and the effects of different ROI sizes and of different locations of the normal-group ROIs are compared according to the sensitivity and specificity. We also show a rough result of our image features applied to one chest X-ray image of a SARS patient in Figure 4.7.

[Figure 4.8: ROIs selected from the lower region of the lung field for both the pneumonia and normal groups.]

To train features that discriminate between the pneumonic and normal CXR patterns, ROIs of size 32 × 32 are selected from the chest X-ray images of SARS patients and normal subjects, as shown in Figure 4.8. The subregions of the pneumonia group contain pneumonic patterns, while those of the normal group contain none. Both groups of ROIs are selected from the lower region of the lung field. The spatial-frequency components of each ROI are obtained by applying the 2-D Haar wavelet transform. These abundant components of each ROI are then examined according to the result of the t-statistic with the parameters α = 0.05 and γ = 0.4, so that the components with large discriminative capability can be determined. An example of the selected components is shown in the left plot of Figure 4.9; the right plot illustrates these components in the spatial domain after an inverse Haar wavelet transform.

To classify the training data, we apply the discriminant function to all selected ROIs. In addition, extra normal ROIs selected from the upper region of the lung field are included in the normal group. An ROI is assigned positive when the result of the discriminant function is

[Figure 4.9: The selected features. The image on the left shows the selected spatial-frequency components as green points. The image on the right shows the extracted pneumonic CXR pattern.]

greater than zero; in contrast, a negative assignment is given when the result is less than zero. Here we define the sensitivity and specificity as

    sensitivity = (true positives) / ((true positives) + (false negatives)),

    specificity = (true negatives) / ((true negatives) + (false positives)).

In Table 4.1, we give three analysis results of the training data, represented as Case 1 to Case 3. For Case 1, we assess the specificity and sensitivity of each selected feature individually; after examining all the selected features, we obtain the maximum, minimum, and averaged results of this analysis. In Case 2 and Case 3, all selected features are involved in assessing the specificity and sensitivity; the difference between the two cases is whether the LDA process is performed. For Case 3, we calculate the quantitative feature values of all training ROIs, as shown in Figure 4.10. We can see that the two groups of ROIs are separated by the quantitative feature value.
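These two rates follow directly from the confusion counts, as in the small sketch below; the example counts are hypothetical and only the group totals (64 pneumonia ROIs, 218 normal ROIs including the extra upper-region ones) come from this section.

    def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
        """Sensitivity and specificity from confusion-matrix counts."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts for illustration: 60 of 64 pneumonia ROIs detected,
    # 182 of 218 normal ROIs correctly rejected.
    sens, spec = sensitivity_specificity(tp=60, fn=4, tn=182, fp=36)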

Table 4.1: The discriminability of the selected features. The ROI size is 32 × 32. 64 ROIs were selected from the lower region of the lung field for the pneumonia group and 118 ROIs from the lower region of the lung field for the normal group; an additional 100 ROIs selected from the upper region of the lung field were included in the normal group for the testing procedure. In Case 1, each selected feature is assumed to be the only discriminative feature. In Case 2, all selected features are involved in the classification. Case 3 has the same setting as Case 2 except that the LDA process is included.

                   specificity    sensitivity
    Case 1 (mean)     78.13%         39.44%
    Case 1 (max)      86.70%         68.75%
    Case 1 (min)      61.01%         29.69%
    Case 2            99.54%        100%
    Case 3            83.49%         93.75%

To decide the ROI size for our image features method, we compare the classification result with an ROI size of 32 × 32 to that with an ROI size of 16 × 16, where each 32 × 32 ROI is split into four 16 × 16 ROIs. Table 4.2 shows the classification results for both cases. The sensitivity and specificity for 32 × 32 ROIs are both larger than those for 16 × 16 ROIs; therefore, we choose 32 × 32 as the ROI size in this work.

To compare the effect of the location of the normal-group ROIs, we perform the ROI selection step of the image features method with three different configurations. In the first configuration, the normal-group ROIs are selected from the upper region of the lung field in CXR images of normal subjects; in the second configuration, from the lower region; and in the third configuration, from both the upper and lower regions.

[Figure 4.10: Distribution of the two groups of ROIs. The x-axis is the quantitative feature value of an ROI; the y-axis is the number of ROIs.]

The features selected under each configuration are then applied to classify the same set of ROIs. The classification results are shown in Table 4.3. Comparing the sensitivity of the three configurations, the lower-region configuration provides the best sensitivity. Although this configuration produces more false alarms than the both-regions configuration, it is easier to dismiss a false alarm than to recover a missed detection. Therefore, we select the normal ROIs from the lower region of the lung field in this work.

Table 4.2: The discriminability of the selected features for different ROI sizes, 32 × 32 and 16 × 16.

    ROI size    specificity    sensitivity
    32 × 32       83.49%         93.75%
    16 × 16       76.72%         71.88%

Table 4.3: The discriminability of the selected features for different locations of the normal-group ROIs: the upper region of the lung field, the lower region, and both regions.

    location    specificity    sensitivity
    Upper         81.65%         92.19%
    Lower         83.49%         93.75%
    Both          91.28%         90.63%

4.3 Image Registration

In this section, we show a result of the image registration method. Pairs of control points are selected in the target image and the source image, as shown in Figure 4.11. To register the lung field in the source image with that in the target image, the control points are selected at the center of the spine, the lateral parts of the ribs, and the clavicles. After the estimation of the coefficients of the transformation function, the deformation field from the source image to the stereotaxic space of the target image is obtained, as shown in Figure 4.12. The control points in the source image are mapped exactly to the positions of their corresponding counterparts in the target image. The lung field in the source image is warped to match the shape of the lung field in the target image, which is our

[Figure 4.11: The green points are the control points selected in the target image and the source image. These points are selected for mapping between the lung field in the target image and that in the source image.]

purpose in adopting the image registration method in the proposed CAD system.

4.4 Retrospective Analysis

In this section, the series of chest X-ray images of each SARS patient is analyzed using the proposed CAD system, since the chest X-ray images of SARS patients provide a suitable data set for longitudinal analysis of pneumonia. We first show the combined result of the image features method and the image registration method. The performance and computational efficiency are reported both for the case in which LDA is performed and for the case in which it is not. We then show the results for the normal subjects. Finally, retrospective analyses of the five subjects are presented. The result of combining the two major components of the proposed CAD system

[Figure 4.12: The deformation field transforming a source image to the stereotaxic space of the target image. The red grid is the deformation field and the green points are the control points. The lung field in the source image is mapped to the lung field in the target image.]

is shown in Figure 4.13. The unique feature of each pixel within the CXR image and the deformation field from the CXR image to the stereotaxic space of the target image are computed by applying the image features method and the image registration method respectively. The image features are then transformed to the stereotaxic space of the target image and masked with the lung field of the target image. The transformed and masked image features are then averaged into a value that indicates the trend of the progress of SARS

[Figure 4.13: The combined result of the image features method and the image registration method. The resulting image is masked by the lung field of the target image.]

Table 4.4: Comparison of the computation time for the calculation of the image features. Both calculations are based on the same CXR image.

                        LDA is used    LDA is not used
    computation time      28 sec         2 hrs, 8 min

for that patient. The performance and computational efficiency for the cases with and without LDA are illustrated in Figure 4.14 and Table 4.4. The trends of the progress for subject A are almost the same in both cases, apart from the different scales, while the computation time for running the image features method on one image with LDA is much less than without LDA.

[Figure 4.14: Comparing the trend of the progress for including or excluding LDA. Both panels ("LDA is not included" and "LDA is included") show the trend of the progress for subject A.]

Before performing the retrospective analysis of each pneumonia subject, we show the experiment results of all four normal subjects in Figure 4.15. The result of the first image of subject A is also shown, to compare the image features of normal subjects with those of a pneumonia subject. In the image of the pneumonia subject, image features of larger scale spread over a larger area of the lung field, and all the trend values of the normal subjects are smaller than that of the pneumonia subject. We conclude that the image features and the trend value differ between normal and pneumonia CXR images.

The experiment results of subjects A, B, X, Y, and Z are shown in Figures 4.16, 4.17, 4.18, 4.19, and 4.20 respectively. For subject A, three important signs are noticed. First, the maximum average of the image features occurred on the last CXR image taken before intubation. Second, the average of the image features for the first CXR image taken after intubation is much less than that for the last CXR image taken before intubation. Third, the last three images have the least average of the image features, and they were taken around the time of extubation. From these signs, we can say that the trend of the progress for subject A calculated by the proposed CAD system is close to the history of diagnosis.

[Figure 4.15: Experiment result of normal subjects. The original CXR images of subject A (30 Apr) and of N1, N2, N3, N4 are listed on the first row from left to right. The image features masked by the lung field are listed on the second row. The third row shows the trend value of each image listed on the first row.]

For subject B, the average of the image features for the first image taken after intubation is less than that for the last one taken before intubation. The maximum average of the image features occurred on the image that has the largest area of infiltration among all the CXR images of subject B. Because we do not have films taken after extubation, we cannot verify the trend of the progress of SARS for subject B; we can only conclude that the information of infiltration in the CXR images is extracted for this subject.

For subject X, the average of the image features for the first image taken after intubation is less than that for the last image taken before intubation. The averages of the image features for the last seven images are the least among all the images of this subject. The only exception among these seven images is the first image taken after extubation, in which some misclassifications appear at the border of the lung field. For this subject, the trend calculated by the CAD system is close to the history of diagnosis except for the one image mentioned above.

For subject Y, the average of the image features for the first image taken after intubation is less than that for the last image taken before intubation, and the averages for the last two images are both less than those for all the previous ones. We consider the trend calculated by the CAD system to be close to the doctor's diagnosis. Since there is no diagnostic information for subject Z, we cannot verify the trend for this subject.

From the above results, we observe that the trend value changes under the following conditions. First, it becomes larger when the area of infiltration becomes larger. Second, it becomes smaller after intubation. Third, it becomes smaller after extubation,

(69) 01, M ay. 02, M ay. 03, M ay. 03, M ay (After Intubation). image is listed on the second row. The third row shows the trend of the progress for subject A.. target image for registration is the first CXR image. The registered image features masked by the lung field of the first CXR. Figure 4.16: Experiment result of subject A. The registered image is listed on the first row from left to right by time while the. 0. 5. 10. 15. 20. 30, Apr. 4.4 Retrospective Analysis 48.

Figure 4.16: Experiment result of subject A (cont'd). Panels from left to right: 04 May; 06 May; 09 May; 12 May, after extubation; 16 May.

Figure 4.17: Experiment result of subject B. This figure has the same layout as Figure 4.16. Panels from left to right: normal; 18 Apr; 26 Apr; 02 May; 04 May, intubation.

Figure 4.17: Experiment result of subject B (cont'd). Panels from left to right: 05 May; 07 May; 09 May; 12 May.

Figure 4.18: Experiment result of subject X. This figure has the same layout as Figure 4.16. Panels from left to right: 25 Apr; 27 Apr; 28 Apr, after intubation; 29 Apr; 01 May.

Figure 4.18: Experiment result of subject X (cont'd). Panels from left to right: 02 May; 03 May; 04 May; 05 May; 06 May.

Figure 4.18: Experiment result of subject X (cont'd). Panels from left to right: 07 May; 08 May; 09 May; 11 May, after extubation; 12 May.

Figure 4.18: Experiment result of subject X (cont'd). Panels from left to right: 13 May; 14 May.

Figure 4.19: Experiment result of subject Y. This figure has the same layout as Figure 4.16. Panels from left to right: 05 May; 06 May, intubation; 07 May; 09 May; 11 May.

Figure 4.19: Experiment result of subject Y (cont'd). Panels from left to right: 12 May; 16 May, extubation; 19 May.

Figure 4.20: Experiment result of subject Z. This figure has the same layout as Figure 4.16. Panels from left to right: 23 Apr; 24 Apr; 28 Apr; 29 Apr; 30 Apr.

Figure 4.20: Experiment result of subject Z (cont'd). Panels from left to right: 02 May; 05 May; 07 May; 12 May; 18 May.

According to these observations, we conclude that the trend value is affected by the area of infiltration and that the progression of the trend value corresponds to the timing of intubation and extubation.
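The second and third observations suggest a simple retrospective check. The sketch below, with a hypothetical helper name, compares the trend value of the last image taken before a clinical event (intubation or extubation) with that of the first image taken after it; the event is identified by the index of the first post-event examination, since two examinations may fall on the same day, as for subject A on 03 May.

    def trend_drops_after(trend_values, first_index_after_event):
        # trend_values: trend value of each examination, in time order.
        # first_index_after_event: 0-based index of the first image
        #                          taken after intubation or extubation.
        i = first_index_after_event
        if i <= 0 or i >= len(trend_values):
            return None  # the event lies outside the examination period
        # True when the trend value becomes smaller after the event, as
        # observed after intubation and, except for subject X, after
        # extubation.
        return trend_values[i] < trend_values[i - 1]

For example, for subject A, whose fifth image (index 4) in Figure 4.16 is the first one taken after intubation, trend_drops_after(trends, 4) reproduces the comparison made in the second observation.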

Chapter 5

Conclusions

We have developed a CAD system in chest radiography that provides the radiologist with the trend of the progress of a series of CXR images for a pneumonia patient. The CAD system consists of two major components: the image features method and the image registration method.

The image features method extracts the geometric features of pneumonia. By classifying pneumonic and normal patterns with the discriminative spatial-frequency features in CXR images, this method indicates whether a pneumonic pattern is revealed in an area. With LDA, the computation time for extracting the image features is much shorter, while the extracted image features have slightly lower specificity and sensitivity.

The image registration method reduces the structural displacements caused by the various acquisition conditions and postures across examinations. The lung field in an image is mapped to the shape of the lung field in the target image, which enables the comparison of the image features extracted by the image features method in corresponding areas of two or more CXR images.

Because of the rapid changes in the CXR images of a SARS patient, we have used the CXR images of SARS patients for training and for analyzing the pneumonic CXR pattern. Retrospective analyses of SARS patients have been performed with the proposed CAD system. According to our experiments, we conclude that the infiltration in the lung field of CXR images is extracted by the proposed CAD system and that the trend of the progress of SARS for a patient corresponds to the diagnostic information of the timing of intubation and extubation.
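As a summary of how the two components interact, the outline below sketches the processing of a series of CXR images of one subject. It is a simplified sketch under the assumptions stated in the comments, not the actual implementation: register_to_target and extract_features are hypothetical stand-ins for the image registration method and the image features method, and lda_axis and threshold are placeholders for the trained discriminant function.

    def analyze_series(images, target, lung_mask, lda_axis, threshold):
        # images:    CXR images of one subject, ordered by acquisition time.
        # target:    the target CXR image whose lung-field shape serves
        #            as the common reference for the whole series.
        # lung_mask: boolean lung-field mask of the target image.
        # lda_axis:  discriminant direction learned from the training
        #            images of the positive and negative classes.
        # threshold: decision threshold of the discriminant function.
        trends = []
        for image in images:
            # Image registration method: warp the lung field of the
            # image onto the shape of the lung field in the target.
            warped = register_to_target(image, target)    # hypothetical
            # Image features method: one spatial-frequency feature
            # vector per pixel, stacked into an array of shape (H*W, d).
            feats = extract_features(warped)              # hypothetical
            # Project onto the discriminant axis and apply the threshold
            # to decide where a pneumonic pattern is revealed.
            scores = feats @ lda_axis
            pneumonic = (scores > threshold).reshape(warped.shape)
            # Trend value: average of the resulting binary feature map
            # over the lung field, as in the earlier sketch.
            trends.append(float(pneumonic[lung_mask].mean()))
        return trends

The returned list of trend values corresponds to what the third rows of Figures 4.16 to 4.20 plot over time.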

