Precise Segmentation Rendering for Medical Images Based on Maximum Entropy Processing
Tsair-Fwu Lee1,2, Ming-Yuan Cho1, Chin-Shiuh Shieh4, Pei-Ju Chao3, and Huai-Yang Chang2
1 Department of Electrical Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung 807, Taiwan, ROC
2 Department of Radiation Oncology, Chang Gung Memorial Hospital-Kaohsiung, Kaohsiung 83305, Taiwan, ROC
3 Department of Radiation Oncology, Kaohsiung Yuan's General Hospital, Kaohsiung 800, Taiwan, ROC
4 Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung 807, Taiwan, ROC
Abstract. Precision is definitely required in medical treatments; however, most three-dimensional (3-D) renderings of medical images lack the required precision. This study aimed at developing a precise 3-D image processing method that discriminates edges clearly. Since conventional Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) medical images are all slice-based stacked 2-D images, one effective way to obtain a precise 3-D rendering is to process the sliced data with high precision first and then stack the slices together carefully to reconstruct the desired 3-D image. A recent two-dimensional (2-D) image processing method, the entropy maximization procedure, which combines the gradient-based and region-based segmentation approaches to achieve a much better result than either alone, seemed to be the best choice to extend into 3-D processing. Three examples of CT scan data were used to test the validity of our method. We found that our 3-D renderings not only achieved the precision we sought but also have many interesting characteristics that should be of significant value to medical practice.
Keywords: segmentation, wavelet, edge detection, entropy maximization.
1 Introduction
Physicians need 3-D renderings to help them make diagnoses, conduct surgery, and provide other treatments that 2-D images and other conventional test methods cannot support. Without precise segmentation, renderings can lead to misleading results. The aim of this study is to provide a precise 3-D rendering method that achieves what physicians demand. Two basic approaches exist in the literature on image segmentation: the gradient-based approach and the region-based approach. Gradient-based edge detection methods [1,2] rely on local differences in the gray-scale values of an image; they focus on the differences and transitions of image intensity. The disadvantage of these methods is that they almost always produce broken and false edges. Region-based segmentation techniques, which include region-growing, region-splitting, and region-merging algorithms, focus instead on the homogeneity of spatially dense localized features and other pixel statistics. They share a common problem of over-segmentation and hence produce poorly localized region boundaries. To resolve the weaknesses and combine the strengths of the two approaches, Staib and Duncan proposed combining them in a maximum entropy manner to achieve a better result [3,4], and recently Duncan et al. [5] have reported successful applications in extended work. Other authors have also addressed this field with different methods [6].

R. Khosla et al. (Eds.): KES 2005, LNAI 3683, pp. 366-373, 2005.
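To make the region-based idea concrete, the following is a minimal region-growing sketch in Python. It is illustrative only, not the specific algorithms discussed above: a region grows from a seed pixel by absorbing 4-connected neighbors whose intensity is within a tolerance of the seed value.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Minimal region-growing sketch: start at `seed` and absorb 4-connected
    neighbors whose intensity is within `tol` of the seed value. Illustrates
    the homogeneity criterion of region-based segmentation."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    seed_val = img[seed]
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(img[nr, nc] - seed_val) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# Two-intensity toy image: the region grown from a seed in the bright half
# stops exactly at the intensity boundary.
img = np.zeros((8, 8))
img[:, 4:] = 10.0
mask = region_grow(img, seed=(0, 6), tol=1.0)
```

On real images, noise and gradual shading make the homogeneity test pass across true boundaries, which is precisely the over-segmentation and boundary-localization problem noted above.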
In this study, our objective is to apply the Staib and Duncan method [3,4], which combines various segmentation approaches through an entropy maximization procedure, and to extend the idea into 3-D. This allows us to utilize all available information to achieve the most robust segmentation results for 3-D image processing. We then apply our combined segmentation method to medical images to test its validity. We aim to show that our combined 3-D segmentation method is indeed superior in terms of the required precision, that the slice-based approach we propose is quite efficient, and that many features generated by our 3-D segmentation method should be of referential value to physicians.
2 Wavelet Edge Detector
The wavelet method is known to be one of the best gradient segmentation methods because of its multi-scale and multi-resolution capabilities; we briefly discuss some of its properties in this section. Let S_{2^j}[.] and D_{2^j}[.] denote the low-pass signal (the approximated signal) and the high-pass signal (the detailed signal) of f(x) at resolution 2^j, respectively. Defining O_j as the orthogonal complement of the subspace V_j in V_{j+1}, S_{2^j}[n] is the projection coefficient of f(x) on the subspace V_j, and D_{2^j}[n] is the projection coefficient of f(x) on the subspace O_j. The scaling function φ(x) and the wavelet function ϕ(x) satisfy the orthogonality properties. From the properties of multiresolution analysis, signals can always be decomposed into higher resolutions until the desired result is obtained. A 2-D filter for edge detection is generated by the 2-D discrete periodic wavelet transform (2-D DPWT) [4]; applying separable algorithms, the 2-D DPWT can be written in matrix form. Extending the wavelet transform to two dimensions, we can define the four operators [4,7] of the 2-D DPWT as follows (see Mallat [8] for a detailed description):
W_LL = [h(i) · h(j)]_{i,j∈Z} (1)
W_LH = [(−1)^{3−j} h(i) · h(3 − j)]_{i,j∈Z} (2)
W_HL = [(−1)^{3−i} h(3 − i) · h(j)]_{i,j∈Z} (3)
W_HH = [(−1)^{i+j} h(3 − i) · h(3 − j)]_{i,j∈Z} (4)
where W_LL, W_LH, W_HL and W_HH are the four subband filters, and h(i) = <φ_{2^{−1}}(u), φ(u − i)>. Clearly, when the length of the filter coefficients is d, each 2-D DPWT operator forms a d × d matrix. We now use the coefficients of the four filters given by Eqs. 1-4 to generate a wavelet edge detector. Let f_h(i, j) be the horizontal high-pass filter function and f_v(i, j) be the vertical high-pass filter function. These two high-pass filters are obtained from the four operators of the 2-D DPWT by convolution (denoted ⊗):
f_h(i, j) = W_LL(i, j) ⊗ W_LH(i, j) (5)
f_v(i, j) = W_LL(i, j) ⊗ W_HL(i, j) (6)

We then apply Daubechies wavelet coefficients of different lengths to generate the multi-scale masks (filters) [4,7,8].
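As an illustration, the four operators and the two high-pass masks of Eqs. 1-6 can be sketched in Python with the Daubechies-4 coefficients (filter length d = 4); the symmetric form (−1)^{3−i} is assumed for W_HL, and the variable names are ours:

```python
import numpy as np
from scipy.signal import convolve2d

# Daubechies-4 low-pass coefficients h(0..3), so d = 4 and i, j run over 0..3.
h = np.array([(1 + np.sqrt(3)), (3 + np.sqrt(3)),
              (3 - np.sqrt(3)), (1 - np.sqrt(3))]) / (4 * np.sqrt(2))

idx = np.arange(4)
i, j = np.meshgrid(idx, idx, indexing='ij')

# Eqs. (1)-(4): the four d x d subband operators of the 2-D DPWT.
W_LL = h[i] * h[j]
W_LH = ((-1.0) ** (3 - j)) * h[i] * h[3 - j]
W_HL = ((-1.0) ** (3 - i)) * h[3 - i] * h[j]
W_HH = ((-1.0) ** (i + j)) * h[3 - i] * h[3 - j]

# Eqs. (5)-(6): horizontal / vertical high-pass edge masks via 2-D convolution.
f_h = convolve2d(W_LL, W_LH)
f_v = convolve2d(W_LL, W_HL)

print(f_h.shape)  # full convolution of two 4x4 masks yields a 7x7 mask
```

Convolving an image with f_h and f_v then yields the horizontal and vertical gradient responses; repeating the construction with longer Daubechies filters gives the multi-scale masks mentioned above.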
The original image is then passed through these masks to produce a series of multi-scale images with different gradient strength scales. To avoid distortions caused by noise and to define exact edge points, an edge thinning technique is then used to combine the images effectively [9].
3 Concepts of Maximum Entropy Processing
By boundary estimation we mean finding the optimum values of the boundary parameters given the image data. We define the optimization of the entropy function as maximization of the a posteriori probability (MAP) [3,10]. Assume that I_b(x, y) is the image that depicts some object and t_p̂(x, y) is the image template corresponding to the parameter vector p̂. We maximize P(t_p̂|I_b), the conditional probability of the template given the image, to obtain the best estimate of p̂. By Bayes' rule, P(t_p̂|I_b) can be written as follows:

arg max_p̂ P(t_p̂|I_b) = arg max_p̂ [P(I_b|t_p̂) P(t_p̂) / P(I_b)] (7)
where P(t_p̂) is the a priori probability of the template and P(I_b|t_p̂) is the conditional probability of the image I_b, which depicts some object, given the template t_p̂. The denominator of Eq. 7 is not a function of p̂ and hence can be ignored. Taking the logarithm of Eq. 7, we have:
arg max_p̂ M(I_b, t_p̂) = arg max_p̂ [ln P(t_p̂) + ln P(I_b|t_p̂)] (8)

To estimate the parameter vector p̂, we maximize the entropy function M(I_b, t_p̂). Clearly, the first term of Eq. 8 represents the a priori information and the second term represents the data-driven likelihood term. After rearranging the equation by Bayes' rule, the boundary estimation problem becomes
arg max_p̂ ln P(p̂|I_g, I_r) ≡ arg max_p̂ [ln P(p̂) + ln P(I_g|p̂) + ln P(I_r|I_g, p̂)] (9)

Clearly, the first term is the a priori information, the second term contains the gradient-based information I_g, and the last term contains the region-based information I_r.
and (b) are close-up views of the acetabulum. Fig. 6 (c) and (d) show the femur from different angles. With the help of these 3-D renderings, physicians should be able to make diagnoses more efficiently and effectively.
5 Conclusion
On all the examples of medical images we processed, not only was the desired precision achieved, but we were also able to rotate the objects to obtain 3-D images from different angles. The 3-D renderings we created will allow physicians to conduct surgery or treatment much more accurately and effectively. Many images of interest that physicians were unable to visualize, and instead had to compose as a 3-D image in their imagination, become available after our 3-D processing. Features are now clearly identified, locations are pinned down exactly, and relative orientations are well understood. These are all vital for medical treatments. We therefore conclude that our 3-D rendering method, which combines the gradient-based and the region-based information in the maximum entropy sense, is not only a superb image processing technique but also very useful in practice for medical images. We believe that our precise 3-D renderings will play a role in future medical applications.
References
1. John C. Russ. The Image Processing Handbook, 3rd ed. CRC Press & IEEE Press, 1999.
2. Rafael C. Gonzalez, Richard E. Woods. Digital Image Processing, 2nd ed. Prentice Hall, 2002.
3. L.H. Staib, J.S. Duncan. "Boundary finding with parametrically deformable models." IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 11, pp. 1061-1075, 1992.
4. Cheng-Tung Ku, King-Chu Hung, Mig-Cheg Liag. "Wavelet Operators for Multi-scale Edge and Corner Detection." Department of Electrical Engineering, I-Shou University, Taiwan, 1998.
5. Jing Yang, James S. Duncan. "Joint Prior Models of Neighboring Objects for 3D Image Segmentation." Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'04), 2004.
6. Hua Li, Abderrahim Elmoataz, Jalal Fadili, Su Ruan, Barbara Romaniuk. "3D Medical Image Segmentation Approach Based on Multi-Label Front Propagation." IEEE 2004 International Conference on Image Processing (ICIP), pp. 2925-2928, 2004.
7. Yih-Sheng Leu, Chao-Ji Chou. "Wavelet Edge Detection on Region-based Image Segmentation." Department of Computer & Communication Engineering, National Kaohsiung First University of Science and Technology, Taiwan, 2000.
8. S.G. Mallat. "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation." IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, 1989.
9. Gabriele Lohmann. Volumetric Image Analysis. Wiley & Teubner, 1998.
10. A. Chakraborty. "Feature and Module Integration for Image Segmentation." Ph.D. thesis, Yale University, 1996.
11. Shu-Yen Wan, William E. Higgins. "Symmetric Region Growing." IEEE