
Fast and Accurate Registration Techniques for Affine and Nonrigid Alignment of MR Brain Images

Jia-Xiu Liu,1 Yong-Sheng Chen,1 and Li-Fen Chen2

1Department of Computer Science, National Chiao Tung University, 1001 University Road, Hsinchu 300, Taiwan; and 2Institute of Brain Science, National Yang-Ming University, No. 155, Sec. 2, Linong Street, Taipei 112, Taiwan

(Received 6 March 2009; accepted 6 November 2009; published online 20 November 2009)

Abstract—Registration of magnetic resonance brain images is a geometric operation that determines point-wise correspondences between two brains. It remains a difficult task due to the highly convoluted structure of the brain. This paper presents novel methods, Brain Image Registration Tools (BIRT), that can rapidly and accurately register brain images by utilizing the brain structure information estimated from image derivatives. Source and target image spaces are related by an affine transformation and a non-rigid deformation. The deformation field is modeled by a set of Wendland's radial basis functions hierarchically deployed near the salient brain structures. In general, nonlinear optimization is heavily engaged in the parameter estimation for the affine/non-rigid transformation, and good initial estimates are thus essential to registration performance. In this work, the affine registration is initialized by a rigid transformation, which can robustly estimate the orientation and position differences of brain images. The parameters of the affine/non-rigid transformation are then hierarchically estimated in a coarse-to-fine manner by maximizing an image similarity measure, the correlation ratio, between the involved images. T1-weighted brain magnetic resonance images were utilized for performance evaluation. Our experimental results using four 3-D image sets demonstrated that BIRT can efficiently align images with high accuracy compared to several other algorithms, and thus is well suited to applications that apply the registration process intensively. Moreover, a voxel-based morphometric study quantitatively indicated that accurate registration can improve both the sensitivity and the specificity of the statistical inference results.

Keywords—Affine/non-rigid image registration, Difference of Gaussian, Radial basis functions, Correlation ratio, Voxel-based morphometry.

INTRODUCTION

Registration is an essential geometric operation in medical image analysis. In longitudinal studies, for instance, images acquired repeatedly over a period of time can be used to observe the temporal changes of brain structures by subtracting the registered images32,53 or by analyzing the structure deformation.72,59 Customized or standard brain templates can be constructed by spatially normalizing and then averaging the brain images in a stereotaxic space.28,29,51,52 Registration of an individual brain to the template can bridge the individual brain to the Talairach space70 and can also help to obtain the gross anatomical structures of the individual brain with template-based segmentation methods.24,26

When registering images, the degree of structural variability determines the transformation model to be adopted. Rigid or affine transformations can only accommodate global differences and are adequate for registering images acquired from the same subject. Many approaches have been proposed for affine registration of brain images. Ashburner et al. incorporated prior knowledge of human brains into a Bayesian framework.10 Woods et al. used a Newton-type method to iteratively optimize the values of the transformation parameters.79 Jenkinson et al. proposed a coarse-to-fine method that can optimize the transformation parameters from multiple candidates.38,39

A transformation model with a high degree of freedom is indispensable to inter-subject registration, in which the anatomical difference is non-rigid. In the following, non-rigid registration methods are briefly introduced according to the adopted transformation model. Comprehensive surveys can be found in references.33,37,57,83

Elastic methods register images by compromising between the deformation smoothness and the similarity measurement between images.11 Because of the smoothness constraint, these kinds of methods may not sufficiently model highly localized deformation, such as the convolution of the cerebral cortex.45,62 Their variants, fluid models, relax the smoothness constraint to overcome this problem at a higher risk of false registration.45 Fluid registration works can be found in references.16,21,32 Moreover, the Demons algorithm71 and its variant75 are also considered as an approximation of fluid registration.33

Address correspondence to Li-Fen Chen, Institute of Brain Science, National Yang-Ming University, No. 155, Sec. 2, Linong Street, Taipei 112, Taiwan. Electronic mail: jsliu@csie.nctu.edu.tw, yschen@cs.nctu.edu.tw, lfchen@ym.edu.tw

DOI: 10.1007/s10439-009-9840-9

A finite element method (FEM) registers images by using segmented objects, which are usually represented by meshes of tetrahedrons or hexahedrons.6,27,30,65,66,77,81 It can deform objects in a more realistic way because different energy terms can be assigned to objects according to their physical properties. However, the computational complexity of the FEM approach is generally high and the error of tissue segmentation contributes to the deviation of registration.

Basis functions have been extensively applied to describe the spatial mapping relationship between images. These methods can be further divided into two sub-categories. The first kind of methods, referred to as landmark-based approaches, establishes the spatial transformation from a set of corresponding control points or landmarks. Numerous basis functions have been used to model the spatial mapping, such as thin-plate splines (TPS),13,47 Gaussians,2 inverse multi-quadrics,64 multi-quadrics,47 and Wendland's radial basis functions (RBFs).31 The major disadvantage of these kinds of methods is that the identification of landmarks is not only time-consuming but also prone to errors. To alleviate this problem, Likar and Pernus applied affine registration to the sub-regions of images and regarded the centers of the registered parts as the matching control points.46

Instead of the labor-intensive landmark selection, the second kind of methods regularly deploys basis functions in image volumes and calculates the coefficients of the basis functions by optimizing an objective function. Many basis functions have been applied to model the deformation fields, including wavelets,1 discrete cosine transforms,7 and B-splines.44,63 However, a large number of basis functions are required to model subtle deformation. In this case, the computational complexity is high and it is usually difficult to obtain good results by searching in a large parameter space. To reduce the number of basis functions, Rohde et al. used the gradient magnitude of the normalized mutual information (NMI) to detect poorly aligned image regions and repeatedly deployed RBFs in these regions for registration refinement.61

Anatomical information has also been applied in non-rigid registration frameworks. Pluim et al. incorporated the image gradient into the similarity measurement, mutual information (MI), to utilize the spatial information.55 Marsland et al. iteratively determined the poorly registered regions and deployed knot points at the strong edges for modeling the deformation field with clamped-plate splines.50

Camara et al. registered the corresponding structures and used the obtained results to initialize the subsequent procedure for the multimodal registration of whole-body images.18 More examples can be found in references.17,36,42,48,74 Generally, these types of methods take advantage of both the structure and the intensity information during the registration process.

Image registration is an important tool in quantitative analysis using voxel-based morphometry (VBM). VBM statistically reveals the structural differences between two image groups through a voxel-wise comparison of tissue volumes.8 There have been numerous VBM studies presented in the literature, such as the examination of brain asymmetry and the effects of sex and handedness on brain structures,35 the aging of brains,34 and the characterization of diseases.12,15,41 Before the statistical comparison, all images have to be spatially normalized into the same stereotaxic space such that the corresponding structures are well aligned. Many registration techniques have been adopted in the VBM procedure and the accuracy of image registration can greatly affect the reliability of the analysis results.3,9,14,22 Moreover, VBM analysis usually applies the registration procedure intensively. Therefore, an accurate and time-efficient registration algorithm is generally considered preferable for VBM.

In this work, we utilize the structure information of the brain extracted from image derivatives and develop affine and non-rigid methods, called the Brain Image Registration Tools (BIRT), for the accurate and time-efficient registration of magnetic resonance (MR) brain images. The proposed affine method aligns brain images by estimating the orientation and position differences between brains, followed by optimizing the similarity between images in a hierarchical manner. The non-rigid deformation of the brain is modeled by a set of Wendland's RBFs, which are hierarchically deployed near the salient anatomical structures. In this way, a small number of RBFs is sufficient to represent the deformation field well, which benefits the execution efficiency. A VBM study of inter-group structural analysis is also conducted to qualitatively and quantitatively investigate the effect of registration accuracy upon the VBM analysis results. Software packages of the proposed registration algorithms are available at http://bsp.cs.nctu.edu.tw/software.

METHODS

Image registration establishes the spatial mapping T: p → q, which transforms every point p in the source (or test) image to its corresponding point q in the target (or reference) image, such that the same structures are well aligned. The mapping relation T generally consists of a global transformation T_g, which is usually an affine transformation, and a local, non-rigid deformation T_l:

q = T(p) = T_g(p) + T_l(T_g(p)).   (1)

Notice that T_g maps each point p to its corresponding point T_g(p), while T_l represents the non-rigid displacement vector field. This section first introduces how to extract the structure information of the brain from image derivatives and then presents the proposed affine and non-rigid registration methods based on the extracted information.
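The composition in Eq. (1) — apply the global affine map first, then add the local displacement evaluated at the affinely mapped position — can be stated compactly in code. The following is a minimal sketch and not the BIRT implementation; the function name and the NumPy representation are our own.

```python
import numpy as np

def compose_transform(p, A, b, local_displacement):
    """Map a point p through Eq. (1): q = T_g(p) + T_l(T_g(p)).

    p: (3,) point, A: (3, 3) affine matrix, b: (3,) translation,
    local_displacement: callable returning a (3,) displacement vector.
    """
    q_global = A @ p + b                                  # global affine part T_g(p)
    return q_global + local_displacement(q_global)        # add the non-rigid part T_l

# Example: identity affine combined with a small constant local shift
p = np.array([10.0, 20.0, 30.0])
q = compose_transform(p, np.eye(3), np.zeros(3), lambda x: np.array([0.5, 0.0, -0.2]))
print(q)
```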

Structure Information of the Brain

Difference of Gaussian (DOG) performs image subtraction after convolution with two Gaussian kernels G(σ_1) and G(σ_2), σ_1 > σ_2:

DOG(I; σ_1, σ_2) = G(σ_1) * I − G(σ_2) * I,   (2)

where I is a T1-weighted MR image and "*" denotes the convolution operator. We define the thresholded DOG (TDOG) value to be one (foreground) when the DOG value is larger than a threshold, and otherwise the TDOG value is zero (background). Instead of using the zero-crossings of the DOG results to detect edge features, we utilize the foreground in the TDOG image to reveal those regions with relatively low intensity in I, such as the gray matter (GM) and cerebrospinal fluid (CSF), as shown in Fig. 1. Notice that the region between the two hemispheres contains mostly TDOG foreground voxels. This phenomenon can be utilized to determine the mid-sagittal planes (MSPs) and then estimate the orientation of brains. Non-brain tissues may be present in the TDOG foreground and can easily be removed by applying brain masks. The masked foreground can then guide the deployment of RBFs to cortices and ventricles in the proposed non-rigid registration method.
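As a concrete illustration of Eq. (2) and the thresholding step, the sketch below computes a DOG image with two Gaussian kernels and binarizes it. It uses SciPy's Gaussian filter as a stand-in for the paper's kernels; the sigma values and the threshold in the example are illustrative and not the BIRT defaults.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tdog(image, sigma1, sigma2, threshold=0.0):
    """Thresholded Difference of Gaussian (TDOG), Eq. (2).

    sigma1 > sigma2, so low-intensity structures (GM, CSF) in a
    T1-weighted image produce positive DOG responses.
    """
    assert sigma1 > sigma2
    dog = gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)
    return dog > threshold   # foreground = True where the DOG value exceeds the threshold

# Toy example on a random volume (replace with a real T1-weighted image)
volume = np.random.rand(64, 64, 32).astype(np.float32)
foreground = tdog(volume, sigma1=2.0, sigma2=1.5)
print(foreground.mean())  # fraction of foreground voxels
```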

Affine Registration

Overview of the Affine Registration Method

The proposed affine registration method comprises four major steps, as shown in Fig. 2, which determine the 12 degrees of freedom of the transformation T_g(p) = Ap + b, where b is a translation vector and A is the transformation matrix representing the rotation, scaling, and shearing of an image volume. Among the 12 parameters of affine registration, the rigid parameters for rotation and translation are highly coupled and hence their estimation accuracy is critical to the whole affine registration process.39 We estimate the six parameters of the rigid-body transformation by locating the MSPs followed by aligning the brain volumes on the overlapped MSPs, as described in the sections "Determination of Brain MSP" and "Alignment on the Overlapped MSPs". Subsequently, the six rigid parameters are further refined and the results provide a good set of initial values for the parameter estimation of the affine transformation. The optimization process for the refinement of the rigid transformation T_r and the affine transformation T_g utilizes a hierarchical image structure and the Nelder–Mead downhill simplex method58 to maximize the correlation ratio (CR), S_CR(I_t, I_s, T_r) and S_CR(I_t, I_s, T_g), between the spatially mapped source image I_s and the target image I_t. Figure 3d shows an example of the alignment results using our affine registration method. A detailed description of the CR criterion is given in the section "Correlation Ratio".

FIGURE 1. The TDOG foreground reveals the regions with relatively low intensities. (a) One axial slice of a T1-weighted MR brain image. (b) The pixel intensity profile on the scan line X. (c) An example of a DOG kernel. (d) The results obtained by convolving the DOG kernel with the scan line X. (e) Image overlay of the 2D TDOG foreground (in red) on the axial slice.

Determination of Brain MSP

The MSP of a brain is defined as the plane which best separates the two hemispheres.73 Locating brain MSPs requires the estimation of two rotation parameters and one translation parameter among the six parameters of the rigid transform. Some methods for automatic determination of MSPs can be found in references.4,49,73 In this work, we first estimate the MSP in an analytical way. By utilizing the first-order image moment as in the work of Smith,68 the brain centroid is calculated as a point on the initial MSP. The normal vector of the initial MSP is estimated by applying principal component analysis (PCA) to the TDOG image. As the example in Fig. 4a shows, the TDOG foreground is planarly distributed near the MSP, whereas the distribution is more isotropic in other regions. Applying PCA to the TDOG image, the two eigenvectors corresponding to the largest two eigenvalues roughly span the MSP and the eigenvector corresponding to the smallest eigenvalue provides an estimate for the normal vector of the MSP. Figure 4b shows that this closed-form solution provides a good initial estimation of the MSP. This initial estimation is further refined toward the "relatively darkest" plane in the brain by maximizing the number of TDOG foreground voxels. This optimization process can improve the MSP determination, as shown in Fig. 4c, as well as the overlapping of the MSPs in the source and target images, which is required in the proposed affine registration method, as shown in Fig. 3b.
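The closed-form initialization described above — the centroid from the first-order image moment and the MSP normal from the smallest-eigenvalue direction of the TDOG foreground covariance — can be sketched as follows. This is an illustrative reconstruction under the stated assumptions, not the exact BIRT code; the function name is our own.

```python
import numpy as np

def initial_msp(tdog_foreground, image=None):
    """Estimate an initial mid-sagittal plane as a (point, normal) pair.

    The plane passes through the centroid (first-order moment); its normal is
    the eigenvector of the TDOG-foreground coordinate covariance with the
    smallest eigenvalue, since the foreground is planar near the MSP.
    """
    coords = np.argwhere(tdog_foreground)                  # (N, 3) foreground voxel coordinates
    if image is not None:
        w = image[image > 0]
        centroid = np.argwhere(image > 0).T @ w / w.sum()  # intensity-weighted centroid
    else:
        centroid = coords.mean(axis=0)
    cov = np.cov(coords.T)                                 # 3x3 covariance of foreground coordinates
    eigvals, eigvecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    normal = eigvecs[:, 0]                                 # smallest-eigenvalue direction = MSP normal
    return centroid, normal
```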

Alignment on the Overlapped MSPs

We align the brain volumes on the overlapped MSPs to estimate the other three rigid parameters, including a translation vector on the MSP and a rotation angle around the MSP normal. The rotation angle is first estimated from the directions of the corpus callosums (CC) segmented from the MSPs. Applying PCA to the segmented CC, the eigenvector corresponding to the largest eigenvalue provides an estimate of the CC direction. The directional difference between the CCs gives a good initial estimate of the rotation around the MSP normal. The rotation angle and the translation vector compose a rigid transform T_r' and are refined by optimizing the CR, S_CR(I_t, I_s, T_r'), between the source image I_s and the target image I_t. As shown in Fig. 3c, the proposed method can robustly register brain images, even if there are large rotational differences in some unusual cases.

FIGURE 2. Flowchart of the proposed affine registration method. The brain MSPs in the source and target images are determined according to the TDOG images and are then overlapped together. The alignment on the overlapped MSPs is achieved by maximizing the image similarity. The obtained rigid parameters are further refined and then applied to initialize the affine registration of brain images.

FIGURE 3. Registration between the source image (a) and the target image (f) by using the proposed BIRT methods. The slices shown from (b) to (e) illustrate the results of the MSP overlapping, alignment on the overlapped MSP, affine registration, and non-rigid registration, respectively. The white line in (a) indicates the estimated MSP of the source image whereas those in (b), (c), and (f) indicate the estimated MSP of the target image. The arrows in (d) and (e) indicate those areas with stepwise improved alignment refined by the proposed affine and non-rigid methods.

FIGURE 4. Brain MSP estimation and refinement. (a) The TDOG foreground is more planarly distributed near the MSP (top), compared to other regions (middle). Applying PCA to the TDOG image, the eigenvector corresponding to the smallest eigenvalue gives an estimate of the MSP normal (bottom). (b) The MSP estimated with the analytical solution. (c) The MSP refined by maximizing the number of TDOG foreground voxels.

The segmentation of the CC is based on the phenomenon that the intensities of the CC in a T1-weighted MR image are significantly larger than those of the surrounding tissues. One or two intensity thresholding steps are used to segment the CC. For each located MSP, we first calculate an effective intensity range [t1, t2] to ignore the pixels with unusual intensities, the brain centroid O, and a radius r mm which estimates the brain size, following the work of Smith.68 The segmentation of the CC only considers the area within the circle centered at O with radius kr mm, where the value of k is set to 0.7 in our implementation, as shown in Fig. 5a. All the connected regions with intensity larger than t1 + b(t2 − t1) and with area larger than c are extracted from the MSP, where the values of b and c were set to 0.7 and 100 mm² in BIRT, respectively. We regard the extracted region(s) as the candidate(s) of the CC. The segmentation of the CC is achieved if there is only one CC candidate, as shown in Fig. 5b. However, the intensity differences between the CC and the surrounding tissues can vary considerably and thus regions other than the CC often survive the first thresholding process, as shown in Fig. 5d. To further distinguish these tissues, we apply Otsu's method54 to calculate another intensity threshold if there are many CC candidates. Notice that only the pixels in the candidate regions are involved in this calculation. The proposed method can segment the CC on the MSP well, even if there are excess non-brain tissues, such as the neck and shoulder areas, as shown in Fig. 5e. Nevertheless, extracting the brain regions beforehand increases the robustness of our segmentation method because it avoids the disturbance of non-brain tissues, which can bias the estimation of the brain centroid, as shown in Fig. 5c.
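One possible reading of the two-stage thresholding is sketched below with scikit-image utilities (connected components and Otsu's threshold). The parameter names t1, t2, b, c, and k follow the text; the function and its tie-breaking rule for multiple surviving regions are hypothetical stand-ins, not BIRT internals.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.filters import threshold_otsu

def segment_cc(msp_slice, centroid, radius, t1, t2, b=0.7, c=100.0, k=0.7):
    """Segment the corpus callosum on a mid-sagittal slice (2-D array, ~1 mm pixels)."""
    yy, xx = np.indices(msp_slice.shape)
    in_circle = (yy - centroid[0])**2 + (xx - centroid[1])**2 <= (k * radius)**2

    # First thresholding: bright, sufficiently large regions inside the circle
    bright = (msp_slice > t1 + b * (t2 - t1)) & in_circle
    labels = label(bright)
    candidates = [r for r in regionprops(labels) if r.area > c]
    if not candidates:
        return np.zeros_like(msp_slice, dtype=bool)
    if len(candidates) == 1:
        return labels == candidates[0].label

    # Second thresholding: Otsu computed from the candidate pixels only
    cand_mask = np.isin(labels, [r.label for r in candidates])
    otsu = threshold_otsu(msp_slice[cand_mask])
    labels2 = label(cand_mask & (msp_slice > otsu))
    regions2 = regionprops(labels2)
    if not regions2:
        return np.zeros_like(msp_slice, dtype=bool)
    best = max(regions2, key=lambda r: r.area)   # assumption: keep the largest surviving region
    return labels2 == best.label
```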

Non-rigid Registration

The non-rigid spatial mapping between brain images is modeled by a set of Wendland's RBFs with different levels of support extent. Figure 6 shows the flowchart of the proposed non-rigid registration procedure. Non-brain structures are first removed because the inter-subject variation of these regions is relatively large compared to that of brain tissues and hence could interfere with registration efficacy.80 Therefore, we use a brain mask to extract the brain area as well as the boundary of the brain in the TDOG image, referred to as the brain-only TDOG. Though the structures revealed in the brain-only TDOG are quite rough, they provide guidance for deploying RBFs near the brain boundary, the boundary between GM and white matter (WM), and the boundary between CSF and GM/WM. Furthermore, the deformation field is progressively estimated by optimizing the coefficients of each RBF in a coarse-to-fine manner.

FIGURE 5. Segmentation of the CC on the MSP. (a) The circular area centered at the centroid of the brain MSP, O, is considered in the segmentation of the CC. (b) Only one connected region, the CC on the MSP, is found, so the first thresholding step locates the CC. (c) Excess non-brain tissues can bias the estimation of the circular area considered in the CC segmentation. (d) The intensity differences between the CC and the surrounding tissues are not significant and thus many connected regions, the candidates of the CC, are found in the first thresholding step. (e) Applying Otsu's method,54 the calculated intensity threshold can distinguish the CC from the other candidate regions.


Non-rigid Transformation Model

We use a combination of K RBFs to model the non-rigid deformation field T_l(·):

T_l(p) = \sum_{i=1}^{K} a_i \phi(\|p - c_i\|),   (3)

where ‖·‖ is the Euclidean norm and the one-argument function φ: R^+ → R is the RBF centered at c_i with coefficients a_i ∈ R^3, i = 1, ..., K. There are different types of RBFs. In this work, we use one of Wendland's ψ-functions, ψ_{3,1}, as the RBF φ due to its low computational complexity and compact support property.31,78 This function φ is formulated as

\phi(r) = \begin{cases} (1 - r)^4 (4r + 1), & 0 \le r < 1 \\ 0, & 1 \le r \end{cases}   (4)

Based on this set of K RBFs, the positions c_i and the coefficients a_i, i = 1, ..., K, of the RBFs determine the 3-D displacement vector T_l(p) for each point p in the image volume.

With the compact support property, the influence of each RBF is restricted to a local region, which is a unit sphere around the RBF center. This is a preferred characteristic in non-rigid registration because the deformation of one area should not affect remote regions. This property also greatly alleviates the computational complexity of the spatial transformation, in contrast to functions with global support such as TPS and multi-quadrics, because only a few RBFs are involved in the calculation of the displacement vector for an image point. To accommodate various extents of compact support, the argument r of the function φ(·) is scaled by the shape parameter s.43 The Wendland RBF with support extent s is formulated as

\phi_s(r) = \phi\left(\frac{r}{s}\right).   (5)
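Equations (4) and (5) translate directly into code. The sketch below evaluates the Wendland ψ_{3,1} function with a support extent s, vectorized over NumPy arrays; it is illustrative only and the function names are our own.

```python
import numpy as np

def wendland_psi31(r):
    """Wendland psi_{3,1}: (1 - r)^4 (4r + 1) for 0 <= r < 1, and 0 otherwise (Eq. 4)."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def wendland_scaled(r, s):
    """Compactly supported RBF with support extent s (Eq. 5): phi_s(r) = phi(r / s)."""
    return wendland_psi31(np.asarray(r, dtype=float) / s)

# The function is 1 at r = 0 and falls smoothly to 0 at the support boundary r = s
print(wendland_scaled(np.array([0.0, 5.0, 10.0, 12.0]), s=10.0))
```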

Hierarchical Decomposition of Deformation Field

For the K RBFs involved in our non-rigid deformation model, there are L different support extents s_j, j = 1, ..., L, each with K_j RBFs and K_1 + K_2 + ... + K_L = K. Therefore, the non-rigid deformation field T_l(·) can be rewritten as

T_l(p) = \sum_{j=1}^{L} \sum_{i=1}^{K_j} a_{j,i} \phi_{s_j}(\|p - c_{j,i}\|),   (6)

where φ_{s_j}(·) denotes the RBF with support extent s_j and coefficients a_{j,i} centered at c_{j,i}.

FIGURE 6. Flowchart of the proposed non-rigid registration method. Non-brain regions of the target brain are first removed to obtain a brain mask, which is then used to extract the brain region in the TDOG image of the target brain and construct the brain-only TDOG. The deformation field is modeled by a set of Wendland's RBFs hierarchically distributed at the anatomical structures revealed in the brain-only TDOG.

For proper deployment of the RBFs, the brain volume is hierarchically divided into eight equal subregions, which are cubes in our implementation, and an RBF is placed at the center of each subregion that contains any foreground voxels in the brain-only TDOG, as shown in Fig. 7. Although the maximum number of RBFs required at level j is 8^j, the use of the brain-only TDOG helps to place the necessary RBFs near important anatomical features and avoids a dramatic increase in the number of RBFs. This deployment method is fairly beneficial for computational efficiency while maintaining high registration accuracy, particularly at the fine levels. The support extent s_j at each level j is a parameter which is set to be a multiple of the width of the subregions. Therefore, the RBFs from low to high levels are capable of modeling the deformation field from coarse to fine resolutions.
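The hierarchical deployment can be pictured as an octree-style subdivision: at level j the brain bounding box is split into 2^j cubes per axis, and an RBF center is kept only for cubes that contain brain-only TDOG foreground. The sketch below follows that description; the grid bookkeeping and the function name are our own assumptions.

```python
import numpy as np

def deploy_rbf_centers(tdog_foreground, level):
    """Place RBF centers (in voxel coordinates) for one hierarchy level.

    The volume is divided into 2**level subregions per axis; a center is kept
    only if its cube contains at least one TDOG foreground voxel.
    """
    shape = np.array(tdog_foreground.shape, dtype=float)
    n = 2 ** level                       # number of subregions per axis (8**level cubes in total)
    cube = shape / n                     # subregion size (may be fractional)
    centers = []
    for ix in range(n):
        for iy in range(n):
            for iz in range(n):
                lo = (np.array([ix, iy, iz]) * cube).astype(int)
                hi = np.minimum((np.array([ix + 1, iy + 1, iz + 1]) * cube).astype(int),
                                tdog_foreground.shape)
                block = tdog_foreground[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
                if block.any():          # salient structure present in this cube
                    centers.append((np.array([ix, iy, iz]) + 0.5) * cube)
    return np.array(centers)

# The support extent at each level is then set to a multiple (e.g. 1.5x) of the subregion width.
```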

Since the deformation field model is hierarchically decomposed into RBFs with different levels of support extent, we estimate the coefficients of the RBFs one by one while gradually accumulating the deformation field in a coarse-to-fine manner. Consider the coefficient estimation for the m-th RBF at level l. The center positions and coefficients of the RBFs at the coarser levels 1 to l − 1, and of the RBFs 1 to m − 1 at level l, have all been determined previously. We update the deformation field by adding a new RBF φ_{s_l}(‖p − c_{l,m}‖) centered at c_{l,m} with coefficients a_{l,m}:

T_l^{l,m}(p) = \sum_{j=1}^{l-1} \sum_{i=1}^{K_j} a_{j,i} \phi_{s_j}(\|p - c_{j,i}\|) + \sum_{i=1}^{m-1} a_{l,i} \phi_{s_l}(\|p - c_{l,i}\|) + a_{l,m} \phi_{s_l}(\|p - c_{l,m}\|).   (7)

In this way, the displacement vector for the point p is progressively updated. The accumulation process terminates when l = L and m = K_L, that is, T_l(p) = T_l^{L,K_L}(p). Because only the three coefficients of one RBF are estimated at a time, the proposed method avoids searching in a huge parameter space and thus the whole optimization process is quite fast.
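Because the coefficient vector of a single RBF has only three components, each refinement step is a tiny three-dimensional optimization. The following sketch shows the idea using SciPy's Nelder–Mead on a caller-supplied cost; `objective(a)` is assumed to evaluate the objective of the next subsection (Eq. 8) with the candidate coefficients a assigned to the current RBF and everything else held fixed. The function and tolerance values are illustrative, not BIRT settings.

```python
import numpy as np
from scipy.optimize import minimize

def fit_one_rbf(objective, a_init=np.zeros(3)):
    """Estimate the 3-D coefficient vector of a single RBF.

    objective: callable mapping a (3,) coefficient vector to the registration
    cost, with all previously fitted RBFs held fixed.
    """
    res = minimize(objective, a_init, method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-4})
    return res.x

# Toy usage: recover coefficients minimizing a simple quadratic surrogate cost
a = fit_one_rbf(lambda a: np.sum((a - np.array([1.0, -2.0, 0.5]))**2))
print(a)   # approximately [1, -2, 0.5]

# Conceptual outer loop (coarse-to-fine): for each level l and each RBF m,
# fit its coefficients and accumulate the displacement field as in Eq. (7).
```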

Objective Function

Coefficient estimation for each RBF is an optimization process that minimizes an objective function:

C(I_t, I_s, T) = -S_{CR}(I_t, I_s, T) + \lambda E(T),   (8)

where T is the spatial mapping between the target image I_t and the source image I_s, S_CR measures the image similarity by CR, the smoothness regularization function E calculates the deformation energy of T, and the parameter λ balances the two terms S_CR and E. The Laplacian model and the thin-plate model are widely applied to regularize the structure deformation in image registration.62 Due to its computational efficiency, we adopt the Laplacian model in this work:

E(T) = \frac{1}{V} \iiint \left[ \left(\frac{\partial T}{\partial x}\right)^2 + \left(\frac{\partial T}{\partial y}\right)^2 + \left(\frac{\partial T}{\partial z}\right)^2 \right] dx\, dy\, dz,   (9)

where V is the volume involved in the estimation. The Nelder–Mead downhill simplex method is utilized to optimize the objective function. Figure 3e demonstrates that the proposed non-rigid method can well register the corresponding anatomical structures.
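A discrete version of the regularizer in Eq. (9) can be computed with finite differences over a sampled displacement field. The sketch below sums squared first derivatives of each displacement component; it is an illustrative approximation under our own discretization, not the BIRT implementation.

```python
import numpy as np

def deformation_energy(displacement):
    """Discrete approximation of Eq. (9).

    displacement: array of shape (3, X, Y, Z) holding the x/y/z components of T
    sampled on a voxel grid. Returns the mean squared first derivative.
    """
    energy = 0.0
    for comp in displacement:                     # each displacement component
        gx, gy, gz = np.gradient(comp)            # finite-difference partial derivatives
        energy += np.sum(gx**2 + gy**2 + gz**2)
    return energy / displacement[0].size          # divide by the volume V

def objective(s_cr, displacement, lam):
    """Eq. (8): C = -S_CR + lambda * E(T), to be minimized."""
    return -s_cr + lam * deformation_energy(displacement)

# Small demo: a step in the x-component produces non-zero deformation energy
disp = np.zeros((3, 8, 8, 8))
disp[0, 4:, :, :] = 1.0
print(deformation_energy(disp))
```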

Implementation Issues

In the optimization process of non-rigid registration, the iterative calculation of the image transformation contributes the major computational burden, particularly when registering high-resolution image volumes. The hierarchical decomposition of the non-rigid transformation model and the compact support property of Wendland's RBFs are both beneficial in alleviating this heavy burden. However, the execution time of non-rigid registration is still large. Some implementation techniques described below further improve the efficiency. First, we construct a volume pyramid with L' levels for each MR image volume, generally L' ≤ L. When evaluating the objective function, the K_j RBFs φ_{s_j}(·) with support extent s_j, j = 1, ..., L, are associated with the j'-th level of the volume pyramid, where j' = j when j ≤ L' and j' = L' when j > L'. This hierarchical architecture enables a coarse-to-fine optimization that helps to avoid local traps and improves the computational efficiency. Second, one lookup table is constructed beforehand for each support-extent level of the Wendland RBFs to avoid repeated function evaluations. Updating the deformation field with the RBF φ_{s_j}(·) in Eq. (7) therefore requires only three subtractions, one table lookup, three multiplications, and three additions.

FIGURE 7. The whole brain volume is hierarchically divided into eight subregions. An RBF is deployed at the center of a subregion containing salient structure revealed by the brain-only TDOG. This figure illustrates the RBF distributions at four levels, in which a red square represents a subregion deployed with an RBF.

Correlation Ratio

The image similarity function has a strong influence on the registration results. Statistical measurements such as MI,23,76 NMI,69 and CR60 have been used in multimodal image registration.33,57,83 These measurements are also adequate for unimodal image registration because they are robust to noise, different intensity contrasts, and intensity inhomogeneity. In this work, CR is adopted because it is superior in accuracy, efficiency, and robustness.39,56,60

For the voxels of the same tissue in the target image, CR measures the intensity dispersion of their corresponding points in the source image. Let N be the number of voxels in the overlapping region Ω between the source image I_s and the target image I_t. We divide the whole intensity range into N_B bins B_i, i = 1, ..., N_B. Let Ω_i denote the set of voxels in the region Ω of the source image whose corresponding voxels in the target image have intensities belonging to the same intensity bin B_i, as shown in Fig. 8. That is,

\Omega_i = \{ p \mid p \in \Omega,\ I_t(F(p)) \in B_i \},   (10)

where F can be the spatial transformation function T_r, T_r', T_g, or T in this work. Notice that N_1 + N_2 + ... + N_{N_B} = N, where N_i is the number of voxels in Ω_i. Image similarity based on CR is calculated by

S_{CR}(I_t, I_s, F) = 1 - \frac{1}{\mathrm{Var}(I_s(\Omega))} \sum_{i=1}^{N_B} \frac{N_i}{N} \mathrm{Var}(I_s(\Omega_i)),   (11)

where Var(I_s(Ω)) and Var(I_s(Ω_i)) evaluate the intensity variances of the source image in Ω and Ω_i, respectively. The voxels belonging to the same tissue generally have similar intensities. In the target image, therefore, the voxels having intensities within each intensity bin B_i very likely belong to the same tissue. If the source and target images are well aligned, the voxels in Ω_i of the source image should also belong to the same tissue and hence the intensity variance of Ω_i should be small. Consequently, a larger CR value indicates higher image similarity and hence better alignment.

FIGURE 8. CR measures the intensity dispersion of the voxel set Ω_i in the source image, which contains the voxels whose corresponding voxels in the target image have intensities falling in the same bin B_i.
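The CR of Eq. (11) can be computed by binning the target-image intensities and measuring the intensity variance of the source voxels that fall into each bin. The sketch below assumes both images have already been sampled over the common overlap region Ω as flat, point-wise corresponding arrays; the bin count and function name are illustrative.

```python
import numpy as np

def correlation_ratio(target_vals, source_vals, n_bins=64):
    """Correlation ratio S_CR(I_t, I_s, F), Eq. (11).

    target_vals, source_vals: 1-D arrays over the overlap region Omega, where
    source_vals[k] is the source intensity at the point corresponding
    (through the transformation F) to target_vals[k].
    """
    total_var = np.var(source_vals)
    if total_var == 0:
        return 1.0
    edges = np.linspace(target_vals.min(), target_vals.max(), n_bins + 1)
    bin_idx = np.clip(np.digitize(target_vals, edges) - 1, 0, n_bins - 1)
    n = target_vals.size
    weighted = 0.0
    for i in range(n_bins):
        members = source_vals[bin_idx == i]        # the set Omega_i of Eq. (10)
        if members.size:
            weighted += members.size / n * np.var(members)
    return 1.0 - weighted / total_var

# A functional dependence between the images gives a CR close to 1
t = np.random.rand(10000)
print(correlation_ratio(t, 2.0 * t + 1.0))
```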

Evaluation of Registration Performance

This section introduces the methods used to evaluate the performance of the proposed affine and non-rigid registration algorithms, including the data sets and the approaches used for comparison.

Registration Algorithms for Performance Comparisons

The proposed affine registration method in BIRT was compared with Statistical Parametric Mapping 2 (SPM2),10 Automated Image Registration version 5.2.5 (AIR5),79 and FMRIB's Linear Image Registration Tool version 5.5 (FLIRT).39 The proposed non-rigid registration method in BIRT was compared with SPM2,7 Hierarchical Attribute Matching Mechanism for Elastic Registration version 1.0 (HAMMER),66 and Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra (DARTEL)5 in SPM5. BIRT was implemented in C++ and the programs of the other algorithms were downloaded from their webpages. All registration experiments were executed on a PC with an AMD Opteron 1.4 GHz processor running Linux.

Data Sets with Known Spatial Mapping Relation

T1-weighted head MR images acquired on a 1.5 Tesla GE MR scanner (3D-FSPGR pulse sequence; TR = 8.67 ms, TE = 1.86 ms, TI = 400 ms, NEX = 1, flip angle = 15°, bandwidth = 15.63 kHz) were used to construct simulation data in our experiments. Both the experiments of affine and non-rigid registration quantitatively measured the alignment accuracy and execution efficiency using two sets of images, each set with 30 images, with different spatial resolutions (256 × 256 × 124, voxel size = 1.02 × 1.02 × 1.5 mm³, and 128 × 128 × 34, voxel size = 2 × 2 × 5 mm³). The registration error was evaluated by the average deviation of the estimated displacement vectors from the ground-truth ones, and the computational efficiency was compared according to the average execution time for registering an image pair.

Both the low- and high-resolution data sets applied in the experiment of affine registration were constructed by applying 30 random affine transformations to each of the 30 originally scanned MR images, each time with transformation parameters uniformly distributed within [−π/15, π/15] rad, [−15 mm, 15 mm], [−0.05, 0.05], and [−0.01, 0.01] for rotation, translation, scaling, and shearing, respectively. In the accuracy evaluation, each originally scanned MR image was regarded as the source image and the randomly transformed images were regarded as the target images, one at a time. The known random transformations provided the ground truths of corresponding points. Moreover, the rigid parameters of the ground truths of the same image set were also used to evaluate the accuracy of the rigid transformation estimated by the proposed method.

Three sets of source–target pairs of brain images, at low and high resolution, with known deformation fields were constructed by applying SPM2, HAMMER, and DARTEL to nonlinearly register the 30 originally scanned MR images to a specified target image. We regarded the originally scanned images, the deformed images, and the deformation fields obtained by each registration method as the source images, the target images, and the ground truths of deformation, respectively, to evaluate the performance of the other three algorithms. These materials facilitate the performance evaluation in a more objective way because the results are not biased toward a specific deformation type. The threshold used to compute the TDOG image can affect the performance of our non-rigid registration method, in terms of alignment accuracy and computational efficiency. Therefore, the constructed image pairs were also used to evaluate the influence of the thresholding level.

Parameters of Registration Algorithms

In our experiments, the parameters of the compared methods were set to their default values, except for AIR5. The intensity thresholds in AIR5 were set to values which exclude most of the background voxels, and the scales of the Gaussian kernels were determined by the best average accuracy. Two necessary preprocessing steps in HAMMER, brain extraction and tissue segmentation, were accomplished by the Brain Extraction Tool version 2.1 (BET2)68 and FMRIB's Automated Segmentation Tool version 4.1,82 which are parts of the FMRIB Software Library.67 For our affine and non-rigid methods, we set the FWHMs of the Gaussian variances (σ_1², σ_2²) in TDOG to (4 mm, 3 mm). The proposed non-rigid method applied BET2 to obtain brain masks and the RBF support extent was set to 1.5 times the subregion width.

Effects of Registration Accuracy on VBM Analysis

This section introduces the experiment designed to investigate the influence of registration accuracy on the results of VBM analysis, including the construction of the image groups and the criteria used for quantitative evaluation.

Construction of Image Groups

Six groups of GM images were generated by applying TPS transformations to deform a phantom GM image obtained from BrainWeb.25 This avoids the segmentation step in the VBM protocol8,34 such that the analysis results were not confounded by segmentation error. One normal group and five patient groups with different scales of volume differences were generated in this experiment, in which each group contained 30 subjects and all the images were 157 × 189 × 156 with voxel size 1.0 mm³. Some related techniques for generating brain images with volumetric changes can be found in references.19,20,40

A total of 208 control points on the cortical surface of the cerebrum and cerebellum were selected to create MR brain images with or without volume differences in the cerebellum.

FIGURE 9. The construction of simulated images with different degrees of volume difference in the cerebellum. (a) All the images were constructed from the BrainWeb phantom GM (top) by using the TPS transformation. An inferior slice (bottom) of the phantom image illustrates the distribution of the control points. The red points placed on the exterior cortical surface of the cerebellum are used to generate volume difference patterns in the cerebellum. The green and the blue points, distributed on the interior cortical surface of the cerebellum and the cortical surface of the cerebrum, respectively, are used to fix anatomical structures. The following panels illustrate the constructed MR images with different scales of volume differences: (b) 0 mm (normal subject), (c) 2 mm, (d) 3 mm, (e) 4 mm, (f) 5 mm, and (g) 6 mm.

TABLE 1. Performance comparison of affine registration algorithms.

Performance criteria   MDM     MSP overlap    MSP alignment   Rigid         Affine L1   Affine L2   Affine L3   Affine L4   FLIRT   SPM2   AIR5
Error (mm)^H           22.83   9.79 (9.09)    3.34 (1.62)     3.30 (1.46)   2.98        0.78        0.20        0.05        0.28    0.57   3.52
Time (s)^H             –       27 (26.89)     45              51            51          53          76          364         442     51     131
Error (mm)^L           22.80   9.89 (9.83)    7.80 (3.35)     3.35 (2.62)   3.00        0.76        0.21        0.06        0.37    0.94   4.00
Time (s)^L             –       1 (0.99)       9               12            12          13          28          43          120     21     7

The upper and lower parts list the results of the experiments using images with higher and lower spatial resolution, respectively. The first column lists the mean displacement magnitude (MDM) calculated from the spatial mapping relation of the image pairs. The following seven columns list the averaged performance of the proposed BIRT method step by step, including the MSP overlapping, the alignment on the overlapped MSP, the rigid registration, and the affine registration across four multi-resolution levels. To demonstrate the accuracy of the rigid transformation estimation, the numbers shown in parentheses in the Error rows are the average alignment error when only the rigid parameters of the ground truths are considered. The average execution times used to calculate the TDOG images are listed in parentheses in the Time rows. Note that the execution times of BIRT are cumulative across the steps. The last three columns list the performance indices for FLIRT, SPM2, and AIR5.

^H 256 × 256 × 124 images with voxel size 1.02 × 1.02 × 1.5 mm³.
^L 128 × 128 × 34 images with voxel size 2 × 2 × 5 mm³.

Thirty of these control points, referred to as manipulating points, were identified on the exterior cortical surface of the cerebellum and were used to generate difference patterns with different scales. The other 178 points were used to fix anatomical structures; 38 of them were selected from the interior surface of the cerebellum and 140 were placed on the surface of the cerebral cortex. Figure 9a illustrates the placement of these control points.

MR brain images of the normal subjects and patients were constructed by the following procedures. There are three steps in the generation of a patient image. First, we moved the manipulating points toward the cerebellum center. The displacement magnitude, that is, the volume difference scale, was parameter-controlled. Second, all 208 control points were moved by a random vector with uniformly distributed magnitude within 2 mm in order to model the inter-subject structure variation. In the last step, we applied a TPS transformation to construct a patient image according to these 208 control points. The construction procedure for a normal subject was the same as that for a patient, except that the scale of volume difference was set to zero. In our experiments, the magnitude of the volume difference was uniformly distributed within different scales. Figures 9b–9g show examples of the constructed images with difference scales of 0 (normal subject), 2, 3, 4, 5, and 6 mm, respectively.

Accuracy Assessment of VBM Analysis

Two non-rigid registration algorithms, SPM2 and BIRT, were applied in the VBM analysis procedure. Normal subjects and patients were first registered to the GM phantom and the obtained deformation fields were used to modulate the deformed images for GM volume estimation.8 The modulated images were convolved with a Gaussian kernel (FWHM = 4 mm) and the resulting images were statistically analyzed using a two-sample t-test.
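The voxel-wise comparison described above can be sketched generically: smooth each modulated GM map with a Gaussian kernel and run a two-sample t-test at every voxel, converting FWHM to the Gaussian sigma via sigma = FWHM / (2 sqrt(2 ln 2)). This is a generic VBM-style sketch under those assumptions, not the SPM2 or BIRT pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import ttest_ind

def smooth_fwhm(volume, fwhm_mm, voxel_size_mm=1.0):
    """Gaussian smoothing of a 3-D volume specified by FWHM (e.g. 4 mm) rather than sigma."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_size_mm
    return gaussian_filter(volume, sigma)

def voxelwise_ttest(group_a, group_b):
    """Two-sample t-test at every voxel.

    group_a, group_b: arrays of shape (n_subjects, X, Y, Z) holding the
    smoothed, modulated GM maps. Returns (t_map, p_map).
    """
    t_map, p_map = ttest_ind(group_a, group_b, axis=0)
    return t_map, p_map

# Toy example: two groups of 30 subjects on a small grid with a simulated difference
a = np.random.rand(30, 16, 16, 16)
b = np.random.rand(30, 16, 16, 16) + 0.05
a_s = np.stack([smooth_fwhm(subject, 4.0) for subject in a])
b_s = np.stack([smooth_fwhm(subject, 4.0) for subject in b])
t_map, p_map = voxelwise_ttest(a_s, b_s)
print(t_map.shape, p_map.min())
```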

We examined the VBM analysis procedure and defined proper ground truths for quantitative comparison. VBM structure analysis uses a registration method to normalize all subjects into the same stereotaxic space such that the corresponding structures are aligned and the structural variance between subjects can be removed. Therefore, the MR images transformed by a perfect registration method should be identical to the target (the GM phantom in this experiment). According to this viewpoint, the structural discrepancies discovered by a VBM analysis with a perfect registration method were the subtraction between the phantom image (that is, the perfectly registered images of normal subjects) and the phantom image modulated by the TPS deformation field (that is, the perfectly registered patient images). We applied a Gaussian kernel (FWHM = 4 mm) to smooth the obtained ground truths of the different scales of volume differences because the same kernel was applied to convolve the normalized and modulated subjects before the statistical analysis.

FIGURE 10. Comparison of the stability and accuracy of affine registration algorithms. The solid (dashed) lines show the results of experiments using 256 × 256 × 124 (128 × 128 × 34) images with voxel size 1.02 × 1.02 × 1.5 (2 × 2 × 5) mm³.

FIGURE 11. Influences of the thresholding value in TDOG upon the alignment accuracy and computational efficiency of our non-rigid registration method. Four different FWHM settings for the Gaussian variances were evaluated, including (12, 11 mm), (8, 7 mm), (4, 3 mm), and (2, 1 mm). The solid (dashed) lines show the results of experiments using 256 × 256 × 124 (128 × 128 × 34) images with voxel size 1.02 × 1.02 × 1.5 (2 × 2 × 5) mm³. Notice that the listed computational efficiency excludes the execution time of the required brain extraction and affine registration processes. [Panel (a) plots the non-rigid registration error (mm) against the DOG thresholding level.]

The correlation coefficient provides another quantitative measurement of whether accurate registration can help the VBM procedure to reveal more accurate statistical inferences. Voxels with higher values in the Gaussian-smoothed ground truth indicate larger volume differences between groups. For an accurate VBM analysis, the positions of these voxels in the t-map obtained from the VBM results should have higher significance. Therefore, a higher correlation coefficient between the ground truth and the t-map implies that the VBM analysis reveals regions with volume differences more accurately.
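In code, this criterion reduces to the Pearson correlation between the smoothed ground-truth difference map and the t-map, computed over all voxels or a brain mask. A minimal sketch, with the function name and mask handling as our own assumptions:

```python
import numpy as np

def map_correlation(ground_truth, t_map, mask=None):
    """Pearson correlation between a ground-truth difference map and a t-map."""
    if mask is None:
        mask = np.ones(ground_truth.shape, dtype=bool)
    g = ground_truth[mask].ravel()
    t = t_map[mask].ravel()
    return np.corrcoef(g, t)[0, 1]

# Toy example with a noisy, linearly related t-map
gt = np.random.rand(16, 16, 16)
print(map_correlation(gt, 2.0 * gt + 0.1 * np.random.rand(16, 16, 16)))
```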

EXPERIMENTAL RESULTS

This section presents our experimental results, including the validation of registration performance and the investigation of the influence of registration accuracy upon VBM analysis.

Comparison of Affine Registration Algorithms

Table 1 lists the performance comparisons for the affine registration algorithms in terms of execution time and mean registration error calculated from all image voxels. For BIRT, the performance indices of the estimation of the rigid parameters and of the affine registration across four multi-resolution levels are listed in the table. The results of our experiment using high- (low-) resolution images showed that BIRT can achieve better accuracy than the other algorithms in 76 (28) s, which is less than 30 s longer than the time required by SPM2 (SPM2 and AIR5). We can observe that the initial rigid transformation estimated by the proposed method is quite accurate: the error was 1.46 (2.62) mm in the alignment of high- (low-) resolution images when only the rigid parameters of the ground truths are considered. It is apparent that the alignment error of the subsequent affine registration decreases rapidly from coarse to fine levels. Figure 10 plots the sorted registration accuracy for all methods. The relatively low increasing rate indicates that BIRT can stably register brain image volumes. This figure also shows that the registration error was smaller than 0.1 mm in about 95% of the cases when using the proposed method in this experiment. From Table 1 and Fig. 10, we can see that good spatial resolution benefits the alignment accuracy of all compared methods at the expense of longer execution time. Nevertheless, BIRT was less sensitive to image resolution than the other algorithms.

Determination of TDOG threshold

Figure 11 illustrates the performance of our non-rigid method with respect to different settings of the Gaussian variances and the TDOG threshold.

TABLE 2. Performance comparison of non-rigid registration algorithms.

Data set        MDM       BIRT Affine      L1        L2        L3        L4        L5        SPM2        HAMMER             DARTEL
SPM2 data^H     17.44/–   2.74/113 (47)    2.67/116  1.55/121  0.77/141  0.33/242  0.19/321  –           0.59/8845 (3766)   0.52/12783 (2145)
HAMMER data^H   14.39/–   2.67/119 (33)    2.53/125  1.96/150  1.22/278  0.65/350  0.34/415  2.07/276    –                  1.92/11480 (1950)
DARTEL data^H   14.49/–   2.86/114 (33)    2.02/119  1.86/141  1.60/251  1.12/311  0.78/367  2.26/298    1.01/8646 (3777)   –
SPM2 data^L     17.49/–   3.75/45 (8)      3.44/46   2.70/71   1.52/98   0.80/110  0.60/121  –           1.17/781 (135)     2.03/11273 (1011)
HAMMER data^L   13.39/–   2.95/43 (8)      2.82/43   2.64/64   2.05/84   1.36/94   1.03/102  2.02/194    –                  2.15/11594 (933)
DARTEL data^L   15.49/–   2.91/32 (8)      2.16/32   2.04/51   1.94/69   1.70/78   1.45/85   2.42/186    2.23/595 (125)     –

The upper and lower parts show the results of the experiments using images with higher and lower spatial resolution, respectively. In each part, the performance indices with the images constructed by SPM2, HAMMER, and DARTEL are listed from top to bottom. The first column lists the mean displacement magnitude (MDM) calculated from the spatial mapping relation of the image pairs. The following six columns list the performance of the proposed BIRT method (affine step and non-rigid levels L1–L5). The last three columns list the registration performance of SPM2, HAMMER, and DARTEL; a dash marks the algorithm used to construct the corresponding data set. The numbers before and after the slash are the registration error and the execution time, respectively. The numbers in parentheses show the preprocessing time when required. Note that the execution times of BIRT are cumulative across the steps. Units: mm/s (s).

^H 256 × 256 × 124 images with voxel size 1.02 × 1.02 × 1.5 mm³.
^L 128 × 128 × 34 images with voxel size 2 × 2 × 5 mm³.


Our experimental results showed that a smaller TDOG threshold, which increases the amount of structure information, generally resulted in better alignment accuracy at the expense of longer execution time. We can observe that both the alignment accuracy and the computational efficiency of our non-rigid method converged when the threshold was smaller than 5. The Gaussian variances also determine the amount of structure information, coarse or fine. With the same thresholding level, it can be observed that the Gaussian variances providing coarse structure information commonly resulted in an efficient registration process at the cost of higher alignment error. Our experimental results also showed that the TDOG threshold had a greater influence upon the registration performance for high-resolution images than for low-resolution images. Compromising between registration accuracy and execution efficiency, we empirically determined the FWHMs of the Gaussian variances σ_1² and σ_2² and the TDOG thresholding level to be 4 mm, 3 mm, and 0, respectively.

Comparison of Non-rigid Registration Algorithms

Table 2 shows the evaluation results of BIRT (both the affine and non-rigid methods), SPM2, HAMMER, and DARTEL, in which the registration error was calculated from the voxels in the brain area. Both the experiments using high- and low-resolution images indicated that the registration error of BIRT steadily decreased across the five deformation levels and achieved better accuracy than the other algorithms in a short period of time. It can be observed that HAMMER, DARTEL, and BIRT performed better for the simulation images constructed by SPM2 compared to the other data sets. The registration accuracy of BIRT at the second (third) deformation level was comparable to that of SPM2 with the high- (low-) resolution HAMMER and DARTEL data sets. The registration errors of HAMMER and BIRT for the images constructed by DARTEL were relatively higher than those for the other data sets. On the other hand, in the experiments using high-resolution images, DARTEL did not perform as well for the alignment of the HAMMER data set as for the SPM2 data set. Figure 12 illustrates the sorted accuracy for the experiments on all data sets. From the increasing rate of the profiles, we can see that the registration processes of all non-rigid methods were quite stable, except for SPM2 with the DARTEL data set and HAMMER with the low-resolution DARTEL data set. Our experimental results also indicated that the alignment of high-resolution images can achieve better accuracy at the expense of longer execution time compared to the registration of low-resolution data. Notice that the execution efficiency of DARTEL did not improve significantly for low-resolution images, possibly because DARTEL imports data at the same spatial resolution, 1.5 mm³.

FIGURE 12. Comparison of the stability and accuracy of non-rigid registration algorithms with the simulated images constructed by (a) SPM2, (b) HAMMER, and (c) DARTEL. The solid (dashed) lines show the results of experiments using 256 × 256 × 124 (128 × 128 × 34) images with voxel size 1.02 × 1.02 × 1.5 (2 × 2 × 5) mm³. [Each panel plots the sorted registration error (mm) against the source–target image pair index.]

Effects of Registration Accuracy on VBM Analysis

Figure 13 shows the results of the VBM analyses using the two different registration algorithms, SPM2 and BIRT. It can be seen that the revealed patterns of volume difference (p < 0.005) using BIRT are more focal in all cases. This figure also shows that the VBM analysis using SPM2 non-rigid registration presents more false volume differences, particularly in the groups with larger difference scales. Comparing the ground truths with the VBM results at various thresholds for the significance level, we obtained the receiver operating characteristic (ROC) curves of each registration algorithm. According to the ROC curves shown in Fig. 14, BIRT is superior to SPM2 with respect to sensitivity and specificity. Table 3 lists the correlation coefficients between the ground truths and the t-maps. BIRT has higher values and thus is superior to SPM2 in all cases.

FIGURE 13. The VBM structural analysis results (p < 0.005) using simulated images with different scales of volume differences, in which the image normalization was performed by using (a) BIRT and (b) SPM2.

DISCUSSION

The proposed affine method registers brain images by optimizing the 12 affine parameters subsequent to the estimation of the rigid transformation. Generally, the rigid parameters are highly coupled and are critical to accurate registration results. Due to the use of structure information, BIRT can robustly estimate the orientation and position differences between MR images of brains, and the estimation error was small in our experiments. Therefore, the rigid transformation can provide a good initial estimate for the optimization of all affine parameters, even if there are large orientation differences, as in the example shown in Fig. 15. Both FLIRT and BIRT can find the correct orientation in such cases. However, FLIRT achieves this goal by searching in a large parameter space at the expense of a longer execution time than that of BIRT. Experimental results showed that the proposed affine method performs well in terms of accuracy, efficiency, and stability.

The determination of the brain MSP is an essential step in our affine registration algorithm. In general, previously published methods4,49,73 can be applied to locate the MSP in the proposed procedure. In this work, we propose a new MSP location method based on the TDOG image. Because of the structural properties of the MSP revealed in the TDOG image, the orientation of the MSP can be estimated by a closed-form solution and the further refinement is very time efficient. As shown in Table 1, the major computational burden of our method comes from the Gaussian filtering procedure in the TDOG image computation. Because the TDOG image is necessary for the subsequent non-rigid registration process, the proposed MSP location method only slightly increases the whole computation, with the centroid estimation, the PCA computation of 3 × 3 matrices, and the following optimization procedure. Moreover, the TDOG image and the PCA procedure reduce the influence of noise and intensity inhomogeneity. Even when the neck and shoulder regions are included in the images and could largely bias the estimation of the brain centroid, the MSP can still be robustly located by the proposed affine method, as in the example shown in Fig. 16.

FIGURE 14. The ROC curves of the simulated VBM structural analyses, in which the red and blue curves represent the results with different scales of volume differences from 2 to 6 mm using BIRT and SPM2, respectively.

TABLE 3. The correlation coefficients between the ground truths and the t-maps obtained from VBM analysis using the BIRT and SPM2 methods.

         Scale of volume difference
         2 mm    3 mm    4 mm    5 mm    6 mm
SPM2     0.61    0.61    0.63    0.60    0.58
BIRT     0.65    0.67    0.74    0.74    0.74

FIGURE 15. Affine registration for brains with a large orientation difference. (a) The ICBM-152 brain template used as the target image. (b) A T1-weighted image used as the source image. The affine-registered result using (c) BIRT, (d) FLIRT, (e) SPM2, and (f) AIR5.

Extracting the brain region beforehand is usually beneficial to registration accuracy because the inter-subject variation of non-brain structures is relatively large.80 Nevertheless, brain extraction is not included as a necessary step in the proposed affine method, for the generality of the methodology. Notice that the estimation of the brain orientation in the proposed affine method is only adequate for the registration of brain images because it uses information specific to brain structures. Therefore, it is not a solution to general rigid registration problems.

The non-rigid registration method in BIRT applies Wendland's RBFs to model the image deformation field hierarchically, in which all RBFs are adaptively deployed near the important structures of the brain. Execution efficiency is an important issue for the alignment of 3-D images, especially for those applications which apply registration intensively, such as VBM analysis. Several characteristics of the proposed method greatly reduce the computational complexity. First, the adaptive placement decreases the number of required RBFs. Second, estimating the coefficients of one RBF at a time greatly reduces the parameter search space and hence accomplishes the registration task rapidly. In this way, the structure topology is always preserved in the modeling of the deformation fields. Third, because of the compact support property of the Wendland RBFs, only a few functions are involved in the transformation calculation for each image point. Fourth, some implementation techniques, such as the use of pyramid images and the lookup tables for the RBFs, are also quite beneficial in further improving the efficiency of image registration. Experimental results demonstrated that BIRT can accurately align brain images in a relatively short period of time compared with SPM2, HAMMER, and DARTEL.

FIGURE 16. Affine registration for brain images which include the neck regions and in which part of the brain tissue is outside the field of view. (a) Source image. (b) Source image transformed using the estimated rigid parameters. (c) Affine-registered source image. (d) Target image. (e) The estimated MSP of (d).

Spatial normalization, which transforms MR brain images into the same stereotaxic space for statistical comparison, is an essential step in VBM analysis. It is commonly believed that registration accuracy greatly affects the analysis results; however, to the best of our knowledge, there is still no quantitative study addressing this issue. Deviations in VBM analysis arise mainly from the smoothing, the voxel-wise statistics, and the alignment errors of image registration. In this work, we constructed MR images of subjects with or without volume differences to investigate the effect of registration errors on the accuracy of VBM analysis. From the results of these well-controlled experiments, we conclude that an accurate registration method enables VBM analysis to precisely report the structural differences between subject groups, in terms of both sensitivity and specificity.
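The voxel-wise statistics step referred to above can be sketched as follows; this is a bare-bones illustration (assuming SciPy and a simple two-sample t-test per voxel), not the SPM-based pipeline used in our experiments.

# Minimal sketch of voxel-wise group comparison in a VBM-style analysis:
# smooth the spatially normalised gray-matter maps, then run a two-sample
# t-test at every voxel.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import ttest_ind

def vbm_t_map(group_a, group_b, fwhm_voxels=8.0):
    """group_a, group_b: lists of 3-D gray-matter maps, already registered
    to the same stereotaxic space. Returns voxel-wise t and p maps."""
    sigma = fwhm_voxels / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> Gaussian sigma
    a = np.stack([gaussian_filter(img, sigma) for img in group_a])
    b = np.stack([gaussian_filter(img, sigma) for img in group_b])
    t_map, p_map = ttest_ind(a, b, axis=0)                      # per-voxel two-sample test
    return t_map, p_map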

BIRT can also be applied to register images of modalities other than T1-weighted MR images. The affine method is applicable to the alignment of brain images in which the MSP and CC have sufficiently large intensity contrast with the neighboring tissues; additional preprocessing is required to invert the intensity contrast of brain tissues if the intensities in the MSP region are higher than those of the CC. The proposed non-rigid registration method, on the other hand, is general and suitable for the registration of images with similar structures.
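One simple form such preprocessing could take (an illustrative assumption, not necessarily the preprocessing used with BIRT) is to reflect the intensities about their maximum within a brain mask, so that the CC becomes brighter than the MSP region.

# Sketch of a global contrast inversion inside a brain mask; bright tissue
# becomes dark and vice versa, while the background is left empty.
import numpy as np

def invert_contrast(image, mask=None):
    img = np.asarray(image, dtype=float)
    region = img if mask is None else img[mask]
    inverted = region.max() - img        # reflect intensities about the in-mask maximum
    if mask is not None:
        inverted[~mask] = 0.0            # keep non-brain voxels at zero
    return inverted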

In conclusion, we have proposed affine and non-rigid image registration methods that utilize the structure information derived from image derivatives. Experimental results indicated that the proposed BIRT methods can rapidly align brain structures with high accuracy compared to several other existing algorithms. An experiment using MR image groups with or without volume differences was also conducted to demonstrate that accurate registration can improve VBM analysis, in terms of both sensitivity and specificity.

ACKNOWLEDGMENTS

This work was supported in part by Taipei Veterans General Hospital, Taiwan, under Grants VGHUST-96-P1-05 and V97ER1-004, and National Science Council, Taiwan, under Grant NSC-98-2752-B-075-001-PAE.

REFERENCES

1. Amit, Y. A nonlinear variational problem for image matching. SIAM J. Sci. Comput. 15:207–224, 1994.
2. Arad, N., N. Dyn, D. Reisfeld, and Y. Yeshurun. Image warping by radial basis functions: application to facial expressions. CVGIP-Graph. Model. Image Process. 56:161–172, 1994.
3. Ardekani, B. A., S. Guckemus, A. Bachman, M. J. Hoptman, M. Wojtaszek, and J. Nierenberg. Quantitative comparison of algorithms for inter-subject registration of 3D volumetric brain MRI scans. J. Neurosci. Methods 142:67–76, 2005.
4. Ardekani, B. A., J. Kershaw, M. Braun, and I. Kanno. Automatic detection of the mid-sagittal plane in 3-D brain images. IEEE Trans. Med. Imaging 16:947–952, 1997.
5. Ashburner, J. A fast diffeomorphic image registration algorithm. NeuroImage 38:95–113, 2007.
6. Ashburner, J., J. L. R. Andersson, and K. J. Friston. High-dimensional image registration using symmetric priors. NeuroImage 9:619–628, 1999.
7. Ashburner, J., and K. J. Friston. Nonlinear spatial normalization using basis functions. Hum. Brain Mapp. 7:254–266, 1999.
8. Ashburner, J., and K. J. Friston. Voxel-based morphometry—the methods. NeuroImage 11:805–821, 2000.
9. Ashburner, J., and K. J. Friston. Why voxel-based morphometry should be used. NeuroImage 14:1238–1243, 2001.
10. Ashburner, J., P. Neelin, D. L. Collins, A. Evans, and K. Friston. Incorporating prior knowledge into image registration. NeuroImage 6:344–352, 1997.
11. Bajcsy, R., and S. Kovačič. Multiresolution elastic matching. Comput. Vis. Graph. Image Process. 46:1–21, 1989.
12. Beyer, J. L., and K. R. R. Krishnan. Volumetric brain imaging findings in mood disorders. Bipolar Disord. 4:89–104, 2002.
13. Bookstein, F. L. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell. 11:567–585, 1989.
14. Bookstein, F. L. "Voxel-based morphometry" should not be used with imperfectly registered images. NeuroImage 14:1454–1462, 2001.
15. Brenneis, C., S. M. Bösch, M. Schocke, G. K. Wenning, and W. Poewe. Atrophy pattern in SCA2 determined by voxel-based morphometry. NeuroReport 14:1799–1802, 2003.
16. Bro-Nielsen, M., and C. Gramkow. Fast fluid registration of medical images. Lect. Notes Comput. Sci. 1131:267–276, 1996.
17. Cachier, P., E. Bardinet, D. Dormont, X. Pennec, and N. Ayache. Iconic feature based nonrigid registration: the PASHA algorithm. Comput. Vis. Image Understand. 89:272–298, 2003.
18. Camara, O., G. Delso, O. Colliot, A. Moreno-Ingelmo, and I. Bloch. Explicit incorporation of prior anatomical information into a nonrigid registration of thoracic and abdominal CT and 18-FDG whole-body emission PET images. IEEE Trans. Med. Imaging 26:164–178, 2007.
19. Camara, O., J. A. Schnabel, G. R. Ridgway, W. R. Crum, A. Douiri, R. I. Scahill, D. L. G. Hill, and N. C. Fox. Accuracy assessment of global and local atrophy measurement techniques with realistic simulated longitudinal Alzheimer's disease images. NeuroImage 42:696–709, 2008.
20. Camara, O., M. Schweiger, R. I. Scahill, W. R. Crum, B. I. Sneller, J. A. Schnabel, G. R. Ridgway, D. M. Cash, D. L. G. Hill, and N. C. Fox. Phenomenological model of diffuse global and regional atrophy using finite-element methods. IEEE Trans. Med. Imaging 25:1417–1430, 2006.
21. Christensen, G. E., R. D. Rabbitt, and M. I. Miller. Deformable templates using large deformation kinematics. IEEE Trans. Image Process. 5:1435–1447, 1996.
22. Chu, C., G. Tan, and J. Ashburner. Improving voxel-based morphometry with diffeomorphic non-linear registration by DARTEL toolbox: conventional SPM normalization vs. DARTEL normalization. In: 14th Annual Meeting of the Organization for Human Brain Mapping, Melbourne, Australia, 2008.
23. Collignon, A., F. Maes, D. Delaere, D. Vandermeulen, P. Suetens, and G. Marchal. Automated multimodality medical image registration using information theory. In: Proceedings of the Information Processing in Medical Imaging, pp. 263–274, 1995.
24. Collins, D. L., C. J. Holmes, T. M. Peters, and A. C. Evans. Automatic 3-D model-based neuroanatomical segmentation. Hum. Brain Mapp. 3:190–208, 1995.
25. Collins, D. L., A. P. Zijdenbos, V. Kollokian, J. G. Sled, N. J. Kabani, C. J. Holmes, and A. C. Evans. Design and construction of a realistic digital brain phantom. IEEE Trans. Med. Imaging 17:463–468, 1998.
26. Cootes, T. F., C. Beeston, G. J. Edwards, and C. J. Taylor. A unified framework for atlas matching using active appearance models. Lect. Notes Comput. Sci. 1613:322–333, 1999.
27. Edwards, P. J., D. L. G. Hill, J. A. Little, and D. J. Hawkes. Deformation for image guided interventions using a three component tissue model. Lect. Notes Comput. Sci. 1230:218–231, 1997.
28. Evans, A. C., D. L. Collins, S. R. Mills, E. D. Brown, R. L. Kelly, and T. M. Peters. 3D statistical neuroanatomical models from 305 MRI volumes. In: Nuclear Science Symposium and Medical Imaging Conference, San Francisco, CA, USA, Vol. 3, pp. 1813–1817, 1993.
29. Evans, A. C., D. L. Collins, and B. Milner. An MRI-based stereotactic atlas from 250 young normal subjects. In: Society for Neuroscience (Abstract), Vol. 18, p. 408, 1992.
30. Ferrant, M., A. Nabavi, B. Macq, F. A. Jolesz, R. Kikinis, and S. K. Warfield. Registration of 3-D intraoperative MR images of the brain using a finite-element biomechanical model. IEEE Trans. Med. Imaging 20:1384–1397, 2001.
31. Fornefett, M., K. Rohr, and H. S. Stiehl. Radial basis functions with compact support for elastic registration of medical images. Image Vis. Comput. 19:87–96, 2001.
32. Freeborough, P. A., and N. C. Fox. Modeling brain deformations in Alzheimer disease by fluid registration of serial 3D MR images. J. Comput. Assist. Tomogr. 22:838–843, 1998.
33. Gholipour, A., N. Kehtarnavaz, R. Briggs, M. Devous, and K. Gopinath. Brain functional localization: a survey of image registration techniques. IEEE Trans. Med. Imaging 26:427–451, 2007.
34. Good, C. D., I. Johnsrude, J. Ashburner, R. N. A. Henson, K. J. Friston, and R. S. J. Frackowiak. Cerebral asymmetry and the effects of sex and handedness on brain structure: a voxel-based morphometric analysis of 465 normal adult human brains. NeuroImage 14:685–700, 2001.
35. Good, C. D., I. S. Johnsrude, J. Ashburner, R. N. A. Henson, K. J. Friston, and R. S. J. Frackowiak. A voxel-based morphometric study of ageing in 465 normal adult human brains. NeuroImage 14:21–36, 2001.
36. Hellier, P., and C. Barillot. Coupling dense and landmark-based approaches for nonrigid registration. IEEE Trans. Med. Imaging 22:217–227, 2003.
37. Hill, D. L. G., P. G. Batchelor, M. Holden, and D. J. Hawkes. Medical image registration. Phys. Med. Biol. 46:R1–R45, 2001.
38. Jenkinson, M., P. Bannister, M. Brady, and S. Smith. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage 17:825–841, 2002.
39. Jenkinson, M., and S. Smith. A global optimisation method for robust affine registration of brain images. Med. Image Anal. 5:143–156, 2001.
40. Karacali, B., and C. Davatzikos. Simulation of tissue atrophy using a topology preserving transformation model. IEEE Trans. Med. Imaging 25:649–652, 2006.
41. Karas, G. B., E. J. Burton, S. A. R. B. Rombouts, R. A. van Schijndel, J. T. O'Brien, P. Scheltens, I. G. McKeith, D. Williams, C. Ballard, and F. Barkhof. A comprehensive study of gray matter loss in patients with Alzheimer's disease using optimized voxel-based morphometry. NeuroImage 18:895–907, 2003.
42. Kim, J. S., J. M. Lee, Y. H. Lee, J. S. Kim, I. Y. Kim, and S. I. Kim. Intensity based affine registration including feature similarity for spatial normalization. Comput. Biol. Med. 32:389–402, 2002.
43. Kowalczyk, P., and M. Mrozowski. Mesh-free approach to Helmholtz equation based on radial basis functions. In: 15th International Conference on Microwave, Radar and Wireless Communication, Warsaw, Poland, pp. 702–705, 2004.
44. Kybic, J., P. Thévenaz, A. Nirkko, and M. Unser. Unwarping of unidirectionally distorted EPI images. IEEE Trans. Med. Imaging 19:80–93, 2000.
45. Lester, H., and S. R. Arridge. A survey of hierarchical non-linear medical image registration. Pattern Recogn. 32:129–149, 1999.
46. Likar, B., and F. Pernuš. A hierarchical approach to elastic registration based on mutual information. Image Vis. Comput. 19:33–44, 2001.
47. Little, J. A., D. L. G. Hill, and D. J. Hawkes. Deformations incorporating rigid structures. Comput. Vis. Image Understand. 66:223–232, 1997.
48. Liu, T. M., D. G. Shen, and C. Davatzikos. Deformable registration of cortical structures via hybrid volumetric and surface warping. NeuroImage 22:1790–1801, 2004.
49. Liu, Y. X., R. T. Collins, and W. E. Rothfus. Robust midsagittal plane extraction from normal and pathological 3-D neuroradiology images. IEEE Trans. Med. Imaging 20:175–192, 2001.

