## 國立交通大學

### 資訊科學與工程研究所

### 碩士論文

### 快速準確的微分同構對稱非剛性腦部磁振造影影像對位演算法

### A Fast and Accurate Algorithm for Diffeomorphic and Symmetric Non-rigid Registration of Brain Magnetic Resonance Images

### 研 究 生：林奭彥

### 指導教授：陳永昇 教授

### 快速準確的微分同構對稱非剛性腦部磁振造影影像對位演算法

### A Fast and Accurate Algorithm for Diffeomorphic and Symmetric Non-rigid Registration of Brain Magnetic Resonance Images

### 研 究 生：林奭彥 Student：Shih-Yen Lin

### 指導教授：陳永昇 Advisor：Yong-Sheng Chen

### 國 立 交 通 大 學

### 資 訊 科 學 與 工 程 研 究 所

### 碩 士 論 文

A Thesis Submitted to the Institute of Computer Science and Engineering, College of Computer Science,

National Chiao Tung University, in Partial Fulfillment of the Requirements

for the Degree of Master

in

Computer Science

December 2012

Hsinchu, Taiwan, Republic of China

### A Fast and Accurate Algorithm for Diffeomorphic and Symmetric Non-rigid Registration of Brain Magnetic Resonance Images

A thesis presented by

### Shih-Yen Lin

to

### Institute of Computer Science and Engineering

### College of Computer Science

in partial fulfillment of the requirements for the degree of

Master in the subject of

### Computer Science

National Chiao Tung University, Hsinchu, Taiwan

A Fast and Accurate Algorithm for Diffeomorphic and Symmetric Non-rigid Registration of Brain Magnetic Resonance Images

Copyright © 2012 by Shih-Yen Lin

**摘要**

於本論文中，我們提出了一個快速、精確、對稱且微分同構的核磁共振影像(MRI)對位演算法。我們利用一個對數－歐幾里德的架構以建立微分同構的模型，在此模型中，微分同構的李代數是由不隨時間變化的速度場表示，此速度場的模型是由多個具緊支撐的 Wendland 徑向基底函數的線性組合構成。最佳化所用的目標函數是由一有對稱性質的相關比及經過權重後的拉普拉斯算子模型組成。我們使用一具區域性及貪婪演算法性質的最佳化架構，藉以增進演算法的速度。在此架構中，我們使用一具對稱性的單純形演算法，來分別且逐一地找出各個徑向基函數的係數。為了在與仿射對位的結果合併時仍能保有整體對稱性，我們設計出一個應用「中途空間」概念的架構。藉由此架構，如果使用具對稱性的仿射對位演算法，則能確保整體對位流程也具對稱性。我們使用一個階層式的架構以增加演算法的速度及準確度，在此架構中，徑向基底函數是以由粗略至精細的順序逐一地被部署及估計。我們利用 LPBA40 數據集的 40 個 T1-權重 MRI 影像的共 1560 對影像對的對位以驗證本論文提出的演算法。經由驗證可得知此演算法完全滿足微分同構的性質，且對稱性的誤差也小於體素寬度。為了驗證準確度，我們利用 Klein 等人於 2010 年提出的驗證架構評估本演算法，並與其他 14 種對位演算法進行比較。驗證結果顯示本演算法的中位目標重疊值高於全部 14 個演算法。另外，在使用 5 層規模級別時，本演算法較 14 種演算法中所有具微分同構性質者快速。


**Abstract**

A fast, symmetric, and diffeomorphic non-rigid registration algorithm for magnetic resonance images (MRIs) is proposed in this work. A log-Euclidean framework is used to model diffeomorphisms, in which the Lie Algebra of a diffeomorphism is modeled by a time-invariant velocity field. The velocity fields are modeled using linear combinations of compactly supported Wendland radial basis functions. A symmetric correlation ratio combined with a weighted Laplacian model is used as the objective function for optimization. We use a greedy local optimization scheme to increase the speed of the algorithm. In this setup, a symmetric downhill simplex method is used to estimate the coefficient of each radial basis function separately and consecutively. To incorporate the result of an initial affine registration while maintaining overall symmetry, a framework utilizing the concept of “halfway space” is devised. This framework ensures overall symmetry if the affine registration algorithm is symmetric. To increase speed and accuracy, we use a hierarchical framework in which the RBFs are deployed and estimated in a coarse-to-fine manner. The proposed algorithm was evaluated using the results of 1560 pairwise registrations of 40 T1-weighted MRIs in the LPBA40 dataset. According to the evaluation results, the proposed algorithm is completely diffeomorphic and has sub-voxel accuracy in terms of symmetry. The accuracy of the proposed algorithm was further compared with 14 registration methods using the evaluation framework of Klein et al. (2010). The median target overlap of the proposed algorithm on the LPBA40 dataset is higher than that of all 14 registration methods. In addition, the proposed algorithm is faster than all diffeomorphic registration methods in the comparison when using 5 scale levels.

**Acknowledgements**

I was fortunate to spend my master's years in the BSP laboratory. Along the way, Professor Yong-Sheng Chen not only guided my studies, from which I benefited greatly, but also showed much care and concern for us in daily life; he is a rare and wonderful advisor. The atmosphere in the laboratory was likewise warm and united. I am most grateful to my groupmate 予涵; only through our countless discussions over these two years did this thesis reach its present form. I also thank 泰成, 旭龍, and 佩綺, classmates of my year who shared both the joys and the hardships, as well as the other members of the laboratory. I further thank Professors 鍾孝文, 陳麗芬, and 王才沛, whose suggestions and corrections made this thesis more thorough and accurate. Finally, I thank my family, who supported me all along the way.

### Contents

List of Figures vii

List of Tables ix

1 Introduction 1

1.1 Background . . . 2

1.2 Small-Deformation Registration Frameworks . . . 3

1.3 Diffeomorphic Registration Frameworks . . . 4

1.4 Symmetric Registration . . . 4

1.5 Motivation . . . 5

1.6 Thesis Overview . . . 6

2 Background Knowledge and Related Works 7

2.1 Diffeomorphisms and Lie Algebra . . . 8

2.2 Models of Diffeomorphisms . . . 10

2.3 Basis Functions . . . 12

3 Methods 13

3.1 Model of Diffeomorphism . . . 14

3.2 Radial Basis Functions . . . 14

3.3 The Objective Function . . . 16

3.3.1 The Likelihood Term . . . 17

3.3.2 The Prior Term . . . 23

3.4 Optimization . . . 24

3.4.1 Local Optimization Scheme . . . 24

3.4.2 Optimization Algorithm . . . 25

3.5 Incorporation of Affine Registrations . . . 26

3.6 A Hierarchical Framework . . . 29

4 Implementation Issues 35

4.1 Solving Partial Differential Equations . . . 36

4.2 Re-sampling Algorithms . . . 37

4.2.1 Nearest Neighbor Interpolation . . . 38

4.2.2 Trilinear Interpolation . . . 38

4.2.3 Tricubic Interpolation . . . 41

4.2.4 Sinc Interpolation . . . 42

4.3 The Evaluation Point Set . . . 44

4.4 The Selection of Bin Width . . . 45

5 Results 49

5.1 Data and Registration . . . 50

5.2 Evaluation of Diffeomorphism . . . 50

5.3 Evaluation of Symmetry . . . 53

5.4 Evaluation of Accuracy . . . 57

5.5 Evaluation of Speed . . . 58

6 Discussion 65

6.1 Validity of Evaluation Results . . . 66

6.2 Robustness of Symmetry . . . 67

6.3 Effect of Basis Functions on Accuracy . . . 69

6.4 Similarity Measure . . . 70

7 Conclusion 75

Bibliography 77

### List of Figures

1.1 An example of folding . . . 3

2.1 A simple illustration of the relationship between the Lie Group of diffeomorphisms and its Lie Algebra . . . 9

2.2 A diffeomorphism is obtained by calculating the flow of its velocity field . . 11

3.1 Wendland’s RBF . . . 15

3.2 T1-weighted images acquired from different scanners . . . 18

3.3 Intensity inhomogeneity of T1-weighted MR images . . . 18

3.4 MR images acquired using different pulse sequences . . . 19

3.5 An illustration of CR . . . 21

3.6 An example of CR using a real image pair . . . 22

3.7 An illustration of symmetric initialization of downhill simplex method. . . . 27

3.8 An illustration of the proposed symmetric framework for incorporating affine registration. . . 30

3.9 An illustration of the proposed hierarchical framework. . . 33

4.1 An illustration of trilinear and tricubic interpolation . . . 40

4.2 Comparison between different interpolation methods . . . 43

4.3 An illustration of finding the evaluation point set . . . 46

5.1 An example of LPBA40 dataset . . . 51

5.2 An example of registration result . . . 52

5.3 TO value for LPBA40 dataset . . . 59

5.4 Visualization of mean TO values for each brain region . . . 60

5.5 Plot of mean TO value in each region for different methods . . . 61

6.1 Poor robustness of symmetry . . . 67

6.2 An example of artifacts produced by RBF approximation . . . 71

6.3 Visualization of a displacement field . . . 71

6.4 Artifacts in a registered image. . . 72

### List of Tables

3.1 Summary of proposed algorithm . . . 34

5.1 Volume Loss values for the proposed algorithm. . . 53

5.2 Error of symmetry for the proposed algorithm. . . 56

5.3 Comparison of speed with other diffeomorphic registration methods. . . 63

### Chapter 1

### Introduction


### 1.1 Background

In a nutshell, image registration is a procedure that warps an image toward another such that the two images match one another as well as possible under a certain criterion. It is an important operation for the analysis of brain magnetic resonance images (MRIs) and, more specifically, the central procedure of spatial normalization. Spatial normalization involves moving multiple images to a single reference frame for further analysis. It enables researchers to analyze intra-subject and inter-subject anatomical variation on the same basis. As a result, spatial normalization has been used in a large number of studies, such as investigations of brain diseases including dementia [9], schizophrenia [16], and epilepsy [25].

Registration can be categorized into two types: affine (or global) registration and non-rigid (or local) registration. Affine registration (Φ(g)) transforms images globally by modelling the transformation as an affine transformation matrix T acting on homogeneous coordinates:

[Φ(g)(x)ᵀ 1]ᵀ = T [xᵀ 1]ᵀ

Affine registrations are simple but can only roughly match two images. In contrast, non-rigid registration models the transformation as a mapping function (Φ : ℝ³ → ℝ³). Non-rigid registrations can account for local deformations and can hence achieve better correspondence between two images. However, non-rigid registration is in general a more difficult task; its speed and accuracy are likely to suffer when the two images in the image pair are very different from one another. In practice, most non-rigid registration algorithms entail affine registration as a pre-processing step for the sake of stability:

Φ = Φ(l) ∘ Φ(g)     (1.1)

Our work mainly focuses on non-rigid registration.

In this thesis, the two images in the image pair used in the registration are referred to as the source image (Is : ℝ³ → ℝ) and the target image (It : ℝ³ → ℝ), respectively, in which the source image denotes the image to be warped and the target image is the static reference image. The registration process is the estimation of the optimal mapping such that the warped source image best matches the target image under a certain cost function



Figure 1.1: An example of folding. (a) The image before transformation. (b) The mapping function along y-axis (non-injective). (c) The image after transformation. The center part of the image is folded together, causing unrecoverable volume loss.

(or objective function):

Φ̄ = arg min_Φ E(Is, It, Φ)     (1.2)

### 1.2 Small-Deformation Registration Frameworks

According to Ashburner [7], registration methods can be divided into two general categories: small-deformation frameworks and large-deformation (or diffeomorphic) frameworks. Most conventional registration algorithms use a small-deformation framework, in which transformations are encoded in the form of displacement fields (d): Φ(x) = x + d(x). Registration algorithms using such a framework include FNIRT [2], the SPM5 normalization algorithm, and BIRT [22]. Algorithms of this type assume transformations to be Euclidean and allow transformations to be processed by Euclidean operations. The main advantage of this type of framework is its simplicity and efficiency in terms of the calculation of the mapping function, since the relationship between the mapping function and the displacement field can be established through a single vector addition. However, it does not naturally guarantee a one-to-one mapping or, in other words, the existence of the inverse transformation. This means that the topology of the image is not necessarily preserved after transformation in this type of framework. Loss of topology leads to artifacts such as folding (see Figure 1.1), which is physically impossible and also causes unrecoverable loss of volume after transformation. In light of this problem, some small-deformation methods attempt to ensure the preservation of topology by imposing regularization constraints [31]. Another major drawback of small-deformation-based methods is that the inverse transformation cannot be found in a straightforward manner. This may be a fatal flaw for applications such as the construction of brain templates [20], in which information about the inverse transformations is necessary. Some works tackled this problem by estimating the forward and inverse transforms simultaneously [27, 30].
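To make the folding criterion concrete: a mapping Φ(x) = x + d(x) folds wherever the Jacobian determinant det(I + ∇d) becomes non-positive. The sketch below is ours, not code from the cited methods; it assumes a displacement field sampled on a voxel grid and checks the criterion with NumPy:

```python
import numpy as np

def jacobian_determinant(disp):
    """det(I + grad(d)) for a displacement field d of shape (X, Y, Z, 3).

    Values <= 0 indicate folding: the mapping x -> x + d(x) is no longer
    one-to-one there and the topology of the image is lost.
    """
    J = np.empty(disp.shape[:3] + (3, 3))
    for i in range(3):
        # np.gradient returns the partial derivatives of d_i along x, y, z
        for j, g in enumerate(np.gradient(disp[..., i])):
            J[..., i, j] = g + (1.0 if i == j else 0.0)  # add the identity
    return np.linalg.det(J)

# Zero displacement is the identity mapping: determinant 1 everywhere.
det = jacobian_determinant(np.zeros((8, 8, 8, 3)))
assert np.allclose(det, 1.0)
```

Reporting the minimum determinant over the volume is a quick diagnostic: any non-positive value means a folding artifact like the one in Figure 1.1 has occurred.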

### 1.3 Diffeomorphic Registration Frameworks

The development of diffeomorphic registration frameworks has provided a more elegant solution to these limitations of small-deformation frameworks [7, 8, 10, 14, 38]. Diffeomorphic frameworks represent the mapping using the well-defined mathematical structure of diffeomorphisms. A diffeomorphism is a smooth and invertible function whose inverse is also smooth. This type of framework ensures a one-to-one relationship for both the mapping function and its inverse, and thus guarantees the preservation of local structures after transformation. Moreover, the inverse transformation (Φ⁻¹) in a diffeomorphic framework is readily available once the forward transformation (Φ) is found (detailed explanations are given in Chapter 2). Diffeomorphisms form a Lie Group under composition of functions, and, as a result, the composition of multiple diffeomorphic transformations is also diffeomorphic. Like other Lie Groups, the space of diffeomorphisms is also a manifold.

### 1.4 Symmetric Registration

With guaranteed invertibility and readily available inverse transformations, diffeomorphic registration facilitates the imposition of another important property: symmetry (or inverse-consistency). Symmetric registrations produce consistent results when the source and the target image are interchanged. Let Φ̄B→A denote the estimated transformation obtained by registering image A to image B; a fully symmetric registration will satisfy the following:

∀A, B:  Φ̄B→A = Φ̄A→B⁻¹     (1.3)

This property is crucial especially for applications in which pairwise registrations are involved. If a registration algorithm is asymmetric, the transformation obtained by registering image A to image B will be inconsistent with that obtained by registering image B to image A. This means the result of the registration depends on the choice of which image in the pair is warped onto the other. This dependency may introduce bias and algorithm-induced artifacts in subsequent analysis. Reuter et al. [27] pointed out that the bias caused by asymmetric registrations in longitudinal image processing may lead to incorrect results and potentially flawed interpretation of outcome measures. Artifacts induced by asymmetric registrations have also been reported in various works [37, 41]. Using symmetric registrations can prevent this kind of problem and produce more valid analysis results.

Some symmetric registration methods using the small-deformation framework have been proposed [27, 36]. The method in [27] transforms the two images in both directions simultaneously in order to achieve symmetry. However, these small-deformation-based methods merely approximate symmetry by including penalties in the optimization algorithm. On the other hand, diffeomorphic frameworks provide the well-formed mathematical model of diffeomorphism. Using this type of framework, the design of symmetric registration algorithms can be greatly simplified, and the accuracy in terms of symmetry can be improved as well.

### 1.5 Motivation

Despite all the advantages of diffeomorphic frameworks, the computation of diffeomorphic transformations is rather expensive compared to the simple displacement-field models of small-deformation frameworks. For example, registrations using DARTEL [7] usually take more than an hour, while some small-deformation-based methods such as BIRT need only a few minutes to achieve similar accuracy. Therefore, our goal in this work is to design a diffeomorphic and symmetric registration algorithm that is free of the long computation time typical of diffeomorphic registration algorithms and, hopefully, also surpasses other diffeomorphic registration algorithms in other aspects.

### 1.6 Thesis Overview

The remainder of this thesis is organized as follows. In Chapter 2, background knowledge and related works are briefly explained, including the concepts of diffeomorphisms, Lie Algebra, and basis functions. Chapter 3 gives a detailed description of the proposed registration algorithm. Chapter 4 explains various implementation details involved in our work. In Chapter 5, the results and the evaluations of the proposed algorithm in different aspects are given. Chapter 6 discusses various issues related to the proposed algorithm, including discussions of the evaluation results as well as potential flaws of the proposed algorithm and their possible solutions. Finally, concluding remarks and our future goals are given in Chapter 7.

### Chapter 2

### Background Knowledge and Related Works


Before giving a detailed description of our algorithm, some prerequisite background knowledge related to our work is explained in this chapter. Diffeomorphisms and Lie Algebra are two key elements of most diffeomorphic registration methods. The theory as well as the implementation of these two concepts is briefly explained here. Additionally, the role of basis functions in registration algorithms is discussed. Previous works related to these topics are also included in this chapter.

### 2.1 Diffeomorphisms and Lie Algebra

As mentioned, diffeomorphisms form a Lie Group under composition of functions, and this group is also a manifold. In other words, the space of diffeomorphisms resembles a Euclidean space only at a local scale. The non-Euclidean nature of diffeomorphisms becomes apparent when the deformation is large. As a result, a diffeomorphism cannot be directly modeled using a simple Euclidean vector field (e.g., a displacement field as in the small-deformation framework). However, diffeomorphisms can be implicitly modelled as Euclidean vector fields by using the concept of Lie Algebra. By doing so, operations on the manifold of diffeomorphisms can be simplified to simple Euclidean operations, and the property of diffeomorphism is preserved after these operations. As a result, the concept of Lie Algebra is utilized in most diffeomorphic registration algorithms.

For a Lie Group (G), its Lie Algebra (g) is the Euclidean tangent space at the identity element. The conversion between a Lie Group and its Lie Algebra is defined by the exponential and the logarithm operators:

exp(·) : g → G     (2.1)

log(·) = exp⁻¹(·) : G → g     (2.2)

Simply put, the logarithm operator projects a point on the manifold of a Lie Group to the Euclidean tangent space, and the exponential operator projects a point on the Euclidean tangent space back to the Lie Group. In our case, a diffeomorphism (Φ), which belongs to a Lie Group G, can be modeled by its Lie Algebra (V), which is a Euclidean vector field:

V = log(Φ)     (2.3)


Figure 2.1: A simple illustration of the relationship between the Lie Group of diffeomorphisms and its Lie Algebra. A diffeomorphism (Φ) is a point on the Lie Group of diffeomorphisms (the gray curved surface). The logarithm operator projects a diffeomorphism to a point in the Euclidean space of the Lie Algebra (the purple plane). Conversely, the exponential operator projects a point in the Lie Algebra back to the Lie Group.

Conversely, a diffeomorphism can be found through exponentiation of its Lie Algebra:

Φ = exp(V)     (2.4)

A simple illustration is given in Figure 2.1. Since a Lie Algebra is a Euclidean space, operations on diffeomorphisms can be simplified to simple Euclidean operations. For instance, combinations of diffeomorphisms can be done through simple vector additions:

exp(aV) ∘ exp(bV) = exp((a + b)V)     (2.5)

And the inverse transformation of a diffeomorphism is simplified to the negation of its Lie Algebra:

Φ⁻¹ = exp(−V)     (2.6)


This simplicity of combination and inversion of transformations is a highly desirable property for applications where operations on transformations are needed.

### 2.2 Models of Diffeomorphisms

The concept of Lie Algebra is used in many diffeomorphic registration algorithms, although the definitions of the Lie Algebra of a diffeomorphism vary. Christensen et al. [14] and Beg et al. [10] modelled the Lie Algebra of a diffeomorphism as a time-dependent velocity field (V(t, ·)), and the exponential operator is modeled as the flow (ϕ) of this velocity field at unit time:

exp(V) = ϕV(·, 1)     (2.7)

ϕV(·, t) denotes the flow of the partial differential equation (PDE) ẋ(t) = V(t, x(t)) at time t.

This model was used in many diffeomorphic registration algorithms. In the work by Christensen et al. [13, 14], diffeomorphisms were modeled as a highly viscous fluid, and the registrations were done by solving PDEs derived from continuum mechanics for deformable bodies. Beg et al. [10] proposed the LDDMM (large deformation diffeomorphic metric mapping) algorithm, which finds the optimal velocity fields that minimize the geodesic distance on the manifold of diffeomorphisms.

A simpler model was also proposed [3, 7]. These works modelled transformations using one-parameter subgroups of diffeomorphisms, which can be modelled as flows of time-independent velocity fields (V(·)) at unit time. The flows involved in the exponentiation are thus simplified to finding the solution of a stationary PDE (ẋ(t) = V(x)). An example is given in Figure 2.2. This model greatly alleviates the computational burden of calculating diffeomorphisms, since a diffeomorphism can be generated by estimating a single time-invariant velocity field instead of a series of velocity fields at different times.
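As a toy illustration of the stationary model (ours, with an arbitrary 1-D velocity field, not code from [3, 7]): exponentiation amounts to integrating ẋ(t) = V(x) from t = 0 to t = 1, and negating the field gives an approximate inverse, mirroring Φ⁻¹ = exp(−V):

```python
import math

def exp_velocity(v, x0, steps=200):
    """Euler integration of the stationary ODE x'(t) = v(x) from t = 0 to 1."""
    x, dt = x0, 1.0 / steps
    for _ in range(steps):
        x += v(x) * dt
    return x

v = lambda x: 0.5 * math.sin(x)            # a smooth 1-D velocity field
y = exp_velocity(v, 1.0)                   # y = exp(V)(1.0)
x_back = exp_velocity(lambda x: -v(x), y)  # exp(-V) approximately inverts it
assert abs(x_back - 1.0) < 1e-2
```

Production implementations integrate a full 3-D field, and typically with more efficient schemes than plain Euler steps; the flow computations used in this thesis are discussed in Chapter 4.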


Figure 2.2: A diffeomorphism is obtained by calculating the flow of its velocity field. (a) A velocity field. (b) The resultant diffeomorphism is the flow of this velocity field at unit time.


### 2.3 Basis Functions

Basis functions have been used to model local deformations in many registration algorithms, regardless of the type of framework used. In these works, mapping functions (either in the form of displacement fields, as in small-deformation frameworks, or velocity fields, as in diffeomorphic frameworks) are estimated by linear combinations of basis functions:

u(x) = Σi αi ρi(x)     (2.8)

where αi is a vector coefficient and ρi is the i-th basis function. u can be either a displacement field or a velocity field, depending on the type of framework used. For registration approaches based on the small-deformation framework, basis functions are used to describe the displacement fields; in diffeomorphic approaches, they are used to estimate the velocity fields. For example, Ashburner [7] parametrizes the velocity fields using combinations of B-spline basis functions. Basis functions can either be deployed regularly [22] or according to other mechanisms (e.g., around mis-registered regions [31] or on user-identified landmarks [21]).

The use of basis functions greatly reduces the number of parameters to be estimated. For example, for a 256 × 256 × 128 MR image, there are a total of about 25 million parameters if no basis function is used. If the mapping function is described by basis functions on a regular grid with spacing W, the number of parameters is reduced by a ratio of W³. Many types of basis functions have been used in previous works on registration, including thin-plate splines, Gaussians, inverse multiquadrics [33], multiquadrics [21], wavelets [1], the discrete cosine transform [4], B-splines [32], and Wendland's RBFs [22, 31].
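The parameter-count argument can be checked directly (the grid spacing W = 8 below is an arbitrary example, not a value prescribed by the cited works):

```python
# One 3-D vector per voxel versus one per basis-function grid node.
nx, ny, nz = 256, 256, 128
dense = nx * ny * nz * 3                # a full displacement/velocity field
W = 8                                   # hypothetical grid spacing, in voxels
on_grid = (nx // W) * (ny // W) * (nz // W) * 3
assert dense == 25_165_824              # ~25 million parameters
assert dense // on_grid == W ** 3       # reduction by a factor of W^3 = 512
```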

### Chapter 3

### Methods


### 3.1 Model of Diffeomorphism

In our work, a framework similar to that of Ashburner [7] and Arsigny et al. [3] is used. A diffeomorphism (Φ) is implicitly parametrized by its Lie Algebra, which is a time-independent velocity field (denoted as V):

V = log(Φ)     (3.1)

Or, conversely, a diffeomorphism is the exponential map of its velocity field:

Φ = exp(V) = log⁻¹(V)     (3.2)

The exponential operator proposed by Arsigny et al. is defined as the flow of the PDE ẋ(t) = V(x) at time 1:

exp(V) = ϕV(·, 1)     (3.3)

ϕV denotes the flow of the velocity field V. The diffeomorphism is obtained through the exponentiation of its Lie Algebra (i.e., its velocity field). Details of the calculation of flows are further discussed in Chapter 4.

### 3.2 Radial Basis Functions

Recalling (2.8), the velocity fields can be modelled as linear combinations of basis functions. In the proposed algorithm, the velocity field is estimated using a linear combination of radial basis functions (RBFs):

V(x) = Σi αi ρi(x)     (3.4)

where:

ρi(x) = ΨSi(‖x − ci‖)     (3.5)

αi is the vector coefficient representing the orientation and the magnitude, and ΨS(‖x − c‖) is a radial basis function centered at c with support extent S. In this work, one of the Wendland Ψ-functions [40], Ψ3,1, is used as the radial basis function.

Figure 3.1: Wendland's RBF (Ψ(r))

Wendland Ψ-functions (shown in Figure 3.1) are piecewise polynomial, compactly supported radial basis functions:

Ψ3,1(r) = (1 − r)⁴(4r + 1)  if 0 ≤ r < 1,  and 0 if r ≥ 1.     (3.6)

The support of Wendland Ψ-functions is restricted to a sphere of unit radius, meaning the RBF Ψ3,1(r) is always zero when r is larger than 1. An additional parameter S can be used to specify a variable support extent:

ΨS(r) = Ψ(r / S)     (3.7)

The characteristics of Wendland Ψ-functions are highly beneficial for the speed of the proposed algorithm in several aspects. First, piecewise polynomial RBFs can be calculated much faster than other types of RBFs such as Gaussians, since only evaluations of polynomials are involved (while a Gaussian requires numerical calculation of exponentials). Second, the support of a Wendland RBF is confined to a local region, so adding a Wendland RBF to the velocity field only affects that local region. In contrast, adding an RBF without compact support (e.g., a Gaussian) affects the whole velocity field, meaning all points in the velocity field have to be included in the calculation. As a result, the use of Wendland RBFs can greatly reduce the time of summing basis functions or optimizing the vector coefficient of a certain RBF.
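Equations (3.6) and (3.7) translate directly into code; the sketch below is ours (the support extent S used in the checks is an arbitrary example), and it illustrates why evaluation is cheap: only a polynomial is computed inside the support, and exactly zero is returned outside it.

```python
def wendland_31(r):
    """Wendland Psi_{3,1}(r) = (1 - r)^4 (4r + 1) for 0 <= r < 1, else 0."""
    return (1.0 - r) ** 4 * (4.0 * r + 1.0) if 0.0 <= r < 1.0 else 0.0

def wendland_scaled(r, S):
    """Psi_S(r) = Psi(r / S): support stretched to a sphere of radius S."""
    return wendland_31(r / S)

assert wendland_31(0.0) == 1.0                         # maximum at the centre
assert wendland_31(1.0) == 0.0                         # zero outside r < 1
assert wendland_scaled(1.25, 2.5) == wendland_31(0.5)  # rescaled support
```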

### 3.3 The Objective Function

Generally, the objective function for a registration algorithm is a measure of how good the parameter set (v) is for registering a dataset (D). The maximization of the objective function can be formulated as the maximum a posteriori (MAP) estimate:

vMAP = arg max_v p(v|D)     (3.8)

where the objective function is in the form of the posterior probability:

p(v|D) = p(D|v) p(v) / p(D)     (3.9)

p(D|v) is the likelihood, which indicates the probability of the data given the parameters. p(v) is the prior probability. p(D) is the probability of the data, which is a constant. Normally, the probabilities are represented in the form of their logarithms (or negative logarithms) in order to simplify the calculation. In this work, the probabilities are represented by their negative logarithms:

− log p(v|D) = − log p(D|v) − log p(v) + log p(D)     (3.10)

The negative logarithm of the probabilities can also be thought of as the energy of the transformation. The constant term log p(D) is neglected since it does not affect the result. We rewrite the objective function as follows:

E(Is, It, exp(V)) = E1(Is, It, exp(V)) + λ E2(exp(V))     (3.11)

E1 is the likelihood term and E2 is the prior term. The likelihood term is a measure which indicates the similarity (or dissimilarity) between the warped source image and the target image. The prior term regularizes the transformation to conform to given prior knowledge. λ is a (usually user-specified) weight indicating the relative importance of the prior term in the overall objective function. The best set of parameters is the one with minimum energy:

vMAP = arg min_v E(Is, It, exp(V))     (3.12)

The definitions of the likelihood and the prior term vary between registration approaches. The rest of this section focuses on different implementations of these terms in previous works as well as the choice of these terms in the proposed algorithm.

### 3.3.1 The Likelihood Term

The likelihood term, often referred to as the similarity measure, is an intensity-based measure which indicates the similarity between two images. To choose a suitable similarity measure for an MRI registration algorithm, some characteristics of MRIs should be taken into consideration. First, there is considerable variability of brightness and contrast among MRIs acquired from different scanners (see Figure 3.2). Second, the phenomenon of so-called inhomogeneity (or non-uniformity) is universal among MRIs (see Figure 3.3): due to technical factors, the intensity of a certain tissue may vary between local regions. As a result, correction of inhomogeneity is an indispensable preprocessing step for most registration algorithms. Third, MRIs acquired using different pulse sequences have distinct properties and intensity modalities. For images of different modalities, the intensity characteristics of tissues can be totally different (see Figure 3.4). For example, the white matter has higher intensity than the gray matter in T1-weighted images but lower intensity than the gray matter in T2-weighted images. In some research settings, researchers need to incorporate results from images of different intensity modalities (e.g., T1-weighted MRIs and CT images) through registration. Since the intensity relationship between images of different modalities is even more complex, the similarity measure used in this kind of multi-modal registration needs to be chosen carefully.

The similarity measure used in an MRI registration algorithm should be able to account for these attributes of MRIs. Various types of similarity measures have been used to account for the differences between images. Similarity measures can be categorized into



Figure 3.2: T1-weighted MR images acquired from different scanners present quite different intensity properties. MR images (a), (b), and (c) were scanned on a Bruker MedSpec S300 3T system, a GE Signa EXCITE 1.5T system, and a Siemens Magnetom 1.5T system, respectively.

Figure 3.3: Intensity inhomogeneity of T1-weighted MR images. The right-posterior tissues highlighted in yellow boxes show obviously higher intensity compared to other regions.



Figure 3.4: MR images acquired using different pulse sequences. The intensity characteristics of tissues vary between different modalities. (a) T1-weighted MR image. (b) T2-weighted MR image. (c) Diffusion-weighted image.

four types of hypothesized intensity relationship: identity, affine, functional, and statistical (for details, see [28]). Similarity measures assuming an identity relationship (e.g., sum of squared intensity differences) cannot account for the variable brightness and contrast or the inhomogeneity of MRIs. Measures assuming an affine relationship (e.g., cross-correlation) have proven to perform decently for uni-modal registration of MRIs [8]. Measures assuming a functional relationship (e.g., correlation ratio [29]) or a statistical relationship (e.g., mutual information) are more well-founded statistically. These two kinds of measures are robust against the variable brightness and contrast of MRIs. Additionally, they are also fairly suitable for multi-modal registration.

In our work, a symmetric version of the correlation ratio (CR) is used as the likelihood term. Correlation ratio is a similarity measure which assumes that the intensities of perfectly matched images conform to a certain functional relationship (i.e., I_t ≈ f(I_s)). Intuitively speaking, for voxels with the same (or similar) intensity in the target image, CR assumes that the corresponding points of these voxels in a perfectly matched source image should have a certain uniform intensity (see Figure 3.5a). Conversely, intensity dispersion of these corresponding points is interpreted as dissimilarity between the source image and the target image (see Figure 3.5b). The similarity can thus be quantified by measuring the degree of intensity dispersion of these corresponding points. Compared to other conventional similarity measures that assume identity or affine relationships, CR is more robust against variations of intensity and contrast as well as against different modalities.

The definition of CR is given in the following. First, the intensity range of the target image is divided into N_B bins: B_i, i = 1, …, N_B. Given an evaluation region Ω with N voxels, X_i denotes the set of voxels in the region Ω of the source image whose corresponding points in the target image belong to the i-th intensity bin B_i:

X_i = {x | x ∈ Ω, I_t(Φ(x)) ∈ B_i} (3.13)

We denote N_i as the number of voxels in X_i. The correlation ratio is formulated as:

CR(I_s, I_t, Φ) = 1 − [ Σ_{i=1}^{N_B} N_i Var(I_s(X_i)) ] / [ N Var(I_s(Ω)) ] (3.14)

Figure 3.6 shows a real example of two T1-weighted MR images. In Figure 3.6a, a badly-matched image pair has no clear functional relationship and thus large variance in each bin. On the other hand, Figure 3.6b shows the intensity relationship of a registered image pair, and we can see a clearer intensity relationship and hence smaller variances.
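As a concrete sketch, the per-bin variance computation of (3.13) and (3.14) can be written as follows. The function assumes the source intensities have already been sampled at the warped voxel positions, and the uniform binning of the target intensity range is an implementation assumption (the thesis defers the binning scheme to Chapter 4):

```python
import numpy as np

def correlation_ratio(src_at_phi, tgt, n_bins=32):
    # CR of (3.14): 1 - sum_i N_i Var(Is(X_i)) / (N Var(Is(Omega))).
    # src_at_phi[k] is the source intensity at the k-th voxel's warped
    # position; tgt[k] is the target intensity at that voxel.
    src = np.ravel(src_at_phi).astype(float)
    tgt = np.ravel(tgt).astype(float)
    total_var = src.var()
    if total_var == 0:
        return 1.0
    # Assign each voxel to an intensity bin of the target image (3.13).
    span = np.ptp(tgt) + 1e-12
    bins = np.minimum(((tgt - tgt.min()) / span * n_bins).astype(int),
                      n_bins - 1)
    weighted_var = sum((bins == i).sum() * src[bins == i].var()
                       for i in range(n_bins) if (bins == i).any())
    return 1.0 - weighted_var / (src.size * total_var)
```

By the law of total variance, the within-bin variance never exceeds the total variance, so this value always lies in [0, 1]; it approaches 1 when the source intensity is a function of the target intensity.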

Although CR is a powerful similarity measure, it is not symmetric. To achieve symmetry while preserving the advantages of CR, a symmetric version of CR inspired by Lau et al. [19] is used. For images A and B, the symmetric correlation ratio (SCR) is formulated as follows:

SCR(A, B, Φ) = SCR(B, A, Φ^{-1}) = [ CR(A, B, Φ) + CR(B, A, Φ^{-1}) ] / 2 (3.15)

The proposed symmetric correlation ratio is similar to that of Lau et al. [19] except for the normalization term. The SCR in our work is normalized to the range [0, 1]. Using the SCR, the likelihood term can be formulated as follows:


Figure 3.5: An illustration of CR. Red circles represent the intensity correspondence between the target image and the warped source image at each point. For each point, its X coordinate indicates its intensity in the target image and its Y coordinate indicates the intensity of its corresponding point in the warped source image. The vertical dotted lines represent the borders of intensity bins, and the blue bar in each bin indicates the range of standard deviation for all points within that bin. (a) A similar image pair leads to smaller variance within each bin, which indicates that the intensities of the image pair conform to a certain functional relationship. (b) A dissimilar image pair has larger variance within each bin.


Figure 3.6: An example of CR using a real image pair. (a) The unmatched image pair does not have a clear intensity relationship, leading to larger intensity variance in each bin. (b) A better-matched image pair has a clearer intensity relationship and smaller intensity dispersion in each bin.


When calculating the correlation ratio, one should note that the width and the number of bins are important factors pertaining to the performance and the validity of CR. The performance of CR may drop significantly if the bin width is either too large or too small. Additionally, the choice of the evaluation region is also important. To increase the efficiency, the evaluation region should be as compact as possible. Also, the calculation of CR(A, B, Φ) and CR(B, A, Φ^{-1}) involves different evaluation regions. For now we simply denote the evaluation region for CR(A, B, Φ) as Ω and the evaluation region for CR(B, A, Φ^{-1}) as Ω^{inv}. Selection of the optimal bin width and the evaluation regions is further discussed in Chapter 4.

### 3.3.2

### The Prior Term

The prior term is a measure which indicates how probable the transformation function is. This term can ensure that the transformation is realistic according to certain prior knowledge, such as smoothness of deformation or preservation of topology. Various types of prior terms have been used in previous registration works, including membrane energy, bending energy, and linear-elastic energy [5].

The prior term used in our work is the membrane energy of the velocity field, which is also known as the Laplacian model:

E_Laplacian(V) = (1/Ω) ∫∫∫ [ (∂V/∂x)^2 + (∂V/∂y)^2 + (∂V/∂z)^2 ] dx dy dz (3.17)

And:

λE_2(exp(V)) = λE_Laplacian(V) (3.18)

Here, Ω is the volume involved in the estimation; in our case, it is the support of the current RBF. The membrane energy is a term that favors smoother deformations. This prior term was also used in other works [7, 22]. Unlike Liu et al. [22], in which the membrane energy was used to regularize the displacement field, this term is used to regularize the velocity field in the proposed algorithm. The user-specified weight λ controls how much the prior term affects the optimization. A large weight leads to results with smoother deformations and lower similarity measures; registration results using a small weight have higher similarity but may be unrealistic. In our work, we set the value of λ empirically to 0.05 according to experiments using T1-weighted MRIs.
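A discrete version of (3.17) can be sketched with forward differences. The (3, X, Y, Z) array layout and the unit voxel spacing are assumptions of this illustration:

```python
import numpy as np

def membrane_energy(v, spacing=1.0):
    # Discrete membrane energy (3.17) of a velocity field.
    # v is a (3, X, Y, Z) array holding the three components of V.
    # The integral is approximated by summing squared forward
    # differences over the grid and normalizing by the volume.
    energy = 0.0
    for axis in (1, 2, 3):                   # d/dx, d/dy, d/dz
        d = np.diff(v, axis=axis) / spacing  # forward differences
        energy += (d ** 2).sum()
    return energy / v[0].size                # divide by the volume
```

A constant field has zero energy, and a field that grows linearly in one direction contributes one squared unit per difference, matching the continuous model.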

### 3.4

### Optimization

This section is divided into two parts. The first part explains how we decompose the optimization into separate optimization problems over different local regions. The second part describes the symmetric optimization algorithm used in our work.

### 3.4.1

### Local Optimization Scheme

In most registration approaches, coefficients of all basis functions are estimated simultaneously. This involves optimization in an extremely high-dimensional parameter space. Even with the use of basis functions to reduce the number of parameters, the number of parameters to be estimated can still easily reach millions. The curse of dimensionality arises when finding the parameter set in such an extremely high-dimensional search space, causing the typically high time complexity of registration algorithms. In light of this problem, Rohde et al. [31] proposed a greedy approach which optimizes the basis functions separately. In their work, 8 RBFs are applied to each identified region of mis-registration, and the vector coefficients of these 8 RBFs are estimated simultaneously. By doing so, the high-dimensional optimization problem is reduced to a sequence of 24-parameter optimization problems. Liu et al. [22] further elevated this idea by optimizing all regularly deployed basis functions separately, which results in a sequence of 3-parameter optimization problems. The results of these works have shown that separating the optimization of parameters yields significant improvement in terms of speed while maintaining decently high accuracy.

A greedy approach similar to those of Rohde et al. and Liu et al. is used in our work. In the proposed algorithm, a cumulative velocity field V_k is used to store the sum of all previously estimated RBFs from the start to the current step k:

V_k = Σ_{i=1}^{k} α_i ρ_i(x) (3.19)

Each RBF is estimated such that the objective function is minimized when this RBF is added to the cumulative velocity field:

α_i = arg min_{α_i} E(I_s, I_t, exp(V_{i-1} + α_i ρ_i(x))) (3.20)

The estimated RBF is then added to the cumulative velocity field to form a new one:

V_i ← V_{i-1} + α_i ρ_i(x) (3.21)

Repeating these steps through all N RBFs yields the velocity field representing the overall transformation:

Φ = exp(V_N) (3.22)

Since RBFs are estimated sequentially, the order of the RBFs in this optimization scheme may affect the accuracy of the final result. In our work, RBFs are estimated in ascending order of their distance from the brain center, so that RBFs closer to the brain center are estimated before those farther from it. This design is based on the fact that the cerebral cortex has higher structural complexity and anatomical variability than the structures near the center of the brain (e.g., the corpus callosum); as a result, registering tissues near the brain surface is more difficult than registering tissues around the brain center. Registering around the brain center first may warp the cortex area to better initial positions, potentially leading to better estimation and thus higher accuracy.
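The greedy accumulation of (3.19)-(3.21), together with the center-out ordering, can be sketched as follows. The scalar line search stands in for the 3-parameter simplex optimization, and `objective` abstracts E(I_s, I_t, exp(V)); both are simplifications for illustration:

```python
import numpy as np

def estimate_velocity_field(rbf_centers, rbf_values, objective, brain_center):
    # Greedy scheme of (3.19)-(3.21): estimate one RBF coefficient at a
    # time and accumulate it into the cumulative velocity field.
    # rbf_values[i] is rho_i sampled on the grid; objective(V) returns
    # the cost for a candidate cumulative field.
    order = np.argsort(np.linalg.norm(rbf_centers - brain_center, axis=1))
    v = np.zeros_like(rbf_values[0], dtype=float)
    for i in order:                       # centers closest to the brain
        # 1-D grid search over a scalar coefficient, a minimal stand-in
        # for the simplex optimization used in the thesis.
        candidates = np.linspace(-1.0, 1.0, 41)
        costs = [objective(v + a * rbf_values[i]) for a in candidates]
        v = v + candidates[int(np.argmin(costs))] * rbf_values[i]
    return v
```

With RBFs of disjoint support and a separable objective, the greedy loop recovers each coefficient independently; with overlapping supports, earlier estimates change the landscape seen by later ones, which is exactly why the ordering matters.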

### 3.4.2

### Optimization Algorithm

The symmetry of the objective function alone does not guarantee the symmetry of the whole registration algorithm. To make the registration algorithm symmetric, all elements in the algorithm must be unbiased toward the order within the image pair, i.e., toward which image in the pair is the target image and which is the source image. Therefore, a symmetric optimization algorithm is necessary in our work.

The optimization algorithm used in our work is a modified version of the downhill simplex method. The downhill simplex method [26] is an optimization algorithm that is also used by BIRT. This method is efficient but rather unstable with respect to the initial positions of the simplex points: a slight change in the initial positions may generate a very different result. Furthermore, the original downhill simplex method initializes the simplex points in a random manner. As a result, BIRT cannot reproduce the same registration result in different trials using the same pair of images, which makes BIRT an unstable algorithm. To solve this problem, we use the gradient of the objective function at the origin of the orientation coefficient (α = [0 0 0]^T) to initialize the simplex points. By doing so, we can ensure the symmetry of the optimization algorithm. An example is shown in Figure 3.7. According to the results of our experiments, this symmetric downhill simplex method has similar speed and accuracy to the original downhill simplex method using random initial simplex points.

### 3.5

### Incorporation of Affine Registrations

In this work, we mainly focused on designing a symmetric non-rigid registration algorithm. However, a symmetric affine registration algorithm is also necessary for a complete symmetric registration framework, since a registration process typically consists of an affine registration followed by a non-rigid registration. Therefore, it is important for the proposed algorithm to incorporate the results of symmetric affine registrations.

We must point out that it is crucial to choose a proper manner of incorporating the result of a symmetric affine registration. An improper combination with the affine registration result will lead to asymmetry of the overall registration framework even if both the affine and non-rigid registration algorithms are symmetric. However, few previous studies have covered the issue of combining the results of affine registration while maintaining symmetry.


Figure 3.7: An illustration of symmetric initialization of downhill simplex method. The initial simplex points are determined by the gradient direction at the origin of the orientation coefficient.

To understand this problem from a more intuitive perspective, we can view the registration process as a processing pipeline which consists of a sequence of compositions of affine and non-rigid transformations. Inversion of a registration result can be seen as the result of reversing the original processing pipeline. To achieve symmetry for the overall registration (provided that all registration algorithms are symmetric), the processing pipeline must be symmetric as well, i.e., the reversed pipeline must be equivalent to the original pipeline.

We give an example in the following to show how an asymmetric processing pipeline affects the symmetry of the final results. For registration from image A to image B, we denote Φ^{(l)}_{A→B} and Φ^{(g)}_{A→B} as the resultant mapping functions of the non-rigid and affine registration, respectively. We assume that these registration algorithms are perfectly symmetric, i.e., Φ^{(g)}_{B→A} = (Φ^{(g)}_{A→B})^{-1} and Φ^{(l)}_{B→A} = (Φ^{(l)}_{A→B})^{-1}. Typically, the registration of a pair of images involves an affine registration followed by a non-rigid registration:

Φ_{A→B} = Φ^{(l)}_{A→B} ∘ Φ^{(g)}_{A→B} (3.23)


This conventional framework is asymmetric. Inverting the backward registration (registering image B to image A) results in a transformation which consists of a non-rigid transformation followed by an affine transformation:

(Φ_{B→A})^{-1} = (Φ^{(l)}_{B→A} ∘ Φ^{(g)}_{B→A})^{-1} = (Φ^{(g)}_{B→A})^{-1} ∘ (Φ^{(l)}_{B→A})^{-1} = Φ^{(g)}_{A→B} ∘ Φ^{(l)}_{A→B} (3.24)

This inversion is not equivalent to the pipeline of the forward registration (Φ_{A→B} = Φ^{(l)}_{A→B} ∘ Φ^{(g)}_{A→B}). Instead, it belongs to a totally different processing pipeline than that of a registration. As a result, symmetry of the final results is not guaranteed even if all registration algorithms are symmetric.

To ensure the overall symmetry when incorporating the result of symmetric affine registrations, a framework inspired by the concept of "half-way space" [27] is used. This concept involves warping both images toward one another by half-way simultaneously instead of warping one image onto the other. A registration framework using half-way space estimates the transformation such that the objective function is minimized after the two images are warped by its square root and the inverse of its square root, respectively:

Φ = arg min_{Φ} E(Φ^{0.5}(I_s), Φ^{-0.5}(I_t), I) (3.25)

Here, I without a lower index denotes the identity mapping function. The overall transformation consists of an affine transformation followed by a non-rigid transformation. Combining these two steps of registration yields:

Φ^{(g)} = arg min_{Φ^{(g)}} E((Φ^{(g)})^{0.5}(I_s), (Φ^{(g)})^{-0.5}(I_t), I)
Φ^{(l)} = arg min_{Φ^{(l)}} E((Φ^{(l)})^{0.5} ∘ (Φ^{(g)})^{0.5}(I_s), (Φ^{(l)})^{-0.5} ∘ (Φ^{(g)})^{-0.5}(I_t), I)
(3.26)

Figure 3.8a gives an intuitive illustration of (3.26). For an affine transformation, the square root of the transformation function can be obtained using the square root of the transformation matrix ([(Φ^{(g)})^{0.5}(x) 1]^T = T^{0.5}[x 1]^T). For a non-rigid diffeomorphic transformation, the square root of a diffeomorphism can be found by dividing its Lie algebra (which is the velocity field in our case) by two ((exp V)^{0.5} = exp(0.5V)).


In practice, the simultaneous warping of two images can be simplified into warping only the source image, based on the hypothesis that E(Φ(I_s), Γ(I_t)) ≈ E(Γ^{-1}(Φ(I_s)), I_t):

Φ^{(g)} = arg min_{Φ^{(g)}} E(I_s, I_t, Φ^{(g)})
Φ^{(l)} = arg min_{Φ^{(l)}} E(I_s, I_t, (Φ^{(g)})^{0.5} ∘ Φ^{(l)} ∘ (Φ^{(g)})^{0.5})
(3.27)

The overall registration result is then (Φ^{(g)})^{0.5} ∘ Φ^{(l)} ∘ (Φ^{(g)})^{0.5}. A simple illustration of this framework is shown in Figure 3.8b. This processing pipeline is symmetric and can thus ensure the overall symmetry of the registration framework:

(Φ_{B→A})^{-1} = ((Φ^{(g)}_{B→A})^{0.5} ∘ Φ^{(l)}_{B→A} ∘ (Φ^{(g)}_{B→A})^{0.5})^{-1}
= (Φ^{(g)}_{B→A})^{-0.5} ∘ (Φ^{(l)}_{B→A})^{-1} ∘ (Φ^{(g)}_{B→A})^{-0.5}
= (Φ^{(g)}_{A→B})^{0.5} ∘ Φ^{(l)}_{A→B} ∘ (Φ^{(g)}_{A→B})^{0.5}
= Φ_{A→B}
(3.28)

Therefore, the optimization formula in (3.20) can be re-formulated as follows:

α_i = arg min_{α_i} E(I_s, I_t, (Φ^{(g)})^{0.5} ∘ exp(V_{i-1} + α_i ρ_i) ∘ (Φ^{(g)})^{0.5}) (3.29)

In addition to maintaining symmetry, this framework also benefits the registration greatly in other aspects. First and foremost, generating warped images involves re-sampling from the original image, which inevitably comes with some degradation of image quality. As a result, using the affine matrix yields better registration accuracy than directly using affine-transformed images, due to better image quality. Our experiments have shown that using initial affine matrices instead of affine-transformed images does yield improvement in registration accuracy. In addition, if affine matrices are used, the final warped images will have better quality since only a single re-sampling is involved instead of two.
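The symmetry argument of (3.28) can be checked numerically on a toy linear model, where both the affine part G and the "non-rigid" part L are modelled as matrices so that composition becomes a matrix product (a simplification for illustration only):

```python
import numpy as np

def msqrt(m):
    # Principal square root via eigendecomposition (assumes a
    # diagonalizable matrix with positive eigenvalues).
    w, v = np.linalg.eig(m)
    return (v @ np.diag(np.sqrt(w)) @ np.linalg.inv(v)).real

# Toy "registration results" A->B: an affine part G and a non-rigid part L.
G = np.array([[1.2, 0.1], [0.0, 0.9]])
L = np.array([[1.0, 0.05], [0.1, 1.1]])

# Perfect symmetry of the individual algorithms: B->A results are inverses.
G_inv, L_inv = np.linalg.inv(G), np.linalg.inv(L)

# Conventional pipeline (3.23)/(3.24): inverting B->A does NOT recover A->B.
conventional_fwd = L @ G
conventional_bwd_inv = np.linalg.inv(L_inv @ G_inv)    # equals G @ L

# Half-way pipeline (3.28): inversion recovers the forward result.
halfway_fwd = msqrt(G) @ L @ msqrt(G)
halfway_bwd_inv = np.linalg.inv(msqrt(G_inv) @ L_inv @ msqrt(G_inv))
```

Because G and L do not commute, the conventional forward and inverted-backward results differ, while the sqrt(G)-conjugated pipeline is its own reverse.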

### 3.6

### A Hierarchical Framework

In our work, a hierarchical framework is used to increase the accuracy and stability. The resolutions of the images and the support extents of the RBFs are divided in a coarse-to-fine manner. In the initial step, transformations are roughly estimated using


Figure 3.8: An illustration of the proposed symmetric framework for incorporating affine registration. (a) This framework is equivalent to transforming both images simultaneously during each step of registration. (b) The simplified framework, which involves only the transformation of the source image.


low-resolution images and few RBFs with wider support extents. These coarse estimations are then gradually refined in later steps through increasingly detailed estimations (using higher-resolution images and more RBFs with smaller support extents). A similar framework is used in other previous works [22, 31].

Applying the concept of the hierarchical framework to (3.4), the overall velocity field can be formulated as follows:

log(Φ) = V_{L,K_L} = Σ_{j=1}^{L} Σ_{i=1}^{K_j} α_{j,i} ρ_{j,i}(x) (3.30)

where:

ρ_{j,i}(x) = Ψ_{S_{j,i}}(‖x − c_{j,i}‖) (3.31)

L specifies the total number of scale levels, and K_j denotes the number of local diffeomorphisms at level j. V_{l,m} is the cumulative sum of all RBFs from the start to the m-th RBF at level l:

V_{l,m} = Σ_{j=1}^{l-1} Σ_{i=1}^{K_j} α_{j,i} ρ_{j,i}(x) + Σ_{i=1}^{m} α_{l,i} ρ_{l,i}(x) (3.32)

The center and the support extent of each RBF at each level are specified in the following manner. First, the algorithm finds a minimum bounding cube of the union of the brain regions of the source and the target image (B_s ∪ B_t), so that this bounding cube encompasses all brain tissues when applied to each of the images. Brain regions are generally given by the brain masks from the skull-stripping preprocessing (in our experiment, we simply used the skull-stripped brain images as the brain masks, i.e., B_s = {x | I_s(x) > 0}, B_t = {x | I_t(x) > 0}, since skull-stripped images intrinsically contain the information of their brain masks). The width of this bounding cube is denoted as W. At each scale level j, the minimum bounding cube is divided into 8^{j-1} equally-sized cubes with width W/2^{j-1}, and an RBF is deployed at the center of each cube. The support extent of each RBF at each level is set to be proportional to the edge length of its corresponding cube with a fixed ratio r, i.e., ∀i, S_{j,i} = rW/2^{j-1}. r must be chosen such that the support extent of every RBF completely covers its corresponding cube. A larger r results in larger overlaps between


different RBFs and higher accuracy, since small-scale local deformations can propagate to each other to represent larger deformations.

Notice that in some circumstances the evaluation point set of a certain weighted RBF may not contain any brain tissue in either the source or the target image. In these cases, the estimated weighted RBF will not provide any valid information, so it is better to skip the estimation when no brain tissue is in either B_s(Ω) or B_t(Ω). In the proposed algorithm, the estimation of a local diffeomorphism is skipped if its evaluation point set (of both the forward and inverse transformations) does not intersect with the union of brain regions B_s ∪ B_t. Also, brain images are re-sampled hierarchically using a pyramid representation. The number of levels of the image pyramid (p) is a user-specified parameter. For each scale level l, the downsampling factor (M) of the image used for estimation is:

M = 2^{p-l+1} (3.33)

Along with the variable support extents, the number of voxels within the support of ρ_{j,i} is approximately:

∀i, card(supp(ρ_{j,i})) ≈ (4/3)π(rW·2^{-p})^3 (3.34)

According to (3.34), the number of voxels in the support of any RBF is independent of the scale level. As a result, using down-sampled images increases the speed of the optimizations, and the stability is increased by this multi-resolution set-up as well.
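The per-level bookkeeping of (3.30)-(3.34) can be sketched as follows; the dictionary layout is merely illustrative:

```python
import math

def level_parameters(W, r, p, L):
    # Grid and support parameters for each scale level j = 1..L.
    levels = []
    for j in range(1, L + 1):
        cube_width = W / 2 ** (j - 1)
        levels.append({
            "level": j,
            "n_rbfs": 8 ** (j - 1),           # cubes (and RBFs) at level j
            "cube_width": cube_width,
            "support": r * cube_width,         # S_{j,i} = r W / 2^(j-1)
            "downsample": 2 ** (p - j + 1),    # M of (3.33)
        })
    return levels

def support_voxel_count(W, r, p):
    # Approximate voxels inside any RBF support, eq. (3.34);
    # note that it is independent of the scale level.
    return 4.0 / 3.0 * math.pi * (r * W * 2 ** (-p)) ** 3
```

Dividing each level's support extent by its downsampling factor gives the same value, rW/2^p, at every level, which is exactly the level-independence claimed by (3.34).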

The vector coefficient of each RBF is estimated through the optimization algorithm consecutively. For an RBF ρ_{j,i}(x), the optimization estimates the optimal vector coefficient α_{j,i} that leads to the minimum objective function when this weighted RBF is added to the cumulative velocity field V_{j,i-1}:

α_{j,i} = arg min_{α_{j,i}} E(I_s, I_t, (Φ^{(g)})^{0.5} ∘ exp(V_{j,i-1} + α_{j,i} ρ_{j,i}) ∘ (Φ^{(g)})^{0.5}) (3.35)

After optimization, the estimated weighted RBF is then added to the cumulative velocity field:

V_{j,i} ← V_{j,i-1} + α_{j,i} ρ_{j,i}(x) (3.36)


Figure 3.9: An illustration of the proposed hierarchical framework. The yellow circles represent the supports of RBFs. In this framework, the transformation is first roughly estimated using low-resolution images and large RBFs. As the level increases, the transformation is gradually refined by using more detailed images and smaller RBFs. The RBFs are placed along grid points. The estimation of the coefficient of an RBF is skipped if the evaluation point set of this RBF does not contain any brain tissues.

RBFs are estimated and added to the cumulative velocity field consecutively from lower levels to higher levels. In addition, RBFs in the same level are estimated in ascending order of their distance from the brain center, as described in Section 3.4.1. This is repeated until the user-specified scale level is reached. The proposed hierarchical framework is illustrated in Figure 3.9.


Table 3.1: Summary of the proposed algorithm

V ← 0
Find the minimum bounding cube of B_s ∪ B_t and its width W
For scale level j = 1 : L
    Calculate the RBF support extent at level j: ∀i, S_{j,i} = rW/2^{j-1}
    For RBF number i = 1 : K_j
        Find the evaluation point sets Ω_{j,i} and Ω^{inv}_{j,i}
        If (B_s ∪ B_t) ∩ (Ω_{j,i} ∪ Ω^{inv}_{j,i}) ≠ ∅:
            α_{j,i} = arg min_{α_{j,i}} E(I_s, I_t, (Φ^{(g)})^{0.5} ∘ exp(V + α_{j,i}ρ_{j,i}) ∘ (Φ^{(g)})^{0.5})
            V ← V + α_{j,i}ρ_{j,i}
The result of the non-rigid registration is Φ^{(l)} = exp(V)

### Chapter 4

### Implementation Issues

There are some detailed issues that should be treated carefully when implementing the algorithm. For the readability of this thesis, these issues are discussed in this separate chapter.

### 4.1

### Solving Partial Differential Equations

The exponentiation of the velocity field involves solving the PDE of the flow numerically. In general, the numerical solution of a PDE is found by accumulating the results of many small consecutive time steps of length h. For example, the simplest way of solving PDEs, the Euler method, is as follows (for simplicity, we denote x^(t) as the location of x at time t):

x^(0) ← x
x^(t+h) = x^(t) + hV(t, x^(t)) + O(h^2)
φ_V(x, 1) ← x^(1)
(4.1)

The velocity field in the equations above has two variables, which indicate time and location respectively. Since the velocity field in our case is time-invariant, the first variable indicating time can be ignored. The O(h^2) term in (4.1) indicates the numerical error; it shows how rapidly the error drops when the step size decreases. A larger power in the error term is more desirable since the error decreases more quickly when using smaller step sizes.

The O(h^2) error term of the Euler method makes it impractical in applications with strong demands for speed and accuracy. Many other methods can solve PDEs with higher accuracy than the Euler method. Our implementation uses the modified midpoint method, which achieves an O(h^3) error term with only n + 1 steps; the Euler method and the fourth-order Runge-Kutta method are also available in our algorithm. The modified midpoint method is as follows:

x^(0) ← x
x^(h) = x^(0) + hV(t, x^(0))
x^(t+h) = x^(t-h) + 2hV(t, x^(t)) + O(h^3)
φ_V(x, 1) ← (1/2)(x^(1) + x^(1-h) + hV(1, x^(1)))
(4.2)


The fourth-order Runge-Kutta method can achieve O(h^5) with 4n steps. It is formulated as follows:

x^(0) ← x
k_1 = hV(t, x^(t))
k_2 = hV(t, x^(t) + k_1/2)
k_3 = hV(t, x^(t) + k_2/2)
k_4 = hV(t, x^(t) + k_3)
x^(t+h) = x^(t) + k_1/6 + k_2/3 + k_3/3 + k_4/6 + O(h^5)
φ_V(x, 1) ← x^(1)
(4.3)

A smaller step size leads to higher accuracy but increases the computation time. By default, our implementation uses the modified midpoint method with 16 time steps to solve the PDE. This proved adequate to achieve sub-voxel accuracy of symmetry in the proposed algorithm.
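The exponentiation step can be sketched as follows for a stationary velocity field integrated with the modified midpoint method of (4.2); `velocity` is any callable mapping a position to a velocity vector:

```python
import numpy as np

def exp_field_midpoint(velocity, x0, n=16):
    # Integrate dx/dt = V(x) from t = 0 to 1 with the modified
    # midpoint method, as used to exponentiate the stationary
    # velocity field.
    h = 1.0 / n
    x_prev = np.asarray(x0, dtype=float)
    x_curr = x_prev + h * velocity(x_prev)            # starting Euler step
    for _ in range(n - 1):                             # z_{m+1} = z_{m-1} + 2h V(z_m)
        x_prev, x_curr = x_curr, x_prev + 2 * h * velocity(x_curr)
    # Final smoothing step combining the last two estimates.
    return 0.5 * (x_curr + x_prev + h * velocity(x_curr))
```

For a constant field the flow is an exact translation, and for the linear field V(x) = x it converges to the exact answer e·x at the stated O(h^3) rate.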

### 4.2

### Re-sampling Algorithms

Image re-sampling is a frequently-used operation which greatly affects the quality of the result. All images in our implementation are only discrete approximations of continuous scalar fields; as a result, finding the intensity value at a certain point usually involves some form of re-sampling (since, in most cases, this point does not fall on the lattice points with specified intensity values). Similarly, the flow fields as well as the mapping functions in our implementation are only discrete approximations of continuous vector fields. As a result, the implementation of registration algorithms usually entails a large amount of re-sampling. For example, each step of the calculation of the flow involves finding the velocity at a given point through re-sampling of the flow field. Also, the calculation of CR requires the intensity at each warped lattice point (recalling (3.13) and (3.14)); the image intensity at each of these points is calculated through image re-sampling since they usually do not fall on the fixed lattice points. In addition, after the registration is done, the warped source image is generated through re-sampling from the original source image. This dependency on re-sampling implies that the speed and accuracy of our algorithm are also highly dependent on the choice of the re-sampling algorithm. Our goal is to choose a re-sampling algorithm which is as accurate as possible while not degrading the speed of our algorithm too much.

Four kinds of re-sampling algorithms were implemented and tested in our work: nearest-neighbor interpolation, trilinear interpolation, tricubic interpolation, and sinc interpolation. These interpolation methods are explained and discussed in the following subsections.

### 4.2.1

### Nearest Neighbor Interpolation

Nearest-neighbor interpolation chooses the intensity of the voxel nearest to the point to be re-sampled as its estimated intensity. Here, the actual image intensity at fixed lattice points is denoted as I(·), and the estimated image intensity obtained through nearest-neighbor interpolation is denoted as I_nn(·). Assuming the point to be re-sampled is at location u ∈ R^3, and the location of the nearest voxel to u is denoted as NN(u), nearest-neighbor interpolation can be formulated as:

NN(u) = arg min_{x∈Ω} ‖x − u‖_2
I_nn(u) = I(NN(u))
(4.4)

This re-sampling algorithm preserves the original voxel intensities and, due to its simplicity, has the highest speed among all interpolation algorithms. However, image quality degrades dramatically after re-sampling with this algorithm (see Figure 4.2b). As a result, the nearest-neighbor algorithm is seldom used for re-sampling MR images, flow fields, or mapping functions. However, it is very useful when warping label images since it preserves the original label numbers.

### 4.2.2

### Trilinear Interpolation

Trilinear interpolation involves consecutive linear interpolations along each dimension using the 2^3 = 8 adjacent voxels surrounding the point to be re-sampled. For a point to be re-sampled u = [x, y, z]^T, we denote x_0 as the X coordinate of the closest X-grid below u and x_1 as that of the closest X-grid above u; y_0, y_1, z_0, and z_1 are defined in similar manners. The 8 adjacent voxels are denoted in the form u_{a,b,c} = [x_a, y_b, z_c]^T, where a, b, c ∈ {0, 1} (e.g., u_{0,1,0} = [x_0, y_1, z_0]^T, u_{1,0,1} = [x_1, y_0, z_1]^T). The interpolated intensity at point u using trilinear interpolation is denoted as Ī_l(u). Trilinear interpolation performs consecutive linear interpolations along the X, Y, and Z dimensions. First, the interpolation is done along the X dimension to obtain the interpolated intensities at u_{b,c} = [x, y_b, z_c]^T, b, c ∈ {0, 1}:

∀b, c ∈ {0, 1}, Ī_l(u_{b,c}) = (1 − w_x)I(u_{0,b,c}) + w_x I(u_{1,b,c}) (4.5)

where:

w_x = (x − x_0)/(x_1 − x_0) (4.6)

Then, the interpolation is done along the Y dimension to obtain the interpolated intensities at u_c = [x, y, z_c]^T, c ∈ {0, 1}:

∀c ∈ {0, 1}, Ī_l(u_c) = (1 − w_y)Ī_l(u_{0,c}) + w_y Ī_l(u_{1,c}) (4.7)

where:

w_y = (y − y_0)/(y_1 − y_0) (4.8)

Finally, the interpolation along the Z dimension yields the interpolated intensity at point u:

Ī_l(u) = (1 − w_z)Ī_l(u_0) + w_z Ī_l(u_1) (4.9)

where:

w_z = (z − z_0)/(z_1 − z_0) (4.10)

An illustration of trilinear interpolation is shown in Figure 4.1a. Trilinear interpolation is a commonly used re-sampling method due to its simplicity and speed. However, it comes with larger degradation in image quality when compared with other more complex interpolation algorithms. Since trilinear interpolation involves the calculation of weighted averages of intensities, re-sampled images using this interpolation method are all "blurred" to some extent (see Figure 4.2c). This drawback makes trilinear interpolation less favorable if high image quality is demanded or multiple consecutive image re-samplings are necessary.


Figure 4.1: An illustration of trilinear and tricubic interpolation. Both interpolation methods involve interpolating along each dimension sequentially. (a) Trilinear interpolation. (b) Tricubic interpolation.


### 4.2.3

### Tricubic Interpolation

Similar to trilinear interpolation, tricubic interpolation involves consecutive cubic interpolations along the X, Y, and Z dimensions. Unlike trilinear interpolation, which requires only the lattice points from the closest two grids in each dimension, tricubic interpolation needs the four closest grids along each dimension in order to calculate the re-sampled intensity. Here, x_0 and x_1 are defined in the same manner as in trilinear interpolation, while x_{-1} and x_2 are denoted as the second closest X-grids below and above u, respectively. y_{-1}, y_2, z_{-1}, and z_2 are defined in similar manners. Tricubic interpolation requires 4^3 = 64 lattice points, which are denoted as u_{a,b,c} = [x_a, y_b, z_c]^T, where a, b, c ∈ {−1, 0, 1, 2}. Using these lattice points, tricubic interpolation applies one-dimensional cubic interpolation along the X, Y, and Z dimensions consecutively. In the one-dimensional case, given I_{-1}, I_0, I_1, and I_2 as the intensity values at positions −1, 0, 1, and 2 respectively, the cubic interpolation at point x ∈ [0, 1] is formulated as:

CINT_x(I_{-1}, I_0, I_1, I_2) = (1/2) [−x^3 + 2x^2 − x, 3x^3 − 5x^2 + 2, −3x^3 + 4x^2 + x, x^3 − x^2] · [I_{-1}, I_0, I_1, I_2]^T (4.11)

Tricubic interpolation can thus be formulated as:

∀b, c ∈ {−1, 0, 1, 2}:
u_{b,c} ≡ [x, y_b, z_c]^T, Ī_c(u_{b,c}) = CINT_{(x−x_0)/(x_1−x_0)}(I(u_{-1,b,c}), I(u_{0,b,c}), I(u_{1,b,c}), I(u_{2,b,c}))
u_c ≡ [x, y, z_c]^T, Ī_c(u_c) = CINT_{(y−y_0)/(y_1−y_0)}(Ī_c(u_{-1,c}), Ī_c(u_{0,c}), Ī_c(u_{1,c}), Ī_c(u_{2,c}))
Ī_c(u) = CINT_{(z−z_0)/(z_1−z_0)}(Ī_c(u_{-1}), Ī_c(u_0), Ī_c(u_1), Ī_c(u_2))
(4.12)

An illustration of tricubic interpolation is shown in Figure 4.1b. Tricubic interpolation yields higher accuracy and less degradation in image quality when compared with trilinear interpolation. The re-sampled image using tricubic interpolation has less blurring artifact than that using trilinear interpolation (see Figure 4.2d).
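The one-dimensional kernel of (4.11) is the Catmull-Rom cubic; a direct transcription:

```python
import numpy as np

def cubic_interp(i_m1, i_0, i_1, i_2, x):
    # 1-D cubic (Catmull-Rom) interpolation of (4.11) at x in [0, 1].
    # i_m1..i_2 are the sample values at positions -1, 0, 1, 2.
    w = 0.5 * np.array([-x**3 + 2 * x**2 - x,
                         3 * x**3 - 5 * x**2 + 2,
                        -3 * x**3 + 4 * x**2 + x,
                         x**3 - x**2])
    return w @ np.array([i_m1, i_0, i_1, i_2], dtype=float)
```

The kernel reproduces the two middle samples exactly at x = 0 and x = 1, so interleaving it along X, Y, and Z as in (4.12) interpolates (rather than merely approximates) the lattice values; unlike the trilinear weights, it can overshoot slightly between samples, which is the source of its sharper results.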