National Chiao Tung University

Institute of Computer Science and Engineering

Automatic Co-registration of EEG-MRI data

using RGB-D Camera

Student: 鄭晴懌

Advisor: Prof. 陳永昇

National Chiao Tung University

Institute of Computer Science and Engineering

Master's Thesis

A Thesis
Submitted to Institute of Computer Science and Engineering
College of Computer Science
National Chiao Tung University
in Partial Fulfillment of the Requirements
for the Degree of Master
in
Computer Science

June 2013

Hsinchu, Taiwan, Republic of China

Abstract

In this study we developed an automated system for the co-registration of MRI and EEG data. EEG is a non-invasive method that provides brain activity signals with high temporal resolution but lacks anatomical information, whereas MRI provides excellent spatial resolution. Integrating MRI and EEG therefore combines the information of both, and can be used in studies that estimate the sources of brain activity. Previous co-registration methods had drawbacks such as requiring additional marker points and being susceptible to operator effects. To solve these problems, we used Kinect for Windows as the interface to align the MRI coordinate system with the digitizer coordinate system.

The proposed method consists of two parts. The first part aligns the Kinect and MRI coordinate systems using facial information. The second part uses the corresponding coordinates, in the Kinect and digitizer coordinate systems, of markers pasted on the face beforehand to find the transformation between the two systems. Through our co-registration system, the electrode positions recorded with the digitizer end up in the same coordinate system as the MRI. Because the recorded electrode positions on the EEG cap are not flush against the scalp, we adjust them with a thickness compensation. Finally, three evaluation methods are used to verify the co-registration results and to compare them with previous methods.

Ten subjects participated in the experiment, and each performed it twice. Of the three evaluation methods, the mean sensor registration error was 1.67 mm and the mean cross registration error was 1.83 mm. Comparison with the results of other works shows that our system is sufficiently accurate, repeatable, and fast for MRI-EEG co-registration.


still some drawbacks, such as the need for additional markers and manual error. Thus, using Kinect for Windows, which extracts color and depth information at the same time, as the interface, we aligned the MRI coordinate system with the digitizer coordinate system to resolve these problems.

In the proposed method, a complete EEG-MRI co-registration is achieved in two steps: (i) aligning the Kinect coordinate system with the MRI coordinate system using facial information; (ii) estimating the transformation by matching corresponding markers pasted on the face, acquired in both the Kinect and digitizer coordinate systems. With our system, the locations of the electrodes recorded by the digitization device can be found in the same coordinate system as the MRI. Because of the thickness of the electrodes on the EEG cap, we adjusted them with a thickness compensation. Finally, three error estimations were used to evaluate the co-registration results and compare them with previous methods.

The experiment was performed with ten subjects, and each one was measured twice. The mean residual error of the sensors was 1.67 mm, and the mean residual error of the cross was 1.83 mm. Compared with other methods, these results show that our system is sufficiently accurate, repeatable, efficient, and labor-saving to be used to assist neuroscience studies.

Acknowledgements

First, I would like to thank my advisor, Prof. 陳永昇, for his guidance in research and in life over these two years, from which I learned a great deal and grew considerably. I also thank Prof. 陳煥宗 and Prof. 莊仁輝 for their suggestions and guidance during my oral defense, and for their careful review and corrections of my thesis, from which I benefited greatly. The MRI scans in my experiments relied on the help of the members of Prof. 陳麗芬's laboratory, and I sincerely thank them, as well as all the subjects who participated in the experiments. I was fortunate to spend these two years in the BSP laboratory; I thank all my labmates for working and learning together with me, and for their care and help. Finally, I thank my family and friends for their company, encouragement, and support, and everyone who has helped me.


Contents

List of Figures vii

List of Tables ix

1 Introduction 1

1.1 Introduction . . . 2

1.2 Related work . . . 4

1.3 Kinect for Windows . . . 9

1.4 Thesis overview . . . 11

1.5 Thesis organization . . . 11

2 Methods 13

2.1 Overview . . . 14

2.2 Segmentation of scalp points from MRI . . . 14

2.3 Extraction of RGB-D data . . . 16

2.4 Depth threshold . . . 18

2.5 Pre-processing . . . 18

2.5.1 Skin color filter . . . 18

2.5.2 Outlier removal . . . 19

2.5.3 Down-sampling . . . 20

2.6 Co-registration . . . 22

2.6.1 Modified coherent point drift algorithm . . . 22

2.7 Estimation of centroid of markers . . . 23

2.8 Estimation of transformation relation . . . 25

2.9 Thickness compensation . . . 27

2.10 Error estimation . . . 27

3 Experimental Results 31

3.1 Experimental Setup . . . 32

3.1.1 Subjects . . . 32

3.1.2 MRI . . . 32


List of Figures

1.1 The co-registration of EEG and MRI data. . . 2

1.2 The result of registration of EEG and MRI data. . . 5

1.3 The concept of distance-based alignment. . . 5

1.4 The inhomogeneous digitized points. . . 6

1.5 The misalignment and the result of co-registration. . . 7

1.6 The 3D handheld laser scanner. . . 7

1.7 The result of EEG-MRI co-registration and sensor labeling. . . 8

1.8 Basic operations of Kinect for Windows . . . 9

1.9 The visible depth values. . . 10

1.10 The concept of our system. . . 11

2.1 Flow chart of automatic co-registration of EEG-MRI data using RGB-D camera. . . 15

2.2 Flow chart of segmentation of scalp points from MRI. . . 16

2.3 The result of segmentation of scalp points from MRI. . . 17

2.4 Schematic diagram of radius outlier removal. . . 20

2.5 The results of preprocessing. . . 21

2.6 The different level of down-sampling facial points. . . 22

2.7 Schematic diagram of registration. . . 23

2.8 Coherent point drift algorithm. . . 24

2.9 HSL color space. . . 25

2.10 The analysis of facial surface colors including markers. . . 26

2.11 The result of estimation of centroid of markers. . . 29

2.12 Schematic diagram of thickness compensation. . . 30

3.1 Siemens MAGNETOM Trio . . . 32

3.2 Polhemus FastTrak . . . 33

3.3 Blue marker . . . 34

3.4 The Kinect-derived facial points and MRI-derived facial points. . . 36

3.5 The result of co-registration of Kinect-derived facial points and MRI-derived facial points. . . 37


List of Tables

1.1 The specification of Kinect for Windows. . . 10

3.1 The results of the residual error of the face. . . 35

3.2 Comparison of the residual error of the face with other methods. . . 40

3.3 The results of the residual error of the sensor. . . 41

3.4 Comparison of the residual error of the sensor with Koessler et al. . . 44

3.5 The results of the residual error of the cross. . . 45

4.1 Comparison of Kinect and Kinect for Windows. . . 48

4.2 The results of using Kinect to do registration. . . 48

4.3 Example of the influence of co-registration by down-sampling. . . 49

4.4 In case 1, the combinations of points giving the minimal error for each number of points used, and the median error for each number of points. . . 51

4.5 The rank of smaller errors. . . 53

4.6 Top 15 smallest errors calculated from combinations of four points. 55


Chapter 1

Introduction

Generally, the co-registration of EEG and MRI data includes two steps. First, the positions of the EEG sensors in three-dimensional space are obtained with a digitization device. Second, a co-registration method transforms the positions of the sensors from the digitizer coordinate system to the MRI coordinate system, so that brain activity can be localized in the MRI coordinate system. Figure 1.1 illustrates the co-registration of EEG and MRI data. Accurate anatomical localization of brain function depends on precise co-registration; otherwise it affects subsequent steps such as forward modeling or the inverse problem.

Figure 1.1: The co-registration of EEG and MRI data. By co-registration method that transforms the positions of sensors from digitizer coordinate system to MRI coordinate system.

In the conventional method, co-registration required manually determining at least three landmark positions on the subject's head, and finding the same positions in both the MRI and digitizer coordinate systems [15]. This method would produce


differences across operators and sessions. Another method uses external fiducial markers visible in MRI to find the same positions in the two coordinate systems without operator judgement [14]. However, wearing fiducial markers is uncomfortable for participants, and relatively large markers produce larger transformation errors. Therefore, some techniques were developed to improve on these drawbacks, such as using MRI-visible electrodes [10, 17] and distance-based methods [9, 11, 12, 16]. Using MRI-visible electrodes requires no alignment procedure, so this approach has no transformation error between the two coordinate systems. However, because a new MRI scan is needed for every experiment, MRI-visible markers lack flexibility and economic benefit, and are not suitable for retrospective studies. In addition, it remains to be proven whether the markers work in the MRI scanning environment. Another approach uses the digitization device to record over five hundred points on the face and minimizes the distance between those facial points and the scalp extracted from the MR images to find the best transformation; however, recording the facial points costs much time and labor, the uniformity of the point distribution affects the final result, and different operators also affect the registration result.

In order to tackle the problems mentioned above, our study used Kinect for Windows, which extracts depth and color information at the same time, as the interface to co-register the MRI and digitizer coordinate systems. In this way, the points recorded by the digitization device can be located precisely in the MR images through coordinate transformation. Our method needs no additional markers and little operator effort, and the alignment result is not affected by an uneven distribution of points. The time spent recording facial points is also saved. Furthermore, medical resources are saved because every subject only has to undergo an MR scan once.

1.2 Related work

Estimation of the accuracy of a surface matching technique for registration of EEG and MRI data

This system aligned the head surface recorded by a 3D ultrasound localizing device with the segmentation of the MRI data. The centroids of the digitized head surface and the MRI-derived head surface were matched to give a good initial alignment. The surface matching technique then minimizes a cost function until it converges to a minimum; the cost function is given in Equation 1.1. The accuracy was estimated by calculating the residual error of the surface and the intra-subject and inter-subject variability. Figure 1.2 shows the result of registration of EEG and MRI data. [9]

\[
\frac{1}{n}\sum_{i=1}^{n} d\bigl(T(P_i)\bigr) \qquad (1.1)
\]

where \(d\) is the relative distance, \(P_i\) (\(i = 1, 2, \dots, n\)) are the 3D-scanned head-surface points, and \(T\) is the rigid transformation that maps the 3D-scanned head-surface points into the coordinate system of the 3D image.

Automatic alignment of EEG/MEG and MRI data sets

This system uses distance-based alignment (see Figure 1.3), minimizing the Euclidean distance transform, and uses 3D geometrical moments for the initial alignment. The system was tested on inhomogeneously digitized points (see Figure 1.4), and the results showed that the most uniform distributions of points give better results. The average distance between the points and the MRI head surface is calculated as the error estimation. [12]


Figure 1.2: The result of registration of EEG and MRI data. Digitised points of the head surface superimposed on the points of MRI-derived head surface before (left) and after (right) surface matching. (Figure source: [9])


Figure 1.4: The inhomogeneous digitized points. Subject a: the most uniform distribution of points. (Figure source: [12])

Validation of a method for coregistering scalp recording locations with 3D structural MR images

The segmentation of MRI scalp points proceeded from outside in until a threshold was reached, and the user could adjust the threshold value to extract the MRI scalp points. The sensor locations and facial points were obtained with a Polhemus. The MRI-derived points are registered to the digitized points, and the algorithm uses a Marquardt-Levenberg optimization routine that minimizes the sum of squared distances between the two data sets. After alignment, this study used two non-rigid procedures: (i) scaling, which uses independent linear scaling factors to reduce registration errors, and (ii) scalp forcing, which replaces the digitized points with the closest segmented MRI scalp points. Figure 1.5 shows the misalignment and the result of co-registration. [16]

EEG-MRI Co-registration and sensor labeling using a 3D laser scanner

This system acquires the positions of the EEG sensors and the facial points using a 3D handheld laser scanner (see Figure 1.6), and the scalp points were obtained from the MR images. A pre-alignment was given by three fiducial landmarks. For co-registration, this system used the iterative closest point algorithm, which minimizes the distance between the two sets of points. This study uses two error estimations, which are


Figure 1.5: The misalignment and the result of co-registration. (Figure source: [16])

the residual error of the face and the residual error of the sensor, to evaluate the accuracy of the co-registration results. Figure 1.7 shows the result of EEG-MRI co-registration and sensor labeling. [11]


Figure 1.7: The result of EEG-MRI co-registration and sensor labeling. The MRI and head model co-registered with the scanned face and the sensor positions. (Figure source: [11])

1.3 Kinect for Windows

Kinect is a widely available, inexpensive device that provides color, depth, and audio information. Depth data are obtained by light coding, which mainly uses laser speckle to code the objects in the space. The Kinect sensor projects infrared light uniformly into the environment through its IR emitter. When a rough object is irradiated with the infrared light, random reflection spots, called speckle, form on it. The encoded light is read by the CMOS sensor and decoded by the chip to create a depth map. Figure 1.8 shows the basic operations of the Kinect sensor.

Figure 1.8: Basic operations of Kinect for windows. (Figure source: [2])

Microsoft released the latest Kinect, called "Kinect for Windows", on February 1st, 2012. It was designed to connect to a computer so that developers can write Kinect programs under Windows. In addition to the functions of the previous Kinect for Xbox 360, more updates were added. The most noticeable is "Near mode", which enables the depth camera to accurately view objects as close as 40 centimeters in front of the device. These changes also make the sensor less sensitive to distant objects: in near mode it can view objects up to two meters away precisely, but beyond that, out to three meters, the depth values are unstable. Figure 1.9 shows the visible depth values.


Figure 1.9: The depth values that Kinect for Windows can view. (Figure source: [2])

Table 1.1: The specification of Kinect for Windows.

Sensor: color and depth-sensing lenses; voice microphone array; tilt motor for sensor adjustment.

Field of View: horizontal field of view 57 degrees; vertical field of view 43 degrees; physical tilt range ±27 degrees; depth sensor range 0.8 m - 4 m; near mode depth range 0.4 m - 3 m.

Data Streams: 320x240 16-bit depth @ 30 frames/sec; 640x480 32-bit color @ 30 frames/sec; 16-bit audio @ 16 kHz.

Skeletal Tracking System: tracks up to 6 people, including 2 active players; tracks 20 joints per active player; ability to map active players to Xbox LIVE Avatars.

Audio System: echo cancellation system enhances voice input; speech recognition in multiple languages.

1.4 Thesis overview

Figure 1.10 shows the concept of our system. Our system uses Kinect for Windows, which can extract a point cloud carrying depth and color information at the same time, as the interface to co-register the MRI and digitizer coordinate systems. First, we extract points on the facial surface from the MR images and align them with the facial points extracted by Kinect for Windows. Second, the coordinate correspondences of the markers are used to find the relation between the digitizer and Kinect coordinate systems. We can then localize the EEG sensors in the MRI coordinate system.

Figure 1.10: The concept of our system.

1.5 Thesis organization

The remaining part of this thesis is organized as follows. Chapter 2 presents the methods used in our system. The experimental setup and results are given in Chapter 3. Discussions related to our methods are in Chapter 4, and Chapter 5 gives the conclusions.


Chapter 2

Methods

On the other hand, we apply the digitization device to record the centroids of the markers on the face and the electrodes, then use color information to find the marker regions and calculate the locations of the marker centroids automatically. In this way we find the transformation relation between the digitizer and Kinect coordinate systems using the two sets of coordinate correspondences. However, because of the thickness of the electrodes on the EEG cap, we adjust them with the thickness compensation. Finally, the points recorded by the digitization device can be located precisely in the MRI slices through coordinate transformation. The main structure of our system is shown in Figure 2.1.

This chapter is divided into the following parts: (i) segmentation of scalp points from MRI; (ii) extraction of RGB-D data; (iii) depth threshold; (iv) pre-processing; (v) co-registration; (vi) estimation of centroids of markers; (vii) estimation of the transformation relation; (viii) error estimation.

2.2 Segmentation of scalp points from MRI

We used a gray-value threshold to find the border between the scalp and the surrounding air, scanning from the left-right, right-left, anterior-posterior, and up-down directions, and took the union of the points from all scanning directions as the scalp points (S). Following [16], the gray-value threshold was initially defined as a fraction of the largest gray value (threshold = 0.04). In this procedure the operator is allowed to adjust the threshold manually in order to find the correct scalp points: the operator can observe whether the segmented scalp points are smooth enough (i.e., do not expose the inner structure of the brain) and decide whether to adjust the threshold.


Figure 2.1: Flow chart of automatic co-registration of EEG-MRI data using RGB-D camera.


Figure 2.2: Flow chart of segmentation of scalp points from MRI.

After segmentation, we obtain about 80,000 - 140,000 scalp points over the whole head, and we extract roughly the front one-third (about 40,000 - 60,000 points) as the facial region points (F) to align with the facial points extracted by Kinect for Windows, in order to reduce the time cost of the registration and increase its accuracy. The flow chart is shown in Figure 2.2.
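To make the scan concrete, the following is a minimal C++ sketch of the border search, not the thesis code: it assumes the MRI volume is a dense 8-bit array in x-fastest order and shows only the left-right and right-left scans; the other directions are analogous, and their union gives S.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Voxel { int x, y, z; };

// For every (y, z) row, walk inward from both x borders and keep the first
// voxel brighter than threshold * maxGray as a scalp-boundary point.
std::vector<Voxel> scalpFromXScans(const std::vector<uint8_t>& vol,
                                   int nx, int ny, int nz,
                                   double threshold = 0.04) {
    uint8_t maxGray = 0;
    for (uint8_t v : vol) maxGray = std::max(maxGray, v);
    const double cut = threshold * maxGray;

    auto at = [&](int x, int y, int z) { return vol[(z * ny + y) * nx + x]; };
    std::vector<Voxel> scalp;
    for (int z = 0; z < nz; ++z)
        for (int y = 0; y < ny; ++y) {
            for (int x = 0; x < nx; ++x)           // left-to-right scan
                if (at(x, y, z) > cut) { scalp.push_back({x, y, z}); break; }
            for (int x = nx - 1; x >= 0; --x)      // right-to-left scan
                if (at(x, y, z) > cut) { scalp.push_back({x, y, z}); break; }
        }
    return scalp;
}
```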

2.3 Extraction of RGB-D data

We extracted the data from Kinect for Windows with OpenNI [4] and used the OpenNI transformation functions to obtain each point and its x, y, z coordinate values. In addition, OpenNI provides a calibration function to correct the viewing angles between the two cameras and align the depth and color images. We required the room lighting to be bright and balanced when extracting the RGB-D data, since the color information is later used to estimate the marker centroids accurately.


Figure 2.3: The result of segmentation of scalp points from MRI. (a) The left shows the MR image, and the right shows the segmented scalp points superimposed on the MR image. (b) The facial points viewed from three angles.

2.4 Depth threshold

First, because the subject sat close in front of the Kinect for Windows, we can use a depth threshold to separate the head point cloud (H) from the data set. As the threshold, we take all points whose depth lies between the smallest depth value in the point cloud and 0.15 meters behind it as head points:

\[
H = \{\, x \mid x_z < \min\nolimits_z + 0.15\ \mathrm{m},\ x \in \text{points extracted by Kinect} \,\} \qquad (2.1)
\]
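A minimal sketch of this filter (hypothetical Point type; not the thesis code):

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Point { float x, y, z; };

// Keep every point whose depth lies within 0.15 m of the closest point;
// with the subject seated directly in front of the camera this isolates
// the head from the background (Eq. 2.1).
std::vector<Point> depthThreshold(const std::vector<Point>& cloud) {
    float minZ = std::numeric_limits<float>::max();
    for (const Point& p : cloud) minZ = std::min(minZ, p.z);

    std::vector<Point> head;
    for (const Point& p : cloud)
        if (p.z < minZ + 0.15f) head.push_back(p);
    return head;
}
```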

2.5 Pre-processing

We want the facial points to align with the MRI-derived facial points without any noise or outliers affecting the registration result. After the depth threshold, points of the EEG cap and marker regions still remain in the Kinect-derived head points. We therefore used color information to extract the points of the facial region, and several methods to remove outliers and noise. Below, we explain how to obtain clean Kinect-derived facial points in two parts: (i) skin color filter and (ii) outlier removal. Figure 2.5 shows the result of the preprocessing.

2.5.1 Skin color filter

We take advantage of the color information to remove the EEG cap and markers from the point cloud. We transform the color information of the point cloud from the RGB to the YCbCr color space (see Equation 2.2). Following the skin color region given by Chai and Ngan [7], we obtain the facial region by extracting the points within the skin color range in YCbCr space (Equation 2.3).

\[
\begin{pmatrix} Y \\ C_b \\ C_r \end{pmatrix}
=
\begin{pmatrix}
0.299 & 0.587 & 0.114 \\
-0.168 & -0.331 & 0.5 \\
0.5 & -0.418 & -0.081
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
+
\begin{pmatrix} 0 \\ 128 \\ 128 \end{pmatrix}
\qquad (2.2)
\]

\[
\mathrm{Skin} =
\begin{cases}
1, & \text{if } 77 < C_b < 127 \text{ and } 133 < C_r < 173 \\
0, & \text{otherwise}
\end{cases}
\qquad (2.3)
\]
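Equations 2.2 and 2.3 translate directly into a small filter; a sketch under the assumption that points carry 8-bit RGB (hypothetical ColorPoint type, not the thesis code):

```cpp
#include <cstdint>
#include <vector>

struct ColorPoint { float x, y, z; std::uint8_t r, g, b; };

// Cb/Cr from Eq. 2.2 and the skin test of Eq. 2.3 (Chai and Ngan [7]).
static bool isSkin(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    const double cb = -0.168 * r - 0.331 * g + 0.5   * b + 128.0;
    const double cr =  0.5   * r - 0.418 * g - 0.081 * b + 128.0;
    return cb > 77 && cb < 127 && cr > 133 && cr < 173;
}

std::vector<ColorPoint> skinFilter(const std::vector<ColorPoint>& cloud) {
    std::vector<ColorPoint> face;
    for (const ColorPoint& p : cloud)
        if (isSkin(p.r, p.g, p.b)) face.push_back(p);
    return face;
}
```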

2.5.2 Outlier removal

Kinect for Windows typically generates point clouds of varying point density. Additionally, measurement errors lead to sparse outliers, which corrupt the registration result even more. Some of these irregularities can be resolved by performing a statistical analysis of each point's neighborhood and removing the points that do not meet a certain criterion: points dispersed like outliers or noise, i.e., points that do not have enough neighboring points within a certain range, are removed.

Statistical outlier removal

The concept of statistical outlier removal is based on the distribution of distances between each point and its neighbors in the data set. For each point x, we calculate the mean distance d(x) from it to its two hundred nearest neighbor points:

\[
d(x) = \underset{y \in k\mathrm{NN}(x)}{\mathrm{avg}} \bigl( \lVert x - y \rVert_2 \bigr) \qquad (2.4)
\]

We then calculate the global mean \(\bar{d}\) and standard deviation SD of the mean distances of all points in the data set. Assuming the data set is Gaussian distributed, a point is considered an outlier and removed if its mean distance is larger than the global mean plus one standard deviation. [5]
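Because the system already uses the PCL library [5], this step corresponds to PCL's StatisticalOutlierRemoval filter. The following is a sketch of how it could be invoked with the parameters described above (200 neighbors, a one-standard-deviation cutoff); an illustration, not the thesis code:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>

// Remove points whose mean distance to their 200 nearest neighbors exceeds
// the global mean of those distances by more than one standard deviation.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr
removeOutliers(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud) {
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr filtered(
        new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZRGB> sor;
    sor.setInputCloud(cloud);
    sor.setMeanK(200);            // neighborhood size for the mean distance
    sor.setStddevMulThresh(1.0);  // cutoff at global mean + 1 SD
    sor.filter(*filtered);
    return filtered;
}
```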


Figure 2.4: Schematic diagram of radius outlier removal. If the threshold T is one, the star point will be removed. If the threshold T is two, the star and triangle points will be removed.

Erosion

Because the defined skin color range is overly broad, regions connected to the edge of the face, such as the brim of the EEG cap, may be extracted even though they are not skin. Therefore, the operator may judge the extracted point cloud and eliminate the edge of the face by erosion.

2.5.3 Down-sampling

In order to speed up the registration between the Kinect and MRI coordinate systems using facial points, we uniformly reduce the Kinect-derived facial points before registration. The down-sampling method is to choose pixels at intervals of a given threshold T in both the horizontal and vertical directions; the operator can adjust T to control the number of facial points used for co-registration. Figure 2.6 shows different levels of down-sampled facial points.
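A sketch of this grid decimation, assuming the organized 640x480 cloud layout that OpenNI delivers (row-major, invalid pixels marked by zero depth); hypothetical, not the thesis code:

```cpp
#include <vector>

struct Point { float x, y, z; };

// Keep every T-th pixel of an organized width x height cloud in both image
// directions; invalid depth pixels (z == 0) are skipped.
std::vector<Point> downSample(const std::vector<Point>& organized,
                              int width, int height, int T) {
    std::vector<Point> sparse;
    for (int v = 0; v < height; v += T)
        for (int u = 0; u < width; u += T) {
            const Point& p = organized[v * width + u];
            if (p.z != 0.0f) sparse.push_back(p);
        }
    return sparse;
}
```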


Figure 2.5: The results of preprocessing. (a) The original image. (b) After the depth threshold. (c) After the skin color filter. (d) After outlier removal.

2.6 Co-registration

However, it needs a good initial value and tends to fall into local minima, giving a poor registration result. For our subsequent registration, the centroids of mass of the Kinect-derived facial points and the MRI-derived facial points are matched to give an appropriate initial value. We use a modified coherent point drift (CPD) algorithm [13] for the registration.

2.6.1 Modified coherent point drift algorithm

Unlike ICP, the coherent point drift algorithm uses soft assignment of correspondences, establishing correspondences between all combinations of points. It treats the alignment of two data sets as a probability density estimation problem, where we fit the Kinect-derived facial points as the Gaussian mixture model (GMM) centroids and the MRI-derived facial points as the data points. At the optimum, the two data sets become aligned

Figure 2.6: The different levels of down-sampled facial points. The leftmost shows the original facial points (29,770 points). The second from the left has 7,443 points; the middle, 3,296; the fourth, 1,855; and the rightmost, 1,185.

and the correspondence is obtained from the maximum of the GMM posterior probability for a given data point. Myronenko and Song [13] derive a closed-form solution of the maximization step of the EM algorithm; the algorithm is shown in Figure 2.8. Since the Kinect-derived face points contain slight noise and outliers, we had to reduce their poor effect on the registration and speed up the computation: in each iteration, Kinect-derived face points whose average minimal distance is larger than three standard deviations are treated as outliers and excluded from the registration process, which increases the speed of convergence. After registration, the MRI-derived scalp points are aligned with the Kinect-derived face points in the same coordinate system and the transformation relation is known.
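The outlier-trimming modification can be sketched separately from the CPD internals. The hypothetical helper below, run once per iteration, drops Kinect points whose nearest-neighbor distance to the MRI point set exceeds the mean by three standard deviations (brute force for brevity; a k-d tree would normally be used):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { float x, y, z; };

static float dist2(const Point& a, const Point& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Keep Kinect points whose nearest-neighbor distance to the MRI set lies
// within mean + 3 * SD of all such distances (assumes both sets non-empty).
std::vector<Point> trimOutliers(const std::vector<Point>& kinect,
                                const std::vector<Point>& mri) {
    std::vector<float> nn(kinect.size());
    for (std::size_t i = 0; i < kinect.size(); ++i) {
        float best = dist2(kinect[i], mri[0]);
        for (const Point& m : mri) best = std::min(best, dist2(kinect[i], m));
        nn[i] = std::sqrt(best);
    }
    double mean = 0, sq = 0;
    for (float d : nn) { mean += d; sq += double(d) * d; }
    mean /= nn.size();
    const double sd = std::sqrt(std::max(0.0, sq / nn.size() - mean * mean));

    std::vector<Point> kept;
    for (std::size_t i = 0; i < kinect.size(); ++i)
        if (nn[i] <= mean + 3.0 * sd) kept.push_back(kinect[i]);
    return kept;
}
```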

2.7 Estimation of centroids of markers

First, we convert the point cloud's color information from the RGB to the HSL color space (see Figure 2.9). HSL stands for hue-saturation-lightness; the hue value does not change with the different lighting of each capture, so we adopt this color space to extract the blue regions. By testing the range of hue values, we can easily extract the markers from the point cloud: we extract the marker regions by subtracting a predefined face-only hue histogram from the observed histogram (see the red block in Figure 2.10(b)). The extracted hue range also includes white, so we then separate the blue region from white by analyzing the histograms of the red and green channels, where clear boundaries distinguish blue from white.

Figure 2.7: Schematic diagram of registration. Given two point sets, find the transformation relation that maps one point set to the other.


Figure 2.10(d) and (e) show the histograms of the red and green channels.

Figure 2.9: HSL color space. (Figure source: [1])

After using the color information to extract the points of the blue regions, we use connected components to determine which points belong to which region, and calculate each region's centroid by averaging the coordinate values of its points. Finally, we obtain the coordinate values of the marker centroids. Figure 2.11 shows the result.
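A sketch of the grouping step, with simple BFS clustering over a distance radius standing in for image-grid connectivity (hypothetical, not the thesis code):

```cpp
#include <cstddef>
#include <queue>
#include <vector>

struct Point { float x, y, z; };

static float dist2(const Point& a, const Point& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Group the blue points into connected components (points closer than
// `radius` are connected) and return the centroid of each component.
std::vector<Point> markerCentroids(const std::vector<Point>& blue, float radius) {
    std::vector<int> label(blue.size(), -1);
    std::vector<Point> centroids;
    for (std::size_t seed = 0; seed < blue.size(); ++seed) {
        if (label[seed] != -1) continue;
        std::queue<std::size_t> q;
        q.push(seed);
        label[seed] = static_cast<int>(centroids.size());
        double sx = 0, sy = 0, sz = 0;
        int n = 0;
        while (!q.empty()) {
            std::size_t i = q.front(); q.pop();
            sx += blue[i].x; sy += blue[i].y; sz += blue[i].z; ++n;
            for (std::size_t j = 0; j < blue.size(); ++j)
                if (label[j] == -1 && dist2(blue[i], blue[j]) < radius * radius) {
                    label[j] = label[seed];
                    q.push(j);
                }
        }
        centroids.push_back({float(sx / n), float(sy / n), float(sz / n)});
    }
    return centroids;
}
```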

2.8 Estimation of transformation relation

In the experimental procedure, we digitize the centroids of the markers stuck on the subject's face, and our procedure gives the coordinate values of the same centroids in the Kinect coordinate system. We take advantage of the correspondence between the two sets of coordinate values to find the transformation matrix. Our approach uses the least-squares method, minimizing the sum of squared differences between corresponding positions:

\[
\hat{T} = \arg\min_{T} \sum_{i=1}^{N} \lVert x_i - T p_i \rVert^2 \qquad (2.7)
\]

where \(x_i \in\) centroid points in the Kinect coordinate system and \(p_i \in\) digitized centroid points, for i = 1 to N.

First, we place the coordinate values of the corresponding points in respective matrices. Second, we calculate the pseudo-inverse of the digitizer's matrix and left-multiply it by the Kinect's matrix. This yields the transformation matrix from the digitizer coordinate system to the Kinect coordinate system. With this transformation matrix, the positions of the digitized electrodes can be converted into the Kinect coordinate system.
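A sketch of this pseudo-inverse solve using Eigen (a hypothetical helper; the thesis does not name its linear-algebra routine). Stacking the digitized points in homogeneous form, so that T is a 3x4 affine transform including translation, is our assumption:

```cpp
#include <Eigen/Dense>

// Least-squares transform T (3x4) mapping digitized points to Kinect points,
// per Eq. 2.7: stack points column-wise, X = T * P, then T = X * pinv(P).
Eigen::Matrix<double, 3, 4>
estimateTransform(const Eigen::MatrixXd& digitized,  // N x 3
                  const Eigen::MatrixXd& kinect) {   // N x 3
    const int n = static_cast<int>(digitized.rows());
    Eigen::MatrixXd P(4, n), X(3, n);
    for (int i = 0; i < n; ++i) {
        P.col(i) << digitized(i, 0), digitized(i, 1), digitized(i, 2), 1.0;
        X.col(i) = kinect.row(i).transpose();
    }
    // Pseudo-inverse of the digitizer matrix, left-multiplied by the
    // Kinect matrix, as described in the text.
    Eigen::MatrixXd pinvP = P.completeOrthogonalDecomposition().pseudoInverse();
    return X * pinvP;  // 3 x 4
}
```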


Figure 2.10: The analysis of facial surface colors including markers. (a) The facial surface. (b) The histogram of hue values of the facial surface from (a), including the blue region; the blue region is in the red block. (c) The histogram of hue values of the facial surface from (a), skin region only. (d) The histogram of the red channel. (e) The histogram of the green channel.

2.9 Thickness compensation

Because the electrodes on the EEG cap have a certain thickness, the electromagnetic digitizer records each sensor position on top of its support rather than on the scalp. To correct for the electrode thickness so that the digitized sensor coordinates lie on the scalp surface, the positions of the digitized sensor points are adjusted. After the coordinate transforms, all the point clouds (MRI-derived scalp points, Kinect-derived scalp points, and digitized sensor points) are in one coordinate system and aligned together. For each digitized sensor point, we find the closest MRI-derived scalp point and its neighbor points and calculate the normal of those points. To determine the correct sign of the normal, we calculate the inner product between the normal and the vector from the centroid of the MRI-derived scalp points to the closest scalp point; if the result is positive we flip the sign of the normal, otherwise we keep it. In this way, the normal is oriented inward toward the scalp. After determining the correct sign, we translate the digitized sensor point along the normal by a distance equal to the electrode thickness to obtain the scalp sensor point. Figure 2.12 is a schematic diagram.
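A sketch of the sign correction and translation; the surface normal (estimated from the neighborhood of the closest scalp point) is assumed to be given, and the types are hypothetical:

```cpp
struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Move a digitized sensor point onto the scalp along the sign-corrected
// surface normal at the closest MRI-derived scalp point.
//   sensor    : digitized sensor point Pa
//   closest   : closest MRI-derived scalp point Pb
//   centroid  : centroid of all MRI-derived scalp points Pc
//   normal    : unit normal estimated from Pb's neighborhood
//   thickness : measured electrode thickness
Vec3 compensate(Vec3 sensor, Vec3 closest, Vec3 centroid,
                Vec3 normal, double thickness) {
    // Flip the normal when it points the same way as the centroid-to-surface
    // vector, so that it points inward toward the scalp.
    if (dot(sub(closest, centroid), normal) > 0)
        normal = {-normal.x, -normal.y, -normal.z};
    return {sensor.x + thickness * normal.x,
            sensor.y + thickness * normal.y,
            sensor.z + thickness * normal.z};
}
```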

As for the electrode thickness, we measured the thickness of all the electrodes on the EEG cap with the localization device ten times. First, we placed a sheet of paper, with an electrode-sized circle and its centroid drawn on it, on the table, and recorded the centroid coordinate on the paper with the localization device. Second, each electrode on the EEG cap was aligned with the circle on the paper and its position coordinate recorded. Third, we calculated the distance between the two position coordinates and averaged those distances as the electrode thickness.

2.10 Error estimation

To evaluate the accuracy of the registration and compare it with previous methods, we follow Koessler et al. [11] and use three criteria. The first is the residual error of the face (REF), the average Euclidean distance between each Kinect-derived facial point and the closest MRI-derived scalp point:

\[
E_{\mathrm{REF}} = \frac{1}{|A|} \sum_{a \in A} \min_{b \in B} \lVert a - b \rVert \qquad (2.8)
\]

where \(a \in A\), the Kinect-derived facial points, and \(b \in B\), the MRI-derived scalp points.


In our case, the "true" error, or map error, which measures the distance between two real corresponding points, is not available because it requires knowing which MRI point corresponds to which digitized point. According to Whalen et al. [16], the target residual error (TRE), using markers visible in the MRI scans, could stand in for it. Since the TRE is not accessible, we adopt the residual error of the sensor proposed by Koessler et al. [11] as an evaluation criterion more suitable for our method, even though it underestimates the true registration error. Besides, to check the transformation error, we used the digitization device to digitize crosses drawn on the head in the digitizer coordinate system; after transformation, we calculate the average Euclidean distance between the cross points and the closest MRI-derived scalp points as the third criterion, called the residual error of the cross:

\[
E_{\mathrm{REC}} = \frac{1}{|A|} \sum_{a \in A} \min_{b \in B} \lVert a - b \rVert \qquad (2.10)
\]

where \(E_{\mathrm{REC}}\) is the residual error of the cross, \(a \in\) digitized cross points, and \(b \in\) MRI-derived scalp points. This criterion does not involve the thickness compensation. Moreover, the intra- and inter-subject variability was calculated.
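All three residuals share the same nearest-neighbor form; a brute-force sketch (hypothetical, not the thesis code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point { float x, y, z; };

// Average Euclidean distance from each point in A to its nearest neighbor
// in B (the common form of Eq. 2.8 and Eq. 2.10).
double residualError(const std::vector<Point>& A, const std::vector<Point>& B) {
    double sum = 0;
    for (const Point& a : A) {
        double best = 1e30;
        for (const Point& b : B) {
            double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            best = std::min(best, double(dx * dx + dy * dy + dz * dz));
        }
        sum += std::sqrt(best);
    }
    return sum / A.size();
}
```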


Figure 2.11: The result of estimation of the centroids of markers. (a) The positions of the markers on the face. (b) After automatic estimation, the centroids are found. The pink points are the marker centroids.


Figure 2.12: Schematic diagram of thickness compensation. The red point floating above the scalp surface is the digitized sensor point Pa. First, the closest MRI-derived scalp point Pb (blue point) and its neighborhood (grey points in the blue circle) are found, and the normal of those points is calculated (blue arrow). To determine the correct sign of the normal, the inner product is calculated between the normal and the vector (navy blue dotted line) from the centroid of the MRI-derived scalp points Pc to the closest scalp point Pb. The digitized sensor point is then translated along the normal by a distance equal to the electrode thickness to obtain the scalp sensor point Ps (red point near the scalp).

Chapter 3

Experimental Results

3.1 Experimental Setup

3.1.1 Subjects

the subject would wear a head-sized EEG cap used for the EEG recordings. In our case, the EEG cap contains 31 electrodes, one of which is the ground.

3.1.2 MRI

The MRI scans were acquired on a Siemens MAGNETOM Trio, A Tim System 3T scanner (Siemens Medical Solutions, Erlangen, Germany) with a 12-channel head coil (see Figure 3.1). We used a Magnetization Prepared Rapid Gradient Echo (MPRAGE) sequence (TR = 2530 ms, TE = 3.03 ms, TI = 1100 ms, field of view = 224 x 256, matrix size = 224 x 256, 192 contiguous slices). The slice thickness is 1 mm, and the voxel size is 1 x 1 x 1 mm.

3.1.3 Digitization

Any localization device that can acquire the locations of the electrodes could be used in our experiment. Our study used a Polhemus FastTrak, which contains receivers and one transmitter (see Figure 3.2). The receivers include a stylus, which is pointed at the desired locations for collection, and receivers fixed to the head of the subject so that slight head movements do not produce digitization errors. The subject sits in front of the fixed transmitter, which is oriented with X pointing up and Y pointing to the right.

Figure 3.2: Polhemus FastTrak.

3.1.4 Experimental environment

We set up a pair of studio lamps beside the subject to obtain uniform color images and reduce the effect of shadows. The Kinect for Windows was mounted in front of and facing the subject, with the subject as close to it as possible. The proposed system is implemented in C/C++, compiled with the Microsoft Visual Studio 2010 compiler. The OpenCV [3], OpenNI [4], and PCL [5] libraries are also used in our system.

3.1.5 Blue marker

Before the experiment started, markers were pasted on the subject; these were used to find the transformation relation between the Kinect and digitizer coordinate systems. To extract the markers conveniently, we chose a specially crafted marker: an 11 mm blue disc with a white outer ring and a white cross in the center. There were two reasons for choosing this design: (i) we wanted the color to be very different from


Figure 3.3: Blue marker.

3.2 Results

Our study used three error estimations to evaluate the accuracy of the co-registration, calculated both in root mean square (R.M.S.) form and in arithmetic mean form. Figure 3.4 shows the Kinect-derived facial points and the MRI-derived facial points in the same coordinate system before co-registration, and Figure 3.5 displays the result of their co-registration as point clouds. Figures 3.6 and 3.7(a)-(i) show examples of the Kinect-derived facial points superimposed on different slices after co-registration; some noise and outliers extracted by Kinect for Windows can be observed floating in the air. Table 3.1 shows the residual error of the face; its mean was 0.89 mm ± 0.09 mm (mean ± SD). Table 3.2 compares our method with previous methods and demonstrates that registering with facial points gives better results.

Table 3.1: The results of the residual error of the face. (unit: mm; two recordings per subject)

Subject   R.M.S   Mean   Std. dev.   Intra-subject
S1        1.00    0.90   0.58        0.02
          1.01    0.86   0.53
S2        1.00    0.87   0.54        0.045
          0.99    0.78   0.50
S3        0.94    0.80   0.52        0.06
          1.00    0.92   0.53
S4        1.09    0.97   0.64        0.11
          1.06    0.75   0.54
S5        1.13    1.02   0.61        0.02
          1.22    1.06   0.63
S6        1.05    0.87   0.57        0.025
          1.07    0.92   0.65
S7        0.99    0.82   0.56        0.00
          0.98    0.82   0.52
S8        1.07    0.85   0.74        0.025
          1.09    0.90   0.74
S9        1.01    0.85   0.53        0.015
          0.99    0.82   0.54
S10       1.34    1.15   0.74        0.065
          1.13    1.02   0.64

The results of the residual error of the sensor are listed in Table 3.3; the mean was 1.67 mm ± 0.46 mm (mean ± SD). The maximum residual error of the sensor for the different subjects was about 2.67 - 4.76 mm, due to hair causing the digitized points to float above the head and to the size difference between the EEG cap and the head. In addition, the thickness compensation also affected the residual error of the sensor.

Figure 3.4: The Kinect-derived facial points and MRI-derived facial points (red points), placed in the same coordinate system. (a) and (b) are two different perspectives.

Figure 3.5: The result of co-registration of Kinect-derived facial points and MRI-derived facial points. (a) Before co-registration: the centroids of the Kinect-derived facial points (white points) and MRI-derived facial points (red points) were matched together by translation, shown from different perspectives. (b) After co-registration: the Kinect-derived facial points (white points) superimposed on the MRI-derived facial points (red points).

Figure 3.6: Example of the Kinect-derived facial points superimposed on the MRI after co-registration. The red points are the Kinect-derived facial points. (a) The view from the right side. (b) and (c) are single slices in the vertical and horizontal directions.

Figure 3.7: Example of Kinect-derived facial points on the MR slices. The red points are Kinect-derived facial points. (a) to (i) are different slices in the vertical direction.

Table 3.2 (only fragments recovered): Min = 0.94, Max = 1.19.

We also compared our results with those of Koessler et al. [11], as listed in Table 3.4; our method had smaller errors. Figure 3.8 shows an example of digitized sensor points co-registered with the MRI head surface; the four markers on the face can be seen in Figure 3.8(c). Figure 3.8(d) shows the sensors with their names, and Figure 3.9 shows an example of digitized sensor locations on the MR slices.

By the residual error of the cross, we can evaluate whether our system, using Kinect for Windows as the interface, produces large transformation errors. The results are listed in Table 3.5; the mean was 1.83 mm ± 0.43 mm (mean ± SD), or 2.29 mm ± 0.48 mm in R.M.S. form. Figure 3.10 shows an example of digitized cross locations on the head surface, and Figure 3.11 shows them on different MRI slices. Among the ten subjects, S9 and S10 show relatively larger errors because our EEG cap comes only in medium and large sizes, which did not fit their heads. The subjects' hair also influences the locations of the digitized sensors; subject S5 had short hair and a medium-sized head, so the results were relatively small.

Table 3.3: The results of the residual error of the sensor. (unit: mm; two recordings per subject)

Subject   R.M.S   Mean   Std. dev.   Max    Min    Intra-subject
S1        2.00    1.48   1.34        4.63   0.08   0.025
          1.76    1.43   1.01        3.84   0.21
S2        2.00    1.14   1.64        5.84   0.05   0.21
          2.31    1.56   2.15        4.87   0.21
S3        1.57    1.27   0.86        3.01   0.17   0.025
          1.68    1.32   1.12        2.67   0.13
S4        2.11    1.93   0.86        3.87   0.11   0.105
          2.18    1.72   2.31        4.76   0.88
S5        2.56    0.92   2.39        3.43   0.10   0.065
          2.17    1.05   3.14        4.33   0.17
S6        2.40    1.97   3.41        2.43   0.03   0.03
          2.07    1.91   1.74        3.33   0.12
S7        3.03    2.54   1.82        4.49   0.28   0.77
          1.57    1.00   1.21        3.23   0.12
S8        2.78    1.93   1.34        3.49   0.15   0.205
          1.83    1.52   1.44        3.23   0.21
S9        3.20    2.34   2.35        3.49   0.16   0.165
          3.17    2.01   2.27        3.34   0.15
S10       3.47    2.31   3.51        5.47   0.35   0.075
          3.76    2.16   3.64        5.58   0.12

Figure 3.8: Example of digitized sensor points co-registered with the MRI head surface. The red points are the locations of the EEG sensors. (a) The right side of the head surface. (b) The left side. (c) The front side; the four markers on the face are visible. (d) Looking down from the top; the sensor names are labelled.


Figure 3.9: Example of digitized sensor locations on the MR slice. The red points are the locations of EEG sensors.

Table 3.4 (only fragments recovered): R.M.S — Max = 3.76 / 3.49, Min = 1.57 / 1.80.


Figure 3.10: Example of digitized cross locations on the head surface. The red points are the digitized cross points. (a) The right side of head surface. (b) The left side of head surface.

Table 3.5: The results of the residual error of the cross. (unit: mm; two recordings per subject)

Subject   Points   R.M.S   Mean   Std. dev.   Intra-subject
S1        192      2.20    1.57   1.64        0.04
          208      1.93    1.49   1.37
S2        152      2.34    2.01   1.28        0.07
          113      1.91    1.87   1.33
S3        132      1.76    1.38   1.22        0.10
          127      1.97    1.58   1.37
S4        134      2.34    2.31   2.17        0.035
          127      1.83    1.62   1.74
S5        162      1.54    1.15   1.01        0.345
          173      2.07    1.32   1.59
S6        158      2.33    1.83   2.17        0.165
          127      1.65    1.38   0.90
S7        181      1.78    1.47   1.36        0.225
          170      2.56    2.09   1.48
S8        212      2.42    1.97   1.41        0.478
          179      2.07    1.34   1.64
S9        164      2.48    2.20   1.61        0.01
          178      2.79    2.22   1.69
S10       157      3.07    2.78   1.64        0.16
          161      3.24    2.46   1.58


Figure 3.11: Example of digitized cross locations on the MRI slices. The red points are the digitized cross points. (a), (b) and (c) are digitized cross locations on different MR slices.

Chapter 4

Discussions

4.1 Comparison of Kinect and Kinect for Windows

for Windows was 0.89 mm ± 0.03 mm (mean ± SD), and using Kinect it was 1.24 mm ± 0.07 mm. This experiment showed that, for the same subjects, using Kinect for Windows to extract the RGB-D data gives better registration results. The results of using Kinect for the registration are given in Table 4.2.

Table 4.1: Comparison of Kinect and Kinect for Windows. (unit: mm)

       Kinect for Windows   Kinect
Mean   0.89 ± 0.03          1.24 ± 0.07
Max    0.95                 1.31
Min    0.83                 1.13

Table 4.2: The results of using Kinect to do registration. (unit: mm)

                     Case 1   Case 2   Case 3   Case 4
Mean                 1.23     1.31     1.30     1.13
Standard deviation   0.61     0.67     0.63     0.67
Maximum              3.04     3.30     2.97     3.41

4.2 The influence of down-sampling on co-registration

In order to speed up the registration between the Kinect and MRI coordinate systems using facial points, we uniformly reduced the Kinect-derived facial points used as GMM centroids for the registration. Down-sampled points, however, may influence the registration results. Therefore, we aligned different degrees of down-sampled Kinect-derived facial points with the MRI-derived facial points, and observed the influence on the registration results and the time cost. Table 4.3 shows an example of this comparison: the larger the degree of down-sampling, the lower the time cost. Down-sampling hardly affects the registration results while greatly increasing the speed.

Table 4.3: Example of the influence of down-sampling on co-registration. (unit: mm)

                        Points   Time cost   R.M.S   Mean   Std. dev.
Kinect for Windows      17,392   1'44'38     1.00    0.86   0.50
Down-sampling 1/2^2      4,327     25'28     1.00    0.86   0.51
Down-sampling 1/3^2      1,927     11'33     1.00    0.86   0.51
Down-sampling 1/4^2      1,082      6'53     1.01    0.86   0.52
Down-sampling 1/5^2        683      4'29     1.04    0.86   0.59

4.3 Estimation of transformation relation

In our study, the positions of the markers were used to align the Kinect and digitizer coordinate systems. However, it was unknown how many markers should be used, and where they should be pasted, to yield a good transformation relation between the two coordinate systems. Therefore, we provide some information about the use of the markers in this section.

In this experiment, nine markers were pasted on the subject. Figure 4.1(a) shows the positions of the markers. The criteria for choosing a marker position were: (i) a flat area, so that the color information of Kinect for Windows can extract the entire marker disc and compute its centroid; (ii) positions that express the contours and characteristics of the human face. Our error estimate was defined as the average of the correspondence distances between the coordinate values of the nine marker centroids in the Kinect coordinate system and the nine centroids transferred from the digitizer coordinate system.

Three subjects took part in this experiment, and each was measured twice. Figure 4.1(b) shows one of these cases: the horizontal axis represents the number of markers used, and the vertical axis the error value. The data are error-estimation values calculated from different transformation matrices, estimated with different numbers and positions of markers. For example, choosing four of the nine markers, every combination is used to calculate a transformation matrix and evaluate the error, and those error values are plotted at the four-marker position on the axis.

In Figure 4.1(b), we can observe the errors calculated using different numbers of points to estimate the transformation matrix. As the number of points increases, the choice of point combination has less effect on the error. For example, Figure 4.1(b) shows the median, Q1, and Q3 for different numbers of points: the errors of different seven-point combinations differ little from one another, whereas the errors of different four-point combinations vary considerably.

However, we found that when calculating the transformation matrix with different numbers of points (choosing i points, for i = 4 to 9), some combinations reached similar performance (the minimal registration error), and even fell below the errors obtained from choosing more points. For example, the minimal error using four points was 0.69 mm, the smallest in case 1. We also integrated the six data sets, which show the same features (see Figure 4.2). As a result, we were interested in which points, when using only a few, can give a small error. Table 4.4 shows the combinations of points that give the minimal error for each number of points used, together with the median error for each number of points, in case 1.

Table 4.4: In case 1, the combinations of points giving the minimal error for each number of points used, and the median error for each number of points.

Points #   Minimal error (mm)   Used points         Median error (mm)
9          0.76                 all                 0.76
8          0.74                 1 2 4 5 6 7 8 9     0.79
7          0.74                 1 2 4 6 7 8 9       0.84
6          0.72                 2 4 5 7 8 9         0.97
5          0.71                 2 4 7 8 9           1.21
4          0.69                 2 4 8 9             1.97

Among all combinations of four points, we selected those whose minimal error was within fifteen percent (and below 1 mm) and tallied these combinations to find the four points occurring most frequently (see Figure 4.3). Among them, points #9, #8, and #4 had the highest frequency of appearance (higher than 0.4 in each case), and points #1, #2, and #3 also performed well. Considering the convenience of operation and the symmetry and facial features of the face, we chose point #2. To better understand which combination of four points gives the smallest error, we listed the top 15 smallest errors of each of the six cases. We found that the combination (#2, #4, #8, #9) was in the top 3 of every case (it ranked second in case 3-1 and third in case 3-2; in the other four cases it ranked first); see Table 4.5.

To verify the fourth point more accurately, we added a marker on the forehead, giving ten markers on the face; Figure 4.4 shows their positions. We used the same method to estimate the transformation relation for different numbers and positions of markers. Figure 4.5 shows the integrated data of four subjects, which show the same features as the previous experiment. We also listed the top 15 smallest errors calculated from combinations of four points for the four data sets (see Table 4.6). The combinations (#2, #5, #9, #10) and (#3, #5, #9, #10) ranked in the top few, and the positions of these points on the face were the same as in the previous experiment. So we decided on four positions on the subject's face, where one is on the tip of

Figure 4.1: Example of estimation of the transformation relation. (a) The positions of the nine markers on the face. (b) The errors calculated using different numbers of points to estimate the transformation matrix.

Figure 4.3: Frequency of occurrence of points. Among all combinations of four points, we selected those whose minimal error was (a) below 1 mm and (b) within fifteen percent, and tallied these combinations to find the four points occurring most frequently.

Table 4.5: The rank of smaller errors.

Rank   Case1-1   Case1-2   Case2-1   Case2-2   Case3-1   Case3-2
1      2 4 8 9   2 4 8 9   2 4 8 9   2 4 8 9   1 4 8 9   1 4 8 9
2      2 4 7 9   2 5 8 9   3 4 8 9   2 4 7 9   2 4 8 9   1 4 7 9
3      2 5 7 9   2 4 7 9   2 4 7 9   3 4 8 9   1 4 7 9   2 4 8 9
4      2 5 8 9   2 5 7 9   3 4 7 9   2 5 8 9   2 4 7 9   2 4 7 9
5      1 4 8 9   4 6 8 9   1 4 8 9   3 4 7 9   2 5 8 9   1 5 8 9
6      1 5 7 9   1 5 8 9   2 5 8 9   1 4 8 9   1 5 8 9   3 4 7 9
7      1 5 8 9   1 4 8 9   1 4 7 9   1 4 7 9   2 5 7 9   1 3 7 9
8      1 4 7 9   2 4 6 8   1 3 4 9   2 5 7 9   1 5 7 9   3 4 8 9
9      4 6 8 9   5 6 8 9   3 5 8 9   3 5 8 9   3 4 8 9   2 5 8 9
10     3 4 7 9   2 5 6 8   2 5 7 9   1 3 4 9   3 4 7 9   1 5 7 9
11     2 4 6 8   1 5 7 9   2 3 4 9   3 5 7 9   3 5 8 9   1 3 4 9
12     4 6 7 9   1 4 7 9   3 5 7 9   1 2 7 9   1 3 4 9   2 5 7 9
13     3 4 8 9   4 6 7 9   1 5 8 9   2 3 4 9   3 5 7 9   1 3 8 9
14     3 5 7 9   5 6 7 9   1 3 5 9   1 2 8 9   1 3 7 9   3 5 8 9
15     1 2 5 9   1 2 5 9   1 3 7 9   1 2 4 9   1 3 8 9   3 5 7 9


Figure 4.4: The positions of ten markers on face.

4.4 Source of error

The errors come from several sources; we discuss them separately for the three error estimations. The residual error of the face is influenced by: (i) errors in the point coordinates extracted from Kinect for Windows (including outliers and noise); (ii) the scalp segmentation from the MRI; and (iii) the co-registration algorithm. The results showed that the residual error of the face is relatively small. Concerning the residual error of the sensor: (i) sensor thickness and thickness compensation: different thickness compensations generate different results because of the different calculation of the normal, so the positions of the sensors on the scalp differ; (ii) EEG cap size and the subject's head: the EEG cap must fit the subject's head, otherwise it influences the digitized coordinate values of the sensors; (iii) hair: hair can keep the sensors on the EEG cap from fitting the scalp, influencing the digitized coordinate values; (iv) camera resolution: Kinect for Windows produces a data stream of about 640 x 480, and the resolution is lower compared

Table 4.6: Top 15 smallest errors calculated from combinations of four points.

Rank   Case1-1     Case1-2     Case2       Case3
1      3 5 9 10    2 5 9 10    2 5 9 10    3 5 9 10
2      2 5 9 10    2 6 9 10    4 6 9 10    2 5 9 10
3      3 6 9 10    3 5 9 10    2 6 9 10    1 5 9 10
4      2 6 9 10    3 6 9 10    3 5 9 10    4 5 9 10
5      4 6 9 10    2 3 9 10    1 4 6 10    3 6 9 10
6      2 4 6 10    2 3 6 10    3 6 9 10    2 4 5 10
7      4 5 9 10    2 3 5 10    1 3 9 10    1 4 5 10
8      2 4 5 10    4 6 9 10    2 4 6 10    2 4 6 10
9      1 4 6 10    4 5 9 10    1 4 9 10    4 6 9 10
10     2 3 6 10    3 5 8 10    1 6 9 10    2 6 9 10
11     1 4 5 10    3 6 8 10    1 5 9 10    1 3 5 10
12     3 5 8 10    2 4 6 10    1 4 8 10    1 4 9 10
13     2 5 8 10    2 5 8 10    3 6 8 10    1 3 9 10
14     1 6 9 10    2 4 5 10    2 5 8 10    3 4 5 10
15     1 5 9 10    2 6 8 10    3 5 8 10    1 4 6 10


Chapter 5

Conclusions

speed and accuracy. Second, we estimated the centroids of the markers automatically, using the color information provided by Kinect for Windows and connected components. With any kind of digitization device recording the corresponding points on the face, we used the coordinate correspondences to find the transformation relation between the digitizer and Kinect coordinate systems. In addition, thickness compensation was applied to adjust the digitized points floating above the scalp surface. Our work could thus cover the co-registration of any scalp-placed sensors.

According to the error estimations of Koessler et al. [11], the results show that our system is sufficiently accurate. Our system was more accurate than that of Koessler et al. [11], needs no additional labor and time to record facial features, and is less influenced by variation in locating the marker positions. Based on the above, our system is sufficiently accurate, repeatable, efficient, and labor-saving to be used to assist neuroscience studies.


Bibliography

[1] HSL color space. http://www.workwithcolor.com/hsl-color-schemer-01.htm.

[2] Kinect for Windows blog. http://blogs.msdn.com/b/kinectforwindows/.

[3] OpenCV. http://opencv.org/.

[4] OpenNI tutorial. http://openni.org/docs2/Tutorial/index.html.

[5] PCL tutorial. http://pointclouds.org/documentation/tutorials/.

[6] Paul J Besl and Neil D McKay. Method for registration of 3-d shapes. In Robotics-DL tentative, pages 586–606. International Society for Optics and Photonics, 1992.

[7] Douglas Chai and King N Ngan. Face segmentation using skin-color map in video-phone applications. Circuits and Systems for Video Technology, IEEE Transactions on, 9(4):551–564, 1999.

[8] Iok-Long Chan. 3-d object model reconstruction and immersive interaction using kinect sensor. Master’s thesis, 2012.

[9] H.-J. Huppertz, M. Otte, C. Grimm, R. Kristeva-Feige, T. Mergner, and C. H. Lücking. Estimation of the accuracy of a surface matching technique for registration of EEG and MRI data. Electroencephalography and Clinical Neurophysiology, 106(5):409–415, 1998.

[11] L. Koessler et al. EEG-MRI co-registration and sensor labeling using a 3D laser scanner. Annals of Biomedical Engineering, 39(3):983–995, 2011.

[12] D. Kozinska et al. Automatic alignment of EEG/MEG and MRI data sets. Clinical Neurophysiology, 112(8):1553–1561, 2001.

[13] Andriy Myronenko and Xubo Song. Point set registration: coherent point drift. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(12):2262–2275, 2010.

[14] GV Simpson, ME Pflieger, JJ Foxe, SP Ahlfors, HG Vaughan, J Hrabe, RJ Ilmoniemi, and G Lantos. Dynamic neuroimaging of brain function. Journal of Clinical Neuro-physiology, 12(5):432–449, 1995.

[15] K. Singh, I. Holliday, P. Furlong, et al. Evaluation of MRI-EEG/MEG coregistration strategies using Monte Carlo simulation. Electroencephalography and Clinical Neurophysiology, 102:81–85, 1997.

[16] Christopher Whalen, Edward L. Maclin, Monica Fabiani, and Gabriele Gratton. Validation of a method for coregistering scalp recording locations with 3D structural MR images. Human Brain Mapping, 29(11):1288–1301, 2008.

[17] Seung-Schik Yoo, Charles R. G. Guttmann, John R. Ives, Lawrence P. Panych, Ron Kikinis, Donald L. Schomer, and Ferenc A. Jolesz. 3D localization of surface 10-20 EEG electrodes on high resolution anatomical MR images. Electroencephalography and Clinical Neurophysiology, 102(4):335–339, 1997.
