

High-Resolution Volume Reconstruction

4.1.2 The Alignment Method

A projected feature point is a pronounced mark in an X-ray projection image. Alignment is accomplished by first aligning the projected feature points in the vertical direction and then in the horizontal direction. Because the rotation axis is vertical, the vertical positions of the projected feature points should remain constant across projections; thus, vertically aligning the projected feature points in each image to those in the previous image is sufficient. However, the horizontal location of the feature points varies among projection images, and calculating it is a more difficult task than the vertical alignment. The following subsections describe these steps.

Vertical Direction Alignment

For each pair of projection images, the sum of the intensity values on each row is calculated. The sums of the rows form histograms that should be similar in a pair of consecutive images. The vertical displacement can be calculated by minimizing the difference between the histograms.

Given an N × M image I(x, y), 0 ≤ x < N, 0 ≤ y < M, with I(x, y) ∈ [0, 1], the vertical histogram h is calculated by

h(I, y) = \frac{1}{N} \sum_{x=0}^{N-1} I(x, y).

Assume that Ia is the unaligned image and Ib is the reference image. The vertical correction of Ia is ŷ, which can be estimated by

\hat{y} = \arg\min_{d} \sum_{y} \left| h(I_a, y + d) - h(I_b, y) \right|.

To achieve the most favorable results, the image is preprocessed to enhance the features. In this experiment, the estimated correction is more accurate when the images are enhanced by applying the edge detection method [43].
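The vertical alignment described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis implementation: the function names, the L1 cost, and the bounded search range are assumptions of this sketch.

```python
import numpy as np

def vertical_histogram(img):
    """Row histogram h(I, y): average intensity of each row (values in [0, 1])."""
    return img.mean(axis=1)

def estimate_vertical_shift(img_a, img_b, max_shift=50):
    """Return the integer shift d minimizing the L1 difference between the
    row histogram of img_a shifted by d and the row histogram of img_b.
    Assumes max_shift is smaller than the image height."""
    ha, hb = vertical_histogram(img_a), vertical_histogram(img_b)
    m = len(ha)
    best_d, best_cost = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        # Compare only the rows where the shifted histograms overlap.
        if d >= 0:
            diff = ha[d:] - hb[:m - d]
        else:
            diff = ha[:m + d] - hb[-d:]
        cost = np.abs(diff).mean()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

In practice the edge-enhanced images, rather than the raw intensities, would be passed to `estimate_vertical_shift`, per the preprocessing note above.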

Horizontal Direction Calibration

The horizontal calibration is based on the fact that the projected feature points form a sine-shaped locus in the x–θ coordinate system. This calibration involves three steps: detecting projected feature points, matching projected feature points to construct a set of loci from the matched points, and fitting the loci to sine curves to adjust the horizontal displacement of the images.

Detecting Projected Feature Points

Feature point extraction is a fundamental step in image stitching, object recognition, and feature-based image alignment [44]. Researchers have proposed many feature detection methods. The corner detection method proposed by Harris and Stephens [45] is commonly used to extract corner-shaped regions in an image. To achieve scale invariance, Kadir and Brady [46] selected the salient region from the image scale-space as the feature that possesses the maximum entropy. Lowe [47] proposed the scale-invariant feature transform (SIFT) algorithm to select local extrema from the difference-of-Gaussian (DoG) pyramid of an image. The SIFT algorithm uses the gradient location-orientation histogram as a feature descriptor to achieve rotation invariance and illumination invariance. Researchers have proposed several improved versions of the SIFT algorithm. Bay et al. used the Haar wavelet to expedite feature detection [48]. Rady et al. proposed entropy-based feature detection [49], and Suri et al. combined mutual information alignment with the SIFT algorithm [50].

In this study, a modified SIFT algorithm was employed to automatically extract the projected feature points contained in X-ray images. The typical SIFT implementation involves describing a feature according to its location, size, the orientation of the sampling region, and the image gradient histogram in the sampling region. Because the proposed method matches the projected feature points in two X-ray images based on mutual information [51, 52, 53], each projected feature point in this study contained the entropy of the sampling region rather than the image gradient histogram. To reduce the noise and the number of low-contrast projected feature points, the entropy of each selected projected feature point must exceed a given threshold. The experiments in this study entailed setting a threshold between 0.5 and 1.0. Because the features in the objects are gold nanoparticles, the size and orientation of the sampling region were fixed in this implementation.
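The entropy-based filtering can be sketched as follows. The patch size, bin count, and the normalization of the entropy by log2(bins), which maps it into [0, 1] to match the stated 0.5–1.0 threshold range, are assumptions of this sketch, not details from the thesis.

```python
import numpy as np

def patch_entropy(img, x, y, half=8, bins=16):
    """Shannon entropy of the intensity histogram in the (2*half+1)^2
    sampling region centered at (x, y), normalized by log2(bins).

    The normalization (so values lie in [0, 1]) is an assumption chosen
    to be consistent with a threshold between 0.5 and 1.0.
    """
    patch = img[max(y - half, 0):y + half + 1, max(x - half, 0):x + half + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

def filter_feature_points(img, points, threshold=0.75, half=8):
    """Keep candidate (x, y) points whose sampling-region entropy exceeds
    the threshold, discarding flat, low-contrast regions."""
    return [(x, y) for (x, y) in points
            if patch_entropy(img, x, y, half) > threshold]
```

A flat background patch has entropy near zero and is rejected, while a textured patch around a nanoparticle projection scores close to one.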

Matching Projected Feature Points

Let Fi, i = 1, . . . , m be the sets of projected feature points in m projection images. The projected feature points are classified into k groups. In the ideal case, each group is the set of projected feature points that are the projections of a feature (i.e., a gold nanoparticle) in the object from various angles. Because the rotation angle between consecutive projections is small, the projected feature points are in proximity and have similar mutual information in two consecutive images. However, the distance between two matched projected feature points depends on the distance between the feature and the rotation axis of the object. This means that an affine transform cannot match the projected feature points in two images. Therefore, this study presents a greedy method for classifying the projected feature points. For each pair of images, the random sample consensus (RANSAC) method [54] was first applied to compute an initial alignment of the two images, and a tracking method was then employed to match the projected feature points in the next image.

Several feature tracking methods are available [44]. The proposed method is based on Shafique and Shah's method [55], a greedy algorithm for tracking moving objects in videos, and Tang and Tao's method [56], which integrates a hidden Markov model to eliminate unreliable matches.

Given the projected feature point sets Fi−1 and Fi, the RANSAC method was applied to compute a translation matrix Ti,i−1 such that, for a sufficient number of projected feature points p ∈ Fi−1 and q ∈ Fi, |Ti,i−1(q) − p| is less than a given threshold.

Applying the translation matrices Ti,i−1, i = 2, . . . , m to the consecutive images achieves the initial alignment of the m projection images. All of the images are aligned based on the first image.
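A translation-only RANSAC estimate can be sketched as below. This sketch assumes putative correspondences between the two point sets are already available (e.g., from entropy similarity); the function name, iteration count, and inlier threshold are illustrative, not the thesis implementation.

```python
import numpy as np

def ransac_translation(matches, threshold=5.0, iters=200, seed=0):
    """Estimate a pure translation t, with T(q) = q + t, from putative matches.

    `matches` is a list of (p, q) pairs, p from F_{i-1} and q from F_i.
    A single pair suffices to hypothesize a translation; inliers are
    pairs with |T(q) - p| below the threshold.
    """
    rng = np.random.default_rng(seed)
    p = np.array([m[0] for m in matches], dtype=float)
    q = np.array([m[1] for m in matches], dtype=float)
    best_t, best_count = np.zeros(2), 0
    for _ in range(iters):
        k = rng.integers(len(matches))
        t = p[k] - q[k]  # hypothesis from one sampled correspondence
        count = int((np.linalg.norm(q + t - p, axis=1) < threshold).sum())
        if count > best_count:
            best_count, best_t = count, t
    # Refine by averaging over the inliers of the best hypothesis.
    inliers = np.linalg.norm(q + best_t - p, axis=1) < threshold
    return (p[inliers] - q[inliers]).mean(axis=0), int(inliers.sum())
```

Because only a translation is estimated, one correspondence per hypothesis is enough, which keeps the sampling loop cheap even with many putative matches.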

Given the initially aligned projected feature points Fi, i = 1, . . . , m, the following procedures yield a set of possible loci of the projected feature points produced by feature points in the object.

1. Every projected feature point in F1 is the starting point of a locus.

2. Iteratively process Fi, i = 2, . . . , m:

(a) Let L be the set of the loci computed so far. For each locus l ∈ L, compute Ti−1,i(p), where p is the final point of l and Ti−1,i is the inverse of Ti,i−1. Let Λ be a region in Ii centered at Ti−1,i(p). Search Λ for projected feature points (Fig. 4.2). If this region contains only one projected feature point q, then q is selected as the new final point of l. If the region contains more than one projected feature point, then select the q that has the greatest M, where M is the average entropy of q and the previous t points on l. If the locus has fewer than t previous points (i.e., t is greater than i − 1), then M is the average entropy of q and all the points of locus l.

(b) If Fi contains unmatched projected feature points, then each of these points creates a new locus.

3. Reverse the image order and repeat Step 2 (excluding Step 2b) to backtrack all loci.

The X-ray images used in this study measured 1024 × 1024 pixels; the search region Λ measured 128 × 32 pixels, and t was set to five previous points.
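One greedy extension step (Step 2a) can be sketched as follows, using the 128 × 32 search region and t = 5 stated above. The data layout, with each point carried as an (x, y, entropy) tuple, and the function names are assumptions of this sketch.

```python
def extend_locus(locus, points, predict, region=(128, 32), t=5):
    """One greedy step (Step 2a): extend `locus` with the best point in F_i.

    locus   : list of (x, y, entropy) tuples already on the locus.
    points  : candidate (x, y, entropy) tuples detected in image I_i.
    predict : predicted position T_{i-1,i}(p) of the locus end point p.
    region  : (width, height) of the search region Lambda, centered at
              `predict`; 128 x 32 matches the values used in the study.
    Returns the selected point, or None if Lambda is empty.
    """
    w, h = region
    cx, cy = predict
    cands = [pt for pt in points
             if abs(pt[0] - cx) <= w / 2 and abs(pt[1] - cy) <= h / 2]
    if not cands:
        return None
    if len(cands) == 1:
        return cands[0]
    # Several candidates: maximize M, the average entropy of the candidate
    # and the previous t points on the locus (the whole locus if shorter).
    tail = locus[-t:]
    def avg_entropy(pt):
        ents = [e for (_, _, e) in tail] + [pt[2]]
        return sum(ents) / len(ents)
    return max(cands, key=avg_entropy)
```

Unmatched high-entropy points left over after this step are the ones that Step 2b promotes to new loci.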

Because two loci could intersect (i.e., two projected feature points on two loci could overlap or be extremely close), the average entropy must be computed to select the best-matching projected feature point in Step 2a. In this step, some projected feature points with a high entropy in Fi are not included in any locus. These significant projected feature points should not be disregarded, so Step 2b entails creating a new locus for each of them.

After Step 2, the forward feature tracking required to construct the set of loci is complete. To verify the correctness of the loci and to complete the loci added in Step 2b, the final step backtracks all loci.
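The completed loci are then fitted to sine curves in the x–θ plane, as outlined at the start of this subsection. Since x(θ) = A sin(θ + φ) + c expands to a·sin θ + b·cos θ + c with a = A cos φ and b = A sin φ, the fit is linear and an ordinary least-squares solve suffices. A minimal sketch, with the function name and return convention as assumptions:

```python
import numpy as np

def fit_sine_locus(thetas, xs):
    """Least-squares fit of x(theta) = a*sin(theta) + b*cos(theta) + c
    to the horizontal positions of one locus.

    thetas : rotation angles (radians) of the images on the locus.
    xs     : observed horizontal positions of the projected feature point.
    Returns the coefficients (a, b, c) and the fitted positions; the
    residuals xs - fitted indicate the per-image horizontal displacement.
    """
    A = np.column_stack([np.sin(thetas), np.cos(thetas), np.ones_like(thetas)])
    coef, *_ = np.linalg.lstsq(A, xs, rcond=None)
    return coef, A @ coef
```

Averaging the residuals of all loci passing through a given image yields that image's horizontal correction.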
