
Surface registration from a sequence of 2D images

Image registration is essential for evaluating the global surface texture. Registration aligns two or more images taken from different sensors or viewpoints. Some studies have used a sequence of 2D images to reconstruct the surface of an object for specific applications such as:

(1) Reconstructing a large object surface: printed circuit board (PCB) {Perng et al. [22]}, cathode ray tube (CRT) panel {Perng et al. [23]}, organic light-emitting diode (OLED) panel {Perng et al. [24]}, or thin film transistor liquid crystal display (TFT-LCD) panel {Chen and Kuo [25]};

(2) Reconstructing a non-flat object surface: bore hole {Biegelbauer et al. [18]} or router {Perng and Chen [26]};

(3) Reconstructing an object surface with high resolution: integrated circuit (IC) chip {Perng et al. [27]}.

Registration techniques have been developed for many different types of problems. In general, alignment methods can be separated into two categories according to whether the aligned images overlap or not. Both categories require close coordination of the sensor and an associated motion unit. For the non-overlapping method, the region of interest (ROI) must exist in two successive but non-overlapping images {Biegelbauer et al. [18], Perng et al. [24]}. For the overlapping method, the ROI must exist in two successive images that overlap by a specified percentage. The user must predefine the overlapping region in the first image as a template and then apply a pattern matching algorithm to the neighboring image {Lewis [28], Fitch et al. [29]}. In the matching process, the predefined template slides over the entire target image on a pixel-by-pixel or sub-pixel basis so that the maximum matching score can be found and the corresponding alignment coordinate determined {Perng et al. [22, 23, 27]}. Although the registered image from the former method may be rougher than that from the latter, the former is highly computationally efficient and was therefore used in this dissertation.
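As an illustration of the overlapping method's matching step, the following is a minimal NumPy sketch of template matching by normalized cross-correlation; the brute-force search, the function name `ncc_match`, and the synthetic data are assumptions for this example, not the implementation used in the cited works.

```python
import numpy as np

def ncc_match(target, template):
    """Slide `template` over `target` pixel by pixel and return the
    top-left coordinate with the maximum normalized cross-correlation
    score, as in the overlapping registration method."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_xy = -1.0, (0, 0)
    h, w = target.shape
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            window = target[y:y + th, x:x + tw]
            wc = window - window.mean()
            denom = np.sqrt((wc ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wc * t).sum() / denom
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score

# Toy example: a template cut from a known offset is recovered.
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = img[10:20, 15:25]          # predefined overlap region
(x, y), score = ncc_match(img, tmpl)
```

In practice the search is usually accelerated, e.g. in the frequency domain as in Lewis [28], and sub-pixel refinement can be added around the best integer location.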

2.4 Surface inspection for directional textures

A directional textured surface, such as that of a machined part, semiconductor, natural wood, or fabric textile, is an object surface composed of a set of line primitives in a regular or repetitive arrangement over its entire appearance. Detecting local defects embedded in a directional textured surface is a popular research topic in computer vision. Numerous approaches to automatically inspecting directional textured surfaces have been proposed, including statistical, structural, global, and model-based approaches {Kumar [30], Xie [31]}. The global approaches are based on image restoration procedures such as the discrete Fourier transform (DFT) {Tsai and Hsieh [32]}, discrete cosine transform (DCT) {Chen and Perng [33]}, discrete wavelet transform (DWT) {Tsai and Chiang [34]}, singular value decomposition (SVD) {Lu and Tsai [35]}, principal component analysis (PCA) {Perng and Chen [36]}, and independent component analysis (ICA) {Lu and Tsai [37]}. Each of these approaches may be a good choice for inspecting directional textured surfaces. Because they require neither textural features nor any reference image for comparison, they are immune to the limitations inherent in local feature extraction or golden template matching methods.

In general, global approaches to inspecting defects in directional textured surfaces usually start with a forward transform and filtering, followed by an inverse transform and thresholding {Kumar [30]}. These approaches all involve implicit qualitative inspection algorithms {Newman and Jain [38]}. Figure 8 shows a flowchart of such a global approach.

The input image in the spatial domain is first converted to the transform domain where textures exhibit significant high-energy characteristics that can be detected by some pre-defined criteria. After those specific high-energy components have been suppressed in the transform domain, the inverse transform restores the image to the spatial domain. These global approaches thus preserve only the local defects that existed in the original input image, and remove all directional textures.
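This pipeline can be illustrated with a minimal DFT-based sketch on a synthetic striped texture containing one local defect; the 5% magnitude threshold is an arbitrary choice for this toy example, not a criterion from the cited works.

```python
import numpy as np

# Synthetic directional texture: vertical stripes plus one local defect.
x = np.arange(64)
texture = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))   # periodic stripes
image = texture.copy()
image[30:34, 30:34] += 3.0                              # local defect

# Forward transform: the periodic stripes concentrate into a few
# high-energy frequency bins.
F = np.fft.fft2(image)
mag = np.abs(F)

# Suppress the high-energy components (the texture), keep the rest.
F[mag > 0.05 * mag.max()] = 0

# Inverse transform restores a defect-only image.
restored = np.real(np.fft.ifft2(F))
```

In `restored`, the periodic stripes are suppressed while the defect region keeps a gray level close to its original offset, which is exactly the behavior the flowchart describes.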


Figure 8: Flowchart of global approach to detecting defects in directional textures

Detecting defects on directional textured surfaces is only one of the basic capabilities; it is important but, by itself, insufficient for real-world applications, so other auxiliary capabilities should also be taken into consideration. Based on the experimental results of the existing references [32-37] and practical experience, several crucial features are summarized in Table 2 to compare the auxiliary abilities of the existing approaches.

As shown in Table 2, the ICA-based approach additionally requires a golden template and can neither indicate the defect location nor preserve the defect shape; it is also the least adaptive in handling unexpected events. The DWT-based approach performs well, except that it cannot detect defects parallel to the texture: both the texture and such parallel defects map into similar wavelet bank(s) and are suppressed together, which usually yields a missed detection. The remaining approaches (DFT-, DCT-, SVD-, and PCA-based) are relatively outstanding; they provide all of the auxiliary capabilities mentioned. Among them, the DCT-based approach requires fewer memory locations and is more computationally efficient because it operates entirely on real numbers and has fast DCT algorithms.

For this reason, an image restoration process based on the DCT was used in this dissertation.


Table 2 Ability comparison of the state-of-the-art global directional textured surface defect detectors

Ability                                    DFT   DCT   DWT   SVD   PCA   ICA
Indicate the defect location                ○     ○     ○     ○     ○     X
Preserve the defect shape                   ○     ○     ○     ○     ○     X
Detect defects parallel to the texture      ○     ○     X     ○     ○     ○
Template free                               ○     ○     ○     ○     ○     X
Shift invariance                            ○     ○     ○     ○     ○     X
Rotation invariance                         ○     ○     ○     ○     ○     X
Illumination invariance                     ○     ○     ○     ○     ○     ○
Suited for line-scan systems                X     X     X     X     X     ○

Note: “○” represents “Yes”; “X” represents “No”.


3 Research Method

3.1 OTPG hardware

The hardware system for internal thread extraction is shown in Figure 9. The sequence of internal thread images was captured by a TELI CS8320 black-and-white camera with a resolution of 640 × 480 pixels and a Matrox Meteor II frame grabber. A CCD with a minimum illumination below 0.4 lux is recommended for this task. A 7-in Hawkeye Slim rigid industrial endoscope connected to a 90° side-view mirror tube (as shown in Figure 10) was used as the lens. The outside diameter of the endoscope was only 0.20 in, and it included a compact illumination fiber. A Moritex MHF-G150LR halogen light source supplied white light to the endoscope to enable the CCD to receive clear images in dark cavities. To overcome the line-of-sight limitation of the 90° side-view adaptor, the mechanism included a rotational servo motor (SM3416D_PLS) and a linear actuator (SmartT integrated module) so that all surfaces of the internal thread could be observed at successive angles and depths. The apparatus was connected to a computer. A flowchart of the proposed vision inspection method is given in Figure 11.

The details of this method are described below.


Figure 9: Proposed OTPG hardware

Figure 10: The 90° side-view adaptor (from http://www.gradientlens.com/)


[Flowchart summary: the image is processed by cosine transformation; pixels beyond the upper control limit set N1 = 1, and pixels beyond the lower control limit set N2 = 1; Grade C (collapse or flaw defect) if N1 = 0 and N2 = 1; Grade D (mixed-type defects) if N1 = 1 and N2 = 1.]

Figure 11: Flowchart of the proposed OTPG algorithm

3.2 Registration of the 2D unwrapped image

As shown in Figure 12, the inner surface of an internal thread can be fully observed by the proposed OTPG. Even so, image distortion and non-uniform illumination occur due to the inherent endoscope structure and the cylindrical geometry of the internal threads. The farther the pixels are from the center of the captured image, the more significant these phenomena become. Thus, only the region with little distortion near the center of each image is used at the various angles and depths. With appropriate control of the sensor and the associated motion unit, a sequence of 150 × 150 pixel low-distortion images could be extracted by rotating the fixture in 15° steps. Under these conditions, the ROI of two successive images is restricted to two non-overlapping neighbors. After the fixture has rotated 360°, the linear actuator raises the fixture to the next level and the rotation is repeated. The procedure continues until images of all the inner surfaces of the internal thread have been captured. Finally, the captured sequence of low-distortion images is used to reconstruct a 2D unwrapped image of size 3600 × 1500. It takes 24 × 10 = 240 sub-images to completely register one internal thread 15.3 mm in diameter and 20 mm in length. Figure 13 shows the reconstructed 2D unwrapped image of an internal thread obtained by the described approach. In this figure, the crest and root of the internal thread correspond to the wide and narrow white bands, respectively. The flanks of the internal thread appear as gray bands. A complete thread pattern is composed of a wide white band, a gray band, a narrow white band, and another gray band, in that order. The inner surface of an internal thread is indeed a type of directional texture comprising repetitive and periodic thread patterns.
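Because the sub-images are non-overlapping, the reconstruction reduces to tiling. The sketch below assembles the 24 × 10 grid of 150 × 150 tiles into the 3600 × 1500 unwrapped image; the function name `assemble_unwrapped` and the nested-list input layout are hypothetical, while the step counts and tile size come from the text.

```python
import numpy as np

ANGLE_STEPS, DEPTH_STEPS, TILE = 24, 10, 150   # 15° steps, 10 depth levels

def assemble_unwrapped(subimages):
    """Stitch the non-overlapping 150x150 low-distortion sub-images into
    the unwrapped image; subimages[d][a] is the tile captured at depth
    level d and angular step a."""
    mosaic = np.zeros((DEPTH_STEPS * TILE, ANGLE_STEPS * TILE), dtype=np.uint8)
    for d in range(DEPTH_STEPS):
        for a in range(ANGLE_STEPS):
            mosaic[d * TILE:(d + 1) * TILE, a * TILE:(a + 1) * TILE] = subimages[d][a]
    return mosaic

# Toy usage: constant-valued tiles so each tile's origin is identifiable.
tiles = [[np.full((TILE, TILE), d * ANGLE_STEPS + a, dtype=np.uint8)
          for a in range(ANGLE_STEPS)] for d in range(DEPTH_STEPS)]
unwrapped = assemble_unwrapped(tiles)
```

Here the 3600-pixel width spans the 360° of rotation and the 1500-pixel height spans the ten depth levels.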


Figure 12: A partial wall image of internal thread

Figure 13: Reconstructed 2D unwrapped image of an internal thread

3.3 OTPG algorithm

The most common defects in internal threads are collapses on the crest, scratches on the flank, and flaws in the root, as shown in Figure 14. A scratch usually appears as a bulge that will cause an internal thread to bind with an external one, while a collapse or flaw appears as a cavity that will decrease the tight fit of a thread. This section focuses on developing automatic inspection software, the OTPG algorithm, to detect such defects embedded in homogeneous thread patterns. An implicit qualitative inspection algorithm is used to detect the embedded defects. The OTPG algorithm includes four major operations: unwrapped image normalization, normalized image segmentation, thread pattern blurring, and defect extraction. These are discussed below.

3.3.1 Unwrapped image normalization

Because the internal thread is at an arbitrary orientation relative to the OTPG fixture during defect inspection, the start point of the tapping process will appear at a random location in the reconstructed unwrapped image. To ensure that the relative position of the global structure of each unwrapped image coincides, a process was developed that automatically reorients the start point of the tapping process in the unwrapped image so that it is always on the right-hand side of the image. The procedure for normalizing the unwrapped image is described below and illustrated in Figure 14.

The key to normalizing an unwrapped image is to find the start point of the tapping process of the internal thread and place it on the right-hand side of the image. This requires a good binary image in which the foreground is composed of the white bands (crests and roots) and the background of the gray bands (flanks). It is also necessary to locate the initial root in the binary image. The initial root is generated early in the thread tapping process. Therefore, if the intensity of each pixel of the binary image is tracked one by one, scanning from left to right and top to bottom, the first foreground element encountered, together with its eight-connected elements, must be the initial root.


Figure 14(a) shows an initial reconstructed unwrapped image of an internal thread. A grayscale closing operator with a structuring element of size k1 × k1 was first applied to bridge the interspaces and fill the holes; the result is shown in Figure 14(b). The grayscale image of Figure 14(b) was then converted to a binary image using a threshold value calculated with Equation (1) to separate the crests and roots from the flanks,

threshold_value = max(G) − k2   (1)

where G is the universal set of gray values of Figure 14(b) and k2 ∈ [1, max(G) − 1] is an offset constant. This produces the binary image of Figure 14(c), in which each eight-connected foreground element can be regarded as a blob, and a row-by-row labeling algorithm is applied from left to right and top to bottom. The row-by-row labeling algorithm is guaranteed to find the initial root of an internal thread because it is the first one produced in the tapping process. The blob of the initial root is labeled with index one, and its corresponding right-bottom coordinate (x*, y*), the start point of the tapping process, is recorded as shown in Figure 14(d). The coordinate (x*, y*) is then mapped onto the unwrapped image of Figure 14(a), and this image is divided into left and right sub-images based on the x* coordinate, as shown in Figure 14(e). Because of the intrinsic cylindrical structure of the internal thread, the unwrapped image can be wrapped around arbitrarily; a normalized image is obtained by wrapping the start point to the right-hand side, as shown in Figure 14(f).
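The binarization of Equation (1) and the scan for the initial root might be sketched as follows; the flood-fill labeling and the value k2 = 40 are illustrative stand-ins for the dissertation's row-by-row labeling algorithm and its actual offset constant.

```python
import numpy as np
from collections import deque

def find_start_point(gray, k2=40):
    """Binarize with threshold_value = max(G) - k2 (Eq. 1), then scan
    left-to-right, top-to-bottom; the first foreground blob found is the
    initial root, whose right-bottom coordinate (x*, y*) is returned."""
    binary = gray >= gray.max() - k2            # crests/roots -> foreground
    visited = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    for y in range(h):                          # top-to-bottom,
        for x in range(w):                      # left-to-right scan
            if binary[y, x] and not visited[y, x]:
                # flood-fill the eight-connected blob of the initial root
                queue, blob = deque([(y, x)]), []
                visited[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and not visited[ny, nx]):
                                visited[ny, nx] = True
                                queue.append((ny, nx))
                x_star = max(p[1] for p in blob)
                y_star = max(p[0] for p in blob)
                return x_star, y_star           # right-bottom coordinate
    return None

# Toy image: a bright "initial root" blob and a later blob.
gray = np.zeros((10, 10), dtype=np.uint8)
gray[2:5, 3:7] = 200    # hypothetical initial root
gray[7:9, 0:2] = 180    # another crest/root blob
start = find_start_point(gray, k2=40)
```

After (x*, y*) is found, the normalization step is a cyclic shift of the image columns so that x* lands on the right-hand edge.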


Figure 14: Unwrapped image normalization procedure: (a) unwrapped image, (b) morphological image, (c) binary image, (d) labeled image where the blob with index one is regarded as the initial root of the thread and its corresponding right-bottom coordinate (x*, y*) is the start point of the tapping process, (e) division of the unwrapped image into left and right sub-images based on the (x*, y*) coordinate, and (f) normalized image generated by wrapping the start point in (e) around to the right-hand side

3.3.2 Normalized image segmentation

Note that the first two and the last thread patterns of an internal thread can be ignored, because the beginning and ending stages of the tapping process form relatively unstable and abnormal patterns. Moreover, these thread patterns are not important for creating a firm fastening with an external thread. Only the remaining thread patterns in the normalized image, named the inspected image, are considered here, and an automatic segmentation process can be developed for them. The normalized image segmentation procedure is explained below and illustrated in Figure 15.

The inner surface of an internal thread comprises repetitive and periodic thread patterns. A scan along the line that extends from top to bottom in the vertical direction of (x*, y*) in Figure 14(d) will touch the initial root first, followed by the initial crest, the second root, the second crest, and so on. This is the ideal case, however, because some noise blobs may interrupt the regularity. Thus, an efficient sub-operation for eliminating the noise blobs is necessary.

Based on the results of Figure 14(d), a binary image in which each blob has a unique labeling index was obtained. However, some noise blobs remain in Figure 14(d). To eliminate them, two properties of each blob were extracted: the area and the angle. The blob areas were fed into the one-dimensional two-group clustering formula of {Liu and Tsai [39]} to find the representatives of the noise and of the crests/roots. Blobs whose areas were closer to the smaller representative were regarded as noise and removed. Then the mode of the angles of the remaining blobs was estimated, with each angle rounded to the second decimal place. Remaining blobs whose angles differed from the mode were regarded as blobs with irregular angles and removed. In sum, noise blobs with a small area or with an irregular angle relative to that of the crests and roots are eliminated.
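The elimination sub-operation can be approximated as below; the simple one-dimensional 2-means split is a stand-in for the Liu-Tsai two-group clustering formula [39], whose exact form is not reproduced here, and the sample areas and angles are invented.

```python
import numpy as np

def two_group_split(values, iters=20):
    """1D two-group clustering (a simple 2-means stand-in for the
    Liu-Tsai formula): returns the two group representatives."""
    v = np.asarray(values, dtype=float)
    c1, c2 = v.min(), v.max()
    for _ in range(iters):
        near1 = np.abs(v - c1) <= np.abs(v - c2)
        c1, c2 = v[near1].mean(), v[~near1].mean()
    return c1, c2

def keep_regular_blobs(areas, angles):
    """Drop blobs whose area clusters with the small (noise)
    representative or whose angle, rounded to two decimals, differs
    from the mode of the remaining blobs."""
    areas = np.asarray(areas, dtype=float)
    small, large = sorted(two_group_split(areas))
    keep = np.abs(areas - large) < np.abs(areas - small)
    rounded = np.round(np.asarray(angles, dtype=float), 2)
    vals, counts = np.unique(rounded[keep], return_counts=True)
    mode = vals[np.argmax(counts)]              # most common angle
    keep &= (rounded == mode)
    return keep

# Toy blobs: three regular crests/roots, two tiny noise blobs,
# and one blob with an irregular angle.
areas = [500, 480, 520, 5, 8, 510]
angles = [0.10, 0.10, 0.10, 0.50, 0.90, 0.30]
mask = keep_regular_blobs(areas, angles)
```

The returned mask keeps only blobs that pass both the area and the angle criteria.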

When the blob elimination sub-operation was applied to Figure 14(d), a clear labeled image was obtained, as shown in Figure 15(a). Then, by scanning along a line that stretches from the top to the bottom of the image in the vertical direction of (x*, y*), the top coordinates of the third- and second-last roots could be recorded. Based on these coordinates, upper and lower segmentation bounds along the horizontal direction could be determined, as shown in Figure 15(b). Finally, the normalized image is trimmed based on these two bounds, as shown in Figure 15(c), and the corresponding inspected image of size m × n is auto-segmented, as shown in Figure 15(d).


Figure 15: Normalized image segmentation procedure: (a) start with a clear labeled image where the noise blobs have been eliminated, (b) find the third- and second-last roots, (c) map the two segmentation bounds onto the normalized image of Figure 14(f), and (d) obtain the final image to be inspected

3.3.3 Thread pattern blurring

At this point, a set of directional textures in the image has been obtained and can be inspected. The DCT-based image restoration technique is well suited for detecting defects in directional textures. Intuitively, the dominant direction of the thread pattern in the inspected image will correspond to orthogonal straight lines through the origin of the DCT spectrum. The lines associated with high-energy frequency components in the spectral domain are eliminated by setting them to zero and transforming back to the spatial domain. This procedure blurs all thread patterns and preserves only local defects, if any were embedded in the inspected image.

3.3.3.1 Inspected image processing by discrete cosine transformation

As shown in Figure 15(d), the thread patterns of the inspected image appear as a type of periodic directional texture. The periodically occurring thread patterns were first characterized according to their frequency components. Several DCT variants have been proposed. These were categorized by Wang into four slightly different transformations: DCT-I, DCT-II, DCT-III, and DCT-IV [40]. The DCT-II was used in this research because of its ability to process images with uneven boundaries. Let f(x, y) be the gray level of the pixel at (x, y) in the inspected image of size m × n. The discrete 2D DCT is,

C(u, v) = α(u) α(v) Σ_{x=0}^{m−1} Σ_{y=0}^{n−1} f(x, y) cos[(2x + 1)uπ / 2m] cos[(2y + 1)vπ / 2n]   (2)

for u = 0, 1, …, m − 1 and v = 0, 1, …, n − 1, where

α(u) = √(1/m) for u = 0, and α(u) = √(2/m) for u = 1, 2, …, m − 1,   (3)

with α(v) defined analogously in terms of n.

As shown in Figure 16(a), the global thread patterns are easily distinguishable as a concentration of high-energy lines in the spectrum that are orthogonal to the direction of thread pattern in Figure 15(d).


The forward DCT was then applied to Figure 15(d), and the resulting spectrum is shown in Figure 16(a); Figure 16(b) shows the corresponding 3D energy plot. The dominant directions in Figure 15(d) were compacted into orthogonal straight lines through the direct current (DC) component, as shown in Figures 16(a) and 16(b). In addition, Figures 16(a) and 16(b) clearly show that the high-energy frequency components are packed around the top-left region. These are all inherent characteristics of the DCT basis functions.

3.3.3.2 High-energy frequency components elimination

Since thread patterns and scratch defects are oriented in the same direction in the inspected image, the thread patterns are mixed with the scratches along the orthogonal lines in the spectrum. Because the orthogonal lines may be due to both thread patterns and scratches, using a band-reject filter to eliminate them entirely is not a good approach. In this dissertation, the wide dynamic range of C(u, v) was first mapped into the narrow range of P(u, v) by a logarithmic transformation, and its intensity was scaled into an eight-bit gray level using

P(u, v) = S[ log(1 + C(u, v)²) ]   (4)

where S(·) is a scaling operation that maps the result into an eight-bit gray level. Some high-energy frequency components in the spectrum image can then be identified in terms of a high-energy threshold k3 ∈ [1, 255] and set to zero,

C̃(u, v) = 0 if P(u, v) ≥ k3, and C̃(u, v) = C(u, v) otherwise,   (5)

where C̃(u, v) denotes the spectrum after elimination. The resulting image is shown in Figure 16(c), and the corresponding 3D energy plot is shown in Figure 16(d).

3.3.3.3 Image restoration using inverse discrete cosine transformation (IDCT)

After eliminating the specific high-energy frequency components, the spectrum image was transformed back into the spatial domain using the IDCT,

f̂(x, y) = Σ_{u=0}^{m−1} Σ_{v=0}^{n−1} α(u) α(v) C̃(u, v) cos[(2x + 1)uπ / 2m] cos[(2y + 1)vπ / 2n]   (6)

where C̃(u, v) is the spectrum after elimination. As shown in Figure 16(e), the homogeneous thread patterns have been blurred and the associated gray levels compressed into a uniform and limited range. Conversely, the gray levels of the scratches on the flank in Figure 15(d) have been retained, while the gray levels of the collapses on the crest and the flaws in the root in Figure 15(d) have been reduced, as also shown in Figure 16(e).
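Subsections 3.3.3.1 through 3.3.3.3 together amount to the following pipeline sketch; the orthonormal matrix form of the DCT-II, the synthetic test pattern, and the value k3 = 200 are illustrative assumptions, not the dissertation's actual parameters.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, with rows carrying the
    alpha(u) factors of Eq. (3)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos((2 * x + 1) * k * np.pi / (2 * n))
    M[0, :] = np.sqrt(1.0 / n)
    return M

def restore(image, k3=200):
    """Thread-pattern blurring: forward 2D DCT-II (Eq. 2), log-scale
    the spectrum to 8 bits (Eq. 4), zero components whose scaled energy
    reaches k3 (Eq. 5), and inverse-transform (Eq. 6)."""
    m, n = image.shape
    Mm, Mn = dct_matrix(m), dct_matrix(n)
    C = Mm @ image @ Mn.T                      # forward 2D DCT-II
    P = np.log1p(C ** 2)                       # log(1 + C^2)
    P = (255 * P / P.max()).astype(np.uint8)   # scaling operation S(.)
    C[P >= k3] = 0                             # suppress high-energy lines
    return Mm.T @ C @ Mn                       # inverse DCT (restoration)

# Synthetic "thread pattern": one DCT basis function along y, constant
# along x, plus a small bright local defect.
y = np.arange(64)[:, None]
pattern = np.tile(np.cos((2 * y + 1) * 8 * np.pi / 128), (1, 64))
image = pattern.copy()
image[30:34, 30:34] += 3.0
restored = restore(image)
```

Because the basis matrices are orthonormal, `Mm.T @ C @ Mn` inverts the forward transform exactly; in `restored`, the periodic pattern is blurred away while the defect region keeps most of its gray-level offset.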

3.3.4 Defect extraction

Since the scratches are relatively brighter and the collapses or flaws are relatively darker than the blurred thread patterns in the restored image, as shown in Figure 16(e), the statistical process control (SPC) binarization method [32] could be used to set the upper and lower control limits for determining defects from the uniform thread patterns. The SPC binarization method can be described by,

UCL = μ + k4 σ and LCL = μ − k4 σ,   (7)

where μ and σ are the mean and the standard deviation, respectively, of the gray levels in the restored image, and k4 is a control constant. If a pixel has a gray level that falls between the upper and the lower control limits, it is shown as white and is considered to be a thread element that should be removed. Otherwise, it is shown as black and is considered to be a defective element that should be retained.
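A minimal sketch of the SPC binarization, assuming control limits of the form mean ± k4 · (standard deviation); the constant k4 = 3 and the toy restored image are hypothetical choices for illustration.

```python
import numpy as np

def spc_binarize(restored, k4=3.0):
    """SPC binarization: pixels within the control limits are uniform
    thread background (white, 255); pixels outside the limits are
    defect elements (black, 0)."""
    mu, sigma = restored.mean(), restored.std()
    ucl, lcl = mu + k4 * sigma, mu - k4 * sigma
    out = np.where((restored >= lcl) & (restored <= ucl), 255, 0)
    return out.astype(np.uint8)

# Toy restored image: uniform background with one bright defect pixel.
restored_toy = np.ones((10, 10))
restored_toy[5, 5] = 50.0
binary = spc_binarize(restored_toy, k4=3.0)
```

Only the outlier pixel falls outside the control limits and is marked black as a defect.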