
ORIGINAL ARTICLE

Automated SMD LED inspection using machine vision

Der-Baau Perng · Hsiao-Wei Liu · Ching-Ching Chang

Received: 12 December 2010 / Accepted: 7 April 2011 / Published online: 21 April 2011 © Springer-Verlag London Limited 2011

Abstract Light-emitting diodes (LEDs) are used in many different applications. However, some LED defects are unavoidable in large-volume fabrication and taping processes. These defects may include missing components, incorrect orientations, inverse polarity, mouse bites, missing gold wires, and surface stains. Human visual inspection has traditionally been used in LED-packaging factories; however, it is subjective, time-consuming, and does not yield consistent inspection results. This paper proposes a machine vision system, combined with an automatic system-generated inspection region (IR) method, to inspect two types of LED surface-mounted devices (SMDs). Experimentation revealed that the proposed automatic inspection method could successfully detect defects with up to 95% accuracy for both types (Types 1 and 2) of SMD LEDs. The online inspection speed averaged under 0.3 s per image.

Keywords LED · Vision inspection · Machine vision · Defect detection

1 Introduction

Increasingly, light-emitting diodes (LEDs) have been replacing conventional lamps owing to their excellent characteristics such as high efficiency, fast response time, long life, and environmental friendliness [1]. LEDs are now widely used in many applications, including car lights, general illumination, street lamps, and backlights in liquid crystal displays [2]. A surface-mounted device (SMD) LED generally comprises a base, an LED chip, two pads, and a circuit pattern that can be formed on the bottom and on the inner circumferential surface of the LED chip-mounted recess [3, 4]. SMD LEDs can be categorized into two types, Types 1 and 2, as shown in Fig. 1. The orientation regions, highlighted by the dashed rectangles, are designed to distinguish the electronic polarity of the LED. For a Type 2 LED, there are three gold wires and a chip in the phosphor region. The quality of the phosphor region influences the LED's illuminating efficiency. However, the lighting phosphor region of a Type 1 LED cannot be seen, owing to its physical placement, when it is packaged during the taping process.

Both types of LEDs have to undergo fabrication and taping processes. The fabrication process for an SMD LED involves die attachment, wire bonding, encapsulating, curing, and punching. After fabrication, the SMD LED is placed on tape to protect it from moisture adsorption during handling or transport. However, several defects can be produced when the SMD LEDs are packaged on the tape. These defects include missing components, wrong orientations, inverse polarity, mouse bites, missing gold wires, and surface defects. Figure 2 shows batches of one non-defective and five defective samples for both types of LED. Sample No. 1 is the non-defective LED in Fig. 2a and b. Both types of LED share four common defects, caused by a missing component, wrong orientation, inverse polarity, and surface stains, corresponding to sample Nos. 2, 3, 4, and 6 of Fig. 2a and b, respectively. However, among the five defective samples, one was different for the two types of LED (No. 5). Sample No. 5 of Fig. 2a shows a mouse bite in the middle of the right-hand side, while sample No. 5 of Fig. 2b has three gold wires missing. The defects of sample Nos. 2, 3, and 4 in both Fig. 2a and b were produced during the taping process, while the defects of sample Nos. 5 and 6 were produced during the fabrication process.

D.-B. Perng (*)
Department of Multimedia and Game Science, Yu-Da University, Miao-Li 36143, Taiwan, People's Republic of China
e-mail: perng@ydu.edu.tw

H.-W. Liu · C.-C. Chang
Department of Industrial Engineering and Management, National Chiao Tung University, Hsin-Chu 30010, Taiwan, People's Republic of China

DOI 10.1007/s00170-011-3338-y

Currently, packaging factories rely on operators or quality assurance (QA) experts to inspect for defects before the packaged LEDs are shipped. Human visual inspection is subjective, time-consuming, and cannot assure consistent inspection quality. In mass production, packaging factories still lack a standard specification to detect and recognize flaws; consequently, the inspection results are inconsistent. In contrast, an automatic optical inspection (AOI) system can effectively identify defects and relieve human inspectors of tedious tasks [5–7]. Machine vision in particular can elevate productivity, improve quality management, and offer competitive advantages [6–8]. At the same time, it can provide highly accurate and robust inspection results.

Fig. 1 Illustration of the structure of Types 1 and 2 LEDs

Fig. 2 One non-defective and five defective samples collected on carrier tape with six bins: a Type 1 LED; b Type 2 LED. In both (a) and (b), sample No. 1 is non-defective, sample No. 2 has a missing component, sample No. 3 has a wrong orientation, and sample No. 4 has inverse polarity. Sample No. 5 of (a) has a mouse bite in the middle of the right-hand side, while sample No. 5 of (b) has three gold wires missing. Sample No. 6 of (a) and (b) has surface stains


However, an AOI system has not yet been explored for SMD LED packaging inspection. The objective of this paper was to develop an AOI system to inspect the above defects of Types 1 and 2 LEDs.

This paper is organized as follows. In Section 2, related studies concerning AOI in industrial applications and the use of image processing are reviewed. Details of the proposed algorithms are presented in Section 3. The AOI hardware and experimental results are given in Section 4. Concluding remarks and suggestions for further work are presented in Section 5.

2 Related research

Defect inspection using AOI systems in industry has been a popular topic recently [9–15]. For printed circuit board (PCB) inspection, Wu et al. [9] proposed a two-stage PCB automated inspection system. They used a direct subtraction and an elimination procedure to detect defects and then used three indices to classify the type of each detected defect. Perng et al. [10] proposed a vision inspection machine for surface-mounted devices on a PCB. The concept of a virtual charge-coupled device (CCD) and a three-tier inspection scheme was devised to simplify the PCB inspection process. Rau and Wu [11] discussed using an AOI approach to detect defects on PCB inner layers. Jiang et al. [12] proposed a background remover-based inspection method for solder defects. Various industrial AOI applications have been proposed for internal thread defects [13], float glass fabrication [14], vessels and fiber quality in pulp production [15], and wire bonding defects [16]. These studies developed prototype hardware systems and used different image processing methods to assist human inspectors in identifying defects.

Gray-image segmentation methods are based on either intensity discontinuity or intensity homogeneity in a region [17]. Approaches based on intensity discontinuity detect abrupt changes in the gray value of adjacent image pixels. Approaches based on homogeneity detect smooth or homogeneous characteristics of neighboring image pixels. The process of merging pixels or splitting a region is usually governed by a homogeneity criterion according to certain features, such as gray level, texture, or color [18–20]. However, there is no universal segmentation algorithm that can be adopted by AOI systems and applied across different domains. For an AOI system in an industrial domain, the approach is to focus on certain defect-prone regions of an object as inspection regions (IR), or regions to be inspected, and to treat the remaining regions as background. The IR varies depending on the product shape and the quality inspector. In practice, a flexible method of specifying the IR is helpful for defect inspection.

Fig. 3 Illustration of the proposed AOI system for LED defect inspection: a configuration diagram; b implemented prototype

Fig. 4 Flowchart of the proposed pre-training phase for LED defect inspection


3 Approach for inspecting LED defects

3.1 Hardware system

The hardware system implemented to auto-inspect the two types of LEDs is shown in Fig. 3. The structure of the proposed AOI system is shown in Fig. 3a. Major components of the proposed computer vision system included a front-illuminating white LED light source with a white diffusing filter, a CCD camera, a telecentric lens, a motion control mechanism containing two reels, and a mounting tape, as shown in Fig. 3b.

3.2 Algorithms for LED defect inspection

A two-phase algorithm, including a pre-training phase and a testing phase, was proposed to inspect for LED defects. The details of the proposed inspection algorithm are discussed in the next subsection.

3.2.1 Manually specified IRs in the pre-training phase

The pre-training phase specified the IR and obtained the related parameters. Figure 4 shows a flow chart for the pre-training phase. First, a non-defective (typical) image was loaded and denoted as I(x,y). The IR on the typical image must be manually specified by a QA expert; hence, the image must first be auto-normalized. To obtain the bright region of I(x,y), Otsu's auto-thresholding method [21] and a closing operation [22] were used to filter out the background. The center point P of the LED region was calculated and used as the reference point of rotation.
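As a concrete illustration of this preprocessing step, the sketch below implements Otsu's auto-thresholding and centroid extraction in Python/NumPy. It is a minimal sketch, not the paper's Halcon implementation: the morphological closing step is omitted for brevity, and all function and variable names are illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level that maximizes the between-class
    variance (Otsu's auto-thresholding method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    total_sum = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w0 = cum = 0.0
    for t in range(256):
        w0 += hist[t]          # cumulative pixel count of class 0
        cum += t * hist[t]     # cumulative intensity sum of class 0
        if w0 == 0 or w0 == total:
            continue
        m0 = cum / w0                          # mean of class 0
        m1 = (total_sum - cum) / (total - w0)  # mean of class 1
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def led_region_and_center(gray):
    """Binary mask of the bright LED region and its
    center point P as (column, row)."""
    mask = gray > otsu_threshold(gray)
    rows, cols = np.nonzero(mask)
    P = (cols.mean(), rows.mean())
    return mask, P
```

In a production pipeline, a closing operation (dilation followed by erosion) would be applied to the mask before computing P, as the paper describes, to fill pinholes left by thresholding.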

Next, to determine the rotation angle, we calculated the spatial moments, including the second-order row moment, second-order column moment, and second-order mixed moment of the LED region. The equation of the approximated ellipse of the LED region was then calculated. Since the coefficients of the equation were determined, the rotation angle θ could be calculated and defined as the orientation angle between the major axis of the ellipse and the column axis [23]. Based on the center reference point P and the rotation angle θ, we could normalize the typical image to I(x′,y′) by using the affine transform [24] with bilinear interpolation [25], as in Eq. (1):

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{pmatrix}. \qquad (1)$$

Figure 5 is a demonstration of the normalized typical images of Types 1 and 2 LEDs. Figure 5, a1 and b1, respectively, shows the typical images. The processed images obtained by applying Otsu's auto-thresholding method and the closing operation, with the center reference point P, are shown in Fig. 5, a2 and b2. Then the orientation angle θ could be calculated between the major axis of the ellipse and the column axis, as shown in Fig. 5, a3 and b3. As shown in Fig. 5, a4 and b4, the normalized typical images could be obtained by Eq. (1), as compared with the typical images shown in Fig. 5, a1 and b1.

Fig. 5 Demonstration of typical image normalization for both types of LEDs: a1 and b1, respectively, are the typical images of the Types 1 and 2 LEDs; a2 and b2 show the center reference point P on the binary image obtained by applying Otsu's auto-thresholding method and the closing operation; a3 and b3 show the rotation angle θ calculated based on the column axis and the major axis of the ellipse; a4 and b4 are the final normalization results corresponding to (a1) and (b1) by using Eq. (1)

Fig. 6 Illustration of the IR on I(x′,y′): a sample of a Type 1 LED with two IRs, having an orientation region indicated with a dashed line and a mouse bite region delineated with a solid line; b sample of a Type 2 LED with four IRs, having orientation regions indicated with a dashed line and three gold wire regions delineated with a solid line
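The moment-based orientation estimate and the rotation of Eq. (1) can be sketched as follows. This uses the common equivalent-ellipse formula θ = ½·atan2(2μ11, μ20 − μ02), which may differ in detail from the formulation cited as [23], and it omits the bilinear image resampling; the code is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def orientation_angle(mask):
    """Orientation theta of the region's equivalent ellipse,
    computed from second-order central moments of a binary mask
    (rows = y, columns = x)."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = ((xs - x0) ** 2).mean()        # second-order column moment
    mu02 = ((ys - y0) ** 2).mean()        # second-order row moment
    mu11 = ((xs - x0) * (ys - y0)).mean() # second-order mixed moment
    # Angle between the ellipse's major axis and the x-axis
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

def rotate_points(xy, theta):
    """Apply the rotation of Eq. (1) about the origin to an
    (n, 2) array of (x, y) points."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return xy @ R.T
```

To normalize a full image rather than a point set, the inverse of this rotation would be applied to each output pixel coordinate and the gray value sampled with bilinear interpolation, as the paper specifies.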

It was important to have the IR on I(x′,y′) evaluated by a QA expert in order to ensure that the proposed algorithm was flexible enough to inspect the two different types of LEDs. Note that two IRs must be specified on I(x′,y′) for a Type 1 LED, whereas four IRs must be specified on I(x′,y′) for a Type 2 LED, as illustrated in Fig. 6. The specified IRs containing the orientation regions of both types of LED are boxed with a dashed line, the mouse bite region of a Type 1 LED is delineated with a solid line, and the gold wire regions of a Type 2 LED are also boxed with a solid line. In this research, the proposed system provided a resolution of 8.33 and 10.53 μm per pixel for the Types 1 and 2 LEDs, respectively.

Finally, the associated parameters of the specified IR, including the vector of the upper-left points, (U1, V1), the vector of the lower-right points, (U2, V2), and the center reference point P (see Fig. 7), were saved for subsequent processes. The pre-training phase focused on enhancing the flexibility of the proposed inspection in order to assess the different defect patterns present on the two types of LEDs. The obtained parameters of the IR were used in the testing phase.

3.2.2 Testing phase

After executing the pre-training phase, the defects were inspected. Both taping and fabrication defects were evaluated during the testing phase. A flowchart of the proposed inspection method is given in Fig. 8. The testing image is denoted as T(x,y). The details of the testing phase are discussed below.

Fig. 7 Illustration of a specified IR and its associated parameters on I(x′,y′) for a Type 1 LED

Fig. 8 Flowchart for the proposed testing phase for LED defect inspection


3.2.3 Image preprocessing

Because an object in the LED image has a lighter gray value than the background, in this stage we also applied Otsu's auto-thresholding method [21] and the closing operation [22] to the images. Figure 9, a1–a6, shows the testing images with missing components, with wrong orientations, and without defects. Figure 9, b1–b6, shows the processed brighter regions corresponding to Fig. 9, a1–a6, respectively. The areas of the brighter regions were calculated and denoted as α. Because the value of α for the cases of a missing component or a wrong orientation was less than that for the non-defective cases, we used the threshold k1 as a criterion. If the value of α on the testing image was less than k1, then the testing image was regarded as defective. If the value of α on the testing image was larger than k1, then we proceeded to the next inspection process.
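The stage-one screening described above amounts to a simple area comparison; a minimal sketch follows, with the bright area and k1 as defined in the text and the function name purely illustrative.

```python
import numpy as np

def area_check(mask, k1):
    """Stage-one screening: a testing image whose bright area
    falls below the threshold k1 is flagged as defective
    (missing component or wrong orientation); otherwise it is
    passed on to the next inspection stage."""
    area = int(np.count_nonzero(mask))
    return ("defect", area) if area < k1 else ("pass", area)
```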

3.2.4 Automatic system-generated IR method for image normalization and segmentation

The operations of normalization and segmentation were applied to the testing images with inverse polarity, mouse bites, missing gold wires, and surface defects, and to the testing images without defects. The testing images were normalized by rotating them by angle θ counterclockwise with respect to the center reference point; the results are denoted as normalized images T(x′,y′). The center reference point and orientation angle θ were obtained by the same process as in the pre-training phase. The center reference point in T(x′,y′) was denoted as p′.

Fig. 9 a1–a3 and a4–a6 are the input images of a Type 1 LED and a Type 2 LED, respectively, having a missing component, wrong orientations, and no defects. The brighter regions in (b1–b6) are obtained by applying Otsu’s auto-thresholding method and the closing operation to (a1–a6)

Fig. 10 Illustration of the method used to derive IRs by the system-generated IR method: a two specified IRs (dashed rectangles) on the normalized typical image with its center reference point P; b normalized testing image with its center reference point p′; c computed shift Δμ and Δν between point P and point p′ on the normalized testing image; d specified IRs (dashed-line rectangles) and derived IRs (solid-line rectangles); e segmented IRs, including IRorientation and IRmousebite

A system-generated IR method was adopted in order to have the IR segmented automatically. First, we retrieved the saved parameters of I(x′,y′) from the pre-training stage. We obtained the specified rectangles in I(x′,y′) by using Eq. (2):

$$\mathrm{rect}(\mathrm{Point}_{\text{upper-left}}, \mathrm{Point}_{\text{lower-right}}) = \mathrm{rect}[(U_1, V_1), (U_2, V_2)]. \qquad (2)$$

Next, according to P in I(x′,y′) and p′ in T(x′,y′), we calculated the shift distances Δμ and Δν in the x and y directions. Finally, the shift distances were used to segment the corresponding IR in T(x′,y′) using Eq. (3):

$$\mathrm{rect}(\Delta\mu, \Delta\nu) = \mathrm{rect}[(U_1 + \Delta\mu, V_1 + \Delta\nu), (U_2 + \Delta\mu, V_2 + \Delta\nu)]. \qquad (3)$$

Let the segmented IRs in T(x′,y′) of a Type 1 LED and a Type 2 LED be denoted as IRorientation, IRmousebite, and IRgoldwire. Figure 10 illustrates the process of deriving IRorientation and IRmousebite in the testing image of a Type 1 LED. Figure 10a shows two specified IRs on the normalized typical image with its center reference point P; Fig. 10b shows a normalized testing image with its center reference point p′. Figure 10c gives both center reference points, P of I(x′,y′) and p′ of T(x′,y′), as well as the shift distance between the two points. According to the shift distance, we can obtain the corresponding IR in the normalized image. The rectangles with dashed lines in Fig. 10d are the IRs manually specified by a QA expert, whereas the rectangles with solid lines are the IRs automatically determined by using the system-generated IR method. Figure 10e shows the segmented IRorientation and IRmousebite.
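Eqs. (2) and (3) reduce to shifting the expert-specified rectangle by the offset between the two center points; a minimal sketch (names illustrative):

```python
def derive_ir(spec_rect, P, p_prime):
    """Shift a rectangle specified on the typical image I(x',y')
    to the corresponding IR on the testing image T(x',y'),
    following Eqs. (2) and (3).
    spec_rect = ((U1, V1), (U2, V2)); P and p_prime are the
    center reference points of the typical and testing images."""
    (U1, V1), (U2, V2) = spec_rect
    du = p_prime[0] - P[0]  # shift in the x direction
    dv = p_prime[1] - P[1]  # shift in the y direction
    return ((U1 + du, V1 + dv), (U2 + du, V2 + dv))
```

For example, a rectangle specified at ((10, 10), (20, 30)) on the typical image, whose center is at (50, 50), maps to ((13, 8), (23, 28)) on a testing image whose center lies at (53, 48).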

3.2.5 LED defect extraction

The defective features were extracted from the segmented IRs. First, we calculated the mean intensity of the orientation region by using Eq. (4):

$$\beta = \frac{\sum_r \sum_c IR_{\text{orientation}}(r, c)}{r \cdot c}, \qquad (4)$$

where IRorientation(r, c) is the gray value at coordinate (r, c) in IRorientation. Generally, since the β value of an inverse-polarity image was larger than that of a non-defective one, we established a threshold parameter k2 to detect an inverse polarity defect.

Second, the area of the mouse bite region of a Type 1 LED was calculated, and the shortest length of the three gold wires of a Type 2 LED was calculated. For a Type 1 LED, to determine the area of the mouse bite region, we subtracted IRmousebite of T(x′,y′) from IRmousebite of I(x′,y′). If there was any evidence of a mouse bite appearing in the subtracted region, this area was measured and denoted as γ1. For IRgoldwire, we obtained the three gold wire regions by using Otsu's auto-thresholding method and then calculated the length of each of the three gold wires in these regions. Among the three lengths, the shortest was defined as γ2. A missing gold wire defect can occur in any of the three gold wires, so we used the shortest length γ2 as a reference to inspect for a missing gold wire defect. Figure 11 illustrates the processed images of the mouse bite and gold wire regions for both types of LEDs. Figure 11, a1 and b1, shows the normalized images. Figure 11, a2 and b2, shows the segmented mouse bite and gold wire regions, respectively, which were derived using the system-generated IR method. The black regions of Fig. 11, a3 and b3, are the mouse bite and gold wire regions, respectively. Since the area of the mouse bite region and the shortest length of the three gold wire regions were extracted, the thresholds for these defects could be set as a specification. In this research, the area of the mouse bite region, γ1, was defined as less than 10 pixels, and the shortest length of gold wire, γ2, was defined as greater than 40 pixels.

Fig. 11 Demonstrations of mouse bite region extraction for a Type 1 LED and gold wire region extraction for a Type 2 LED. a1 and b1 are the normalized images. The regions in (a2) and (b2) are the specified IRs. The black region obtained from (a2) in the IR is the mouse bite region in (a3). The black regions obtained from (b2) in the IRs are the gold wire regions in (b3)

Fig. 12 Process of applying the DoG filter to a quasi-surface defect image of a Type 1 LED
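The two extracted features, β of Eq. (4) and the shortest wire length γ2, can be sketched as below. Note that the paper measures the geometric length of each gold wire; the pixel-count approximation of wire length used here is an illustrative assumption, not the paper's measure.

```python
import numpy as np

def mean_intensity(ir):
    """Beta of Eq. (4): mean gray value over the orientation IR.
    Compared against threshold k2 to flag inverse polarity."""
    return float(np.mean(ir))

def shortest_wire_length(wire_masks):
    """Gamma_2: shortest of the three gold-wire lengths, here
    approximated by the pixel count of each binary wire region
    (an assumption; the paper measures actual wire length)."""
    return min(int(np.count_nonzero(m)) for m in wire_masks)
```

A testing image would then be flagged as defective when β exceeds k2, when the mouse bite area γ1 exceeds its specification, or when γ2 falls below the minimum wire-length specification.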

3.2.6 Development of a control chart for surface quality

Because human inspection lacks an explicit specification for surface defects, we used a control chart with an upper limit to assess surface quality. First, we highlighted the defective region on the surface by using the difference of Gaussians (DoG) filter [22]:

$$\mathrm{DoG}(x, y) = \frac{1}{2\pi\sigma_1^2} e^{-\frac{x^2 + y^2}{2\sigma_1^2}} - \frac{1}{2\pi\sigma_2^2} e^{-\frac{x^2 + y^2}{2\sigma_2^2}}, \qquad (5)$$

where the suggested standard deviation ratio was set at 1.6:1 [22]. Figure 12 illustrates the result of applying a DoG filter to a quasi-surface defect image of a Type 1 LED. Then, we extracted the stains from the processed image by using a threshold, k3. Next, the area of the stains in the binary image was calculated and used as a parameter in the quality control chart. Let the calculated area of the surface stain of the testing image be δ. Because the area of a surface defect will be somewhat larger than that of a non-defective surface, the concept of statistical process control (SPC) was used to set an upper control limit (UCL) [26] to distinguish a surface defect image from a non-defective image. The SPC concept is described by Eq. (6):

$$UCL = \mu + k_4 \sigma, \qquad (6)$$

where μ and σ represent the mean and standard deviation, respectively, of the area of the surface stain, and k4 is a control constant. Generally, μ and σ can be determined from the non-defective training samples. In Section 4, different values of k4 will be evaluated experimentally and assessed against the desired degree of control.
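A sketch of the DoG kernel of Eq. (5) (with the suggested σ2 = 1.6·σ1) and the UCL of Eq. (6) follows; the kernel size and the use of a sampled convolution kernel, rather than the paper's filtering implementation, are illustrative choices.

```python
import numpy as np

def dog_kernel(size, sigma1, ratio=1.6):
    """Difference-of-Gaussians kernel of Eq. (5) sampled on a
    size x size grid, with sigma2 = ratio * sigma1 (ratio 1.6:1
    as suggested in the text)."""
    sigma2 = ratio * sigma1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    g1 = np.exp(-r2 / (2 * sigma1 ** 2)) / (2 * np.pi * sigma1 ** 2)
    g2 = np.exp(-r2 / (2 * sigma2 ** 2)) / (2 * np.pi * sigma2 ** 2)
    return g1 - g2

def upper_control_limit(stain_areas, k4):
    """UCL of Eq. (6), with mu and sigma estimated from the
    stain areas of non-defective training samples."""
    mu, sigma = np.mean(stain_areas), np.std(stain_areas)
    return mu + k4 * sigma
```

Convolving the surface image with this kernel, thresholding at k3, and comparing the resulting stain area δ against the UCL reproduces the decision rule described above: δ above the UCL flags a surface defect.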

4 Experimentation and discussion

In Section 4.1, we describe some preliminary experiments used to evaluate the impact of different values of the four parameters k1, k2, k3, and k4. In Section 4.2, we describe and discuss our experimental results. Our algorithm was programmed in the Visual Basic 2008 environment combined with the Halcon 9.0 image processing software [27].

4.1 Sensitivity analysis for parameter setting

Our proposed testing-phase algorithm contains four parameters that influence the inspection outcome: the area threshold of the brighter region, k1; the mean-intensity threshold of the orientation region, k2; a binary threshold, k3, for the processed DoG images; and a control constant, k4, for the defective surface area. Two experimental phases were carried out to establish k1 and k2. Each used 15 training images containing missing component, incorrect orientation, inverse polarity, and non-defective samples for both types of LEDs (Section 4.1.1). Another 15 non-defective images were used to establish the threshold value k3, μ, and σ (Sections 4.1.2 and 4.1.3). Fifty testing images (30 images of surface defects and 20 images of non-defective samples) were used to gauge the effect of the control constant k4 for both types of LEDs (Section 4.1.3).

Fig. 13 Evaluated index with different k1 area threshold values for both types of LED

4.1.1 Determining the threshold values k1 and k2

To inspect the taping defects of missing component, wrong orientation, and inverse polarity, two phases of experiments were conducted. In phase 1, we obtained the ranges of the thresholds k1 and k2 from the first set of training images. In phase 2, we calculated an index by varying the threshold value within that range on the second set of training images. A threshold value could then be flagged using the higher evaluated index. The process of identifying the two thresholds k1 and k2 was the same. Each of the 15 images of missing component, wrong orientation, and non-defective samples was used to obtain the range of k1. Similarly, each of the 15 images of inverse and non-inverse polarity was used to obtain the range of k2. Then, the other set of images of missing component, wrong orientation, inverse polarity, and non-defective samples was used to determine the threshold values of k1 and k2.

In phase 1, because a non-defective image has a larger bright region than a defective image, we computed the area of the extracted bright region to detect missing component and wrong orientation defects. In addition, because the gray value in the orientation region of a non-defective sample image is lower than that of an inverse polarity image, we computed the mean intensity of the orientation region to detect inverse polarity defects. We then obtained the limited ranges of area and mean intensity from the first training set of images. These limited ranges were used to set the minimum value of the non-defective group and the maximum value of the defective group based on the dispersion between these two groups.

Fig. 15 Area of the extracted region generated with different values of k3 for both types of LED

Fig. 16 Experimental results for different k3 thresholds ranging from −1 to 3 for both types of LED images that were processed using the DoG filter

In phase 2, the index evaluated from the second set of training images was used to obtain a stable threshold. We calculated the index by gradually changing the threshold values of k1 and k2 over identical intervals:

$$\text{Evaluated index (EI)} = \frac{\varpi - \overline{\varpi}}{6} \times 100\%, \qquad (7)$$

where ϖ and ϖ̄ are the numbers of images successfully and unsuccessfully detected, respectively, under the given threshold value, 6 is the total number of testing images, and 6 = ϖ + ϖ̄. The index was used to examine whether the threshold values of k1 and k2 were reliable.

Figures 13 and 14 illustrate the effect of the various thresholds of k1 and k2 for the Types 1 and 2 LEDs. There is a marked plateau in both figures. Generally, the higher the evaluated index, the more reliable the threshold. Figure 13 indicates that the optimal index had a value of 100% in the stable interval (125,000–165,000 for the Type 1 LED and 140,000–215,000 for the Type 2 LED). The k1 parameter was determined by choosing the median value of the interval, that is, k1 = 145,000 pixels for the Type 1 LED and k1 = 177,500 pixels for the Type 2 LED. In Fig. 14, considering the mean intensity threshold k2, the optimal index fell between 31 and 43 for the Type 1 LED and between 55 and 57 for the Type 2 LED. The thresholds for k2 were then flagged at 37 for the Type 1 LED and 56 for the Type 2 LED.

Table 1 μ and σ values derived from samples of both types of LED

        Type 1 LED        Type 2 LED
        μ       σ         μ       σ
        530     140.5     1455    337

Fig. 17 SPC inspection rate of LED surface defects with respect to the control constant k4
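The evaluated index and the plateau-midpoint selection rule described above can be sketched as follows, assuming the (ϖ − ϖ̄)/total reading of Eq. (7); the function names are illustrative.

```python
def evaluated_index(detected_ok, detected_bad):
    """EI of Eq. (7), assuming it reads (successes - failures)
    over the total number of testing images, as a percentage."""
    total = detected_ok + detected_bad
    return (detected_ok - detected_bad) / total * 100.0

def pick_threshold(candidates, ei_values):
    """Choose the median of the interval where EI peaks,
    mirroring the plateau-midpoint rule used for k1 and k2."""
    best = max(ei_values)
    plateau = [t for t, ei in zip(candidates, ei_values) if ei == best]
    return (plateau[0] + plateau[-1]) / 2
```

Applied to the Type 1 LED plateau of 125,000–165,000 reported above, this midpoint rule reproduces the selected value k1 = 145,000.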

4.1.2 Effects of k3 on highlighting the defective region

Applying the threshold k3 to the image that was processed by the DoG filter of Eq. (5) highlighted the surface stains in the images. A different threshold value affected the area of the extracted image. Figure 15 shows the change in the area of the extracted image after we applied various k3 values to the 15 processed images for both types of LED. It is clear that both types of LED behaved similarly. Figure 16 shows visually how the images for both types of LED changed with different values of k3. When the value of k3 was equal to −1, the extracted region consisted of most of the LED. Some of the extracted region appeared as false stains, increasing the extracted area to close to the entire area of the LED. When the k3 value was greater than 0, the surface stains were gradually highlighted, as shown progressively in Fig. 16. The trend for the average extracted area started to level off at k3 = 1 and continued to decrease with increasing k3. When the threshold value k3 was equal to 3, the extracted defective area shrank to close to zero, as shown in Figs. 15 and 16. The extracted area related to the region of interest could be effectively identified with k3 = 2 for both types of LED, as shown in Figs. 15 and 16.

Table 2 SMD LED inspection results

Items                 Inspection rate (%)
                      Type 1 LED          Type 2 LED
Accuracy rate         93.8% (980/1045)    98.0% (1004/1024)
Misdetection rate     6.1% (58/946)       0.95% (9/949)
False alarm rate      7.1% (7/99)         14.7% (11/75)

Fig. 18 Examples of misdetections and false alarms for Types 1 and 2 LEDs: a1–h1, normalized images; a2–h2, results of applying the surface defect method to (a1–h1), respectively

4.1.3 Effect of the control constant k4

Because the area of an LED surface defect affects the illumination quality of the LED, a control chart with an upper limit, as in Eq. (6), was used to discriminate the defective surface area of the LED from non-defective areas. To obtain a reliable control constant, a supervised two-stage procedure was followed. In the first stage, 15 non-defective samples for each type of LED were used to calculate the associated μ and σ values of Eq. (6); the results are shown in Table 1. In the second stage, the derived values of μ and σ were fixed, and the inspection rates for different values of k4 were determined for 50 other samples (30 defective images and 20 non-defective images) for each of the two types of LED.

Generally, the control constant k4 affected the severity of the associated control limit. A smaller value of k4 resulted in a tighter limit that could trigger false alarms in a surface quality inspection. On the contrary, too large a value of k4 caused defects to be missed. Figure 17 shows the detection outcome based on 50 samples each of the Types 1 and 2 LEDs. For the Type 1 LED, the number of false alarms decreased gradually as the k4 value increased to 3.2. For the Type 2 LED, the number of false alarms decreased more rapidly, and no more occurred when k4 was greater than 1.4. It is evident that for both types of LED, the rate of missed detections and the rate of false alarms varied inversely with each other as k4 changed. Satisfactory k4 limits were in the range of 3.2–3.8 for the Type 1 LED and 1.2–1.8 for the Type 2 LED. The recommended values of k4 are, therefore, 3.5 and 1.5 (the midpoints of these ranges) for the Types 1 and 2 LEDs, respectively.

4.2 Experimental results and discussion

In this subsection, experimental results are presented that confirm the performance of the proposed inspection algorithm for online SMD LED inspection. A number of samples of both types of LED were used to evaluate the accuracy and speed of the proposed AOI system. The test set comprised 1,045 sample images of the Type 1 LED (946 defective images and 99 non-defective images) and 1,024 sample images of the Type 2 LED (949 defective images and 75 non-defective images). The parameters were as follows: k1 = 145,000, k2 = 37, and UCL = 1,021.75 for the Type 1 LED; k1 = 177,500, k2 = 55, and UCL = 1,960.5 for the Type 2 LED; and k3 = 2 for both types of LED.

The experiments revealed that the average accuracy of the proposed inspection system was more than 95%. The details are listed in Table 2. This implies that the vision inspection system not only reduced the number of samples to be inspected by humans but also reduced the inspection time and labor cost. The inspection times for the Types 1 and 2 LEDs were 0.228 and 0.225 s per image, respectively. Concerning the misdetections for both types of LED, Fig. 18, a1–d2, illustrates cases in which the surface defective area was above the UCL, while Fig. 18, e1–h2, illustrates cases in which the surface defective area was below the UCL. The unsuccessful detections reflect inconsistencies between machine and human inspection. Any ambiguous or implicit defect must therefore be rechecked by a human inspector.

5 Conclusions

An AOI system was proposed to inspect the defects of two types of SMD LEDs. The proposed AOI system could successfully detect defects with up to 95% accuracy, and the inspection speed was less than 0.3 s per image, fast enough to work synchronously with an LED production line. The number of LED-based products has grown rapidly as new types of LEDs have become available [28]. It is, therefore, worthwhile to develop new systems to detect defects in various types of LEDs.

References

1. Chang YC, Ou CJ, Tsai YS, Juang FS (2009) Nonspherical LED packaging lens for uniformity improvement. Opt Rev 16(3):323–325. doi:10.1007/s10043-009-0059-7
2. Reinhard E, Khan EA, Akyüz AO, Johnson G (2010) Color imaging: fundamentals and applications. AK Peters, Cambridge, p 518
3. Orton KR (1994) Surface-mount LED. Available via Google Patents. http://www.google.com.tw/patents?id=3A0aAAAAEBAJ. Accessed 10 Dec 2010
4. Watanabe S, Ogawa Y (2006) Surface mount LED. Available via Google Patents. http://www.google.com.tw/patents/about?id=xP65AAAAEBAJ. Accessed 10 Dec 2010
5. Newman TS, Jain AK (1995) A survey of automated visual inspection. Comput Vis Image Underst 61(2):231–262. doi:10.1006/cviu.1995.1017
6. Malamas EN, Petrakis EGM, Zervakis M, Petit L, Legat JD (2003) A survey on industrial vision systems, applications and tools. Image Vis Comput 21(2):171–188. doi:10.1016/S0262-8856(02)00152-X
7. Chang CY, Lin SY, Jeng MD (2009) Application of two Hopfield neural networks for automatic four-element LED inspection. IEEE Trans Syst Man Cybern Part C Appl Rev 39(3):352–365. doi:10.1109/TSMCC.2009.2013817
8. Golnabi H, Asadpour A (2007) Design and application of industrial machine vision systems. Robot Comput-Integr Manuf 23(6):630–637. doi:10.1016/j.rcim.2007.02.005
9. Wu WY, Wang M-JJ, Liu CM (1996) Automated inspection of printed circuit boards through machine vision. Comput Ind 28(2):103–111. doi:10.1016/0166-3615(95)00063-1
10. Perng DB, Liu CP, Chen YC, Chou CC (2002) Advanced SMD PCB vision inspection machine development. 15th IPPR Conf Comput Vis Graph Image Process, pp 311–317
11. Rau H, Wu CH (2005) Automatic optical inspection for detecting defects on printed circuit board inner layers. Int J Adv Manuf Technol 25(9–10):940–946. doi:10.1007/s00170-004-2299-9
12. Jiang BC, Wang CC, Hsu YN (2007) Machine vision and background remover-based approach for PCB solder joints inspection. Int J Prod Res 45(2):451–464. doi:10.1080/00207540600607184
13. Perng DB, Chen SH, Chang YS (2010) A novel internal thread defect auto-inspection system. Int J Adv Manuf Technol 47(5–8):731–743. doi:10.1007/s00170-009-2211-8
14. Peng XQ, Chen YP, Yu WY, Zhou Z, Sun G (2008) An online defects inspection method for float glass fabrication based on machine vision. Int J Adv Manuf Technol 39(11–12):1180–1189. doi:10.1007/s00170-007-1302-7
15. Carvalho P, Araujo H, Dourado A (1999) An automatic optical sensor for vessels and fibres quality inspection in pulp production. Comput Ind Eng 37(1–2):355–358. doi:10.1016/S0360-8352(99)00092-3
16. Perng DB, Chou CC, Lee SM (2007) Design and development of a new machine vision wire bonding inspection system. Int J Adv Manuf Technol 34(3–4):323–334. doi:10.1007/s00170-006-0611-6
17. Cheng HD, Jiang XH, Sun Y, Wang J (2001) Color image segmentation: advances and prospects. Pattern Recognit 34(12):2259–2281. doi:10.1016/S0031-3203(00)00149-7
18. Munoz X, Freixenet J, Cufi X, Marti J (2003) Strategies for image segmentation combining region and boundary information. Pattern Recognit Lett 24(1–3):375–392. doi:10.1016/S0167-8655(02)00262-3
19. Revol C, Jourlin M (1997) A new minimum variance region growing algorithm for image segmentation. Pattern Recognit Lett 18(3):249–258. doi:10.1016/S0167-8655(97)00012-3
20. Haralick RM, Shapiro LG (1985) Image segmentation techniques. Comput Vis Graph Image Process 29(1):100–132. doi:10.1016/S0734-189X(85)90153-7
21. Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern SMC-9(1):62–66
22. Gonzalez RC, Woods RE (2008) Digital image processing, 3rd edn. Prentice Hall, Upper Saddle River, pp 635–639
23. Haralick RM, Shapiro LG (1992) Computer and robot vision, vol 1. Addison-Wesley, Massachusetts, pp 73–75
24. Shapiro LG, Stockman GC (2001) Computer vision. Prentice Hall, Upper Saddle River, p 330
25. Rosenfeld A, Kak AC (1982) Digital picture processing, 2nd edn, vol 2. Academic Press, New Jersey, pp 33–36
26. Montgomery DC (2005) Introduction to statistical quality control, 5th edn. Wiley, New York, p 153
27. Steger C, Ulrich M, Wiedemann C (2008) Machine vision algorithms and applications. Wiley, Weinheim
28. Chang CC (2010) An AOI system for side-view SMD-LED defect inspection. Master thesis, Department of Industrial Engineering and Management, National Chiao Tung University, Taiwan

Fig. 2 One non-defective and five defective samples collected on carrier tape with six bins: a Type 1 LED; b Type 2 LED
Fig. 4 Flowchart of the proposed pre-training phase for LED defect inspection
Fig. 6 Illustration of the IR as I(x′,y′): a sample of a Type 1 LED with two IRs, with an orientation region indicated by a dashed line and a mouse bite region delineated by a solid line; b sample of a Type 2 LED with four IRs having orientation…
Fig. 8 Flowchart of the proposed testing phase for LED defect inspection
