行政院國家科學委員會專題研究計畫 成果報告

太陽能矽晶片微隱裂自動光學檢測系統開發 研究成果報告(精簡版)

計畫類別:個別型
計畫編號:NSC 99-2221-E-216-036-
執行期間:99 年 08 月 01 日至 100 年 07 月 31 日
執行單位:中華大學機械工程學系
計畫主持人:邱奕契
共同主持人:游坤明
計畫參與人員:碩士班研究生-兼任助理人員:黃彥儒、徐曟洧
報告附件:出席國際會議研究心得報告及發表論文
處理方式:本計畫可公開查詢

中華民國 100 年 09 月 19 日








行政院國家科學委員會補助專題研究計畫 ■成果報告 □期中進度報告

太陽能矽晶片微隱裂自動光學檢測系統開發

計畫類別:■個別型計畫 □整合型計畫
計畫編號:NSC 99-2221-E-216-036-
執行期間:99 年 08 月 01 日至 100 年 07 月 31 日
執行機構及系所:中華大學機械工程學系
計畫主持人:邱奕契 教授
共同主持人:游坤明 教授
計畫參與人員:徐曟洧、黃彥儒

成果報告類型(依經費核定清單規定繳交):■精簡報告 □完整報告

本計畫除繳交成果報告外,另須繳交以下出國心得報告:
□赴國外出差或研習心得報告
□赴大陸地區出差或研習心得報告
■出席國際學術會議心得報告
□國際合作研究計畫國外研究報告

處理方式:除列管計畫及下列情形者外,得立即公開查詢
□涉及專利或其他智慧財產權,□一年□二年後可公開查詢

中華民國 100 年 09 月 17 日


太陽能矽晶片微隱裂自動光學檢測系統開發

Development of Automatic Optical Inspection System for Discovering Invisible Micro Crack of Silicon Solar Wafer

中文摘要

檢測多晶矽太陽能晶片中的不可見微裂紋並不是一件容易的事,這是因為其所特有的異方向性紋理背景。此困難可從兩方面來說明:首先,取像設備必須看得到隱藏於晶片內部的微裂紋;其次,軟體程式必須將微裂紋從影像中抽取出來。本研究首先建立一套能夠攫取到微裂紋影像的近紅外線取像系統,解決了第一個問題。接著我們以區域成長法為基礎,發展出有能力從攫取所得影像中抽取出微裂紋的瑕疵偵測演算法。實驗結果顯示,本研究所提出之微裂紋檢測系統除了可以有效偵測出微裂紋外,也可以用來檢查矽晶片是否有玷污、針孔、異物、及裂痕等瑕疵。系統之整體精確度為 99.85%,所提供之優點包括傑出的裂紋偵測敏感度、察覺隱裂的能力、以及低成本。

關鍵詞:微裂紋、瑕疵偵測、區域成長、太陽能晶片、近紅外線取像。

Abstract

Discovering invisible micro cracks in a multi-crystalline silicon solar wafer image is not an easy task because of its heterogeneously textured background. The difficulty is twofold. First, invisible micro cracks must be made visible to imaging devices. Second, an image-processing sequence capable of extracting micro cracks from cracked images must be developed. To solve these problems, a near-infrared imaging system was first set up to capture images of invisible micro cracks. Once the invisible micro cracks could be seen, a region-growing-based flaw detection algorithm was developed to extract micro cracks from the captured images. The experimental results showed that the proposed micro-crack inspection system is effective in detecting micro cracks. In addition, it is also applicable to inspecting silicon solar wafers for stains, pinholes, inclusions, and macro cracks. The overall accuracy of the defect detection system is 99.85%. The advantages afforded by the system include excellent crack detection sensitivity, the ability to detect hidden subsurface micro cracks, and low cost.

Keywords: Micro Crack, Flaw Detection, Region Growing, Solar Wafer, NIR Imaging.

1. 前言

對矽晶片而言,無論是可見微裂紋(微顯裂)或不可見微裂紋(微隱裂),如果未能及時被發現,極有可能在後續製程中因受力而成長成巨觀裂紋(即一般的裂痕),甚至破片。根據統計,矽晶片在製程中的破片率大約是 2%,而矽晶片在太陽能電池的成本結構中佔了近 66%。可見,未能儘早將具有微裂紋的矽晶片偵測出來,所導致的成本損失是相當驚人的。此外,即使具微裂紋的矽晶片並未導致破片並順利製成電池,該電池的光電轉換效率也會降低,甚至影響整片太陽能面板的效率。

圖 1、具微裂紋之多晶矽太陽能晶片影像:圖中的兩個微裂紋實際上是隱藏於晶片內部的不可見微裂紋,然而在本研究建立的 NIR 設備取像下,不可見微裂紋變得可見了。

值得一提的是,目前大約 86%的太陽能光電是由結 晶矽太陽能電池所產生的,因此檢測多晶矽太陽能晶片 是否具有微裂紋的重要性是不言可喻的。再者,除微裂 痕外仍有許多類型的瑕疵需要被檢出。例如電池中的異 物可能導致電池的短路,因此雜質的檢出也很重要。

根據裂紋的大小,裂紋可歸類為巨觀裂紋或微觀裂 紋,寬度小於 30 µm 的裂紋通常稱為微裂紋。此外,根 據裂紋出現的位置,裂紋又可分成可見裂紋或不可見裂 紋,出現在表面者一般稱為可見裂紋,隱藏於內部者即 稱為不可見裂紋(或稱為隱裂)。雖然隱裂存在於矽晶片 內部,但並非無法被察覺,事實上紅外線取像技術已普 遍用來檢測內部瑕疵[1-2]。圖 1 所示為利用近紅外線取 像技術所取得之多晶矽太陽能晶片之微裂紋影像。造成 微裂紋的因素是多方面的,當晶片受到外力作用時,在 晶片內部很容易造成裂紋,同時厚度愈薄的晶片愈容易 產生裂紋,甚至會導致破片[3]。雷射切割過程當中很容 易在晶片內部引發裂紋[4]。許多深層之裂紋是在矽晶錠 切片(ingot cutting)時所產生的[5]。

2. 文獻探討

偵測裂紋的方法相當多,將聲、光、熱、射線(X-射線、Gamma-射線)等導入待測物並觀察其反應,都可以偵測裂紋是否存在。最常見的裂紋檢測方法包括染料檢測法、渦電流檢測法[6]、聲學檢測法[7-8]、超音波檢測法[9]、輻射熱像圖(RHT)檢測法[10-11]、掃描聲波顯微鏡(SAM)檢測法[12-13]、光致螢光(PL)檢測法[14-15]、電致發光(EL)檢測法[16-17]、及共振超音波振動(RUV)檢測法[18-19]。將深色染料塗抹在矽晶片上,透過觀察可很容易得知是否有裂紋,可惜染料檢測法是破壞性檢測法。渦電流檢測法可用來檢測導電性物體,透過感應電流的差異得知是否有淺層裂紋,或包含空洞或雜質等瑕疵。對深層裂紋可採用超音波進行檢測。一般來說,具有裂紋之晶片其頻率反應與正常晶片會有差異;此外,隨著裂紋長度的增加,頻率及振幅的大小也隨之加大[20]。輻射熱像圖檢測法是將熱導入矽晶片或電池使其溫度上升,隨後再以熱影像攝影機(thermal camera)取像,最後透過熱影像的分析判斷是否有裂紋存在。超音波熱像圖(ultrasonic thermography)檢測法[21]是將高功率之超音波脈衝導入待測物使其產生振動,溫度開始上升。如果待測物有裂紋,則裂紋處溫度的上升會比其它正常位置來得快,因此檢查熱影像溫度的分佈,即可得知是否有裂紋以及裂紋的位置。超音波碰到不同介質時所產生的反射波會有所不同,SAM 檢測法即是利用此原理,將高功率的超音波脈衝打在待測物上,並將接收到的反射波根據其強弱轉換成不同明暗度的影像。最後透過影像分析,即可得知待測物是否含有裂紋、空洞、及汽泡等瑕疵。

3. 研究目的

如前所述,將具有微裂紋的太陽能晶片在其進入電池製程前攔截下來,對電池之品質及生產成本相當重要,然而探討如何偵測微裂紋的文獻卻相當少。Fu 等人[22]應用直方圖均質化、二值化、及 LoG 邊界偵測等技術進行裂紋的偵測。Dallas 等人所提出之 RUV 微裂紋檢測法,檢測一片晶片的時間少於 2.0 秒,是相當有潛力的方法。Tsai 等人[23]採用 LED 照明進行取像,並運用異方向性非線性擴散處理技術檢查微裂紋。該研究對 95 片太陽能晶片進行實驗,所獲得的卓越結果證明了該方法的有效性。此外,檢測一張 1000 × 1000 影像只需 0.28 秒,因此也是個有效率的方法。然而受限於該方法使用可見光進行取像,因此只能檢測可見之表面微裂紋。

本研究提案的主要目的是發展太陽能晶片檢測技術,欲檢測之標的物為完成蝕刻及拋光後之多晶矽太陽能晶片(Multi-Crystalline Silicon Solar Wafer)。如前所述,太陽能矽晶片所需檢測的項目相當多,有些項目之檢測極具挑戰性,本研究將僅針對出現在太陽能晶片表面之玷污、孔洞、裂痕及隱藏於太陽能晶片內部之異物及微隱裂進行檢測。有鑑於微隱裂是最重要而且是最具挑戰性的檢查項目之一,因此本研究的重心將著重在微隱裂的檢測。值得一提的是,只要微隱裂檢查得出來,其他瑕疵也可一併被檢查出來。

4. 研究方法與設備

有些接觸型檢測法,例如染料檢測法及聲學檢測法,在檢測時可能會造成矽晶片的損傷。另一方面,有些檢測法檢測費時,例如 RHT 檢測法及 SAM 檢測法。在滿足線上檢測及不得損傷晶片的前提下,本研究決定採用非接觸式之自動光學檢測技術進行檢測。欲從複雜之多晶矽太陽能晶片中找出不可見之微裂紋並不是容易的事。本研究分兩步驟解決此困難:微裂紋偵測及微裂紋抽取。微裂紋偵測是關鍵的第一步,目標是要能夠攫取到微裂紋的影像。第二步微裂紋抽取的目標是從攫取所得的微裂紋影像中,透過適當的演算法將可能的微裂紋抽取出來,並判斷待測晶片是否有微裂紋。值得一提的是,唯有微裂紋確實出現在攫取到的影像中,第二步微裂紋抽取才有意義。換言之,攝影機看得見微裂紋是本研究成功與否的關鍵。

4.1 微裂紋偵測

微裂紋偵測的目標是建立一套能夠攫取到多晶矽太陽能晶片內部微裂紋影像的取像設備。在研究的過程當中,我們分別測試過三種不同的取像設備。圖 2 所示為第一取像系統,是由 USB 攝影機、鏡頭、及白光環形光源所構成,屬於可見光取像系統。由於大多數的微裂紋都是隱藏於晶片內部,有必要使用近紅外線(Near Infrared,NIR)取像設備予以揭露。矽晶片在波長介於 750 ~ 1400 nm 之近紅外線照射下會呈現透明狀,換言之,使用 NIR 可以穿透矽晶片看到內部的微裂紋。有鑑於此,本研究使用德州儀器(Texas Instruments)所生產的 MC-781P 近紅外線攝影機、鏡頭、及 MORITEX 公司的 MHAB-100W-IR 鹵素燈源,建立了第二取像系統(圖 3)。鹵素燈源的發光波長為 1100 nm,因此第二取像系統為近紅外線取像系統。第三取像系統是由 NIR 攝影機、鏡頭、及 LED 背光板所構成。背光板之尺寸為 180 mm × 165 mm,由 528 顆波長落在 940 nm 附近的近紅外線 LED 排列成 24 × 22 之陣列所構成,因此也是近紅外線取像系統。

圖 2、第一取像系統。


圖 3、第二取像系統。

圖 4、第三取像系統。

圖 5、使用第一及第三取像系統所攫取之影像:(a) 及 (b) 為第一取像系統所取得之正面及背面影像;(c) 為第三取像系統所取得之影像。

圖 6、使用第二及第三取像系統攫取所得之影像:(a)與(b) 分別代表使用第二取像系統所取得之正面及背面影像;(c) 與(d)為使用第三取像系統所取得之正面及背面影像。

圖 5 所示為使用取像系統一與三攫取所得之影像。從圖中可以清楚地看出,使用取像系統一取像,無論是正面影像(圖 5(a))或背面影像(圖 5(b)),都無法看到內部的微裂紋。有鑑於可見光無法穿透晶片內部,本研究於是改採近紅外線取像系統。圖 6(a)與 6(b)是使用取像系統二取得之正反面影像,也是無法看到隱藏於內部的微裂紋。值得一提的是,圖中顏色較深的不規則圖案,是人工加註在晶片正反面的記號,目的是為了標示微裂紋的位置。如圖所示,正面影像只能看到標示在正面的記號;背面影像只能看到標示在背面的記號。換言之,無論從正面取像或背面取像,都無法攫取到隱藏於晶片內部的微裂紋。此結果說明了使用近紅外鹵素燈的第二取像系統依然不適用,究其原因主要是由光纖導管所傳送出之近紅外線不僅照度不足,面積也無法覆蓋攝影機系統的視野。為了提供亮度較高且面積較廣的照明,本研究自製如圖 4 所示之陣列形近紅外線 LED 背光板。搭配自製 NIR 背光照明之第三取像系統,可順利攫取太陽能晶片內部之微裂紋影像,如圖 5(c)、6(c)、及 6(d)所示。近紅外線可以穿透太陽能矽晶片,因此無論從正面或背面取像,都可以同時看到正面與背面的記號以及內部的微裂紋。換言之,第三取像系統可以將隱藏於晶片內部的不可見微裂紋變成可見的微裂紋,讓後續微裂紋抽取變得可能。

4.2 微裂紋抽取

單晶矽晶片影像中的微裂紋可以很容易地被抽取出來,因為單晶矽晶片具有單純的背景。然而對多晶矽晶片影像而言,由於背景是由形狀、大小及方向均不相同的晶格所構成(請參考圖 5),影像相當複雜。事實上,想要以肉眼分辨何者為微裂紋已經有些難度,更遑論透過軟體程式將其抽取出來。此外,微裂紋的幾何形態及灰階特徵也不相同,以圖 5(c)所示的四個微裂紋為例,其形狀及大小都不相同。由此可見本研究所具有的挑戰性。

影像分割(segmentation)是最常被用來將物體與背景分離的影像處理技術之一。Zhang 與 Gerbrands[24]指出,至 1994 年為止已發表的影像分割法超過 1000 種以上,而且據信從那時候到現在,這個數目也許已倍增。影像分割法可分成閥值法(thresholding)、聚積法(clustering)、以區域為基之分割法(region-based segmentation)、及以邊界為基之分割法(edge-based segmentation)。閥值法[25]是最普遍且最有效率的分割法,可進一步分成全域閥值法或局部閥值法。全域閥值法是使用一個門檻值分割整張影像。實驗結果顯示,我們所測試的方法[26-28]中,並沒有任何一個方法可以成功地應用在所有的微裂紋影像。至於局部閥值法則是在不同的子區域使用不同的閥值。一般來說,小區域比較容易達到照明均勻的要求,因此局部閥值法的主要優點是可以克服照明不均的問題。Niblack 所提出的分割法[29]可說是局部閥值法的典型例子。以區域為基之分割法泛指那些利用區域特性(灰階平均值、灰階標準差、…),將影像分割成許多具有最大同質度子區域的方法。常見以區域為基的分割法包括區域成長(region growing)[30-32]、區域分裂法(region splitting)[33]、區域合併(region merging)[34]、分裂與合併(split-and-merge)[35-36]、及分水嶺(watershed)[37-38]。

為了滿足線上高速檢測的要求,使用一序列簡單快 速卻能夠有效將微裂紋抽取出來的處理流程,是系統能 否實用的關鍵。基於上述原則,本研究採用以區域成長 法為基礎的微裂紋抽取演算法。

圖 7、以區域成長為基之微裂紋抽取演算法流程圖。

圖 8、以區域成長為基之微裂紋偵測結果:(a)原始多晶矽 晶片影像;(b)梯度運算後之結果影像;(c)區域成長後之 結果影像;(d)閉合處理、物件標號、及尺寸濾波後之結 果影像,其中偵測所得之微裂紋以紅色表示。

4.3 區域成長為基之微裂紋抽取法

以區域成長為主要處理程序之微裂紋抽取法,其處理流程請參考圖 7。本方法首先使用梯度運算銳化測試影像。接下來使用區域成長法讓相鄰且性質相近的像素點聚集在一起,成為具有類似性質(例如灰階值)的區域。完成區域成長後,施以型態學中的閉合運算將近距離的區域連接起來,成為同一個區域。最後則利用物件標號、物件分析、及尺寸濾波技術判斷待測影像中是否含有瑕疵。在接下來的幾個小節中,我們將以圖 8(a)所示的原始多晶矽太陽能晶片影像作為處理範例,輔助說明流程中幾個主要處理程序及其效果。以下就微裂紋抽取流程進行詳細的說明。

4.3.1. 梯度運算

梯度運算的目的是強化邊界,亦即強化區隔物體及背景的像素點。邊界點的特徵通常是以該點的梯度大小 p 及梯度方向 φ 來表示。對連續之二維影像 f(x,y) 來說,各點的梯度大小及梯度方向可利用下式得到:

p = √[(∂f/∂x)² + (∂f/∂y)²];  φ = tan⁻¹[(∂f/∂y) / (∂f/∂x)]. (1)


本研究使用 Prewitt 梯度運算子對不連續之數位影像進行邊界點強化的處理。Prewitt 用以強化水平及垂直方向邊界點的迴旋積遮罩 Px 與 Py 如下所示:

     | -1  0  1 |           | -1 -1 -1 |
Px = | -1  0  1 |      Py = |  0  0  0 |
     | -1  0  1 |           |  1  1  1 |
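式(1)與上述 Prewitt 遮罩之運算,可用下列 Python 程式作一極簡示意(示意性寫法,函式名稱為筆者假設,非本系統原始程式):

```python
import numpy as np
from scipy.ndimage import convolve

def prewitt_gradient(image):
    """以 Prewitt 遮罩計算梯度大小 p 與梯度方向 phi(式 1 之離散近似)。"""
    img = image.astype(np.float64)
    px = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)   # 垂直邊界響應
    py = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float)   # 水平邊界響應
    gx = convolve(img, px)
    gy = convolve(img, py)
    magnitude = np.hypot(gx, gy)      # p = sqrt(gx^2 + gy^2)
    direction = np.arctan2(gy, gx)    # phi = atan(gy / gx)
    return magnitude, direction
```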

4.3.2. 區域成長

區域成長是由種子像素點(seed pixel)開始,檢查其四個近鄰或八個近鄰,將與種子像素具有相似性質的近鄰加入目前種子像素所屬的區域中。是否相似可以用灰階、色彩、紋理、或其組合來評估。成長過程一般是以迭代的方式重複進行,直到沒有新的像素點加入為止。當一個區域的成長停止後,再挑選下一個種子像素並重複上述之成長過程,直到所有的種子像素都已完成成長為止。最後再利用區域合併演算法,將鄰近之相似區域合併成為一個較大的區域。區域成長演算法的種類相當多,並沒有哪一個演算法絕對比另一個演算法來得好;事實上,應該選擇哪一種演算法,與應用領域的影像特性息息相關。

如圖 8(b)所示,經過梯度運算後微裂紋的邊界比其他周圍晶格的邊界來得亮,因此我們將灰階值大於平均灰階值 µgrad,且差值超過 Ts 的像素點當作是種子像素,其中 µgrad 為梯度運算後之結果影像的平均灰階值;Ts 稱為種子門檻(seed threshold)。在成長的過程當中,與種子像素相鄰,而且與 µreg 的灰階差的絕對值小於 Tm 者,則將該近鄰加入種子像素所屬的區域,其中 µreg 代表種子像素所屬區域目前的灰階平均值;Tm 稱為合併門檻值(merge threshold)。本研究所使用之 Ts 及 Tm 分別設定在 70 及 3。值得一提的是,雖然上述兩個門檻值是經過測試後所得到的,但是在光源沒有顯著改變的情形下,此二數值是不需改變的。梯度影像(圖 8(b))經區域成長後所得到的結果影像如圖 8(c)所示。
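上述之種子選取(灰階高於平均值 µgrad 加 Ts)與成長準則(與區域平均 µreg 之灰階差小於 Tm)可示意如下。此為依本文文字描述所作之示意性實作,實作細節為筆者假設,並非本系統之原始程式:

```python
import numpy as np
from collections import deque

def grow_regions(grad, ts=70, tm=3):
    """對梯度影像 grad 進行種子式區域成長,回傳標號影像(0 為背景)。
    種子:灰階值大於全圖平均值加 ts 的像素;
    成長:四近鄰與區域目前平均值之灰階差小於 tm 者加入該區域。"""
    h, w = grad.shape
    labels = np.zeros((h, w), dtype=np.int32)
    seeds = np.argwhere(grad > grad.mean() + ts)
    next_label = 0
    for sy, sx in seeds:
        if labels[sy, sx]:
            continue                          # 已被其他區域吸收
        next_label += 1
        labels[sy, sx] = next_label
        total, count = float(grad[sy, sx]), 1  # 維護區域灰階平均 mu_reg
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not labels[ny, nx]:
                    if abs(grad[ny, nx] - total / count) < tm:
                        labels[ny, nx] = next_label
                        total += float(grad[ny, nx])
                        count += 1
                        queue.append((ny, nx))
    return labels
```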

4.3.3. 形態處理

膨脹(⊕)及侵蝕(ϴ)是形態處理的兩個基本運算,至於斷開(ο)及閉合(•)則是由膨脹及侵蝕所衍生出來的運算:斷開是先侵蝕一次再膨脹一次,閉合則是先膨脹一次再侵蝕一次。當 A 影像先被結構元素 B 侵蝕,所得到的結果再被結構元素 B 膨脹,此操作即稱為斷開,因此斷開也可以表示成 (AϴB)⊕B。當 A 影像先被結構元素 B 膨脹,所得到的結果再被結構元素 B 侵蝕,此操作即稱為閉合,因此閉合也可以表示成 (A⊕B)ϴB。斷開的主要作用包括消除小島、打斷窄橋及平滑物體輪廓;閉合同樣具有平滑物體輪廓的功用,此外還能填補小洞及小缺口。形態處理可以根據四近鄰或八近鄰的原則進行。針對可能的瑕疵,本研究先施以四近鄰膨脹,之後再施以八近鄰侵蝕。上述之非標準閉合運算,具有將物體與物體之間的小縫隙填補起來的效果。膨脹及侵蝕分別所使用之結構元素 B1(四近鄰)及 B2(八近鄰)如下所示:

     | 0 1 0 |           | 1 1 1 |
B1 = | 1 1 1 |      B2 = | 1 1 1 |
     | 0 1 0 |           | 1 1 1 |
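上述「先以四近鄰元素膨脹、再以八近鄰元素侵蝕」之非標準閉合運算,可用 SciPy 示意如下(示意性寫法,非本系統原始程式):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

# 結構元素:B1 為四近鄰(十字形),B2 為八近鄰(3x3 全 1)
B1 = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]], dtype=bool)
B2 = np.ones((3, 3), dtype=bool)

def close_gaps(mask):
    """非標準閉合:先以 B1 膨脹、再以 B2 侵蝕,填補物體間的小縫隙。"""
    return binary_erosion(binary_dilation(mask, structure=B1), structure=B2)
```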

4.3.4 物件分析

尺寸濾波前,有必要先進行物件標號與物件分析,以得知影像中每一個物件的尺寸。物件標號的目的是要知道影像中有多少個俗稱為 blob 的物件,並賦予每一個物件一個唯一的標號(label)。一個物件可以看成是由一群相連通的像素點所構成,至於像素與像素間是否相連通,通常是根據四近鄰或八近鄰的連通準則來認定。物件標號常用的方法是連通元件標號(connected-component labeling)[39-40]。完成標號後,屬於同一物件的像素點會獲得相同的標號。物件分析即是利用這些標號資訊,獲得尺寸、形心、方向、矩量、離心率及 L-S 因子等特徵。
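物件標號與物件分析可用 SciPy 之 ndimage 模組示意如下;此處僅示範尺寸與形心兩項特徵(示意性寫法,非本系統原始程式):

```python
import numpy as np
from scipy import ndimage

def label_and_measure(mask):
    """以八近鄰連通準則標號,回傳標號影像與各物件之尺寸、形心。"""
    structure = np.ones((3, 3), dtype=bool)      # 八近鄰連通
    labels, n = ndimage.label(mask, structure=structure)
    index = range(1, n + 1)
    sizes = ndimage.sum(mask, labels, index=index)         # 各物件之像素數
    centroids = ndimage.center_of_mass(mask, labels, index=index)  # 各物件形心
    return labels, sizes, centroids
```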

4.3.5 尺寸濾波

為了降低假警報的機率,本研究採用尺寸濾波(size filtering)影像技術,忽略面積太小或太大的物件,或者 直接將其移除掉。一般說來,面積太小的物件是雜訊所 造成。本研究是以設定面積下限值的方式,找到面積小 於此下限值的物件,並將其組成像素之灰階值直接以背 景像素的灰階值取代(相當於移除掉)。本研究所設定之 面積下限值為 36 個像素(大約是一個半徑 50µm 小圓點 的面積)。圖 8(d)所示為微裂紋檢測最終得到的結果影 像,從圖中可以發現部份小雜訊已經被移除了。
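以面積下限值移除小物件的尺寸濾波可示意如下(36 像素之下限值取自本文;函式名稱與細節為筆者假設,非本系統原始程式):

```python
import numpy as np
from scipy import ndimage

def size_filter(mask, min_area=36):
    """移除面積小於 min_area(預設 36 像素,約為半徑 50 µm 小圓點)的物件。"""
    labels, n = ndimage.label(mask, structure=np.ones((3, 3), dtype=bool))
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = areas >= min_area        # 僅保留面積達下限值的標號
    return keep[labels]                 # 小物件之像素視同背景(移除)
```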

圖 9、微裂紋檢測結果。


圖10、微裂紋自動光學檢測雛型機。

4.4 性能分析

圖 9 展示另外 12 張微裂紋影像及其檢測結果。從顯示之結果來看,本研究所提出之微裂紋偵測法在抽取微裂紋的表現上相當不錯。就檢測速度而言,本系統在 2.4 GHz CPU 及 2 GB RAM 的個人電腦上,檢查一張 640 × 480 大小的影像需時約 0.18 秒,但是檢查一片 6 吋矽晶片需時 90 秒。因此,檢測費時是本系統的主要缺點。

5. 結果與討論

本研究建立一套有效的 NIR 取像設備,可以順利攫取可見光取像系統所看不到的微隱裂,並成功整合軟體程式與移動平台,完成如圖 10 所示之自動化光學檢測系統雛型機。本系統可手動操作檢測,也可以在放置矽晶片後全自動檢測。圖 11 所示為檢測時系統所顯示的畫面。檢測系統之硬體設備包括 NIR 取像設備及 XY 平台兩大部份。NIR 取像設備是由 MC-781P 攝影機、TL-10M telecentric 鏡頭、自製半球型 NIR LED 照明(圖 12)、及安裝在個人電腦內的影像攫取卡所組成。自製之半球形近紅外線照明是由 32 顆 940 nm 的 LED 所組成。XY 平台的 X-軸是用來移動承載太陽能晶片的檢測平台;Y-軸是設計來同步移動攝影機及照明設備。MC-781P 攝影機配備有解析度為 780 × 488、對近紅外線敏感之 2/3 英吋影像感知器。實驗時攝影機系統所設定之工作距離為 37.0 mm、視野為 8.3 mm × 6.2 mm、解析度為 640 × 480。在此設定下,影像解析度為 13.4 µm/pixel,檢查一片 6 吋矽晶片需取像 500 次,檢測時間約 90 秒。

圖11、微裂紋自動光學檢測之使用者介面。

本研究使用四片具有微裂紋之六吋多晶矽太陽能晶片進行系統測試。我們以全自動檢測的方式對這 4 片樣本進行檢測,總共攫取 2000 張影像,其中 33 張影像具有瑕疵。檢測結果如表 1 所示,除了 3 張微裂紋影像沒有被偵測出來外,其餘 30 張瑕疵影像都成功地被檢查出來;更重要的是,並沒有假警報(false alarm)的情形發生。從 99.85% 的高精確度來看,本檢測系統確實有效。

為了探究導致漏測(false negative)的原因,我們仔細檢視漏測的 3 張微裂紋影像,結果發現其共同點為微裂紋都是細長形的。由於寬度小,在經過型態學的侵蝕處理後就被侵蝕掉了,導致檢測軟體無法察覺它們的存在。本研究所發展之微裂紋檢測系統,主要是用來檢查微裂紋(含微顯裂及微隱裂),然而結果顯示,本系統也能檢測出其他一些經常出現在晶片表面或內部的瑕疵,包括玷污、針孔、異物、及裂痕等,如圖 13 所示。

圖 12、自製之 NIR LED 照明。


圖 13、由上而下分別代表玷污、針孔、裂痕、及異物之檢測結果,左圖為原始瑕疵影像;右圖為以紅色點標示檢出瑕疵之結果影像。

6. 結論與建議

在太陽能矽晶片進入電池製程前,逐片檢查以確保沒有微裂紋是重要的。本研究在多晶矽太陽能晶片微裂紋檢測設備的自主發展上向前邁進了一步。雖然本研究自行設計組裝之半球型近紅外線光源成本不到新台幣 5000 元,卻可以讓看不見的微裂紋在近紅外線攝影機的取像下變得可見。為了滿足線上即時檢測的要求,本研究發展出一套簡單的微裂紋檢測法,可以有效地將微裂紋從攫取所得之影像中抽取出來。整體而言,本研究是成功的,雖然仍有許多值得進一步改進的地方。

本研究目前所發展之雛型機能夠找出寬度 13.4 µm 以上的微裂紋,就檢測解析度而言是足夠的,但是每片長達 1.5 分鐘的檢測時間,則是令人無法接受的。幸好,透過更換高解析度攝影機的方式,即可解決檢測速度慢的致命缺點。舉例來說,採用 Sensovation 公司出品的 coolSamBa HR-830 NIR 攝影機,在 8.3 百萬像素的高解析度下,只要取像一次就可完成六吋矽晶片的檢測。值得注意的是,本文所提出之檢測系統仍在發展中,目前檢測軟體只能判斷矽晶片是否有瑕疵,並無法分辨瑕疵的類別。因此,本研究接下來的工作,就是利用真圓度、離心率、灰階平均值等特徵,對偵測所得之瑕疵進行分類。

參考文獻

[1] J.R. Hodor, H.J.J. Decker and J. Barney, "Infrared Technology Comes to State-of-the-art Solar Array Production," in Infrared Technology XIII: Proceedings of the Meeting, San Diego, CA, Aug. 18-20, 1987, pp. 22-29.

[2] Y.C. Chiou and W.C. Li, "Flaw Detection of Cylindrical Surfaces in PU-packing by Using Machine Vision Technique," Measurement, vol. 42, no. 7, pp. 989-1000, 2009.

[3] G. Coletti, C.J.J. Tool and L.J. Geerligs, "Mechanical Strength of Silicon Wafers and Its Modeling," in 15th Workshop on Crystalline Silicon Solar Cells and Modules: Materials and Processes, Colorado, USA, Aug. 7-10, 2005, pp. 117-120.

[4] Y. Hayafuji, T. Yanada and Y. Aoki, "Laser Damage Gettering and Its Application to Lifetime Improvement in Silicon," J. Electrochem. Soc., vol. 128, no. 9, pp. 1975-1980, 1981.

[5] Y.K. Park, M.C. Wagener, N. Stoddard, M. Bennett and G.A. Rozgonyi, "Correlation Between Wafer Fracture and Saw Damage Introduced During Cast Silicon Cutting," in 15th Workshop on Crystalline Silicon Solar Cells and Modules: Materials and Processes, Colorado, USA, Aug. 7-10, 2005, pp. 178-181.

[6] G. Zenzinger, J. Bamberg, W. Satzger and V. Carl, "Thermographic Crack Detection by Eddy Current Excitation," Nondestructive Testing and Evaluation, vol. 22, no. 2-3, pp. 101-111, 2007.

[7] C. Hilmersson, D.P. Hess, W. Dallas and S. Ostapenko, "Crack Detection in Single-crystalline Silicon Wafers Using Impact Testing," Applied Acoustics, vol. 69, no. 8, pp. 755-760, 2007.

[8] K. Yagi, H. Kanishi and Y. Kawagoe, "Substrate Crack Inspection Method, Substrate Crack Inspection Apparatus, and Solar Battery Module Manufacturing Method," US Patent 7,191,656 B2, Mar. 20, 2007.

[9] K. Reber and M. Beller, "Ultrasonic In-line Inspection Tools to Inspect Older Pipelines for Cracks in Girth and Long-seam Welds," Pigging Products and Services Association, 2003.

[10] M. Pilla, F. Galmiche and X. Maldague, "Thermographic Inspection of Cracked Solar Cells," in Proceedings of SPIE, vol. 4710, pp. 699-703, 2002.

[11] J.W. Devitt, E. Bantel, J.M. Sparks and J.S. Kania, "Apparatus and Method for Detecting Fatigue Cracks Using Infrared Thermography," US Patent 5,111,048, May 5, 1992.

[12] D. Knauss, T. Zhai, G.A.D. Briggs and J.W. Martin, "Measuring Short Cracks by Time-resolved Acoustic Microscopy," Advances in Acoustic Microscopy, vol. 1, pp. 49-77, 1995.

[13] Z.M. Connor, M.E. Fine, J.D. Achenbach and M.E. Seniw, "Using Scanning Acoustic Microscopy to Study Subsurface Defects and Crack Propagation in Materials," J. Microscopy, vol. 50, no. 11, 1998.

[14] T. Trupke, R.A. Bardos, M.D. Abbott, F.W. Chen, J.E. Cotter and A. Lorenz, "Fast Photoluminescence Imaging of Silicon Wafers," in Proceedings of the 4th WCPVSEC, Hawaii, USA, May 2006, pp. 928-931.

[15] T. Trupke, R.A. Bardos, M.C. Schubert and W. Warta, "Photoluminescence Imaging of Silicon Wafers," Applied Physics Letters, vol. 89, no. 4, 044107, 2006.

[16] Y. Takahashi, Y. Kaji, A. Ogane, Y. Uraoka and T. Fuyuki, "Luminoscopy - Novel Tool for the Diagnosis of Crystalline Silicon Solar Cells and Modules Utilizing Electroluminescence," in Proceedings of the 4th WCPVSEC, Hawaii, USA, May 2006, pp. 924-927.

[17] F. Dreckschmidt, T. Kaden, H. Fiedler and H.J. Möller, "Electroluminescence Investigation of the Decoration of Extended Defects in Multicrystalline Silicon," in Proceedings of the 22nd European Photovoltaic Solar Energy Conference, Milan, Italy, Sep. 3-7, 2007, pp. 283-286.

[18] O. Polupan and S. Ostapenko, "Theoretical Modeling of Full-size Silicon Wafers with Micro Cracks for the Purpose of Defect Diagnostics," University of South Florida, REU Symposium, Apr. 6, 2006.

[19] W. Dallas, O. Polupan and S. Ostapenko, "Resonance Ultrasonic Vibrations for Crack Detection in Photovoltaic Silicon Wafers," Measurement Science and Technology, vol. 18, no. 3, pp. 852-858, 2008.

[20] A. Belyaev, O. Polupan, W. Dallas, S. Ostapenko and D. Hess, "Crack Detection and Analyses Using Resonance Ultrasonic Vibrations in Full-size Crystalline Silicon Wafers," Applied Physics Letters, vol. 88, 111907, 2006.

[21] J.G. Thompson and C.T. Uyehara, "Ultrasonic Thermography Inspection Method and Apparatus," US Patent 7,075,084, Jul. 11, 2006.

[22] Z. Fu, Y. Zhao, Y. Liu, Q. Cao, M. Chen, J. Zhang and J. Lee, "Solar Cell Crack Inspection by Image Processing," in Proceedings of the Int. IEEE Conf. on Business of Electronic Product Reliability and Liability, Shanghai, China, Apr. 27-30, 2004, pp. 77-80.

[23] D.-M. Tsai, C.C. Chang and S.M. Chao, "Micro-crack Inspection in Heterogeneously Textured Solar Wafers Using Anisotropic Diffusion," Image and Vision Computing, vol. 28, no. 3, pp. 491-501, 2010.

[24] Y.J. Zhang and J.J. Gerbrands, "Objective and Quantitative Segmentation Evaluation and Comparison," Signal Processing, vol. 39, no. 1-2, pp. 43-54, 1994.

[25] P.K. Sahoo, S. Soltani, A.K.C. Wong and Y.C. Chen, "A Survey of Thresholding Techniques," Computer Vision, Graphics, and Image Processing, vol. 41, no. 2, pp. 233-260, 1988.

[26] T.W. Ridler and S. Calvard, "Picture Thresholding Using an Iterative Selection Method," IEEE Transactions on Systems, Man and Cybernetics, vol. 8, no. 8, pp. 630-632, 1978.

[27] N. Otsu, "A Threshold Selection Method from Gray-level Histograms," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

[28] W.H. Tsai, "Moment-preserving Thresholding: A New Approach," Computer Vision, Graphics, and Image Processing, vol. 29, no. 3, pp. 377-393, 1985.

[29] W. Niblack, An Introduction to Digital Image Processing, Prentice-Hall, 1985, ISBN 87-872-0055-4.

[30] R. Adams and L. Bischof, "Seeded Region Growing," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 6, pp. 641-647, 1994.

[31] A. Mehnert and P. Jackway, "An Improved Seeded Region Growing Algorithm," Pattern Recognition Letters, vol. 18, no. 10, pp. 1065-1071, 1997.

[32] R.D. Stewart, I. Fermin and M. Opper, "Region Growing with Pulse-coupled Neural Networks: An Alternative to Seeded Region Growing," IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1557-1562, 2002.

[33] T.A. Dutra, A.P. Pires and P.G. Bedrikovetsky, "A New Splitting Scheme and Existence of Elliptic Region for Gasflood Modeling," SPE Journal, vol. 14, no. 1, pp. 101-111, 2009.

[34] F. Moscheni, S. Bhattacharjee and M. Kunt, "Spatiotemporal Segmentation Based on Region Merging," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 9, pp. 897-915, 1998.

[35] L. Liu and S. Sclaroff, "Deformable Model-guided Region Split and Merge of Image Regions," Image and Vision Computing, vol. 22, no. 4, pp. 343-354, 2004.

[36] I.N. Manousakas, P.E. Undrill, G.G. Cameron and T.W. Redpath, "Split-and-merge Segmentation of Magnetic Resonance Medical Images: Performance Evaluation and Extension to Three Dimensions," Computers and Biomedical Research, vol. 31, no. 6, pp. 393-412, 1998.

[37] L. Vincent and P. Soille, "Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583-598, 1991.

[38] L. Shafarenko, M. Petrou and J. Kittler, "Automatic Watershed Segmentation of Randomly Textured Color Images," IEEE Transactions on Image Processing, vol. 6, no. 11, pp. 1530-1544, 1997.

[39] A. Rosenfeld and J.L. Pfaltz, "Sequential Operations in Digital Picture Processing," Journal of the ACM, vol. 13, no. 4, pp. 471-494, 1966.

[40] M.B. Dillencourt, H. Samet and M. Tamminen, "A General Approach to Connected-component Labeling for Arbitrary Image Representations," Journal of the ACM, vol. 39, no. 2, pp. 253-280, 1992.


出席國際會議研究心得報告

會議時間:100/7/26 – 100/7/28
會議地點:Hangzhou, China
會議名稱:(中文)2011 年多媒體技術國際會議;(英文)2011 International Conference on Multimedia Technology (ICMT 2011)
發表論文題目:Projection of Shape Features for 3D Model Retrieval

一、參加會議經過

ICMT 2011 為多媒體技術領域之大型國際會議,共收到約 250 篇投稿,並與 3S&DC 2011、WCEUP 2011 及 IWPM 2011 等 Workshops 同時舉行。大會安排了多場 Keynote Speech,講題包括:(1) Computer Vision in Virtual Reality、(2) 3D Visualization and its Applications、(3) Advances on Topological Registration Approaches with Applications、(4) Navigating Large Image Databases、(5) Facial Shape, Texture and Reflectance from a Single View、(6) Quantum Nanophysics and Nanoengineering of Low-Dimensional Devices and Circuits、及 (7) Relativistic Danger for Spacecraft from Fast Satellites of the Solar-System Planets。主講人包括 Prof. Demetri Terzopoulos (University of California, USA)、Prof. Jie Yang (Shanghai Jiaotong University, China)、Prof. Aly A. Farag (University of Louisville, USA)、Dr. Gerald Schaefer (Loughborough University, U.K.)、Prof. Edwin Hancock (University of York, U.K.)、Prof. Vijay K. Arora (Wilkes University, USA)、及 Prof. Alexander P. Yefremov (Peoples' Friendship Univ. of Russia, Russia)。

二、與會心得

透過聆聽各場 Keynote Speech 與論文發表,並與各國學者交流多媒體、電腦視覺及 3D 模型檢索等領域之最新研究,對本身相關研究之發展方向頗有助益。

三、考察參觀活動

無。

四、建議

無。

五、攜回資料名稱及內容

1. 大會議程。
2. The Proceeding of 2011 International Conference on Multimedia Technology(Vol. 1 之 CD 與 Hard Copy)。

六、其他

無。

The 2nd International Conference on Multimedia Technology (ICMT2011)

IEEE 第二屆多媒體技術國際會議

Acceptance Notification

May 30th, 2011

Dear Author,

Congratulations! It is our great pleasure to inform you that your paper

Paper ID: IC14376
Author(s): Lee Chang-Hsing, Shih Jau-Ling, Yu Kun-Ming
Title: Projection of Shape Features for 3D Model Retrieval

has been accepted for presentation at The 2nd International Conference on Multimedia Technology, ICMT2011.

Please complete all registration procedures before Jun. 5th, 2011 by the registration information attached. Otherwise your paper will be excluded in the proceedings and can not be submitted to EI Compendex.

Thank you for submitting paper(s) to ICMT 2011 and we hope you can attend the conference. We also appreciate that you can contribute your excellent work to future ICMT conferences.

For more information, please visit the conference website:

www.icmtconf.org

Best regards,

ICMT 2011 Organizing Committee

www.icmtconf.org
2011 年 5 月 30 日, Hangzhou, China
ICMT Organizing Committee
IEEE Catalog Number: CFP1153K-CDR, ISBN: 978-1-61284-773-3
IEEE Catalog Number: CFP1153K-PRT, ISBN: 978-1-61284-772-6


Projection of Shape Features for 3D Model Retrieval

Chang-Hsing Lee, Jau-Ling Shih, Kun-Ming Yu, Hsiang-Yuen Chang

Department of Computer Science and Information Engineering Chung Hua University

Hsinchu, Taiwan

Yih-Chih Chiou

Department of Mechanical Engineering Chung Hua University

Hsinchu, Taiwan

Abstract—In this paper, the combination of different projected shape features is proposed for 3D model retrieval. The projection features include the elevation value (depth), the radial distance, and the angle of a surface mesh. For each of the characteristic values (elevation value, radial distance, and angle value), six projection planes represented as gray-level images will be generated. The MPEG-7 angular radial transform (ART) is then used to compute the feature vector from each projection plane. Experiments conducted on the Princeton Shape Benchmark (PSB) database have shown that the proposed approach outperforms the state-of-the-art descriptors in terms of the DCG score.

Keywords—3D model retrieval; angle value; radial distance; elevation value; ART.

I. INTRODUCTION

Recent developments in advanced techniques for modeling, digitizing and visualizing 3D models have made 3D models as plentiful as images and video. Therefore, it is necessary to design a 3D model retrieval system which enables users to efficiently and effectively search for 3D models of interest. The primary challenge to a content-based 3D model retrieval system is how to extract the most representative features to discriminate the shapes of various 3D models [1].

Vranic et al. applied the Fourier transform to the sphere with spherical harmonics to generate embedded multi-resolution 3D shape features [2]. To be rotation invariant, however, pose normalization must be conducted prior to feature extraction. Therefore, Funkhouser et al. proposed a modified rotation-invariant shape descriptor based on spherical harmonics in which no pose normalization is needed [3].

Some popular features used to represent 3D models are based on histograms of geometric statistics [4]-[7]. Ankerst et al. tried to search for similar 3D models using shape histograms which characterize the area of intersections of a 3D model with a collection of concentric shells and sectors [4]. The MPEG-7 shape spectrum descriptor (SSD) [5] calculates the histogram of the curvatures of all points on the 3D surface. Osada et al. [6] proposed five features, A3, D1, D2, D3, and D4, to represent 3D models by the probability distributions of some geometric properties computed from a set of randomly selected points located on the surface of the model. However, these features are sensitive to the tessellation of 3D polygonal models. Thus, Shih et al. [7] proposed grid D2 (GD2) to improve D2. A 3D model is first decomposed into a voxel grid. The distribution of distances between any two randomly selected valid grids is measured to represent a 3D model.

In general, 3D models can also be described by their 2D silhouettes from different views [8]-[10]. Users can find similar 3D models by 2D shape features. Super and Lu [8] exploited 2D silhouette contours for 3D object recognition. Curvature and contour scale space are extracted to represent each silhouette. Chen et al. [9] proposed the LightField descriptor (LFD) to represent 3D models. LFD is computed from 10 silhouettes, each represented by a 2D binary image. In fact, 2D silhouettes represented by binary images cannot describe the altitude (depth) information of the 3D model from different views. Thus, Shih et al. [10] proposed the elevation descriptor (ED) to represent the altitude information of a 3D model from six different views. Kuo and Cheng [11] proposed a 3D shape retrieval system based on principal plane analysis. First, each 3D model is projected onto its principal plane. As a result, each 3D model can be represented by a 2D binary image. The feature vectors are then extracted from the binary shape image. However, using only one 2D binary image cannot effectively represent a complex 3D model. Therefore, Shih et al. [12] proposed the principal plane descriptor (PPD) to describe a 3D model with three 2D binary images by projecting it onto the principal, second and third planes. Feature vectors are then extracted from these three binary images for 3D model retrieval.

Papadakis et al. [13] proposed two shape descriptors for 3D model retrieval. The 3D model was first aligned by continuous PCA (CPCA) or normal PCA (NPCA). In CPCA, the traditional one, the principal component is analyzed based on the covariance matrix computed from the coordinate vectors of the vertices, whereas in NPCA the covariance matrix is computed from the unit normal vectors of the mesh surfaces. The spherical harmonics was then applied on the filled 3D model to extract two feature vectors from the CPCA and NPCA aligned models separately. Vranic and Saupe proposed a modified PCA which used the corresponding triangle areas as weighting factors for covariance matrix computation [14]. The directions of 20 vertices on dodecahedron and the distances computed from the center point to the farthest intersections were used as features to search similar 3D models.

In this paper, the combination of different projected shape features, including the elevation value (depth) [10], the radial distance [15], and the angle value of a surface mesh, will be proposed for 3D model retrieval. The rest of the paper is organized as follows. In Section 2, the proposed 3D model retrieval system will be described. Section 3 gives some experimental results to show the effectiveness of the proposed features. Finally, conclusions are given in Section 4.

This research was supported in part by the National Science Council of R.O.C. under contract NSC-99-2221-E-216-048.

II. PROPOSED 3D MODEL RETRIEVAL SYSTEM

First, each 3D model is decomposed into a number of voxels. Second, the principal planes method [12] will be used for pose alignment of each 3D model. Third, different features describing variant shape characteristics of each decomposed voxel will be projected onto six viewing planes. Finally, the MPEG-7 ART will be applied to each projection plane to extract the feature values of each 3D model.

A. 3D Model Normalization and Alignment

Given a 3D model, its pose is first aligned by the principal planes method [12]. The smallest bounding cube that circumscribes the 3D model is then decomposed into a voxel grid of size 100×100×100. A voxel located at coordinates (x, y, z) is defined as an opaque voxel, denoted Voxel(x, y, z) = 1, if there is a mesh located within this voxel; otherwise, the voxel is defined as a transparent voxel, denoted Voxel(x, y, z) = 0. To be robust to translation and scaling, the 3D model is transformed such that the model's mass center becomes (0, 0, 0) and the average distance from all non-zero voxels to the mass center is 25.
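As a sketch (illustrative code of our own, not the authors' implementation), the translation and scaling normalization described above can be written as follows, operating directly on the coordinates of the opaque voxels:

```python
import numpy as np

def normalize_voxels(coords, target_mean_dist=25.0):
    """Translate opaque-voxel coordinates so the mass center is at the
    origin, then scale so the mean distance to the center is 25."""
    coords = np.asarray(coords, dtype=np.float64)
    coords = coords - coords.mean(axis=0)            # mass center -> (0, 0, 0)
    mean_dist = np.linalg.norm(coords, axis=1).mean()
    return coords * (target_mean_dist / mean_dist)   # mean distance -> 25
```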

Once the pose of a 3D model is aligned, the angle value, radial distance, and elevation (depth) value of each opaque voxel are projected onto six projection planes from which the feature values are extracted to represent each 3D model. These six projection planes denote the six different views of the 3D model. The angle value describes the angle between the normal vector n of the mesh and the ray connecting the mass center of the 3D model and the center point of the mesh (see Fig. 1). The radial distance denotes the distance from the opaque voxel to the mass center of the 3D model (see Fig. 2). The elevation value describes the distance from the opaque voxel to the projection plane (see Fig. 2). These values capture different shape characteristics (the orientation and location) of each opaque voxel. Each projection plane is represented by a gray-level image in which the gray value denotes the angle value, radial distance, or elevation value.

B. Angle Value Projection

The angle value tries to capture the orientation of the model's surface. For each voxel located at (x, y, z), let r denote the vector connecting the mass center of the 3D model and the center point of the surface mesh. The angle between the vector r and the normal vector n of the mesh serves as one of the characteristics of the surface mesh (see Fig. 1). The cosine of the angle between r and n, scaled to a gray level, is treated as the projected angle value of the voxel located at (x, y, z):

θ(x, y, z) = (rᵀn / (||r|| ||n||)) × 255. (1)
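Eq. (1) can be sketched as follows (illustrative code; note that the sign of the cosine, and hence of the gray value, depends on the orientation of the mesh normal):

```python
import numpy as np

def angle_value(r, n):
    """Projected angle value of Eq. (1): the cosine of the angle between
    the center ray r and the mesh normal n, scaled by 255."""
    r, n = np.asarray(r, dtype=float), np.asarray(n, dtype=float)
    cos_theta = np.dot(r, n) / (np.linalg.norm(r) * np.linalg.norm(n))
    return cos_theta * 255.0
```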

Let the six angle projection planes be notated as IkA, k = 1, 2, …, 6. Then, the gray value, indicating the projected angle value, of each pixel on these projected images is defined as follows:

$I_1^A(x, y) = A(x, y, z_{\max}(x, y)),\; 0 \le x, y \le 50$ (2)

$I_2^A(x, z) = A(x, y_{\max}(x, z), z),\; 0 \le x, z \le 50$ (3)

$I_3^A(y, z) = A(x_{\max}(y, z), y, z),\; 0 \le y, z \le 50$ (4)

$I_4^A(x, y) = A(x, y, z_{\min}(x, y)),\; 0 \le x, y \le 50$ (5)

$I_5^A(x, z) = A(x, y_{\min}(x, z), z),\; 0 \le x, z \le 50$ (6)

$I_6^A(y, z) = A(x_{\min}(y, z), y, z),\; 0 \le y, z \le 50$ (7)

where

$z_{\max}(x, y) = \max\{z \mid V(x, y, z) = 1,\; 0 \le z \le 50\}$ (8)

$y_{\max}(x, z) = \max\{y \mid V(x, y, z) = 1,\; 0 \le y \le 50\}$ (9)

$x_{\max}(y, z) = \max\{x \mid V(x, y, z) = 1,\; 0 \le x \le 50\}$ (10)

$z_{\min}(x, y) = \min\{z \mid V(x, y, z) = 1,\; 0 \le z \le 50\}$ (11)

$y_{\min}(x, z) = \min\{y \mid V(x, y, z) = 1,\; 0 \le y \le 50\}$ (12)

$x_{\min}(y, z) = \min\{x \mid V(x, y, z) = 1,\; 0 \le x \le 50\}$ (13)

Here $V(x, y, z) = 1$ indicates an opaque voxel.
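The surface lookups of Eqs. (2)-(13) can be sketched as follows (illustrative Python with NumPy; the function name `surface_projections` and passing the precomputed angle-value volume as `value` are assumptions, not the paper's code):

```python
import numpy as np

def surface_projections(value, V):
    """Sample a per-voxel quantity (e.g. the angle value of Eq. (1)) at the
    opaque voxel nearest each of the six faces, following Eqs. (2)-(13).
    V is a binary (S, S, S) occupancy grid; rays hitting no voxel give 0."""
    S = V.shape[0]
    planes = np.zeros((6, S, S))
    for x in range(S):
        for y in range(S):
            zs = np.nonzero(V[x, y, :])[0]
            if zs.size:
                planes[0, x, y] = value[x, y, zs.max()]   # Eq. (2): z_max(x, y)
                planes[3, x, y] = value[x, y, zs.min()]   # Eq. (5): z_min(x, y)
    for x in range(S):
        for z in range(S):
            ys = np.nonzero(V[x, :, z])[0]
            if ys.size:
                planes[1, x, z] = value[x, ys.max(), z]   # Eq. (3): y_max(x, z)
                planes[4, x, z] = value[x, ys.min(), z]   # Eq. (6): y_min(x, z)
    for y in range(S):
        for z in range(S):
            xs = np.nonzero(V[:, y, z])[0]
            if xs.size:
                planes[2, y, z] = value[xs.max(), y, z]   # Eq. (4): x_max(y, z)
                planes[5, y, z] = value[xs.min(), y, z]   # Eq. (7): x_min(y, z)
    return planes

# toy 3x3x3 grid with two opaque voxels stacked along z at (x, y) = (0, 1)
V = np.zeros((3, 3, 3)); V[0, 1, 0] = V[0, 1, 2] = 1
value = np.arange(27, dtype=float).reshape(3, 3, 3)
P = surface_projections(value, V)
```

The same routine serves all three characteristics by swapping the `value` volume.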

Fig. 1 The angle between the normal vector n of the surface mesh and the vector r that connects the mass center of the 3D model and the center point of the surface mesh.

C. Radial Distance Projection

The radial distance tries to capture the location of the model’s surface. For each voxel located at (x, y, z), the radial distance is measured as its distance from the mass center of the 3D model. The radial distance is defined as follows (see Fig. 2):

$RD(x, y, z) = \sqrt{x^{2} + y^{2} + z^{2}}$ (14)

Let the six radial distance projection planes be denoted by $I_k^R$, $k = 1, 2, \ldots, 6$. Then the gray value of each pixel on these projection images, indicating the projected radial distance, is defined as follows:

$I_1^R(x, y) = 5 \cdot RD(x, y, z_{\max}(x, y)),\; 0 \le x, y \le 50$ (15)

$I_2^R(x, z) = 5 \cdot RD(x, y_{\max}(x, z), z),\; 0 \le x, z \le 50$ (16)

$I_3^R(y, z) = 5 \cdot RD(x_{\max}(y, z), y, z),\; 0 \le y, z \le 50$ (17)

$I_4^R(x, y) = 5 \cdot RD(x, y, z_{\min}(x, y)),\; 0 \le x, y \le 50$ (18)

$I_5^R(x, z) = 5 \cdot RD(x, y_{\min}(x, z), z),\; 0 \le x, z \le 50$ (19)

$I_6^R(y, z) = 5 \cdot RD(x_{\min}(y, z), y, z),\; 0 \le y, z \le 50$ (20)

The factor of 5 scales the radial distance into the 0-255 gray-level range.
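One of the six radial distance views, Eq. (15), can be sketched as follows (illustrative Python with NumPy; the function name is an assumption, and for brevity the grid center stands in for the mass center, which the normalization step places at the origin):

```python
import numpy as np

def radial_projection_top(V):
    """Eq. (15) sketch: for each (x, y), take the opaque voxel with the
    largest z and project 5x its distance to the (grid) center; rays that
    hit no opaque voxel project to gray level 0."""
    S = V.shape[0]
    c = (S - 1) / 2.0                     # center coordinate of the grid
    img = np.zeros((S, S))
    for x in range(S):
        for y in range(S):
            zs = np.nonzero(V[x, y, :])[0]
            if zs.size:
                z = zs.max()              # z_max(x, y)
                img[x, y] = 5.0 * np.sqrt((x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2)
    return img

# toy 5x5x5 grid: two voxels on the central ray; the z = 4 voxel is visible
V = np.zeros((5, 5, 5)); V[2, 2, 0] = V[2, 2, 4] = 1
img = radial_projection_top(V)
```

The other five views of Eqs. (16)-(20) follow by swapping the scan axis and taking the minimum instead of the maximum index.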

D. Elevation Projection

The elevation value tries to capture the altitude (depth) of the model's surface relative to each viewing (projection) plane. For each voxel located at (x, y, z), the elevation value is measured as its distance from the projection plane (see Fig. 2). Let the six elevation projection planes be denoted by $I_k^E$, $k = 1, 2, \ldots, 6$. Then the gray value of each pixel on these projection images, indicating the projected elevation value, is defined as follows:

$I_1^E(x, y) = 5 \cdot \max_{0 \le z \le 50}\,((51 - z) \cdot V(x, y, z)),\; 0 \le x, y \le 50$ (21)

$I_2^E(x, z) = 5 \cdot \max_{0 \le y \le 50}\,((51 - y) \cdot V(x, y, z)),\; 0 \le x, z \le 50$ (22)

$I_3^E(y, z) = 5 \cdot \max_{0 \le x \le 50}\,((51 - x) \cdot V(x, y, z)),\; 0 \le y, z \le 50$ (23)

$I_4^E(x, y) = 5 \cdot \max_{0 \le z \le 50}\,((z + 1) \cdot V(x, y, z)),\; 0 \le x, y \le 50$ (24)

$I_5^E(x, z) = 5 \cdot \max_{0 \le y \le 50}\,((y + 1) \cdot V(x, y, z)),\; 0 \le x, z \le 50$ (25)

$I_6^E(y, z) = 5 \cdot \max_{0 \le x \le 50}\,((x + 1) \cdot V(x, y, z)),\; 0 \le y, z \le 50$ (26)
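The two z-axis elevation views, Eqs. (21) and (24), can be sketched compactly with broadcasting (illustrative Python with NumPy; the function name is an assumption, and the grid size `S` generalizes the paper's 51, for which `S - z` reproduces `51 - z`):

```python
import numpy as np

def elevation_projections_z(V):
    """Eqs. (21) and (24) sketch for an (S, S, S) binary grid: the gray
    level is 5 * max over z of (S - z) * V for one view and
    5 * max over z of (z + 1) * V for the opposite view; empty rays give 0."""
    S = V.shape[0]
    z = np.arange(S)                          # broadcasts along the z axis
    top = 5.0 * ((S - z) * V).max(axis=2)     # Eq. (21): (51 - z) weighting
    bottom = 5.0 * ((z + 1) * V).max(axis=2)  # Eq. (24): (z + 1) weighting
    return top, bottom

# toy 4x4x4 grid: two voxels at the ends of the central ray
V = np.zeros((4, 4, 4)); V[1, 1, 0] = V[1, 1, 3] = 1
top, bottom = elevation_projections_z(V)
```

The remaining four views of Eqs. (22)-(23) and (25)-(26) are the same operation applied along the y and x axes.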

Fig. 2 The lengths of $\overline{PO}$, $\overline{QO}$, and $\overline{RO}$ represent the radial distances from the points P, Q, and R on the 3D model surface to the mass center O. The lengths of $\overline{PP'}$, $\overline{QQ'}$, and $\overline{RR'}$ represent the elevation values from P, Q, and R to the left projection plane.

For each of the characteristic values (angle value, radial distance, and elevation value), six projection planes represented as gray-level images can be generated. In total, 18 (3 × 6) projection planes are used to represent each 3D model. MPEG-7's angular radial transform (ART) [16] is then used to extract a feature vector from each projection plane.

E. ART Feature Extraction

The MPEG-7 angular radial transform (ART) is an orthogonal unitary transform. ART consists of a complete set of orthonormal sinusoidal basis functions which are defined on a unit disk in the polar coordinate system. Let f(ρ, θ) denote the gray level of the pixel located at (ρ, θ) on the projection image I. The ART coefficient of the projection image I can be computed as follows:

$F(n, m) = \langle V_{n,m}(\rho, \theta), f(\rho, \theta) \rangle = \int_{0}^{2\pi}\!\!\int_{0}^{1} V_{n,m}^{*}(\rho, \theta)\, f(\rho, \theta)\, \rho\, d\rho\, d\theta$ (27)

where F(n, m) is the ART coefficient of order n and m, and $V_{n,m}(\rho, \theta)$ is the complex ART basis function ($V_{n,m}^{*}$ denotes its complex conjugate).

The ART descriptor is formed by the magnitudes of all complex ART coefficients. The default ART descriptor consists of 36 coefficients, |F(n, m)| for 0 ≤ n ≤ 2 and 0 ≤ m ≤ 11. In summary, the ART vector extracted from the projection image I can be represented as follows:

$\mathbf{x} = [x(1), x(2), \ldots, x(36)]^{T} = [\,|F(0,0)|, \ldots, |F(0,11)|, |F(1,0)|, \ldots, |F(1,11)|, |F(2,0)|, \ldots, |F(2,11)|\,]^{T}$ (28)
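A discretized version of Eqs. (27)-(28) can be sketched by sampling the ART basis on a Cartesian grid restricted to the unit disk (illustrative Python with NumPy; the function name is an assumption, and the usual normalization of the coefficients by |F(0, 0)| is omitted for brevity):

```python
import numpy as np

def art_descriptor(img, n_max=3, m_max=12):
    """36 ART magnitudes |F(n, m)|, 0 <= n < 3, 0 <= m < 12, using the
    MPEG-7 basis V_nm = A_m(theta) * R_n(rho) with A_m = exp(j*m*theta)/(2*pi)
    and R_n = 1 (n = 0) or 2*cos(pi*n*rho) (n > 0)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # map pixel centers into the unit disk
    x = (xs - (w - 1) / 2.0) / (w / 2.0)
    y = (ys - (h - 1) / 2.0) / (h / 2.0)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0                     # ignore pixels outside the disk
    feats = []
    for n in range(n_max):
        R = np.ones_like(rho) if n == 0 else 2.0 * np.cos(np.pi * n * rho)
        for m in range(m_max):
            A = np.exp(1j * m * theta) / (2.0 * np.pi)
            # discretized Eq. (27): dx dy plays the role of rho drho dtheta
            F = np.sum(np.conj(A * R)[inside] * img[inside])
            feats.append(np.abs(F))
    return np.array(feats)

img = np.ones((32, 32))       # a flat projection image as a smoke test
feats = art_descriptor(img)
```

Each of the 18 projection planes yields one such 36-dimensional vector.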

Let $\mathbf{x}^A = [(\mathbf{x}_1^A)^T, \ldots, (\mathbf{x}_6^A)^T]^T$, $\mathbf{x}^R = [(\mathbf{x}_1^R)^T, \ldots, (\mathbf{x}_6^R)^T]^T$, and $\mathbf{x}^E = [(\mathbf{x}_1^E)^T, \ldots, (\mathbf{x}_6^E)^T]^T$ denote respectively the feature vectors extracted from the six projection planes indicating the angle value, radial distance, and elevation value of the query model. In the same way, let $\mathbf{y}^A = [(\mathbf{y}_1^A)^T, \ldots, (\mathbf{y}_6^A)^T]^T$, $\mathbf{y}^R = [(\mathbf{y}_1^R)^T, \ldots, (\mathbf{y}_6^R)^T]^T$, and $\mathbf{y}^E = [(\mathbf{y}_1^E)^T, \ldots, (\mathbf{y}_6^E)^T]^T$ denote respectively the feature vectors extracted from the corresponding six projection planes of the matching model in the database. The distances between the query model and the matching model with respect to the angle value, radial distance, and elevation value are defined as follows:

""

"

= =

=

=

=

6

1 36

1 6

1 1 () ()

) , (

k i

A k A k k

A k A k A A

A x i y i

d x y x y (29)

""

"

= =

=

=

=

6

1 36

1 6

1 1 () ()

) , (

k i

R k R k k

R k R k R

R

R x i y i

d x y x y (30)

""

"

= =

=

=

=

6

1 36

1 6

1 1 () ()

) , (

k i

E k E k k

E k E k E

E

E x i y i

d x y x y (31)

The overall distance between the input query model and the matching model is defined as the sum of the angle distance, radial distance, and elevation distance:

$d(\mathbf{x}, \mathbf{y}) = d_A(\mathbf{x}^A, \mathbf{y}^A) + d_R(\mathbf{x}^R, \mathbf{y}^R) + d_E(\mathbf{x}^E, \mathbf{y}^E)$ (32)

The matching models with the minimum overall distances are returned as the retrieved similar models.
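The matching rule of Eqs. (29)-(32) can be sketched as follows (illustrative Python with NumPy; the dictionary layout, the function names, and the `top_k` parameter are assumptions for the example):

```python
import numpy as np

def overall_distance(x_planes, y_planes):
    """Eqs. (29)-(32) sketch: x_planes/y_planes map 'A', 'R', 'E' to
    (6, 36) arrays of per-plane ART vectors; the score is the sum of
    L1 distances over the three characteristics and six views."""
    return sum(np.abs(x_planes[c] - y_planes[c]).sum() for c in ("A", "R", "E"))

def retrieve(query, database, top_k=5):
    """Rank database models (id -> feature dict) by ascending distance."""
    ranked = sorted(database, key=lambda m: overall_distance(query, database[m]))
    return ranked[:top_k]

# toy example: one identical model and one maximally different model
q = {c: np.zeros((6, 36)) for c in "ARE"}
db = {"same": {c: np.zeros((6, 36)) for c in "ARE"},
      "far": {c: np.ones((6, 36)) for c in "ARE"}}
```

Because the three partial distances are simply summed, no per-characteristic weighting is applied.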

III. EXPERIMENTAL RESULTS

To demonstrate the effectiveness of the proposed method for different 3D models, experiments were conducted on the Princeton Shape Benchmark (PSB) database [17]. The PSB database contains 1814 models (161 classes), divided into 907 training models (90 classes) and 907 test models (92 classes). The discounted cumulative gain (DCG) [18] is employed to compare the performance of different approaches. DCG at the k-th rank is defined as follows:

$\mathrm{DCG}_k = \begin{cases} L_1, & k = 1 \\ \mathrm{DCG}_{k-1} + \dfrac{L_k}{\log_2 k}, & k > 1 \end{cases}$ (33)

where $L_k = 1$ if the k-th model in the ranked retrieval list belongs to the same class as the query; otherwise, $L_k = 0$. The overall DCG score for a query model is defined as $\mathrm{DCG}_{k_{\max}}$, where $k_{\max}$ is the total number of models in the database. If the models sharing the query's class label appear at the head of the retrieval list, the evaluated DCG score is larger than when they appear at the tail.
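Eq. (33) can be sketched directly (illustrative Python; the function name and the 0/1 relevance list are assumptions matching the definition of $L_k$ above):

```python
import math

def dcg(labels):
    """Eq. (33) sketch: labels[k-1] is 1 when the k-th ranked model shares
    the query's class, else 0; ranks are 1-based and rank 1 is not
    discounted."""
    score = 0.0
    for k, L in enumerate(labels, start=1):
        score += L if k == 1 else L / math.log2(k)
    return score
```

For example, a relevant model at ranks 1 and 2 scores $1 + 1/\log_2 2 = 2$, while the same model pushed to rank 3 contributes only $1/\log_2 3 \approx 0.63$.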

In our experiments, each model in the database will be presented as a query one to measure the DCG score. Table I

