
Conclusion and Future Work

5.1 Conclusion

This study proposes a general-purpose image classifier architecture built on feature selection: the F-score algorithm, a feature ranking method commonly paired with SVM, is used to pick out suitable image features for the classification task.
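For completeness, the F-score used to rank features is the one defined in [2]; for the j-th feature of a two-class problem with n_+ positive and n_- negative training samples it is

F(j) = \frac{ (\bar{x}_j^{(+)} - \bar{x}_j)^2 + (\bar{x}_j^{(-)} - \bar{x}_j)^2 }
            { \frac{1}{n_+ - 1} \sum_{k=1}^{n_+} (x_{k,j}^{(+)} - \bar{x}_j^{(+)})^2
              + \frac{1}{n_- - 1} \sum_{k=1}^{n_-} (x_{k,j}^{(-)} - \bar{x}_j^{(-)})^2 }

where \bar{x}_j, \bar{x}_j^{(+)}, and \bar{x}_j^{(-)} are the means of the j-th feature over the whole, positive, and negative training sets; a larger F(j) indicates a more discriminative feature.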

Experiments show that the approach attains a reasonable level of recognition accuracy on classification tasks from different domains, and we further examine how the number of training samples and the number of classes in the data affect the results.

Among the single-topic databases, the NEC Animal Dataset performed best, reaching recognition rates of 95% across its experiments; for image databases spanning different topics, a recognition rate above 80% was achieved with only 10 training images. However, the proposed method does not perform best on every database; the characteristics of the database remain a major factor in determining recognition accuracy.

In terms of application, this setting is close to real-world classification tasks: without assuming the topic domain in advance, the general-purpose classifier architecture proposed in this study can automatically produce a suitable classifier for the user.


5.2 Future Work

There is still room for improvement. To achieve a general-purpose classifier we sacrificed accuracy, and the results remain noticeably weaker than those of classifiers designed for a specific task; although this is a deliberate trade-off, generality inevitably comes at some cost in recognition rate. To raise accuracy, the feature extraction stage should produce more expressive features, for example by adding region-of-interest selection during extraction or by including more types of image features, strengthening the representation and keeping unnecessary noise from degrading the overall recognition rate. Alternatively, the feature selection decision could follow a more refined strategy, for example grouping features by type and selecting them with a hierarchical, cluster-based scheme so that feature types already holding a large share of the selected set receive lower priority; this would compensate for the F-score's failure to account for shared information among features. We believe either direction could further improve the classifier's performance.
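As a purely illustrative sketch (not the method of this thesis), the grouped-selection idea above could be prototyped as follows; the function group_aware_select, the greedy scheme, and the penalty factor are assumptions made only for this example:

import numpy as np

def fscore(X_pos, X_neg):
    """Per-feature F-score of Chen and Lin (2006) for a two-class problem."""
    x_bar = np.vstack([X_pos, X_neg]).mean(axis=0)               # mean over all samples
    num = (X_pos.mean(axis=0) - x_bar) ** 2 + (X_neg.mean(axis=0) - x_bar) ** 2
    den = X_pos.var(axis=0, ddof=1) + X_neg.var(axis=0, ddof=1)  # 1/(n-1) variances
    return num / (den + 1e-12)                                   # guard against zero variance

def group_aware_select(X_pos, X_neg, groups, k, penalty=0.5):
    """Greedily pick k features; each pick demotes the remaining features of the
    same type (hypothetical scheme, not the selection rule used in the thesis)."""
    scores = fscore(X_pos, X_neg)
    selected = []
    for _ in range(k):
        j = int(np.argmax(scores))
        selected.append(j)
        scores[groups == groups[j]] *= penalty   # lower priority for that feature type
        scores[j] = -np.inf                      # never pick the same index twice
    return selected

# Toy usage: 30 positive / 30 negative samples, 100-dim features,
# each feature assigned to one of 4 hypothetical feature types.
rng = np.random.default_rng(0)
X_pos = rng.normal(0.5, 1.0, size=(30, 100))
X_neg = rng.normal(0.0, 1.0, size=(30, 100))
groups = rng.integers(0, 4, size=100)
print(group_aware_select(X_pos, X_neg, groups, k=10))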


References

[1] C.-C. Chang and C.-J. Lin, "LIBSVM: A Library for Support Vector Machines," ACM Trans. Intell. Syst. Technol., 2011.
[2] Y.-W. Chen and C.-J. Lin, "Combining SVMs with Various Feature Selection Strategies," in Feature Extraction, Stud. Fuzziness Soft Comput., vol. 207, pp. 315–324, 2006.
[3] S. Theodoridis and K. Koutroumbas, Pattern Recognition, 4th ed., pp. 4–7.
[4] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," Int. J. Comput. Vis., 2004.
[5] L. Fei-Fei, R. Fergus, and A. Torralba, "Recognizing and Learning Object Categories," Int. J. Comput. Vis., 2009.
[6] M. Vidal-Naquet and S. Ullman, "Object Recognition with Informative Features and Linear Classification," in Proc. IEEE Int. Conf. Comput. Vis., 2003.
[7] Curse of dimensionality. [Online]. Available: http://en.wikipedia.org/wiki/Curse_of_dimensionality.
[8] M. Dash and H. Liu, "Feature selection for classification," Intell. Data Anal., 1997.
[9] Feature Selection. [Online]. Available: http://terms.naer.edu.tw/detail/1678987/.
[10] M. Dash, K. Choi, P. Scheuermann, and H. Liu, "Feature selection for clustering – a filter solution," in Proc. 2002 IEEE Int. Conf. Data Mining, 2002.
[11] J. R. Quinlan, "Discovering Rules from Large Collections of Examples: A Case Study," in Expert Systems in the Microelectronic Age, Edinburgh Univ. Press, 1979.
[12] M. Robnik-Šikonja and I. Kononenko, "Theoretical and Empirical Analysis of ReliefF and RReliefF," Mach. Learn., 2003.
[13] H. Liu and R. Setiono, "Chi2: Feature selection and discretization of numeric attributes," in Proc. IEEE Int. Conf. Tools with Artificial Intelligence, 1995.
[14] P. Pudil, "Floating search methods in feature selection," Pattern Recognit. Lett., pp. 1119–1125, 1993.
[15] I.-S. Oh, J.-S. Lee, and B.-R. Moon, "Hybrid genetic algorithms for feature selection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 11, pp. 1424–1437, 2004.


[16] I. Inza, B. Sierra, R. Blanco, and P. Larrañaga, "Gene selection by sequential search wrapper approaches in microarray cancer class prediction," J. Intell. Fuzzy Syst., vol. 12, 2002.
[17] J. Bins and B. A. Draper, "Feature Selection from Huge Feature Sets," in Proc. IEEE Int. Conf. Comput. Vis., 2001.
[18] Y.-W. Chang and C.-J. Lin, "Feature Ranking Using Linear SVM," pp. 53–64, 2008.
[19] S. Das, "Filters, wrappers and a boosting-based hybrid for feature selection," in Proc. 18th Int. Conf. Machine Learning (ICML '01), 2001, pp. 74–81.
[20] A. Y. Ng, "On feature selection: Learning with exponentially many irrelevant features as training examples," in Proc. 15th Int. Conf. Machine Learning, pp. 404–412, 1998.
[21] G. McLachlan, Discriminant Analysis and Statistical Pattern Recognition. Wiley, 1992.
[22] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," Lect. Notes Comput. Sci., vol. 3951, pp. 404–417, 2006.
[23] M. Calonder, V. Lepetit, M. Ozuysal, T. Trzcinski, C. Strecha, and P. Fua, "BRIEF: Binary Robust Independent Elementary Features," IEEE Trans. Pattern Anal. Mach. Intell., 2012.
[24] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2011.
[25] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene selection for cancer classification using support vector machines," Mach. Learn., 2002.
[26] NEC Animal Dataset. [Online]. Available: http://ml.nec-labs.com/download/data/videoembed.
[27] Leeds Butterfly Dataset. [Online]. Available: http://www.comp.leeds.ac.uk/scs6jwks/dataset/leedsbutterfly/.
[28] Flower Image Dataset. [Online]. Available: http://www.robots.ox.ac.uk/~vgg/data/flowers.
[29] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman, "Multiple Kernels for Object Detection," in Proc. IEEE Int. Conf. Comput. Vis., pp. 606–613, 2009.


[30] Caltech-101 Dataset. [Online]. Available: http://www.vision.caltech.edu/Image_Datasets/Caltech101/.
[31] Experiments on Caltech-101. [Online]. Available: http://www.robots.ox.ac.uk/~vgg/software/MKL/.
