
Wavelet Networks For Hyperspectral Image Classification

4. Experiments and Results

In this experiment, two hyperspectral test fields were used to evaluate the performance of the wavelet network classifier. The first data set is an AVIRIS image obtained from Purdue University. The image was taken over an agricultural portion of NW Indiana in 1992.

Figure 3 shows the scene and its corresponding vegetation map. The image size is 145×145 pixels. The original image has 224 spectral bands from 400 nm to 2450 nm with 10 nm spectral resolution. After removing 4 noisy bands, 220 bands remain.



(a) Test data 1 (b) Ground truth data

Figure 3 Test data 1, derived from an AVIRIS image of NW Indiana

(a) Test data 2 (b) Ground truth data

Figure 4 Test data 2, a Hyperion image covering the Kenting National Park in Taiwan

The ground truth data include 8 different classes after discarding 5 classes that contain insufficient training samples. The second test image is a Hyperion hyperspectral image collected on 5 May 1994, as shown in Figure 4. The study area covers the Kenting National Park in Taiwan. The image size is 100×100 pixels. Due to weak spectral response, only 198 spectral bands ranging from 426.82 nm to 2395.50 nm were selected from the original 242 spectral bands. 5 classes were available for the classification test in this area. The training samples for classification and the check data for accuracy estimation were obtained from the available ground truth. The radiance spectra of these two hyperspectral images were used directly without any atmospheric correction.
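As a rough illustration of this preparation step, the sketch below (in Python) removes the discarded bands and draws per-class training and check samples from the ground truth. The array names, noisy-band indices, and per-class sample counts are placeholders for illustration, not values taken from the paper.

```python
# Minimal sketch of the data preparation described above: drop noisy bands and
# split labeled pixels into training and check sets. All inputs are hypothetical.
import numpy as np

def prepare_samples(cube, ground_truth, keep_bands, n_train_per_class, seed=0):
    """cube: (rows, cols, bands) radiance image; ground_truth: (rows, cols) labels, 0 = unlabeled."""
    rng = np.random.default_rng(seed)
    cube = cube[:, :, keep_bands]                 # e.g. 220 bands (AVIRIS) or 198 bands (Hyperion) kept
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    y = ground_truth.reshape(-1)

    train_idx, check_idx = [], []
    for c in np.unique(y[y > 0]):                 # labeled classes only
        idx = rng.permutation(np.flatnonzero(y == c))
        train_idx.extend(idx[:n_train_per_class]) # e.g. 50, 100, or 200 samples per class
        check_idx.extend(idx[n_train_per_class:]) # remaining ground truth used as check data
    return X[train_idx], y[train_idx], X[check_idx], y[check_idx]
```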

4.1 Experiment 1: Effect of Training Sample Size

The purpose of this experiment is to test the influence of different numbers of training samples. The overall accuracies of classification using different training sample sizes with different numbers of features are shown in Figure 5. Several observations can be made. First, the overall accuracies obtained with different training sample sizes are nearly equivalent when the number of features is smaller than 8. Second, due to insufficient samples for the increasing number of features, the smallest training sample size (50 samples, for example) yielded a slower increase in accuracy than the other training sets that contain more training samples.

Third, more training samples should in theory benefit classification accuracy; however, there was no apparent accuracy improvement when the number of training samples was increased from 100 to 200. These findings indicate that a wavelet network-based classifier achieves desirable accuracy even when only limited training samples are available. A similar conclusion can be drawn from Figure 6. In fact, a more pronounced Hughes phenomenon (the drop in accuracy as the number of features grows for a fixed training sample size) appeared with the relatively small sample sets of test image 2, for example the 20- and 40-sample sets beyond 10 features.
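A minimal sketch of this evaluation loop is given below. It assumes the spectral features have already been ranked by an earlier feature extraction step, and it uses a generic scikit-learn multi-layer perceptron as a stand-in for the wavelet network classifier, so the numbers it produces are only illustrative.

```python
# Sketch of the Experiment 1 loop: overall accuracy as a function of training
# sample size and number of features kept. The classifier is a stand-in MLP,
# not the paper's wavelet network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def overall_accuracy_grid(X_tr, y_tr, X_ck, y_ck, sample_sizes, feature_counts, seed=0):
    rng = np.random.default_rng(seed)
    acc = np.zeros((len(sample_sizes), len(feature_counts)))
    for i, n in enumerate(sample_sizes):            # e.g. 50, 100, 200 training samples
        sub = rng.choice(len(X_tr), size=min(n, len(X_tr)), replace=False)
        for j, k in enumerate(feature_counts):      # e.g. 2, 4, ..., 20 features
            clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=seed)
            clf.fit(X_tr[sub][:, :k], y_tr[sub])    # assumes features are already ranked
            acc[i, j] = accuracy_score(y_ck, clf.predict(X_ck[:, :k]))
    return acc
```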

Figure 5 Overall accuracy of test image 1

Figure 6 Overall accuracy of test image 2

4.2 Experiment 2: Comparison with Different Feature Extraction Methods

In experiment 2, three different feature extraction methods, linear WFE, nonlinear WFE, and PCA, were each combined with a hybrid neural network classifier and compared with the wavelet network-based classifier. The classification accuracies of the four approaches on the two study images are illustrated in Figure 7 and Figure 8.

Across the different sample sizes, the wavelet network generally gives the best classification performance among the four methods. Two observations in Figure 7 and Figure 8 are worth noting. First, the three wavelet-based approaches, the wavelet network and the two wavelet-based feature extraction methods, produced more similar accuracies on data set 2 than on data set 1. By contrast, the wavelet network shows a particularly outstanding classification result on data set 1 compared with the neural networks trained on pre-processed features. These results suggest that the wavelet network classifier may provide superior separability for very similar classes, because data set 1 comprises 8 classes that are not well separable. The second point is that principal component analysis led to significantly poorer accuracy than the two wavelet-based feature extraction methods. This verifies the efficiency of wavelet-based feature extraction and agrees with Hsu's results (Hsu 2003).
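The comparison can be sketched as follows. Here pywt's discrete wavelet decomposition stands in for the paper's linear/nonlinear WFE, whose exact formulation is not repeated in this section; the 'db4' wavelet, the decomposition level, and the feature count are illustrative assumptions.

```python
# Hedged sketch of the Experiment 2 comparison: the same MLP classifier trained
# on PCA features versus wavelet-derived features.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def wavelet_features(X, n_features, wavelet="db4", level=3):
    # Multilevel DWT of each spectrum; keep the first n_features coefficients
    # of the concatenated (coarse-to-fine) coefficient vector.
    coeffs = [np.concatenate(pywt.wavedec(x, wavelet, level=level)) for x in X]
    return np.asarray(coeffs)[:, :n_features]

def compare_methods(X_tr, y_tr, X_ck, y_ck, n_features=8, seed=0):
    pca = PCA(n_components=n_features).fit(X_tr)
    candidates = {
        "PCA": (pca.transform(X_tr), pca.transform(X_ck)),
        "wavelet FE": (wavelet_features(X_tr, n_features), wavelet_features(X_ck, n_features)),
    }
    results = {}
    for name, (F_tr, F_ck) in candidates.items():
        clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=seed)
        clf.fit(F_tr, y_tr)
        results[name] = accuracy_score(y_ck, clf.predict(F_ck))
    return results
```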

Figure 7 The effect of training sample size on classification accuracy with test image 1


Figure 8 The effect of training sample size on classification accuracy with test image 2

5. Conclusions

In this paper, the wavelet network algorithm is applied to the classification of hyperspectral images. In the wavelet network, the task of feature extraction is performed by the wavelet decomposition, whereas the classification is carried out by a multi-layer neural network. The advantages of wavelet networks include optimally adapted features as network inputs, improved classification accuracy, and reduced computation time. The experimental results show that the wavelet network is indeed an effective tool for the classification of hyperspectral images.
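For readers unfamiliar with the architecture, the following minimal sketch, in the spirit of Zhang and Benveniste (1992), shows how a layer of "wavelons" with learnable translations and dilations can feed a small classification layer. The mother wavelet, layer sizes, and parameterization are assumptions for illustration, not the paper's exact network.

```python
# Conceptual forward pass of a wavelet network: each wavelon applies a dilated
# and translated mother wavelet to a projection of the input spectrum, and a
# linear layer classifies the resulting features.
import numpy as np

def mexican_hat(t):
    # Second derivative of a Gaussian, a common choice of mother wavelet.
    return (1.0 - t**2) * np.exp(-0.5 * t**2)

def wavelet_network_forward(x, W, translations, dilations, V, b):
    """x: (bands,) spectrum; W: (n_wavelons, bands) input projections;
    translations, dilations: (n_wavelons,); V: (n_classes, n_wavelons); b: (n_classes,)."""
    z = (W @ x - translations) / dilations  # dilate and translate each projection
    h = mexican_hat(z)                      # wavelon activations (the adapted features)
    scores = V @ h + b                      # linear classification layer
    return np.argmax(scores), scores
```

In practice all parameters (W, the translations and dilations, V, b) would be trained jointly, for example by back-propagation as mentioned in the future-work discussion below.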

Because the classification results of wavelet networks depend strongly on the choice of wavelet basis, the classification accuracies of wavelet networks using different wavelet functions will be tested in future work.

Beyond back-propagation wavelet networks, another interesting direction would be to consider other wavelet network architectures, such as radial wavelet neural networks and recurrent wavelet networks. Moreover, a more extensive study of the applicability of other artificial intelligence techniques, such as support vector machines, fuzzy logic, and genetic algorithms, is another interesting topic.

ACKNOWLEDGMENT

This research project was sponsored by the National Science Council of the Republic of China under grant NSC 95-2221-E-492-011.

REFERENCES

Dickhaus, H. and Heinrich, H., 1996. Classifying biosignals with wavelet networks [a method for noninvasive diagnosis]. IEEE Engineering in Medicine and Biology Magazine, 15(5): 103-111.

Gong, P., Pu, R. and Yu, B., 1997. Conifer species recognition: An exploratory analysis of in situ hyperspectral data. Remote Sensing of Environment, 62(2): 189-200.

Hsu, P.-H., 2003. Spectral Feature Extraction of Hyperspectral Images using Wavelet Transform, Ph.D. Thesis, National Cheng Kung University, Tainan, Taiwan, R.O.C.

Hsu, P.-H., 2007. Feature extraction of hyperspectral images using wavelet and matching pursuit. ISPRS Journal of Photogrammetry and Remote Sensing, 62(2): 78-92.

Lee, C. and Landgrebe, D.A., 1997. Decision boundary feature extraction for neural networks. IEEE Transactions on Neural Networks, 8(1): 75-83.

Mallat, S., 1989. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7): 674-693.

Mallat, S., 1999. A Wavelet Tour of Signal Processing. Academic Press, New York.

Richards, J.A. and Jia, X., 2005. Remote Sensing Digital Image Analysis: An Introduction. Springer, Berlin Heidelberg.

Pittner, S. and Sagar, V.K., 1999. Feature extraction from wavelet coefficients for pattern recognition tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(1): 83-88.

Zhang, Q. and Benveniste, A., 1992. Wavelet networks. IEEE Transactions on Neural Networks, 3(6): 889-898.

