

4.3 Experiment Designs

In this chapter, only one dataset, the Indian Pine Site (IPS) hyperspectral image dataset, is used in our experiments, because the classification algorithms introduced in this chapter were designed to improve hyperspectral image classification performance by exploiting spatial information. We compare the classification performance of the SCSVM with the OAO and OAA multiclass strategies against the following reference classifiers: i) the standard supervised SVM with OAO and OAA multiclass strategies, ii) the k-NN classifier, iii) Bayesian contextual classifiers based on Markov random fields (MRFs), iv) the CS4VM, and v) a spectral–spatial classification scheme.

For the SVM-based classifiers (SVM, CS4VM, and SCSVM), the RBF kernel function is utilized. The parameter C controls the trade-off between the margin and the size of the slack variables, and the parameter σ² controls the width of the RBF kernel.

Hence, a grid search was adopted to find the proper σ² within the range [10⁻², 10²] for the RBF kernel, and the parameter C within the given set {0.1, 1, 10, 20, 60, 100, 160, 200, 1000}. For the model selection of CS4VM and SCSVM, we consider the same values of C and σ² as for the standard SVM in order to obtain comparable results. Moreover, for CS4VM, we fix the ratio κ1/κ2 = 2, use the following values for Cκ1: 2, 4, 6, 8, 10, 12, and 14, and consider a first-order neighborhood system for the context patterns. For the SCSVM, a second-order neighborhood system is employed to gain the spatial information, and an empirically fixed value of γ is used in our experiments (i.e., we fix γ = 0.1). For the spectral–spatial classification scheme, the scheme (SVM+EM), which combines the results of a pixelwise SVM classification with the segmentation maps obtained by the EM partitional clustering technique, is employed in our experiments. The EM clustering algorithm was performed with the maximum number of clusters (17 clusters), and the OAO multiclass strategy was applied to the SVM. For the SVM of this scheme, the parameters σ² and C were determined within the range [10⁻², 10²] and the set {0.1, 1, 10, 20, 60, 100, 160, 200, 1000}, respectively, by fivefold cross-validation. In the PR step of the SVM+EM and SCSVM classifiers, we adopt a 3×3 mask to filter the noise in the classification maps.
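The grid-search procedure above can be sketched as follows. This is a minimal illustration using scikit-learn, with synthetic stand-in data rather than the IPS pixels; note that scikit-learn's RBF kernel is parameterized by gamma = 1/(2σ²), so the σ² grid is mapped accordingly.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the hyperspectral training pixels.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

sigma2_grid = np.logspace(-2, 2, 9)            # sigma^2 in [10^-2, 10^2]
param_grid = {
    "C": [0.1, 1, 10, 20, 60, 100, 160, 200, 1000],
    "gamma": 1.0 / (2.0 * sigma2_grid),        # map sigma^2 to sklearn's gamma
}

# Five-fold cross-validated grid search with an OAO (one-against-one) SVM.
search = GridSearchCV(SVC(kernel="rbf", decision_function_shape="ovo"),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```

The same search could be rerun with an OAA (one-against-all) wrapper to reproduce the second multiclass strategy.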

The Bayesian contextual classifiers based on MRFs are the ML classifier with MRF (ML_MRF) and the k-NN classifier with MRF (k-NN_MRF). We apply a second-order neighborhood system and empirically set β = 30 for the MRF. Concerning the k-NN-based classifiers (k-NN and k-NN_MRF), we carried out several trials, varying the value of k from 1 to 20, in order to identify the value that maximizes the accuracy. For simplicity, the model selection for the k-NN-based classifiers was carried out on the basis of the accuracy computed on the testing dataset.
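The k-NN model selection described above can be sketched as a simple loop over k = 1, ..., 20, keeping the value with the best accuracy. Following the text, accuracy is computed on a held-out test split; the data here is a synthetic stand-in for the IPS pixels.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data, split into training and testing sets.
X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Try k = 1..20 and keep the value that maximizes test accuracy.
best_k, best_acc = None, -1.0
for k in range(1, 21):
    acc = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).score(X_te, y_te)
    if acc > best_acc:
        best_k, best_acc = k, acc
print(best_k, best_acc)
```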

4.4 Experiment Results and Findings

According to the experiment design for IPS, we choose 10% of the samples of each class as the training set. Hence, when we employ the ML-based classifiers, which require estimating the covariance matrix of each class, to classify the whole IPS hyperspectral image, singular covariance matrices and poor estimates occur, because the number of training samples of some classes is less than the dimensionality. Therefore, in the IPS experiment, the performances of ML and ML_MRF are not compared. We calculate the class-specific accuracies of all classifiers, as well as the overall classification accuracies, kappa coefficients, and average accuracies.
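The three validation measures used throughout this section can be computed from a confusion matrix; a minimal sketch (with an illustrative 3-class confusion matrix, not IPS results) is:

```python
import numpy as np

def oa_kappa_aa(cm):
    """Overall accuracy, Cohen's kappa, and average accuracy from a confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                                  # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                         # kappa coefficient
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))             # mean of class accuracies
    return oa, kappa, aa

# Illustrative confusion matrix: rows = true class, columns = predicted class.
cm = [[45, 5, 0],
      [4, 40, 6],
      [0, 2, 48]]
oa, kappa, aa = oa_kappa_aa(cm)
print(round(oa, 3), round(kappa, 3), round(aa, 3))
```

With balanced classes, as here, OA and AA coincide; on the IPS dataset the two differ because the class sizes are very unequal.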

Table 4.1 and Table 4.2 show these validation measures for the best performance of the k-NN classifier (k = 1), SVM (OAO and OAA multiclass strategies), CS4VM, the spectral–spatial classification scheme (SVM+EM) with and without the PR step, and SCSVM (OAO and OAA multiclass strategies) with and without the PR step. Note that the best overall accuracy, kappa coefficient, and average accuracy are highlighted in shaded cells. "After PR" and "before PR" mean that the results do and do not include the PR step, respectively. For the classification maps, we show the map with the highest accuracy for each type of classifier for comparison. Figure 4.5 (a) to (k) are the classification results of IPS using k-NN (k = 1), k-NN_MRF (k = 1), SVM (OAO), SVM (OAA), CS4VM, SVM+EM without and with PR, SCSVM (OAO) without and with PR, and SCSVM (OAA) without and with PR, respectively. Figure 4.5 (l) is the ground truth of the IPS image.

Classifier              Overall Accuracy (%)   Kappa Coefficient (%)   Average Accuracy (%)
k-NN                    75.5                   72.1                    74.6
k-NN_MRF                84.4                   82.1                    81.8
SVM_OAO                 84.4                   82.3                    85.5
SVM_OAA                 86.5                   84.6                    83.8
CS4VM                   88.0                   86.3                    85.0
SVM+EM (before PR)      91.3                   90.0                    81.6
SVM+EM (after PR)       92.8                   91.8                    82.5
SCSVM_OAO (before PR)   92.9                   92.0                    94.5
SCSVM_OAO (after PR)    94.8                   94.1                    96.5
SCSVM_OAA (before PR)   93.3                   92.3                    91.2
SCSVM_OAA (after PR)    96.4                   95.9                    95.8

Table 4.1 Overall accuracies, kappa coefficients, and average accuracies (in percent) of the experimental classifiers on the IPS dataset.


Class  Samples  k-NN   k-NN_MRF  SVM(OAO)  SVM(OAA)  CS4VM  SVM+EM         SCSVM(OAO)     SCSVM(OAA)
                                                            before/after   before/after   before/after PR
1      46       78.3   84.8      91.3      95.7      95.7   93.5 / 93.5    93.5 / 100.0   95.7 / 100.0
2      1428     64.8   75.1      78.8      86.3      88.9   86.6 / 89.0    82.4 / 86.4    92.0 / 95.7
3      830      62.8   70.7      82.4      79.8      81.4   89.2 / 90.1    93.0 / 94.1    83.1 / 87.6
4      237      54.9   67.9      95.4      77.2      79.7   97.9 / 100.0   97.0 / 100.0   94.9 / 100.0
5      483      89.2   93.6      90.9      91.1      91.1   93.6 / 94.6    95.4 / 95.9    93.2 / 96.1
6      730      94.9   98.8      93.8      94.5      93.8   97.1 / 98.5    95.9 / 97.0    98.9 / 100.0
7      28       85.7   92.9      96.4      85.7      85.7   0.0 / 0.0      100.0 / 100.0  82.1 / 96.4
8      478      96.0   99.0      84.7      97.3      97.5   97.9 / 98.3    86.2 / 88.3    99.4 / 100.0
9      20       40.0   40.0      60.0      45.0      45.0   5.0 / 0.0      95.0 / 100.0   70.0 / 80.0
10     972      74.4   87.2      89.8      85.8      85.7   87.5 / 90.2    97.1 / 98.4    93.2 / 97.1
11     2455     76.7   88.6      79.6      86.6      88.7   92.7 / 94.2    94.4 / 96.4    93.9 / 97.4
12     593      50.8   58.5      77.7      75.4      82.6   92.6 / 92.9    90.2 / 91.2    96.0 / 98.7
13     205      98.0   100.0     99.5      99.5      99.5   99.0 / 99.0    99.5 / 100.0   99.5 / 100.0
14     1265     89.7   96.1      91.8      92.6      92.6   93.3 / 93.8    95.9 / 97.3    96.1 / 97.6
15     386      48.7   59.8      69.2      66.8      67.6   83.7 / 88.3    97.2 / 99.2    82.9 / 87.6
16     93       89.2   95.7      87.1      81.7      83.9   95.7 / 96.8    98.9 / 100.0   88.2 / 98.9

Table 4.2 Class-specific accuracies (in percent) for the IPS dataset.

(a) k-NN            (b) k-NN_MRF         (c) SVM_OAO
(d) SVM_OAA         (e) CS4VM            (f) SVM+EM
(g) SVM+EM (PR)     (h) SCSVM_OAO        (i) SCSVM_OAO (PR)
(j) SCSVM_OAA       (k) SCSVM_OAA (PR)   (l) ground truth

■ Alfalfa ■ Corn-notill ■ Corn-min ■ Corn ■ Hay-windrowed ■ Grass/trees
■ Grass/pasture-mowed ■ Grass/pasture ■ Oats ■ Soybeans-notill ■ Soybeans-min
■ Soybeans-clean ■ Wheat ■ Woods ■ Bldg-Grass-Tree-Drives ■ Stone-steel towers

Figure 4.5 Classification maps of the IPS dataset from all classifiers.

According to the classification results in Table 4.1, Table 4.2, and Figure 4.5, we have the following findings:

1. In terms of accuracy, SCSVM (OAA) with the PR step obtains the highest overall accuracy and kappa coefficient, which are 96.4% and 95.9%, respectively (see Table 4.1). However, SCSVM (OAO) with the PR step obtains the highest average accuracy, 96.5%. These results indicate that SCSVM with OAO has better classification ability than SCSVM with OAA for classes with few samples (e.g., the class-specific accuracies of SCSVM (OAO) with the PR step on classes 7, 9, and 16 are all 100%, whereas those of SCSVM (OAA) with the PR step are 96.4%, 80%, and 98.9%, respectively). Moreover, the class-specific accuracies of SCSVM with the PR step are higher than those of k-NN, SVM (OAO), SVM (OAA), CS4VM, and SVM+EM in all classes (see Table 4.2).

2. Only 10% of the samples of each class, selected randomly from the reference dataset, are used as the training set. Therefore, some classes are represented by very few training samples (e.g., there are only 3 and 2 training samples for class 7 (Grass/pasture-mowed) and class 9 (Oats), respectively), which may not provide a fair enough representation of these classes in the training process. The classification performance on class 7 from k-NN, SVM (OAA), CS4VM, and SVM+EM is not good, and the classification accuracy on class 9 from k-NN, SVM (OAO), SVM (OAA), CS4VM, and SVM+EM is low. However, this situation is improved by SCSVM (OAO), even without the PR step: the classification accuracies of classes 7 and 9 are both 100%.

3. The postprocessing (PR) step reduces some of the noise in the classification maps (compare Figure 4.5 (f) with Figure 4.5 (g), Figure 4.5 (h) with Figure 4.5 (i), and Figure 4.5 (j) with Figure 4.5 (k)) and slightly increases the classification accuracy.
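The 3×3 PR mask mentioned in the experiment design can be realized as a majority (mode) filter on the label map; the sketch below is an illustrative implementation using SciPy, not the exact filter used in the experiments, applied to a tiny synthetic map with one speckle pixel.

```python
import numpy as np
from scipy import ndimage

def majority_filter(label_map):
    """Replace each label with the most frequent label in its 3x3 neighborhood."""
    def local_mode(window):
        vals, counts = np.unique(window, return_counts=True)
        return vals[np.argmax(counts)]
    return ndimage.generic_filter(label_map, local_mode, size=3, mode="nearest")

noisy = np.ones((5, 5), dtype=int)
noisy[2, 2] = 3                      # isolated "speckle" pixel
cleaned = majority_filter(noisy)
print(cleaned[2, 2])                 # the isolated label is replaced by 1
```

Because each pixel's label is replaced by the local majority, isolated misclassified pixels disappear while large homogeneous regions are left unchanged.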

4. The classification maps from the spatial-based classifiers (SVM+EM, SCSVM (OAO), and SCSVM (OAA)) are much better than those from the purely spectral classifiers (k-NN, SVM (OAO), SVM (OAA), and CS4VM); the speckle-like errors are reduced by the spatial-based classifiers, especially in the areas of Soybeans-min, Soybeans-notill, and Corn-notill, which are the most difficult parts to classify accurately. It is worth mentioning that SCSVM yields a marked improvement in the classification map: the map from SCSVM (Figure 4.5 (k)) closely approximates the ground truth of IPS.

5. The spectral–spatial classification scheme (SVM+EM) produces a sound classification map and obtains a 92.8% overall classification accuracy. However, this scheme relies on the partitional clustering results. If the clustering technique does not partition well the areas that have similar spectral properties but belong to different classes, or that come from small-sample-size classes (e.g., Oats and Grass/pasture-mowed), then these areas are grouped by the clustering technique into the same cluster and misclassified (see the red circle in Figure 4.5 (g)). From another point of view, even when the partitional clustering technique works very well, if the SVM classifier cannot sensitively distinguish some pixels that have similar spectral properties but belong to different classes, then these areas are sacrificed (see the black circle in Figure 4.5 (g)). For these reasons, SVM+EM yields either 0% or very low classification accuracies for the small classes.
