
CHAPTER 3: AN AUTOMATIC ALGORITHM TO CHOOSE THE PROPER COMPOSITE KERNEL

3.4 Experiment Results and Findings

3.4.1 Hyperspectral Image

Table 3.4 The framework of experimental designs.

According to the experimental design for the IPS dataset, 10% of the samples of each class are chosen as the training set. The ML classifier, however, requires estimating the covariance matrix of each class; because the number of training samples per class is smaller than the dimensionality, the covariance matrices become singular and are poorly estimated. Therefore, in the IPS experiment, the classification results of the ML classifier are null. Table 3.5 shows the validation measures for the best performance of the k-NN classifier (k = 1) and of SVM (OAO and OAA multiclass strategies) with different kernel approaches, together with the training time of each classification algorithm, and Table 3.6 displays the class-specific accuracies of the classifiers with the highest validation measures. For convenience, the classification maps with the highest performance in Table 3.5 are displayed in Figure 3.4, Figure 3.5, and Figure 3.6, respectively.
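The singularity problem mentioned above can be reproduced directly: when the number of training samples n of a class is smaller than the dimensionality d, the sample covariance matrix has rank at most n - 1 and cannot be inverted. A minimal NumPy sketch (the sizes are illustrative, not the actual IPS band count or sample count):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                 # fewer training samples than dimensions, as in the IPS case
X = rng.normal(size=(n, d))    # training samples of a single class
cov = np.cov(X, rowvar=False)  # d x d sample covariance matrix

rank = np.linalg.matrix_rank(cov)
print(rank, "of", d)           # rank is at most n - 1 = 49, so cov is singular
```

Since the ML discriminant needs the inverse of this matrix, estimation fails whenever n is below d, which is why the ML results for IPS are null.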

Classifier  Kernel function          Overall       Kappa            Average       Training
                                     Accuracy (%)  Coefficient (%)  Accuracy (%)  time (sec)
k-NN        -                        75.5          72.1             74.6          334.4
SVM_OAA     CK1 with constraint      87.6          85.9             85.8          29972.6
SVM_OAA     CK2 with constraint      83.3          80.9             80.7          29567.1
SVM_OAA     CK3 with constraint      87.6          85.9             85.8          23871.2
SVM_OAA     CK1 without constraint   60.2          54.7             56.6          41381.1
SVM_OAA     CK2 without constraint   80.4          77.6             78.2          22544.8
SVM_OAA     CK3 without constraint   86.1          84.1             83.6          15414.6
SVM_OAA     RBF                      87.6          85.9             85.8          336758.1
SVM_OAA     Polynomial               83.0          80.6             78.4          240159.7
SVM_OAO     CK1 with constraint      79.5          77.0             85.1          1769.8
SVM_OAO     CK2 with constraint      75.9          72.9             81.7          706.9
SVM_OAO     CK3 with constraint      79.5          77.0             85.3          1903.9
SVM_OAO     CK1 without constraint   84.8          82.8             85.8          2003.9
SVM_OAO     CK2 without constraint   72.9          69.9             82.8          872.2
SVM_OAO     CK3 without constraint   84.8          82.7             85.8          1922.2
SVM_OAO     ACK1 with constraint     83.8          81.5             80.8          155.5
SVM_OAO     ACK2 with constraint     74.7          71.8             83.2          49.3
SVM_OAO     ACK3 with constraint     83.9          81.6             81.9          352.4
SVM_OAO     ACK1 without constraint  83.2          80.9             80.2          153.6
SVM_OAO     ACK2 without constraint  77.2          74.5             84.4          47.6
SVM_OAO     ACK3 without constraint  83.5          81.2             81.5          352.9
SVM_OAO     RBF                      84.0          81.9             85.5          7049.8
SVM_OAO     Polynomial               75.2          72.2             82.0          2231.8

Table 3.5 Classification results of IPS dataset from experimental designs.

Figure 3.4 The classification map of IPS dataset from the k-NN classifier.

Figure 3.5 Classification maps of IPS dataset from experimental designs of SVM (OAA): (a)-(c) CK1-CK3 with constraint; (d)-(f) CK1-CK3 without constraint; (g) RBF; (h) Polynomial.

Figure 3.6 Classification maps of IPS dataset from experimental designs of SVM (OAO): (a)-(c) CK1-CK3 with constraint; (d)-(f) ACK1-ACK3 with constraint; (g)-(i) CK1-CK3 without constraint; (j)-(l) ACK1-ACK3 without constraint; (m) RBF; (n) Polynomial.

Table 3.6 The percentage of class-specific accuracies from the classifiers with the highest validation measures.

                  ----------- SVM_OAA -----------  -------- SVM_OAO --------
Class  Number of  CK1 with    CK3 with    RBF      CK1 without  CK3 without
No.    samples    constraint  constraint           constraint   constraint
1      46         95.7        95.7        95.7     91.3         91.3
2      1428       85.2        85.2        85.2     81.7         81.7
3      830        75.4        75.4        75.4     77.8         78.0
4      237        84.0        84.0        84.0     89.5         88.2
5      483        92.8        92.8        92.8     91.9         91.9
6      730        95.2        95.2        95.2     96.3         96.3
7      28         75.0        75.0        75.0     96.4         96.4
8      478        97.7        97.7        97.7     94.8         94.8
9      20         60.0        60.0        60.0     60.0         60.0
10     972        86.3        86.3        86.3     86.1         86.0
11     2455       87.0        87.0        87.0     81.5         81.5
12     593        85.8        85.8        85.8     71.8         72.2
13     205        99.5        99.5        99.5     99.5         99.5
14     1265       95.7        95.7        95.7     91.5         91.5
15     386        71.2        71.2        71.2     71.5         71.5
16     93         86.0        86.0        86.0     91.4         91.4

From Table 3.5, Table 3.6, Figure 3.4, Figure 3.5, and Figure 3.6, the findings are as follows:

1. SVM obtains better performance than k-NN, and the OAA multiclass strategy performs better than the OAO strategy. The highest classification performance comes from SVM (OAA) with CK1 with constraints, CK3 with constraints, and the RBF kernel (grid search); the overall classification accuracy, kappa coefficient, and average accuracy are 87.6%, 85.9%, and 85.8%, respectively.

2. From a formulation point of view, the composite kernel with constraints finds the weights of the components in the composite kernel. If the combination coefficient of one component approaches 1 while those of the other components approach 0, the algorithm is in effect identifying the most important component of the composite kernel.

3. In the OAA multiclass strategy, the composite kernels (CK1 and CK3 with constraints) applied to SVM obtain the same classification accuracies as RBF (grid search), and even the same classification maps. In the OAO multiclass strategy, the composite kernel CK1 without constraints obtains better classification performance than the RBF and polynomial kernels (grid search). Hence, in OAO, the composite kernel is more suitable than the other kernels. Moreover, the highest average accuracy of SVM (OAO) equals that of SVM (OAA). On the other hand, SVM (OAO) has better classification performance on the small classes (i.e., classes 3, 4, 7, and 16; see Table 3.6).

4. In terms of training time, grid search takes much longer than the composite kernels, regardless of the multiclass strategy. Hence, applying the automatic algorithm to choose the proper composite kernel saves training time while obtaining better or equal classification performance.
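The weighted combination discussed in the findings above can be sketched as a convex combination of a spectral kernel and a spatial kernel. The weight mu plays the role of the combination coefficient: under the constraint mu in [0, 1] (weights non-negative and summing to 1), mu near 1 singles out the spectral component and mu near 0 the spatial one. The kernel forms and sizes below are illustrative assumptions, not the thesis's exact CK1-CK3 definitions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gram matrix of the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def composite_kernel(X_spec, X_spat, mu):
    """Convex combination mu * K_spectral + (1 - mu) * K_spatial."""
    assert 0.0 <= mu <= 1.0, "constraint: weights are non-negative and sum to 1"
    K_spec = rbf_kernel(X_spec, X_spec)
    K_spat = rbf_kernel(X_spat, X_spat)
    return mu * K_spec + (1.0 - mu) * K_spat

rng = np.random.default_rng(1)
X_spec = rng.normal(size=(8, 5))   # spectral features of 8 pixels
X_spat = rng.normal(size=(8, 3))   # spatial features (e.g. neighbourhood statistics)
K = composite_kernel(X_spec, X_spat, mu=0.7)

# A convex combination of valid kernels is itself a valid (PSD) kernel,
# so it can be handed to any kernel SVM as a precomputed Gram matrix.
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)     # True: K is positive semidefinite
```

Because the combination stays a valid kernel for any admissible mu, searching over the weights replaces the much larger per-kernel grid search over SVM parameters.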

3.4.2 Educational Measurement Data

For the educational measurement data, Table 3.7 shows the validation measures for the best performance of the ML classifier, the k-NN classifier (k = 1), and SVM (OAO and OAA multiclass strategies) with different kernel approaches, together with the training time of each classification algorithm, and Table 3.8 displays the class-specific accuracies of the classifiers with the highest validation measures.
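The three validation measures reported throughout these tables can be computed from the confusion matrix in the standard way: overall accuracy is the fraction of correctly classified samples, average accuracy is the mean of the class-specific accuracies, and the kappa coefficient corrects overall accuracy for chance agreement. A sketch with a small made-up confusion matrix (not data from these experiments):

```python
import numpy as np

def validation_measures(cm):
    """cm[i, j] = number of class-i samples predicted as class j."""
    n = cm.sum()
    oa = np.trace(cm) / n                          # overall accuracy
    per_class = np.diag(cm) / cm.sum(axis=1)       # class-specific accuracies
    aa = per_class.mean()                          # average accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                   # kappa coefficient
    return oa, kappa, aa

cm = np.array([[90, 10],
               [30, 70]])
oa, kappa, aa = validation_measures(cm)
print(round(oa, 3), round(kappa, 3), round(aa, 3))  # 0.8 0.6 0.8
```

Average accuracy and kappa weight the small classes more heavily than overall accuracy does, which is why the three measures can rank classifiers differently in the tables below.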

Classifier  Kernel function          Overall       Kappa            Average       Training
                                     Accuracy (%)  Coefficient (%)  Accuracy (%)  time (sec)
ML          -                        45.2          0.0              12.5          0.03
k-NN        -                        35.9          25.4             59.1          0.91
SVM_OAA     CK1 with constraint      64.5          51.3             74.8          10258.5
SVM_OAA     CK2 with constraint      57.8          44.7             63.9          5240.2
SVM_OAA     CK3 with constraint      67.2          55.2             79.5          10536.6
SVM_OAA     CK1 without constraint   21.8          -1.5             19.9          7736.6
SVM_OAA     CK2 without constraint   42.0          26.3             48.6          5956.6
SVM_OAA     CK3 without constraint   32.0          8.3              26.7          6376.2
SVM_OAA     RBF                      69.6          55.3             75.7          644737.2
SVM_OAA     Polynomial               58.5          46.9             73.0          127539.8
SVM_OAO     CK1 with constraint      72.8          61.8             81.9          2034.0
SVM_OAO     CK2 with constraint      63.8          53.3             71.7          2039.1
SVM_OAO     CK3 with constraint      72.6          61.6             81.8          2670.4
SVM_OAO     CK1 without constraint   76.2          67.0             85.1          374.1
SVM_OAO     CK2 without constraint   54.9          43.8             67.8          750.3
SVM_OAO     CK3 without constraint   75.9          66.7             84.8          764.0
SVM_OAO     ACK1 with constraint     75.5          65.9             82.9          16.6
SVM_OAO     ACK2 with constraint     59.2          48.6             71.1          3.2
SVM_OAO     ACK3 with constraint     73.5          63.4             72.4          34.2
SVM_OAO     ACK1 without constraint  76.4          67.2             82.2          17.6
SVM_OAO     ACK2 without constraint  62.4          52.4             76.7          2.81
SVM_OAO     ACK3 without constraint  76.2          67.3             87.5          32.7
SVM_OAO     RBF                      73.0          59.9             82.5          57870.2
SVM_OAO     Polynomial               73.5          64.4             80.0          237561.3

Table 3.7 Classification results of educational measurement dataset from experimental designs.

Table 3.8 The percentage of class-specific accuracies from the classifiers with the highest validation measures.

                  -------- SVM_OAO ---------
Class  Number of  ACK1 without  ACK3 without
No.    samples    constraint    constraint
1      30         96.7          100.0
2      16         62.5          100.0
3      27         92.6          92.6
4      201        76.6          73.6
5      33         87.9          93.9
6      10         90.0          90.0
7      5          80.0          80.0
8      266        71.1          69.5

From Table 3.7 and Table 3.8, the conclusions are as follows:

1. In the experiments on educational measurement data, the highest overall classification accuracy comes from SVM (OAO) with the adaptive composite kernel ACK1 without constraints, and the highest kappa coefficient and average accuracy come from SVM (OAO) with ACK3 without constraints. The highest overall classification accuracy, kappa coefficient, and average accuracy are 76.4%, 67.3%, and 87.5%, respectively. At the classifier level, SVM obtains better classification performance than ML and k-NN.

2. SVM (OAO) with ACK1 without constraints has the highest overall classification accuracy, but not the highest kappa coefficient or average accuracy. This suggests that it has weaker classification ability on some small classes, to which the kappa coefficient and average accuracy are more sensitive; the class-specific accuracies in Table 3.8 support this point.

3. In terms of training time, grid search takes much longer than the composite kernels regardless of the multiclass strategy, and the automatic algorithm also greatly improves the classification performance. Hence, applying the automatic algorithm to choose the proper kernel not only saves training time but also obtains better classification performance on the classification problem of educational measurement data.
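The training-time gap reported above follows directly from the number of SVM trainings each approach requires: a grid search must train and cross-validate one SVM per parameter combination, whereas the automatic algorithm fixes the kernel weights first and then trains once. The grid sizes and fold count below are illustrative assumptions, not the settings used in this thesis:

```python
# Hypothetical grid-search budget: log2-spaced C and gamma with 5-fold CV.
C_grid = [2.0**k for k in range(-5, 16, 2)]       # 11 candidate values of C
gamma_grid = [2.0**k for k in range(-15, 4, 2)]   # 10 candidate values of gamma
folds = 5

grid_trainings = len(C_grid) * len(gamma_grid) * folds  # one SVM per cell per fold
auto_trainings = 1   # kernel weights chosen up front, then a single SVM training

print(grid_trainings, "vs", auto_trainings)  # 550 vs 1
```

Even under this modest hypothetical grid, the search trains hundreds of SVMs, which matches the orders-of-magnitude time differences in Table 3.5 and Table 3.7.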

CHAPTER 4: CLASSIFIERS USING SPECTRAL AND SPATIAL INFORMATION

4.1 Previous Works

In recent years, many studies (Bruzzone & Persello, 2009; Camps-Valls, Gomez-Chova, Munoz-Mari, Vila-Frances, & Calpe-Maravilla, 2006; Jackson & Landgrebe, 2002; Kuo, Chuang, Huang, & Hung, 2009; Marconcini, Camps-Valls, & Bruzzone, 2009; Li & Narayanan, 2004; Tarabalka, Benediktsson, & Chanussot, 2009) have shown that classification techniques employing both spectral and spatial information are effective and stable for hyperspectral image classification. Figure 4.1 is a diagram of the definition of the spectral domain and the spatial domain.

Three types of spatial-based classifiers proposed in the literature are introduced here: the context-sensitive semisupervised support vector machine (Bruzzone & Persello, 2009), the Bayesian contextual classifier based on Markov random fields (Jackson & Landgrebe, 2002; Kuo, Chuang, Huang, & Hung, 2009), and spectral-spatial classification based on partitional clustering techniques (Tarabalka, Benediktsson, & Chanussot, 2009). The problem of classes with similar spectral properties was presented and described in the previous chapter. In this thesis, a spatial-contextual support vector machine (SCSVM) classification algorithm is proposed to overcome this problem for hyperspectral image classification; it is introduced in the following section.
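As a concrete illustration of the spatial information such methods add to the per-pixel spectral vector, one common choice (an assumption here, not SCSVM's exact definition) is the mean spectrum of a pixel's neighbourhood, stacked with the pixel's own spectrum:

```python
import numpy as np

def spectral_spatial_features(img, win=3):
    """img: (H, W, B) hyperspectral cube. Returns (H, W, 2B) features:
    each pixel's spectrum stacked with its win x win neighbourhood mean."""
    H, W, B = img.shape
    r = win // 2
    padded = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")  # replicate borders
    spatial = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            # mean spectrum over the win x win window centred on pixel (i, j)
            spatial[i, j] = padded[i:i + win, j:j + win].mean(axis=(0, 1))
    return np.concatenate([img, spatial], axis=2)

cube = np.random.default_rng(2).normal(size=(6, 6, 4))  # toy 6x6 image, 4 bands
feats = spectral_spatial_features(cube)
print(feats.shape)  # (6, 6, 8): 4 spectral + 4 spatial-mean bands
```

Feeding both halves of such a feature vector to a classifier (or to separate kernels, as in the composite kernels of Chapter 3) is one simple way spectral and spatial information are combined.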

Figure 4.1 The diagram of definition of spectral domain and spatial domain.
