
The Algorithmic Parameters of a Fuzzy Dynamic Learning Neural Network

Y. C. Tzeng 1 N. S. Chou 2

ABSTRACT

When applied to remote sensing imagery classification, the conventional neural network classifier represents training information on a one-pixel-one-class basis. The class mixture within a pixel therefore cannot be taken into account, resulting in poor classification accuracy. Based on the original dynamic learning neural network (DL), this paper introduces a fuzzy dynamic learning neural network (FDL) in which the training information is represented as fuzzy sets. In order to add the fuzzy information to the neural network, a fuzzy c-means clustering algorithm is used to assign the degree of membership of each pixel during the training stage. The FDL is applied to SAR image classification to evaluate the selection of the algorithmic parameters, including the membership weighting exponent and the measure of dissimilarity. Furthermore, comparisons between the DL and the FDL are made. The effectiveness of combining the neural network and fuzzy sets theory is demonstrated by an example of SAR image classification. Experimental results show that the separability between similar classes and the classification capability of class-mixed pixels are improved. Moreover, the classification results agree better with the ground truth.

Keywords: degree of membership, membership weighting exponent, measure of dissimilarity, SAR

1. INTRODUCTION

The neural network has been widely applied to remote sensing imagery classification in recent years (Heermann and Khazenie, 1992; Bischof et al., 1992; Hara et al., 1994; Chen et al., 1995; Chen et al., 1996). Many investigators have shown that the neural network approach to classification problems is superior to the statistical approach without requiring a priori knowledge of the data (Benediktsson et al., 1990). However, poor classification accuracy is usually obtained in nonhomogeneous regions such as urban areas, because the conventional neural network classifier represents training information on a one-pixel-one-class basis; the class mixture of a pixel therefore cannot be taken into account.

Consequently, a new approach that represents the training information according to the degree of membership is required. The effective combination of fuzziness information in a neural network is expected to improve the classification performance.

1 Department of Electronics Engineering, National United University


Fuzzy sets were first introduced by Zadeh (1965). The operations of fuzzy sets are extensions of those used for traditional crisp sets.

Traditional crisp sets use probability theory to determine whether an event is expected to occur; fuzzy sets, in contrast, measure the degree to which an event occurs. In the past, many researchers have applied fuzzy sets theory to image classification. Caillol et al. (1993) adopted fuzzy random fields for image segmentation. Wang (1990) introduced the fuzzy mean and fuzzy variance of a fuzzy set to identify class-mixed pixels. The advantage provided by fuzzy sets is that the degree of membership in a set can be specified, rather than a binary is or is not a member. It is therefore valuable to combine the neural network and fuzzy sets theory so as to take advantage of these two distinct approaches.

By viewing each class as a fuzzy set and identifying the assignment operation with the membership function, a direct relationship between fuzzy sets and neural networks can be realized. Keller and Hunt (1985) applied fuzzy hyperplane decision boundaries to replace the crisp decision boundaries of the perceptron neural network. Simpson (1990) proposed a fuzzy min-max neural network, in which each fuzzy set is an aggregate of hyperboxes, each defined by a min point and a max point with a corresponding membership function. Based on the original dynamic learning neural network (DL) (Tzeng et al., 1994), Tzeng and Chen (1998) introduced a fuzzy dynamic learning neural network (FDL) in which the training information is represented by means of fuzzy sets theory. In order to add the fuzziness information to the neural network, a fuzzy c-means clustering algorithm (Bezdek, 1981) is used to assign the degree of membership of each pixel during the training stage. In the next section, the fuzzy dynamic learning neural network is introduced. The algorithmic parameters of the fuzzy c-means algorithm, including the membership weighting exponent and the measure of dissimilarity, are described in the following section.

In Section IV, the selection of the algorithmic parameters of the fuzzy c-means algorithm is discussed, and the effectiveness of the FDL is demonstrated by an example of SAR image classification, where the classification results obtained by the DL and the FDL are compared. Finally, some conclusions are drawn in Section V.

2. THE FUZZY DYNAMIC LEARNING NEURAL NETWORK

By incorporating the fuzzy sets theory into the dynamic learning neural network (DL), the fuzzy dynamic learning neural network (FDL) was developed (Tzeng and Chen, 1998). Given a set of input vectors $\{\mathbf{x}_1, \ldots, \mathbf{x}_N\}$, a fuzzy c-partition of these vectors specifies the degree of membership of each vector in each of the c classes. It is denoted by the $c \times N$ matrix

$$U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1N} \\ u_{21} & u_{22} & \cdots & u_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ u_{c1} & u_{c2} & \cdots & u_{cN} \end{bmatrix}$$

where $u_{kj}$ is the degree of membership of $\mathbf{x}_j$ in class k. The matrix U has to satisfy the following properties:

$$u_{kj} \in [0, 1] \quad \text{for } 1 \le k \le c \text{ and } 1 \le j \le N$$

$$\sum_{k=1}^{c} u_{kj} = 1 \quad \text{for } 1 \le j \le N$$

$$0 < \sum_{j=1}^{N} u_{kj} < N \quad \text{for } 1 \le k \le c$$

The fuzzy c-means algorithm attempts to cluster the input vectors by searching for a local minimum of the following objective function:

$$J_m = \sum_{j=1}^{N} \sum_{k=1}^{c} (u_{kj})^m\, D(\mathbf{x}_j, \mathbf{v}_k) \qquad (1)$$

where $\mathbf{v}_k$ is the fuzzy mean of class k, m is the membership weighting exponent (or fuzziness index), and the measure of dissimilarity $D(\mathbf{x}_j, \mathbf{v}_k)$ is the squared distance between the data point $\mathbf{x}_j$ and the k-th fuzzy mean $\mathbf{v}_k$. The selection of the membership weighting exponent and the measure of dissimilarity will be discussed in the following sections. The objective function $J_m$ follows the squared-error clustering criterion, as its minimization produces fuzzy clusters that are optimal in a generalized least-squared-error sense. It was shown by Bezdek (1981) that a local minimum of $J_m$ is reached only if

$$\mathbf{v}_k = \frac{\displaystyle\sum_{j=1}^{N} (u_{kj})^m\, \mathbf{x}_j}{\displaystyle\sum_{j=1}^{N} (u_{kj})^m} \qquad (2)$$

$$u_{kj} = \left[\, \sum_{i=1}^{c} \left( \frac{D(\mathbf{x}_j, \mathbf{v}_k)}{D(\mathbf{x}_j, \mathbf{v}_i)} \right)^{1/(m-1)} \right]^{-1} \qquad (3)$$

The fuzzy c-means algorithm (Bezdek, 1981) solves equations (2) and (3) iteratively until it converges to a local minimum. A direct relationship between fuzzy sets and neural networks is realized by viewing each class as a fuzzy set and identifying the assignment operation with the membership function. In order to incorporate the fuzziness information into the original dynamic learning neural network, the crisp desired output $d_{kj}$ of the k-th class corresponding to the j-th input pattern is replaced by the degree of membership $u_{kj}$, and the fuzzy c-means clustering algorithm is adopted to assign the degree of membership of each pixel during the training stage.

3. THE ALGORITHMIC PARAMETERS

The fuzzy c-means algorithm adopted by FDL has a number of algorithmic parameters.

Among them, the most important are the membership weighting exponent m and the measure of dissimilarity $D(\mathbf{x}_j, \mathbf{v}_k)$. In general, the larger m is, the fuzzier the membership assignments become; conversely, as m approaches 1, the fuzzy c-means solutions become crisp. The membership weighting exponent m thus controls the extent of membership sharing between fuzzy clusters. However, no theoretical basis for an optimal choice of m has emerged to date. A heuristic guideline is available: good fuzzy clusters are in fact not very fuzzy (Backer, 1978). In this paper, four different types of dissimilarity measure are analyzed:

Type 1. Euclidean distance:

$$D(\mathbf{x}_j, \mathbf{v}_k) = (\mathbf{x}_j - \mathbf{v}_k)^T(\mathbf{x}_j - \mathbf{v}_k) = \sum_{i=1}^{n} (x_{ij} - v_{ki})^2 \qquad (4)$$

where T denotes the transpose operator, n is the number of input channels, xij is the i-th channel of the j-th input pattern xj, and vki is the i-th channel of the k-th fuzzy mean vk.

Type 2. Euclidean distance weighted by variance:

$$D(\mathbf{x}_j, \mathbf{v}_k) = \sum_{i=1}^{n} \frac{(x_{ij} - v_{ki})^2}{\sigma_{ki}^2} \qquad (5)$$

where the fuzzy variance can be obtained by (Bezdek, 1981)

$$\sigma_{ki}^2 = \frac{\displaystyle\sum_{j=1}^{N} (u_{kj})^m\,(x_{ij} - v_{ki})^2}{\displaystyle\sum_{j=1}^{N} (u_{kj})^m} \qquad (6)$$

Type 3. Euclidean distance weighted by probability:

$$D(\mathbf{x}_j, \mathbf{v}_k) = \sum_{i=1}^{n} \bigl(1 - p_{ki}(x_{ij})\bigr)\,(x_{ij} - v_{ki})^2 \qquad (7)$$

where the probability distribution function $p_{ki}(x_{ij})$ is a description of the degree of similarity between $x_{ij}$ and $v_{ki}$. After speckle reduction, the probability distribution of the amplitude of a SAR image is approximately a Gamma distribution (Nezry et al., 1993). The Gamma distribution function can be written as

$$p_{ki}(x_{ij}) = \frac{1}{\Gamma(\beta)} \left(\frac{\beta}{v_{ki}}\right)^{\beta} x_{ij}^{\,\beta-1} \exp\!\left(-\frac{\beta\, x_{ij}}{v_{ki}}\right) \qquad (8)$$

where $\beta = v_{ki}^2/\sigma_{ki}^2$ is the reciprocal of the normalized variance.

Type 4. Mahalanobis distance:

$$D(\mathbf{x}_j, \mathbf{v}_k) = (\mathbf{x}_j - \mathbf{v}_k)^T\, \mathbf{C}_{fk}^{-1}\, (\mathbf{x}_j - \mathbf{v}_k) \qquad (9)$$

where the fuzzy covariance matrix of the k-th class, $\mathbf{C}_{fk}$, can be obtained by (Bezdek, 1981)

$$\mathbf{C}_{fk} = \frac{\displaystyle\sum_{j=1}^{N} (u_{kj})^m\,(\mathbf{x}_j - \mathbf{v}_k)(\mathbf{x}_j - \mathbf{v}_k)^T}{\displaystyle\sum_{j=1}^{N} (u_{kj})^m} \qquad (10)$$
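The four measures can be written out in code as below. This is a hedged sketch with our own function names; the Gamma weighting in d_probability_weighted follows our reading of equations (7) and (8), and scipy.special.gamma supplies Γ(β).

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def d_euclidean(x, v):                       # equation (4)
    return np.sum((x - v) ** 2)

def d_variance_weighted(x, v, var):          # equation (5); var holds sigma_ki^2 per channel
    return np.sum((x - v) ** 2 / var)

def gamma_pdf(x, v, var):                    # equation (8): Gamma PDF with mean v, shape beta
    beta = v ** 2 / var                      # beta = v_ki^2 / sigma_ki^2
    return (beta / v) ** beta * x ** (beta - 1) * np.exp(-beta * x / v) / gamma_fn(beta)

def d_probability_weighted(x, v, var):       # equation (7), as reconstructed above
    return np.sum((1.0 - gamma_pdf(x, v, var)) * (x - v) ** 2)

def d_mahalanobis(x, v, cov):                # equation (9); cov is the fuzzy covariance C_fk
    diff = x - v
    return diff @ np.linalg.solve(cov, diff)
```

Here x and v are length-n channel vectors (one pixel and one fuzzy mean), while var and cov would come from equations (6) and (10), respectively.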

4. APPLICATIONS AND DISCUSSIONS

It is worthwhile to devote some effort to understanding the selection of the algorithmic parameters of the fuzzy dynamic learning neural network (FDL). To assess the performance obtained with different combinations of the algorithmic parameters, the FDL is applied to a SAR image classification. In this paper, four different values of the membership weighting exponent m and the four types of dissimilarity measure $D(\mathbf{x}_j, \mathbf{v}_k)$ described in the previous section are evaluated.

A polarimetric SAR image is used as a test example. Because SAR generates an image by coherent processing of the scattered signals, SAR images are highly susceptible to speckle. The presence of multiplicative speckle noise reduces the ability to distinguish and classify the image content, so a preprocessing of the image is necessary. A Lee filter (Lee et al., 1991) with a 7×7 window is chosen to reduce the speckle. For the purpose of classification, a winner-takes-all approach is adopted to select the proper class; therefore, no threshold value is set and no "unknown" class is produced.

This enables the use of the kappa coefficient for accuracy evaluation. This statistic has been shown to be an efficient estimator of classification accuracy (Lillesand and Kiefer, 1993). It is based on the difference between the actual classification agreement (i.e., agreement between the computer classification and the reference data, as indicated by the diagonal elements) and the chance agreement, which is indicated by the product of the row and column marginals. The kappa coefficient, overall purity, UP% (user's purity), and PP% (producer's purity) can be calculated from a classification matrix by

$$\text{kappa coefficient} = \frac{N\displaystyle\sum_{i=1}^{r} x_{ii} - \displaystyle\sum_{i=1}^{r} x_{i+}\, x_{+i}}{N^2 - \displaystyle\sum_{i=1}^{r} x_{i+}\, x_{+i}} \qquad (11)$$

$$\text{overall purity} = \left(\frac{1}{N}\sum_{i=1}^{r} x_{ii}\right) \times 100\% \qquad (12)$$

$$\text{UP\%} = \left(x_{ii}/x_{+i}\right) \times 100\% \qquad (13)$$

$$\text{PP\%} = \left(x_{ii}/x_{i+}\right) \times 100\% \qquad (14)$$

where r is the number of rows in the matrix, N is the total number of observations, $x_{ii}$ is the number of observations in row i and column i (i.e., the i-th diagonal element), $x_{i+} = \sum_{j=1}^{r} x_{ij}$ and $x_{+i} = \sum_{j=1}^{r} x_{ji}$ are the marginal totals of row i and column i, respectively, and $x_{ij}$ denotes the number of pixels of class i that are classified as class j. Notice that the numerator of equation (11) is analogous to the observed-minus-expected calculation performed in a chi-square analysis.

The kappa coefficient is indeed a measure of how well the classification agrees with the reference data.
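These quantities follow directly from a classification matrix; a short sketch (our own helper, not code from the paper) reproduces the figures reported for the DL result in Table III.

```python
import numpy as np

def accuracy_metrics(cm):
    """cm[i, j] = number of pixels of reference class i classified as class j."""
    cm = np.asarray(cm, dtype=float)
    N = cm.sum()
    diag = np.trace(cm)
    row = cm.sum(axis=1)                      # x_i+ : reference (row) totals
    col = cm.sum(axis=0)                      # x_+i : classified (column) totals
    chance = np.sum(row * col)
    kappa = (N * diag - chance) / (N ** 2 - chance)   # equation (11)
    overall = diag / N * 100.0                        # equation (12)
    up = np.diag(cm) / col * 100.0                    # equation (13): user's purity
    pp = np.diag(cm) / row * 100.0                    # equation (14): producer's purity
    return kappa, overall, up, pp

# Classification matrix of Table III (DL); rows 1-4 = ocean, park 1, urban, park 2
cm_dl = [[775, 0, 0, 0],
         [2, 472, 135, 22],
         [8, 77, 518, 10],
         [7, 15, 0, 397]]
kappa, overall, up, pp = accuracy_metrics(cm_dl)
print(kappa, overall)          # ~0.847 and ~88.7%, consistent with Table III
```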

The test site is located in the San Francisco Bay area. Four-look, L-band, fully polarimetric SAR data were acquired by the NASA/JPL AIRSAR system. For hh polarization, the original SAR image is shown in Fig. 1 and the speckle-reduced SAR image is shown in Fig. 2. The size of the images is 1024 lines by 900 pixels. The training and verification areas are enclosed in boxes in Figs. 1 and 2, respectively. This site mainly contains four types of terrain cover: ocean, urban, park 1, and park 2. In the urban area, buildings are separated into blocks by streets, which sometimes have vegetation cover; hence, the degree of class mixture is high in the urban area. Because the backscatter returns of grass and trees are different, the vegetated area is divided into park 1 and park 2. Fig. 3 shows the histogram of each class at each polarization. It is clear that park 1 and park 2 are two distinct classes.

For our test site, the mean and variance of each class, calculated from the selected training data, are displayed in Table I. In addition, the Gamma probability distribution for each class at each polarization is shown in Fig. 3. The theoretical distribution deviates from its corresponding histogram because the degree of class mixture is high in the urban area. Furthermore, the covariance matrix for each class is given as

$$\mathbf{C}_{\text{ocean}} = \begin{bmatrix} 2.272008\mathrm{e}{-6} & 2.429272\mathrm{e}{-7} & 2.823437\mathrm{e}{-6} \\ 2.429272\mathrm{e}{-7} & 4.052797\mathrm{e}{-8} & 3.495667\mathrm{e}{-7} \\ 2.823437\mathrm{e}{-6} & 3.495667\mathrm{e}{-7} & 3.844826\mathrm{e}{-6} \end{bmatrix}$$

$$\mathbf{C}_{\text{park 1}} = \begin{bmatrix} 2.522182\mathrm{e}{-5} & 7.689065\mathrm{e}{-6} & 2.656531\mathrm{e}{-5} \\ 7.689065\mathrm{e}{-6} & 5.385074\mathrm{e}{-6} & 1.071511\mathrm{e}{-5} \\ 2.656531\mathrm{e}{-5} & 1.071511\mathrm{e}{-5} & 3.840476\mathrm{e}{-5} \end{bmatrix}$$

$$\mathbf{C}_{\text{urban}} = \begin{bmatrix} 7.390002\mathrm{e}{-5} & 1.548428\mathrm{e}{-5} & 3.164838\mathrm{e}{-5} \\ 1.548428\mathrm{e}{-5} & 5.268050\mathrm{e}{-6} & 5.618950\mathrm{e}{-6} \\ 3.164838\mathrm{e}{-5} & 5.618950\mathrm{e}{-6} & 6.688647\mathrm{e}{-5} \end{bmatrix}$$

$$\mathbf{C}_{\text{park 2}} = \begin{bmatrix} 6.533464\mathrm{e}{-7} & 1.119903\mathrm{e}{-7} & 4.415474\mathrm{e}{-7} \\ 1.119903\mathrm{e}{-7} & 7.280034\mathrm{e}{-8} & 8.563050\mathrm{e}{-8} \\ 4.415474\mathrm{e}{-7} & 8.563050\mathrm{e}{-8} & 8.430916\mathrm{e}{-7} \end{bmatrix}$$

The FDL is configured to have 3 input nodes (hh, vv, and vh), 4 output nodes, and 2 hidden layers with 40 hidden nodes each. The kappa coefficients obtained by different combinations of the algorithmic parameter selections are listed in Table II. First, the classification results obtained by the FDL at different values of the membership weighting exponent m were compared. As can be seen in Table II, the FDL obtained the best result at m = 2 for every selection of the dissimilarity measure. For example, when the Euclidean distance was used, the kappa coefficient is 79.74%, 80.55%, 85.77%, and 78.98% for m = 1.1, 1.5, 2.0, and 3.0, respectively. In addition, the classification results for different types of dissimilarity measure were also compared. At each value of the membership weighting exponent, the best result was obtained with the Type 3 dissimilarity measure (Euclidean distance weighted by probability). For instance, at m = 2, the kappa coefficient is 85.77%, 84.52%, 87.86%, and 83.23% for the Type 1, Type 2, Type 3, and Type 4 measures, respectively. As Table II also shows, the addition of fuzziness can make the classification problem more complicated and may even degrade the result; however, it does improve the classification accuracy when a proper selection of the algorithmic parameters is made. In this study, we do not attempt to find an optimal value of m for each type of dissimilarity measure.
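The DL/FDL weight-update scheme itself (Tzeng et al., 1994) is not described in this paper, so the sketch below only illustrates the stated 3-40-40-4 topology and the use of fuzzy memberships as desired outputs, substituting plain gradient-descent backpropagation for the DL training rule; all names are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fuzzy_mlp(X, U, hidden=(40, 40), lr=0.1, epochs=200, seed=0):
    """X: (N, 3) inputs (hh, vv, vh); U: (N, 4) fuzzy memberships used as
    desired outputs in place of crisp 0/1 labels."""
    rng = np.random.default_rng(seed)
    sizes = [X.shape[1], *hidden, U.shape[1]]
    Ws = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(b) for b in sizes[1:]]
    for _ in range(epochs):
        acts = [X]                                           # forward pass
        for W, b in zip(Ws, bs):
            acts.append(sigmoid(acts[-1] @ W + b))
        delta = (acts[-1] - U) * acts[-1] * (1 - acts[-1])   # MSE against fuzzy targets
        for l in range(len(Ws) - 1, -1, -1):                 # backward pass
            grad_W = acts[l].T @ delta / len(X)
            grad_b = delta.mean(axis=0)
            if l > 0:
                delta = (delta @ Ws[l].T) * acts[l] * (1 - acts[l])
            Ws[l] -= lr * grad_W
            bs[l] -= lr * grad_b
    return Ws, bs

def classify(X, Ws, bs):
    a = X
    for W, b in zip(Ws, bs):
        a = sigmoid(a @ W + b)
    return a.argmax(axis=1)          # winner-takes-all, as used in the paper
```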

It is also meaningful to compare the FDL with the DL by applying both neural networks to the SAR image classification. Both the DL and the FDL have the same configuration as stated above. As shown in Table III, the kappa coefficient obtained with the DL is 84.69%. The accuracy is low because the mixture of urban and park areas was not distinguished well: 135 pixels of class 2 (park 1) were misclassified as class 3 (urban). In contrast, as Table IV shows, a higher kappa coefficient (87.86%) is obtained with the FDL when the membership weighting exponent m = 2 and the Type 3 dissimilarity measure are used. The vegetated region in the urban area has been identified successfully. The producer's accuracy of class 2 increased greatly from 74.80% to 92.39%, and the user's accuracy of class 3 increased substantially from 79.3% to 97.3%.

Therefore, the classification capability of the class-mixed pixels is improved. Figs. 4 and 5 display the classification results by using DL and FDL, respectively, in which ocean, urban, park 1, and park 2 are represented in blue, red, green, and yellow colors, respectively.

5. CONCLUSIONS

A fuzzy dynamic learning neural network has been introduced in this paper. In order to add the fuzzy information to the neural network, a fuzzy c-means clustering algorithm is used to assign the degree of membership of each pixel during the training stage. The selection of the algorithmic parameters of the fuzzy dynamic learning neural network, including the membership weighting exponent and the measure of dissimilarity, is also discussed. It can be concluded that the best result is obtained when the membership weighting exponent m equals 2 and the Type 3 dissimilarity measure (Euclidean distance weighted by probability) is used. The effectiveness of the combination of the neural network and fuzzy sets theory is demonstrated by an example of SAR image classification. Experimental results show that the separability between similar classes and the classification capability of class-mixed pixels are improved. Moreover, the classification results agree better with the ground truth.

REFERENCES

Backer E., 1978, Cluster Analysis by Optimal Decomposition of Induced Fuzzy Sets, (Delft: Delft University Press).

Benediktsson J. A., Swain P. H., and Ersoy O. K., 1990, Neural network approaches versus statistical methods in classification of multisource remote sensing data. IEEE Transactions on Geoscience and Remote Sensing, 28, 540-552.

Bezdek J. C., 1981, Pattern Recognition with Fuzzy Objective Function Algorithms, (New York: Plenum).

Bischof H., Schneider W., and Pinz A. J., 1992, Multispectral classification of Landsat-images using neural networks. IEEE Transactions on Geoscience and Remote Sensing, 30, 482-490.

Caillol H., Hillion A., and Pieczynski W., 1993, Fuzzy random fields and unsupervised image segmentation. IEEE Transactions on Geoscience and Remote Sensing, 31, 801-810.

Chen K. S., Tzeng Y. C., Chen C. F., and Kao W. L., 1995, Land-cover classification of multispectral imagery using a dynamic learning neural network. Photogrammetric Engineering and Remote Sensing, 61, 403-408.

Chen K. S., Huang W. P., Tsay D. H., and Amar F., 1996, Classification of multifrequency polarimetric SAR imagery using a dynamic learning neural network. IEEE Transactions on Geoscience and Remote Sensing, 34, 814-820.

Hara Y., Atkins R. G., Yueh S. H., Shin R. T., and Kong J. A., 1994, Application of neural networks to radar image classification. IEEE Transactions on Geoscience and Remote Sensing, 32, 100-109.

Heermann P. D. and Khazenie N., 1992, Classification of multispectral remote sensing data using a back-propagation neural network. IEEE Transactions on Geoscience and Remote Sensing, 30, 81-88.

Keller J. and Hunt D., 1985, Incorporating fuzzy membership functions into the perceptron algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7, 693-699.

Lee J. S., Grunes M. R., and Mango S. A., 1991, Speckle reduction in multipolarization multifrequency SAR imagery. IEEE Transactions on Geoscience and Remote Sensing, 29, 535-544.

Lillesand T. M. and Kiefer R. W., 1993, Remote Sensing and Image Interpretation, (New York: Wiley).

Nezry E., Mougin E., Lopes A., and Gastellu-Etchegorry J. P., 1993, Tropical vegetation mapping with combined visible and SAR spaceborne data. International Journal of Remote Sensing, 14, 2165-2184.

Simpson P. K., 1990, Fuzzy min-max neural networks—Part 1: Classification. IEEE Transactions on Geoscience and Remote Sensing, 28, 194-201.

Tzeng Y. C., Chen K. S., Kao W. L., and Fung A. K., 1994, A dynamic learning neural network for remote sensing applications. IEEE Transactions on Geoscience and Remote Sensing, 32, 1096-1102.

Tzeng Y. C. and Chen K. S., 1998, A fuzzy neural network to SAR image classification. IEEE Transactions on Geoscience and Remote Sensing, 36, 301-307.

Wang F., 1990, Fuzzy supervised classification of remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 28, 194-201.

Zadeh L., 1965, Fuzzy sets. Information and Control, 8, 338-353.

TABLE I
MEAN AND VARIANCE OF EACH CLASS

              class    ocean         park 1        urban         park 2
  mean        hh       3.491680e-3   1.102105e-2   2.112502e-2   4.851654e-3
              hv       7.597737e-4   5.915747e-3   7.137382e-3   1.384192e-3
              vv       6.569010e-3   1.171671e-2   1.934680e-2   4.802634e-3
  variance    hh       2.272008e-6   2.522182e-5   7.390002e-5   6.533464e-7
              hv       4.052797e-8   5.285074e-6   5.268050e-6   7.280034e-8
              vv       3.844826e-6   3.840476e-5   6.688647e-5   8.430916e-7

TABLE II
KAPPA COEFFICIENTS OBTAINED BY DIFFERENT COMBINATIONS OF THE ALGORITHMIC PARAMETERS SELECTIONS

              measure of dissimilarity
              Type 1    Type 2    Type 3    Type 4
  m = 1.1     79.74%    78.14%    82.35%    75.47%
  m = 1.5     80.55%    78.47%    82.21%    74.55%
  m = 2.0     85.77%    84.52%    87.86%    83.23%
  m = 3.0     78.98%    79.45%    81.09%    75.58%

TABLE III
CLASSIFICATION MATRIX (USING DL)

  class      1       2       3       4       Producer's
  1          775     0       0       0       100.0
  2          2       472     135     22      74.80
  3          8       77      518     10      84.50
  4          7       15      0       397     94.75
  User's     97.9    83.7    79.3    92.5
  overall accuracy = 88.67%; kappa coefficient = 84.69%
  1: ocean, 2: park 1, 3: urban, 4: park 2

TABLE IV
CLASSIFICATION MATRIX (USING FDL)

  class      1       2       3       4       Producer's
  1          775     0       0       0       100.0
  2          0       583     13      35      92.39
  3          16      110     463     24      75.53
  4          5       16      0       398     94.99
  User's     97.4    82.2    97.3    87.1
  overall accuracy = 91.02%; kappa coefficient = 87.86%
  1: ocean, 2: park 1, 3: urban, 4: park 2

Fig. 1. Original SAR image (P band hh-pol.)

Fig. 2. Speckle reduced SAR image (P band hh-pol.)

Fig. 3. Histogram and Gamma PDF for each class at each polarization (panels: Gamma PDF (HH), Gamma PDF (VV), Gamma PDF (VH); curves: Ocean, Urban, Park 1, Park 2, data and theory; axes: amplitude vs. P(x))

Fig. 4. Classification result (using DL)

Fig. 5. Classification result (using FDL)


The Algorithmic Parameters of a Fuzzy Dynamic Learning Neural Network

Y. C. Tzeng 1   N. S. Chou 2

ABSTRACT

The conventional neural network classifier represents its training information on a one-pixel-one-class basis. The class mixture within a pixel is therefore not taken into account, which lowers the classification accuracy. The fuzzy dynamic learning neural network is based on the dynamic learning neural network and represents its training information with fuzzy sets. To add the fuzzy information to the neural network, the fuzzy dynamic learning neural network uses a fuzzy c-means algorithm during the training stage to assign the degree of membership of each pixel. This paper applies the fuzzy dynamic learning neural network to synthetic aperture radar (SAR) image classification in order to discuss the selection of the algorithmic parameters of the fuzzy c-means algorithm, including the membership weighting exponent and the measure of dissimilarity. SAR image classification is also used to compare the performance of the dynamic learning neural network and the fuzzy dynamic learning neural network. Experimental results show that the fuzzy dynamic learning neural network has better convergence characteristics and classification results than the dynamic learning neural network, and that it improves the separability between similar classes and the classification capability of class-mixed pixels.

Keywords: degree of membership, membership weighting exponent, measure of dissimilarity, synthetic aperture radar

Received: October 16, 2002; Revised: November 19, 2004; Accepted: November 22, 2004

1 Professor, Department of Electronics Engineering, National United University
2 Lecturer, Department of Computer Science and Information Engineering, National United University
