4.6 Performance Comparison of Observation Selection and Ratio Combining

Now we compare the performance of the above two defuzzification procedures, namely Observation Selection and Ratio Combining with Cramer-Rao bound coefficients (Ratio Combining in brief). This Ratio Combining is a special case that ignores the correlation among observations while taking the quality indicated by the Cramer-Rao bound into consideration. We do not include correlation information in Ratio Combining because we compare its performance with Observation Selection, which is assumed to operate without correlation information. To be consistent with the sections on Observation Selection and Ratio Combining, here we use the original notation, the observations r_k and the action decision a, in place of the notation used there. To enable us to apply the Cramer-Rao bound, we assume the conditions in Lemma 4-6 are satisfied. Then condition (4.27) implies that the conditional mean minimizes the expected utility function [9]. Consequently, the decision of Observation Selection becomes

a_OS = E[Θ | r_{k*}] = ∫ θ p(θ | r_{k*}) dθ,   (4.44)

where k* is the index of the selected observation. And the decision of Ratio Combining becomes

a_RC = Σ_{k=1}^{N} (CRB_k^{-1} / Σ_{j=1}^{N} CRB_j^{-1}) E[Θ | r_k] = Σ_{k=1}^{N} (σ_k^{-2} / Σ_{j=1}^{N} σ_j^{-2}) E[Θ | r_k].   (4.45)

To simplify the problem, we assume that an efficient estimator exists for every observation, which gives the second equality in (4.45). The optimal decision is

a* = E[Θ | r_1, r_2, …, r_N] = ∫ θ p(θ | r_1, r_2, …, r_N) dθ.   (4.46)

We compare the differences of mean square error (MSE) to the optimal decision, E[(a_OS − a*)²] and E[(a_RC − a*)²], to infer which of Observation Selection and Ratio Combining makes the better decision, i.e., gains more utility. In fact, the difference of mean square error is proportional to the expected utility function, which contains only square terms and a constant, so we can use it for the performance comparison. We investigate the comparison through a simple example of two observations with Gaussian distributions. Assume that the distributions of the observations conditioned on the parameter are normal distributions and, jointly, a bivariate normal distribution:

p(r_k | Θ = θ) = (1 / (√(2π) σ_k)) exp(−(r_k − θ)² / (2σ_k²)),   k = 1, 2,   (4.47), (4.48)

p(r_1, r_2 | Θ = θ) = N((θ, θ)ᵀ, Σ),   Σ = [σ_1², ρσ_1σ_2; ρσ_1σ_2, σ_2²],   (4.49)

where ρ is the correlation coefficient between the two observations, and assume the prior distribution of Θ is

p(Θ = θ) = (1 / √(2π)) exp(−θ² / 2).   (4.50)

Then we have the a posteriori distributions

p(θ | r_k) = p(r_k | Θ = θ) p(θ) / p(r_k),   k = 1, 2,   (4.51), (4.52)

p(θ | r_1, r_2) = p(r_1, r_2 | Θ = θ) p(θ) / p(r_1, r_2).   (4.53)
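The standard Gaussian computation behind (4.51)~(4.53) can be sketched as follows; the vector shorthand r = (r_1, r_2)ᵀ and 1 = (1, 1)ᵀ is introduced here for brevity and is not the thesis notation. Completing the square in θ in p(r_1, r_2 | θ) p(θ) ∝ exp(−(r − 1θ)ᵀ Σ⁻¹ (r − 1θ)/2 − θ²/2) gives

p(θ | r_1, r_2) = N(m, v),   v = (1 + 1ᵀ Σ⁻¹ 1)⁻¹,   m = v · 1ᵀ Σ⁻¹ r,

and the single-observation posteriors (4.51) and (4.52) follow from the same calculation with Σ replaced by the scalar σ_k².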

Observing that (4.51)~(4.53) are all in the form of a normal distribution, we can derive the conditional mean and variance of each distribution:

E[Θ | r_k] = r_k / (1 + σ_k²),   Var[Θ | r_k] = 1 / (1 + 1/σ_k²),   k = 1, 2,

and correspondingly for the joint posterior p(θ | r_1, r_2).
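To make the comparison concrete, a minimal Python sketch of the three decision rules for this two-observation Gaussian example is given below. The function names, and the assumption that Observation Selection picks the observation with the smaller noise variance, are ours and not taken from the thesis.

import numpy as np

def decide_observation_selection(r, var):
    # Observation Selection (4.44): use only the selected observation (here the
    # one with the smaller noise variance, an illustrative selection rule) and
    # return its conditional mean E[Theta | r_k*].
    k = int(np.argmin(var))
    return r[k] / (1.0 + var[k])

def decide_ratio_combining(r, var):
    # Ratio Combining (4.45): weight the per-observation conditional means by
    # the normalized inverse Cramer-Rao bounds (here 1/sigma_k^2), ignoring
    # any correlation between the observations.
    w = (1.0 / var) / np.sum(1.0 / var)
    return float(np.sum(w * r / (1.0 + var)))

def decide_optimal(r, var, rho):
    # Optimal decision (4.46): posterior mean E[Theta | r_1, r_2] under the
    # standard-normal prior and the correlated bivariate Gaussian likelihood.
    s1, s2 = np.sqrt(var)
    cov = np.array([[var[0], rho * s1 * s2],
                    [rho * s1 * s2, var[1]]])
    P = np.linalg.inv(cov)              # noise precision matrix
    ones = np.ones(2)
    return float(ones @ P @ r) / (1.0 + float(ones @ P @ ones))

# Example usage with arbitrary illustrative values.
r_example = np.array([0.8, 1.1])
var_example = np.array([1.0, 2.0])
print(decide_observation_selection(r_example, var_example),
      decide_ratio_combining(r_example, var_example),
      decide_optimal(r_example, var_example, rho=0.3))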

The differences of mean square error to the optimal decision, E[(a_OS − a*)²] and E[(a_RC − a*)²], with respect to variations of the observation variance and the correlation are shown in Fig. 4.2. In Fig. 4.2(a)(b), we fix the variance of observation 1, σ_1², to 1 and change the variance of observation 2, σ_2², while the correlation coefficient ρ is fixed. In Fig. 4.2(c)(d), we fix the variance of observation 1 to 1 and change the correlation coefficient while the variance of observation 2 is fixed. Note that a smaller distance implies better performance. The figures show that when the two observations have similar variances and are nearly independent, Ratio Combining notably outperforms the selection scheme (Fig. 4.2(c) and part of (a)(b)). If the difference between the variances of the two observations is not significant, Ratio Combining is still better than Observation Selection as long as the observations are nearly independent (Fig. 4.2(a)).
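The qualitative behavior in Fig. 4.2 can be reproduced with a rough Monte Carlo sweep such as the sketch below; it is our own illustration (function names, sample sizes, and grid values are arbitrary), estimating E[(a_OS − a*)²] and E[(a_RC − a*)²] by simulation rather than in closed form.

import numpy as np

rng = np.random.default_rng(0)

def mse_to_optimal(var2, rho, n=200_000):
    # Two-observation Gaussian example: Theta ~ N(0, 1), r_k = Theta + n_k with
    # var(n_1) = 1, var(n_2) = var2, and correlation coefficient rho.
    var = np.array([1.0, var2])
    s = np.sqrt(var)
    cov = np.array([[var[0], rho * s[0] * s[1]],
                    [rho * s[0] * s[1], var[1]]])
    theta = rng.standard_normal(n)
    noise = rng.multivariate_normal(np.zeros(2), cov, size=n)
    r = theta[:, None] + noise

    # Observation Selection: conditional mean of the lower-variance observation.
    k = int(np.argmin(var))
    a_os = r[:, k] / (1.0 + var[k])

    # Ratio Combining: inverse-CRB-weighted sum of per-observation conditional means.
    w = (1.0 / var) / np.sum(1.0 / var)
    a_rc = (r / (1.0 + var)) @ w

    # Optimal decision: posterior mean using the full (correlated) covariance.
    P = np.linalg.inv(cov)
    ones = np.ones(2)
    a_opt = (r @ P @ ones) / (1.0 + ones @ P @ ones)

    return np.mean((a_os - a_opt) ** 2), np.mean((a_rc - a_opt) ** 2)

# Sweep the variance of observation 2 with the correlation fixed, as in Fig. 4.2(a)(b).
for var2 in (0.5, 1.0, 2.0, 5.0):
    print(var2, mse_to_optimal(var2, rho=0.1))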

On the other hand, if the observations are highly correlated or the difference between the variances is large enough, the performance of Observation Selection can match or even exceed that of Ratio Combining (Fig. 4.2(b)(c)(e)). Then we have the following remark.

Remark 4-1: The numerical results of the above performance comparison in the low-correlation region show that

1) Diversity gain dominates the performance comparison when the correlation among observations is low and the variances of observations are close.

2) The inferior observation diminishes the diversity gain when the variances of the observations are significantly different.

These characteristics coincide with the intuition behind common experience-based decision rules for multiple observations. Under the intelligent decision framework, however, we mathematically demonstrate the validity of those intuitive rules and relate the observations from the numerical results to widely used multiple-observation decision schemes such as diversity and selection.

Besides the above discussion of the performance comparison in the ordinary region, the abnormal behaviors in the extreme region can also be well interpreted. In the intelligent decision framework, the optimal decision takes the correlation among observations into consideration, while Ratio Combining and Observation Selection ignore it. This results in two abnormal and opposite behaviors of the performance comparison in the extreme cases where the observations are highly correlated. Ratio Combining and Observation Selection both approach the optimal performance when the two observations have the same variance and are highly correlated (Fig. 4.2(c)). In fact, when the correlation coefficient approaches 1, the two schemes are almost equivalent. However, when the observations are highly correlated and the variances are different, the estimation error of the optimal decision approaches zero and the mean square differences to the optimal decision of both Ratio Combining and Observation Selection jump sharply (Fig. 4.2(d)).

Fig. 4.2 Difference of MSE to the optimal decision. (a), (b): the correlation coefficient is fixed. (c), (d): the variance of observation 2 is fixed. (e): the segment of (d) with the correlation coefficient in the range 0~0.6 highlighted.

This is due to the correlation gain, which exploits the correlation to enhance the estimation performance, in contrast to the diversity gain provided by independent observations. In fact, the correlation gain enables the optimal decision to achieve perfect estimation as the correlation approaches 1. Then we have the following remark:

Remark 4-2: For highly correlated observations,

1) When the variances of the observations are the same, the performances of Observation Selection and Ratio Combining both converge to that of the optimal decision as the correlation approaches 1.

2) When the variances of the observations are different, the error of the optimal decision converges to zero as the correlation approaches 1 due to the correlation gain.
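A short calculation makes these two cases precise; it uses the posterior-variance expression sketched after (4.51)~(4.53), which is our reconstruction rather than a formula quoted from the thesis. For two observations,

Var[Θ | r_1, r_2] = (1 + 1ᵀ Σ⁻¹ 1)⁻¹,   1ᵀ Σ⁻¹ 1 = (σ_1² + σ_2² − 2ρσ_1σ_2) / (σ_1² σ_2² (1 − ρ²)).

As ρ → 1 the numerator tends to (σ_1 − σ_2)² while the denominator tends to 0. Hence for σ_1 ≠ σ_2 the posterior variance goes to zero, i.e., perfect estimation as in item 2), whereas for σ_1 = σ_2 the ratio tends to 2/(σ_1²(1 + ρ)) → 1/σ_1², giving the same accuracy as a single observation, which Observation Selection and Ratio Combining also attain, as in item 1).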

We can intuitively explain the performance enhancement by the correlation gain stated in item 2) of Remark 4-2. Consider the extreme case in which the correlation coefficient is 1 and the standard deviation of the noise on observation 2 is 3 while that on observation 1 is 1. The noise added to observation 2 is then exactly three times the noise added to observation 1, so the difference between the two observations is exactly two times the noise added to observation 1, from which we can recover that noise and hence the event parameter Θ. Consequently, the optimal decision, which takes the correlation coefficient into consideration, is able to estimate the exact value of the parameter, while both Ratio Combining and Observation Selection are unable to do so because they ignore the correlation among observations. This explains the jump in Fig. 4.2(d). To sum up, the performance comparison analysis under the intelligent decision framework broadens the scope of multiple-observation diversity and correlation gain and explains them more precisely.
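A quick numerical check of this argument (our own illustration, not thesis code) confirms that with correlation coefficient 1 and noise standard deviations 1 and 3, the parameter is recovered exactly from the two observations, while the correlation-blind Ratio Combining estimate is not:

import numpy as np

rng = np.random.default_rng(1)
theta = rng.standard_normal()                 # event parameter, Theta ~ N(0, 1)
n1 = rng.standard_normal()                    # noise on observation 1
r1, r2 = theta + n1, theta + 3.0 * n1         # rho = 1; noise on r2 is 3x that on r1

theta_hat_exact = r1 - (r2 - r1) / 2.0        # (r2 - r1)/2 = n1, so this equals Theta

var = np.array([1.0, 9.0])                    # noise variances (std 1 and 3)
w = (1.0 / var) / np.sum(1.0 / var)           # inverse-CRB weights, as in (4.45)
theta_hat_rc = float(np.sum(w * np.array([r1, r2]) / (1.0 + var)))

print(theta, theta_hat_exact, theta_hat_rc)   # the exact recovery matches theta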

Chapter 5
