
無人載具之導航與控制 (II)

Navigation and Control of Unmanned Vehicles (II)

Project number: NSC 89-2212-E-002-084

Project duration: 1 August 1999 to 30 November 2000 (ROC 88/08/01 to 89/11/30)


Introduction

To perform the navigation of an unmanned vehicle, various sensors, such as GPS, INS, compass, and encoders, can be used. Depending on their characteristics, different sensors have different advantages: for example, GPS measurements are corrupted mainly by high-frequency noise, whereas INS errors accumulate as low-frequency drift. In order to integrate these sensors, a data-fusion algorithm built around the Kalman filter may be adopted. However, some issues have to be tackled. First, the initial settings of the algorithm must be given. Secondly, if the independence assumption of Kalman filtering is not valid, it is necessary to deal with dependent processes. In this report, an algorithm for determining the initial settings, including the error covariance, the process noise covariance, and the measurement noise covariance, is proposed. In addition, the covariance intersection algorithm is used to handle dependent information. The combination of these strategies is then used to design the fused INS-GPS system for the navigation of a vehicle. The experimental results show that the algorithm is more robust than the classical Kalman filtering algorithm.

1 Background Material

First, some basic formulas in estimation theory are reviewed. Let x, z be random vectors. From the observation Z of z, it is desired to estimate x. The MMSE (minimum mean-square error) estimator is defined as

$$\hat{x}_{MMSE} = \arg\min_{\hat{x}} E\left[(\hat{x} - x)^2 \mid Z\right].$$

It can be shown that the solution of the previous minimization problem is

$$\hat{x}_{MMSE} = E[x \mid Z]. \qquad (1)$$

Furthermore, if x, z are jointly Gaussian with covariance matrices denoted by

$$P_{xx} = E\left[(x - \bar{x})(x - \bar{x})^T\right], \quad P_{xz} = E\left[(x - \bar{x})(z - \bar{z})^T\right] = P_{zx}^T, \quad P_{zz} = E\left[(z - \bar{z})(z - \bar{z})^T\right], \qquad (2)$$

where $\bar{x}, \bar{z}$ are the mean vectors of x, z respectively,

the conditional mean can be further expressed as

$$E[x \mid Z] = \bar{x} + P_{xz}P_{zz}^{-1}(z - \bar{z}). \qquad (3)$$

The associated conditional variance is

$$P_{xx|z} = P_{xx} - P_{xz}P_{zz}^{-1}P_{zx}. \qquad (4)$$

Accordingly, the MMSE estimate (1) can be found as

$$\hat{x} = \bar{x} + P_{xz}P_{zz}^{-1}(Z - \bar{z}), \qquad (5)$$

and the corresponding covariance matrix is computed through (4). On the other hand, by a least-squares-type argument, the estimator in (5) can also be obtained for the estimation of non-Gaussian random vectors.
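
As a small illustration of eqs. (2)-(5), the following Python sketch (NumPy only, with made-up joint statistics, not values from this project) computes the MMSE estimate and the conditional covariance of a jointly Gaussian pair.

```python
import numpy as np

# Hypothetical joint statistics of x (2-dimensional) and z (1-dimensional).
x_bar = np.array([1.0, 0.0])
z_bar = np.array([0.5])
Pxx = np.array([[1.0, 0.2],
                [0.2, 0.5]])
Pxz = np.array([[0.3],
                [0.1]])
Pzz = np.array([[0.4]])

Z = np.array([0.9])  # the actual observation

# MMSE estimate, eq. (5): x_hat = x_bar + Pxz Pzz^{-1} (Z - z_bar)
x_hat = x_bar + Pxz @ np.linalg.solve(Pzz, Z - z_bar)

# Conditional covariance, eq. (4): Pxx|z = Pxx - Pxz Pzz^{-1} Pzx
Pxx_given_z = Pxx - Pxz @ np.linalg.solve(Pzz, Pxz.T)

print(x_hat)          # updated mean of x given the observation
print(Pxx_given_z)    # reduced uncertainty about x
```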

Secondly, we review the chi-square distribution. A random variable q has a chi-square distribution with n degrees of freedom, denoted by $q \sim \chi^2_n$, if its pdf is of the form

$$p(q) = \frac{1}{2^{n/2}\,\Gamma(n/2)}\, q^{(n-2)/2}\, e^{-q/2},$$

where $\Gamma$ denotes the Gamma function defined through $\Gamma(m+1) = m\,\Gamma(m)$, $\Gamma(1/2) = \sqrt{\pi}$, $\Gamma(1) = 1$. For a Gaussian random vector $x \in \mathbb{R}^n$ with mean $\bar{x}$ and covariance P, the random variable q defined by

$$q = (x - \bar{x})^T P^{-1} (x - \bar{x})$$

is chi-square distributed with n degrees of freedom. Furthermore, the mean value and the variance of q are n and 2n respectively. Given two independent chi-square random variables $q_1 \sim \chi^2_m$ and $q_2 \sim \chi^2_n$, the random variable $q_3 = q_1 + q_2$ is also chi-square distributed with (m + n) degrees of freedom.
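
The two-sided chi-square confidence gates $c_n^0$ and $c_n$ used throughout the following sections can be obtained from the inverse chi-square CDF. A minimal sketch, assuming SciPy is available:

```python
from scipy.stats import chi2

def chi2_gate(n, alpha=0.05):
    """Two-sided chi-square confidence gate for n degrees of freedom.

    Returns (c0_n, c_n) such that Pr(c0_n <= q <= c_n) = 1 - alpha
    when q is chi-square distributed with n degrees of freedom.
    """
    c0_n = chi2.ppf(alpha / 2.0, df=n)
    c_n = chi2.ppf(1.0 - alpha / 2.0, df=n)
    return c0_n, c_n

# Example: 95% gate for a 3-state system.
print(chi2_gate(3))
```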

2 Basic Kalman Filtering Algorithm

Consider a linear system

$$x(k) = \Phi(k-1)\,x(k-1) + u(k-1), \qquad (6)$$

with measurement

$$z(k) = H(k)\,x(k) + w(k), \qquad (7)$$

where the process noise u(k-1) and the measurement noise w(k) are assumed to be independent with Gaussian distributions N(0, Q(k-1)) and N(0, R(k)) respectively. The problem is to estimate $\hat{x}(k)$, given the measurements

$$Z^k = \{z(1), z(2), \ldots, z(k)\}.$$

A recursive process, termed the Kalman filter, was developed to perform the estimation, and the process can be divided into two parts. One is to predict the state at k from the observations up to (k-1). The next is to correct the prediction by the current measurement at k. The predictions of both the state and the measurement based on $Z^{k-1}$ can be obtained from the MMSE estimator as

$$\hat{x}(k|k-1) = E\left[x(k) \mid Z^{k-1}\right], \qquad \hat{z}(k|k-1) = E\left[z(k) \mid Z^{k-1}\right].$$

Regarding these quantities as the means of x and z respectively, the correction step based on the observation z(k) is then performed through (5):

$$\hat{x}(k|k) = \hat{x}(k|k-1) + P_{xz}(k|k-1)\,P_{zz}^{-1}(k|k-1)\,\left(z(k) - \hat{z}(k|k-1)\right) \qquad (8)$$

and

$$P_{xx}(k|k) = P_{xx}(k|k-1) - P_{xz}(k|k-1)\,P_{zz}^{-1}(k|k-1)\,P_{zx}(k|k-1), \qquad (9)$$

where the conditional covariance matrices $P_{xz}$ and $P_{zz}$ are defined similarly to (2). In fact, denoting

$$\tilde{x}(k|k-1) = x(k) - \hat{x}(k|k-1), \qquad \nu = z(k) - \hat{z}(k|k-1),$$

we have

$$P_{xx}(k|k-1) = E\left[\tilde{x}(k|k-1)\,\tilde{x}(k|k-1)^T \mid Z^{k-1}\right],$$
$$P_{xz}(k|k-1) = E\left[\tilde{x}(k|k-1)\,\tilde{z}(k|k-1)^T \mid Z^{k-1}\right],$$
$$P_{zz}(k|k-1) = E\left[\tilde{z}(k|k-1)\,\tilde{z}(k|k-1)^T \mid Z^{k-1}\right].$$

From the dynamic equations (6), (7), the prediction $\hat{x}(k|k-1)$ can be further expressed in terms of $\hat{x}(k-1|k-1)$ as

$$\hat{x}(k|k-1) = \Phi(k-1)\,\hat{x}(k-1|k-1).$$

The corresponding update rule for the covariance matrix is

$$P_{xx}(k|k-1) = \Phi(k-1)\,P_{xx}(k-1|k-1)\,\Phi^T(k-1) + Q(k-1).$$

The gain in eq. (8) is called the Kalman gain:

$$K(k) = P_{xz}(k|k-1)\,P_{zz}^{-1}(k|k-1).$$

The covariance of the innovation ν can be computed as

$$S(k) = P_{zz}(k|k-1) = H(k)\,P_{xx}(k|k-1)\,H^T(k) + R(k).$$

Based on these notations, the Kalman filtering algorithm can be summarized as follows: given $\hat{x}(k-1|k-1)$ and $P_{xx}(k-1|k-1)$,

$$\hat{x}(k|k-1) = \Phi(k-1)\,\hat{x}(k-1|k-1),$$
$$\hat{z}(k|k-1) = H(k)\,\hat{x}(k|k-1),$$
$$P_{xx}(k|k-1) = \Phi(k-1)\,P_{xx}(k-1|k-1)\,\Phi^T(k-1) + Q(k-1),$$
$$S(k) = H(k)\,P_{xx}(k|k-1)\,H^T(k) + R(k),$$
$$K(k) = P_{xx}(k|k-1)\,H^T(k)\,S^{-1}(k),$$
$$\nu(k) = z(k) - \hat{z}(k|k-1),$$
$$\hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\,\nu(k),$$
$$P_{xx}(k|k) = P_{xx}(k|k-1) - K(k)\,S(k)\,K^T(k).$$
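
For concreteness, here is a minimal NumPy sketch of one predict/correct cycle of the algorithm summarized above; the system matrices Phi, H, Q, R and the previous estimate are placeholders supplied by the caller, not values from this report.

```python
import numpy as np

def kalman_step(x_est, P_est, z, Phi, H, Q, R):
    """One predict/correct cycle of the Kalman filter summarized above.

    x_est, P_est : x_hat(k-1|k-1) and Pxx(k-1|k-1)
    z            : measurement z(k)
    Returns x_hat(k|k), Pxx(k|k) and the innovation nu(k).
    """
    # Prediction
    x_pred = Phi @ x_est                      # x_hat(k|k-1)
    P_pred = Phi @ P_est @ Phi.T + Q          # Pxx(k|k-1)
    z_pred = H @ x_pred                       # z_hat(k|k-1)

    # Correction
    S = H @ P_pred @ H.T + R                  # innovation covariance S(k)
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain K(k)
    nu = z - z_pred                           # innovation nu(k)
    x_upd = x_pred + K @ nu                   # x_hat(k|k)
    P_upd = P_pred - K @ S @ K.T              # Pxx(k|k)
    return x_upd, P_upd, nu
```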

However, there are some problems associated with the application of this method. First, it is not straightforward to determine the initial conditions of the filter. Secondly, for multiple sensors with different sampling rates, it is desired to develop a synthesis method to integrate their information. These two topics are the main issues discussed in this report.

3 Determination of the Initial Setting

Assume that all initial covariance matrices are diagonal. The normalized state error q(0) is

$$q(0) = \tilde{x}^T(0)\,P^{-1}(0|0)\,\tilde{x}(0).$$

Then q(0) should be smaller than the upper bound $c_n$, which depends on the state degrees of freedom n and is determined by the two-sided chi-square confidence gate:

$$c_n^0 \le q(0) \le c_n.$$

Also, the initial state error is smaller than the maximum acceptable state error bound $l_i$:

$$|\tilde{x}_i(0)| \le l_i.$$

Further, we need to know how to obtain the components of P(0|0) in terms of the confidence bound $c_n$ and the ith state error bound $l_i$. A straightforward way is, assuming that all states are uncorrelated, to take P(0|0) to be a positive definite diagonal matrix whose ith entry is a scalar $\sigma_i^2$, and then apply the confidence gate and the desired upper bound as

$$\frac{l_i^2}{\sigma_i^2} = c_n.$$

Thus, the standard deviation $\sigma_i$ of $\tilde{x}_i$ is bounded by

$$0 < \sigma_i \le \frac{n\,l_i}{\sqrt{c_n^0}}.$$

As a result, we obtain an initial covariance matrix P(0|0) as

$$P(0|0) = \mathrm{diag}\left(\frac{n^2 l_1^2}{c_n^0},\; \frac{n^2 l_2^2}{c_n^0},\; \ldots,\; \frac{n^2 l_n^2}{c_n^0}\right)_{n \times n},$$

where $l_i$ is the ith acceptable state error upper bound, for i = 1, ..., n.

For the process noise, we have

$$q_n = u^T(0)\,Q^{-1}(0)\,u(0),$$

which is also chi-square distributed with n degrees of freedom. Similarly, by the confidence region,

$$0 \le u^T(0)\,Q^{-1}(0)\,u(0) = q_n \le c_n,$$

which implies, under the assumption that Q(0) is diagonal with elements $Q_i^2(0)$, i = 1, ..., n,

$$\sum_{i}^{n} \frac{u_i^2}{c_n\,Q_i^2(0)} \le 1.$$

It is desired to find $Q_1, \ldots, Q_n$ such that the disturbance $u_i$, i = 1, ..., n, lies within the region bounded by the ellipsoid $\sum_{i}^{n} \frac{u_i^2}{c_n Q_i^2} = 1$. This ellipsoid may be approximated by a sphere with radius $\frac{\sqrt{c_n}}{n}\sum_{i}^{n} Q_i(0)$. If the averaged error of the disturbance along each axis is denoted by $\rho_\alpha$, with probability $1 - \alpha$, then we have

$$\frac{\sqrt{c_n}}{n}\sum_{i}^{n} Q_i(0) = \rho_\alpha.$$

Moreover, define the nominal value

$$\bar{Q}(0) = \frac{\rho_\alpha}{\sqrt{c_n}}.$$

The other components may be written as

$$Q_i(0) = \beta_i\,\bar{Q}(0), \qquad (10)$$

in which $\beta_i$ varies about 1 and $\sum_{i}^{n}\beta_i = n$. The ratios are determined through knowledge of the disturbance on each variable.

A similar process may be used to determine the initial value of R for the measurement noise.

Let the average error for the measurement along each axis be denoted by $r_\alpha$. We have

$$\bar{R}(0) = \frac{r_\alpha}{\sqrt{c_m}}$$

and

$$R_i(0) = \xi_i\,\bar{R}(0), \qquad (11)$$

where $c_m$ is determined by the chi-square distribution with m degrees of freedom and $\xi_i$ corresponds to knowledge of the sensor specifications on each axis.
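
The initialization described above can be sketched as follows. The acceptable error bounds l_i, the averaged disturbance and measurement errors rho_alpha and r_alpha, and the ratios beta_i and xi_i are user-supplied assumptions; the diagonal entries of Q(0) and R(0) are taken to be the squared nominal levels, following the convention above.

```python
import numpy as np
from scipy.stats import chi2

def initial_covariances(l, rho_alpha, r_alpha, beta, xi, alpha=0.05):
    """Initial P(0|0), Q(0), R(0) from acceptable error bounds.

    l    : acceptable state error bounds l_i          (length n)
    beta : ratios beta_i with sum(beta) = n           (length n)
    xi   : ratios xi_i for the measurement axes       (length m)
    """
    l, beta, xi = map(np.asarray, (l, beta, xi))
    n, m = len(l), len(xi)
    c0_n = chi2.ppf(alpha / 2.0, df=n)
    c_n = chi2.ppf(1.0 - alpha / 2.0, df=n)
    c_m = chi2.ppf(1.0 - alpha / 2.0, df=m)

    # Diagonal P(0|0) with entries n^2 l_i^2 / c0_n.
    P0 = np.diag((n * l) ** 2 / c0_n)

    # Nominal noise levels and per-axis values, eqs. (10) and (11);
    # the diagonal entries are the squared values Q_i(0)^2 and R_i(0)^2.
    Q_bar = rho_alpha / np.sqrt(c_n)
    R_bar = r_alpha / np.sqrt(c_m)
    Q0 = np.diag((beta * Q_bar) ** 2)
    R0 = np.diag((xi * R_bar) ** 2)
    return P0, Q0, R0
```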

After the initial conditions on Q and R are determined, it is desired to adjust them during the filtering process so that changes in the actual environment can be accommodated. Define the state error

$$\varepsilon_x = \tilde{x}^T P^{-1} \tilde{x},$$

which is a random variable with $\varepsilon_x \sim \chi^2_n$. To a certain confidence, it is anticipated that $\varepsilon_x$ must lie in the range $[c_n^0, c_n]$ such that

$$\Pr(c_n^0 \le \varepsilon_x \le c_n) = 1 - \alpha.$$

If $\varepsilon_x$ lies above $c_n$ or below $c_n^0$, the process noise may not be adequate to model the disturbances. In order to bring $\varepsilon_x$ back into the region, we may adjust $\gamma(k)$ in

$$P_{xx}(k|k-1) = \Phi(k-1)\,P(k-1|k-1)\,\Phi^T(k-1) + \gamma(k)\,Q(k-1).$$

For either $\varepsilon_x > c_n$ or $\varepsilon_x < c_n^0$, we may choose the parameter $\gamma(k)$ equal to the ratio between $\varepsilon_x$ and its mean value n, i.e.

$$\gamma(k) = \frac{\varepsilon_x}{n}.$$

If $\varepsilon_x > c_n$, then $\gamma(k) > 1$, the process noise is scaled up, and the state error may be brought back into the region. On the other hand, if $\varepsilon_x < c_n^0$, then $\gamma(k) < 1$, which means that the process noise is reduced. Moreover, for $\varepsilon_x \in (c_n^0, c_n)$, we also perform fine tuning of the process covariance matrix through the formula

$$\gamma(k) = 0.5\left(\frac{\varepsilon_x}{n} + \frac{n}{\varepsilon_x}\right).$$

We denote $Q^*(k-1) = \gamma(k)\,Q(k-1)$. If $\varepsilon_x$ remains out of range, the Kalman filter should be re-initialized.

Recall that the innovation process may be used to assess the performance of the sensors. The covariance matrix of the innovation process may be written as

$$S(k) = H(k)\left\{\Phi(k-1)\,P_{xx}(k-1|k-1)\,\Phi^T(k-1) + Q^*(k-1)\right\}H^T(k) + R(k).$$

Define the sensor error

$$\varepsilon_\nu(k) = \nu^T(k)\,S^{-1}(k)\,\nu(k),$$

with $\varepsilon_\nu \sim \chi^2_m$. Specifying some confidence region given by

$$\Pr(c_m^0 \le \varepsilon_\nu(k) \le c_m) = 1 - \alpha,$$

if $\varepsilon_\nu$ does not lie in $(c_m^0, c_m)$, either the algorithm fails or possibly the measurement covariance matrix R is not properly given. For the latter reason, we try to adjust R according to the following rule:

$$R^*(k) = \eta(k)\,R(k),$$

where the factor $\eta(k)$ is determined as

$$\eta(k) = \begin{cases} \dfrac{\varepsilon_\nu}{m}, & \varepsilon_\nu \notin (c_m^0, c_m), \\[2mm] 0.5\left(\dfrac{m}{\varepsilon_\nu(k)} + \dfrac{\varepsilon_\nu(k)}{m}\right), & \text{otherwise.} \end{cases}$$

The innovation covariance matrix is then updated as

$$S^*(k) = H(k)\left\{\Phi(k-1)\,P_{xx}(k-1|k-1)\,\Phi^T(k-1) + Q^*(k-1)\right\}H^T(k) + R^*(k).$$

If the algorithm still does not work after the adjustment, a sensor-fault message should be reported. By appropriately adjusting Q and R, the filter can be made robust in the presence of environmental changes. Such a notion of an adaptive Kalman filter (AKF) is necessary for long-term navigation.
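
Both adaptive rules share the same structure, so a single helper can compute gamma(k) for the process noise and eta(k) for the measurement noise. A minimal sketch with illustrative variable names:

```python
def noise_scale(eps, dof, c_lo, c_hi):
    """Scale factor gamma(k) for Q (pass eps_x, n) or eta(k) for R (pass eps_nu, m)."""
    if eps < c_lo or eps > c_hi:
        # Outside the confidence region: scale by the ratio to the mean value dof.
        return eps / dof
    # Inside the region: fine tuning, 0.5 * (eps/dof + dof/eps).
    return 0.5 * (eps / dof + dof / eps)

# Illustrative use at time step k:
#   gamma_k = noise_scale(eps_x, n, c0_n, c_n);   Q_star = gamma_k * Q
#   eta_k   = noise_scale(eps_nu, m, c0_m, c_m);  R_star = eta_k * R
```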

4 Data Fusion

To estimate the states of a physical system, it is sometimes necessary to use multiple sensors. The problem of how to combine the output data from each node of a data-logging network is the main concern of this section. The notion of covariance intersection is introduced to solve the problem of dependency between these filters. Local observation information can be obtained from each sensor. If the measurements are independent, i.e. the cross-covariance matrices are zero, then it is easy to combine the information. On the other hand, if the information is correlated and the cross-covariance matrices are unknown, the information fusion becomes a much more complicated process.

Consider first the case that the local measurements are independent. Let $\hat{x}_1$ and $\hat{x}_2$ be two estimates of x with covariances $P_1$ and $P_2$, respectively. The combined information obtained by applying the MMSE estimator can be expressed as

$$\hat{x}_c = \left[P_1^{-1} + P_2^{-1}\right]^{-1}\left[P_1^{-1}\hat{x}_1 + P_2^{-1}\hat{x}_2\right]$$

with the resulting covariance [4, Bayesian inference][2, Chap. 10]

$$P_C = \left[P_1^{-1} + P_2^{-1}\right]^{-1}.$$

In general, if the estimates are dependent, the covariance matrix of the combined system is [2, Chap. 10.3][3, Chap. 8]

$$\begin{bmatrix} P_1 & P_{12} \\ P_{21} & P_2 \end{bmatrix},$$

where the cross-covariance matrix is

$$P_{12} = E\left[\tilde{x}_1\,\tilde{x}_2^T\right].$$

By using again the MMSE estimate, we obtain the fused state estimate

$$\hat{x}_C = \hat{x}_1 + \left[P_1 - P_{12}\right]\left[P_1 + P_2 - P_{12} - P_{12}^T\right]^{-1}\left[\hat{x}_2 - \hat{x}_1\right]$$

and the corresponding covariance

$$P_C = P_1 - \left[P_1 - P_{12}\right]\left[P_1 + P_2 - P_{12} - P_{12}^T\right]^{-1}\left[P_1 - P_{12}\right]^T. \qquad (12)$$

Because the cross-covariance matrices are complicated and frequently unknown, one can modify the information fusion algorithm via a convex combination of the two system error covariance matrices [7]. Instead of (12), we choose a parameter α, where 0 ≤ α ≤ 1, such that

$$P_C = \left[\alpha P_1^{-1} + (1 - \alpha)P_2^{-1}\right]^{-1},$$

and the corresponding updated estimate is

$$\hat{x}_C = P_C\left[\alpha P_1^{-1}\hat{x}_1 + (1 - \alpha)P_2^{-1}\hat{x}_2\right].$$


This method is called the Covariance Intersection (CI) filtering algorithm [7]. Extending Covariance Intersection to an n-subsystem sensory system [7], the fused covariance $P_C$ and state $\hat{x}_C$ are

$$P_C = \left[\sum_j \alpha_j P_j^{-1}\right]^{-1}, \qquad (13)$$

and

$$\hat{x}_C = P_C\left[\sum_j \alpha_j P_j^{-1}\hat{x}_j\right], \qquad (14)$$

where $0 \le \alpha_j \le 1$ and $\sum_j \alpha_j = 1$.

Without loss of generality, for the jth subsystem at the kth time step, by selecting a factor $c_j$ for each node, we define

$$\alpha_j^0 = P_D^j\left(P_G^j\right)^{-1} c_j\, e^{-\frac{1}{2}\varepsilon_{\nu_j}^2(k)},$$

where $P_D^j$ is a probability of detection [2], determined by whether the jth signal-to-noise ratio is above some threshold, and the gate probability $P_G^j$ is a gating threshold [2] given by the chi-square probability for the jth node. The parameters in the CI algorithm may then be chosen as

$$\alpha_j = \frac{\alpha_j^0}{\sum_{i}^{n}\alpha_i^0}.$$
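
A sketch of the n-sensor Covariance Intersection fusion of eqs. (13)-(14), together with the weights alpha_j computed from the detection and gate probabilities as above; all inputs are placeholders.

```python
import numpy as np

def ci_weights(P_D, P_G, c, eps_nu):
    """Weights alpha_j from alpha_j' = P_D^j (P_G^j)^(-1) c_j exp(-0.5 eps_nu_j^2)."""
    P_D, P_G, c, eps_nu = map(np.asarray, (P_D, P_G, c, eps_nu))
    a0 = P_D / P_G * c * np.exp(-0.5 * eps_nu ** 2)
    return a0 / a0.sum()

def ci_fuse(x_list, P_list, alphas):
    """Covariance Intersection fusion, eqs. (13)-(14)."""
    P_inv = [np.linalg.inv(P) for P in P_list]
    info = sum(a * Pi for a, Pi in zip(alphas, P_inv))   # sum_j alpha_j P_j^{-1}
    P_c = np.linalg.inv(info)                            # eq. (13)
    x_c = P_c @ sum(a * Pi @ x                           # eq. (14)
                    for a, Pi, x in zip(alphas, P_inv, x_list))
    return x_c, P_c
```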

5 Fused INS-GPS System

The ideas presented above are now applied to the fused INS-GPS system. Let $P_{ins}$, $P_{gps}$ be the state error covariance matrices of the INS (Inertial Navigation System) and the GPS (Global Positioning System) respectively. The covariance matrices obtained from the INS and GPS by running the Kalman filter algorithm individually, which may be termed a decentralized or distributed approach [3, Chap. 8], can be expressed as

$$P_{ins}(k|k) = P_{ins}(k|k-1) - W S_{ins} W^T$$

and

$$P_{gps}(k|k) = P_{gps}(k|k-1) - W S_{gps} W^T.$$

The fused, updated covariance matrix is then

$$P_C(k|k) = \left[\alpha P_{ins}^{-1}(k|k) + (1 - \alpha) P_{gps}^{-1}(k|k)\right]^{-1} \qquad (15)$$

and the fused estimate vector is

$$\hat{x}_C = P_C(k|k)\left[\alpha P_{ins}^{-1}(k|k)\,\hat{x}_{ins} + (1 - \alpha) P_{gps}^{-1}(k|k)\,\hat{x}_{gps}\right].$$

The problem is then to choose the coefficient α. The role of α is to weight the better information so as to obtain good fusion performance and to provide some criterion for fault rejection. In the data-logging network of a multisensor system, the signals are sampled at different rates. If $\varepsilon_{ins}$ and $\varepsilon_{gps}$ are both located within the 95% confidence region, the corresponding measurements are acceptable. If either $\varepsilon_{ins}$ or $\varepsilon_{gps}$ is out of bounds, the corresponding subsystem is abnormal or a fault has occurred [5][1][6]. From the previous section, we compute

$$\alpha_{ins}^0 = c_{ins}\,P_D^{ins}\left(P_G^{ins}\right)^{-1} e^{-\frac{1}{2}\varepsilon_{ins}}, \qquad \alpha_{gps}^0 = c_{gps}\,P_D^{gps}\left(P_G^{gps}\right)^{-1} e^{-\frac{1}{2}\varepsilon_{gps}}.$$

The values of $c_{ins}$, $c_{gps}$ are both set to 0.5, since there is no bias on either node. The probabilities of detection $P_D^{gps}$, $P_D^{ins}$ are assumed to be 1. The weight α is then chosen as

$$\alpha = \frac{\alpha_{ins}^0}{\alpha_{ins}^0 + \alpha_{gps}^0}.$$
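
For the two-sensor INS-GPS case, with c_ins = c_gps = 0.5 and both detection probabilities equal to 1, the weight alpha and the fused estimate (15) reduce to the following sketch (eps_ins and eps_gps denote the normalized innovation errors; all arguments are placeholders):

```python
import numpy as np

def fuse_ins_gps(x_ins, P_ins, x_gps, P_gps, eps_ins, eps_gps, PG_ins, PG_gps):
    """Two-sensor CI fusion of INS and GPS with c = 0.5 and P_D = 1 for both nodes."""
    a_ins = 0.5 / PG_ins * np.exp(-0.5 * eps_ins)   # alpha'_ins
    a_gps = 0.5 / PG_gps * np.exp(-0.5 * eps_gps)   # alpha'_gps
    alpha = a_ins / (a_ins + a_gps)

    P_ins_inv = np.linalg.inv(P_ins)
    P_gps_inv = np.linalg.inv(P_gps)
    P_c = np.linalg.inv(alpha * P_ins_inv + (1 - alpha) * P_gps_inv)           # eq. (15)
    x_c = P_c @ (alpha * P_ins_inv @ x_ins + (1 - alpha) * P_gps_inv @ x_gps)
    return x_c, P_c, alpha
```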

Incorporating the initialization algorithm and the covariance intersection algorithm, the fused INS-GPS system can then be used to estimate the speed of an unmanned vehicle. Two experimental results are shown in the following figures.

Experiment 1: push the car and drive it with a small torque.

Experiment 2: at the 5th step, push the car and then let it coast freely to zero speed.

6 Conclusion

The design of a fused INS-GPS navigation system has been presented in this report. Compared with the classical Kalman filter, the ideas of adaptively changing the noise covariances and of covariance intersection are adopted. Experimental results showed that the algorithm is applicable and more robust.

References

[1] S. Alag. A Bayesian Decision-Theoretic Framework for Real-Time Monitoring and Diagnosis of Complex Systems: Theory and Application. PhD thesis, University of California at Berkeley, 1996.

[2] Y. Bar-Shalom and T. E. Fortmann. Tracking and Data Association. Academic Press, 1988.

[3] Y. Bar-Shalom and X.-R. Li. Multitarget-Multisensor Tracking: Principles and Techniques. YBS, 1995.

[4] N. Bergman. Recursive Bayesian Estimation: Navigation and Tracking Applications. PhD thesis, 1999.

[5] J. Palmqvist. On Integrity Monitoring of Integrated Navigation Systems. PhD thesis, 1997.

[6] S. Sukkarieh. Low Cost, High Integrity, Aided Inertial Navigation Systems for Autonomous Land Vehicles. PhD thesis, University of Sydney, 2000.

[7] J. Uhlmann. Dynamic Map Building and Localization: New Theoretical Foundations. PhD thesis.
