
In this section, we construct the system model of intelligent decision for the sensor network based intelligent system. In the intelligent decision framework, we formally define and formulate the mathematical relationship between the essential elements involved in the decision process: the event parameter, the physical quantity related to the event (physical quantity in brief), the sensor observation, and the control action of the intelligent device. The traditional estimation problem in decision theory directly maps the event to the sensor observation. However, in order to derive a general framework unifying sensor observation aggregation, decision fusion, and control action that is applicable to various environments, we construct two mappings to account for the uncertainty involved in the process: the uncertainty (or incomplete information) of the relationship between the event parameter and the physical quantities, and the uncertainty introduced during observation of the physical quantities (observation noise). According to this concept, we construct the framework of the intelligent decision as follows.

Definition 2-1.1: (Event Space) The event space $\Theta$ is composed of the event parameters, denoted by $\theta$, representing the environmental facts or events that are necessary for the intelligent system to make its decision.

Definition 2-1.2: (Observation Space) The observation space $X$ is composed of the observations, denoted by $x$, obtained from the sensors.

Remark: An observation is the physical quantity plus the noise and interference induced during sensor observation.

Definition 2-1.3: (State Space) The state space $S$ is composed of the observable physical quantities induced by the events. We call such a quantity the state and denote it by $s$.

Definition 2-1.4: (Action Space) The action space $A$ is composed of the actions that the intelligent device may decide to take, denoted by $a$.

Definition 2-1.5: (Utility Function) The utility function, denoted by $U(a, \theta)$, is the reward the system receives by deciding on action $a$ when the event parameter is $\theta$.

Fig. 2.2 Mathematical structure of intelligent decision making mechanism for sensor network based intelligent systems.

Remark: The utility function must reach its maximum value when the action matches the event parameter and decrease as the action becomes more inconsistent with the event parameter. If the system should instead be penalized for each incorrect decision, we use a cost function.
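For illustration, two common choices with this property (given here only as examples, not as the specific utility adopted later) are the negative squared error and the exact-match indicator,

$$U(a, \theta) = -\lVert a - \theta \rVert^{2}, \qquad U(a, \theta) = \mathbf{1}\{a = \theta\},$$

both of which attain their maximum when the action matches the event parameter and decrease as the mismatch grows; negating either one yields the corresponding cost function.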

Definition 2-2: (Optimal Decision Mapping) The optimal decision mapping is the mapping $\Pi: X \to A$ that maximizes the utility function $U(a, \theta)$.
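Since the event parameter $\theta$ is not observed directly, the maximization in Definition 2-2 is understood in expectation over the uncertainty about $\theta$; one standard way to write it, consistent with Proposition 2-6 below, is

$$\Pi^{*} = \arg\max_{\Pi}\; \mathrm{E}_{\theta, x}\!\left[\, U(\Pi(x), \theta) \,\right].$$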

We use an example, sensor network navigation for a firefighting robot (Fig. 2.3), to illustrate the above definitions. The information necessary for the firefighting robot's task, reaching the place on fire, is the direction of the place on fire; hence it is defined to be the event parameter. The fire induces an abnormal temperature distribution (or smoke density) in the environment, so the temperature (smoke density) is the physical quantity the sensors should observe. The temperature (smoke density) read on each sensor's thermometer (smoke detector) is the observation aggregated by the firefighting robot. Finally, the control action is the robot's movement direction decided from the sensor observations. Traditional estimation only estimates the exact value of the observed physical quantity under the observation noise; hence it cannot directly determine the control action. In contrast, our intelligent decision considers the relationship between the event and the induced physical quantity and is able to determine the control action according to the utility function under the unified framework. We illustrate the decision mechanism by the mappings between the spaces as follows.

Proposition 2-3: The optimal decision mapping $\Pi: X \to A$ is determined from the mapping from the event space to the state space, $\Psi: \Theta \to S$, and the mapping from the state space to the observation space, $\Phi: S \to X$, so as to maximize the utility function $U(a, \theta)$.

Fig. 2.3 Sensor network navigation for firefighting robot under intelligent decision framework

Remark: From the above discussion and definitions, the state is induced by the event parameter through the mapping $\Psi: \Theta \to S$. Noise and interference are introduced during sensor observation and establish the mapping $\Phi: S \to X$. Consequently, we must use $\Psi$ and $\Phi$ to construct the optimal decision mapping $\Pi: X \to A$ that maximizes the utility function $U(a, \theta)$.

Generally speaking, $\Phi: S \to X$ involves the noise and interference introduced during observation. It can be represented by the conditional probability $p(x \mid s)$, as in the traditional sensor estimation problem. For the mapping $\Psi: \Theta \to S$, we have the following proposition:

Proposition 2-4: The mapping $\Psi: \Theta \to S$ can be represented by the conditional probability $p(s \mid \theta)$.

Remark: The uncertainty of $\Psi: \Theta \to S$ comes from the uncertainty, or incomplete information, of the relationship between the physical quantities we observe and the desired event. We call this "system model uncertainty," or "model uncertainty" in brief. Unlike the mapping $\Phi: S \to X$, which depends on the noise statistics, this mapping depends on knowledge of nature and is usually complex.
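As a simple illustration of such model uncertainty (an assumed form used only for exposition, with $\sigma_m^{2}$ introduced here as a hypothetical parameter), the state may be modeled as the event parameter corrupted by a model error, e.g.

$$p(s \mid \theta) = \mathcal{N}\!\left(s;\, \theta,\, \sigma_m^{2}\right),$$

where $\sigma_m^{2}$ measures how loosely the observable physical quantity follows the event parameter; letting $\sigma_m^{2} \to 0$ recovers the deterministic case discussed below.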

If this relationship is deterministic and completely known, or the state and the event parameter are the same physical quantity, the mapping degenerates to a deterministic or identical mapping. For example, when tracking a fighter aircraft, the relationship between the observable physical quantity (the radar signal) and the event parameter (the fighter's position) is known, so the mapping is deterministic. Moreover, under appropriate conditions we can degenerate $\Psi$ to a deterministic or identical mapping. For example, in the firefighting robot navigation problem, the mapping from the direction towards the fire (event parameter) to the direction of the temperature gradient (state) is an identical mapping if the potential field modeling the temperature distribution is radiative. This example will be discussed in detail in the next section. For $\Psi$ to be an identical mapping, we have the following corollary:

Corollary 2-5: $\Psi$ is an identical mapping if and only if $p(s = \theta \mid \theta) = 1$.
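In this degenerate case the two-layer structure collapses: marginalizing over the state, the likelihood of the observation given the event reduces to the observation-noise model alone,

$$p(x \mid \theta) = \int_{S} p(x \mid s)\, p(s \mid \theta)\, ds = p(x \mid s)\big|_{s = \theta},$$

which is exactly the setting of the traditional estimation problem.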

From the above propositions, the mappings $\Psi: \Theta \to S$ and $\Phi: S \to X$ are represented by conditional probabilities. Hence we can interpret the optimal decision in Definition 2-2 by the following proposition:

Proposition 2-6: (Optimal Decision Mapping) The optimal decision mapping $\Pi: X \to A$, following Definition 2-2, is the mapping that maximizes the a posteriori expected utility function $\mathrm{E}[U(a, \theta) \mid x]$.

Remark: The optimal decision is the action that maximizes the a posteriori expected utility:

$$a^{*} = \arg\max_{a \in A} \mathrm{E}[U(a, \theta) \mid x] = \arg\max_{a \in A} \int_{\Theta} U(a, \theta)\, p(\theta \mid x)\, d\theta \qquad (2.1)$$

Bayesian Inference

By applying Bayes' theorem, the a posteriori probability $p(\theta \mid x)$ becomes

$$p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)} \qquad (2.2)$$

where $p(\theta)$ is the a priori distribution of the event. By Proposition 2-4 and the mapping $\Phi: S \to X$, we can represent $p(x \mid \theta)$ by the two conditional probabilities

$$p(x \mid \theta) = \int_{S} p(x \mid s)\, p(s \mid \theta)\, ds \qquad (2.3)$$

Applying (2.2) and (2.3) to (2.1), we obtain

$$a^{*} = \arg\max_{a \in A} \int_{\Theta} U(a, \theta) \left[ \int_{S} p(x \mid s)\, p(s \mid \theta)\, ds \right] \frac{p(\theta)}{p(x)}\, d\theta \qquad (2.4)$$

$$= \arg\max_{a \in A} \int_{\Theta} \int_{S} U(a, \theta)\, p(x \mid s)\, p(s \mid \theta)\, p(\theta)\, ds\, d\theta \qquad (2.5)$$

(2.4) and (2.5) are equal because $p(x)$ is constant for every $a$. The two conditional probabilities, $p(x \mid s)$ and $p(s \mid \theta)$, stand for the two mappings, $\Phi: S \to X$ and $\Psi: \Theta \to S$, involved in the decision mapping $\Pi: X \to A$. Consequently, the decision involves system identification for the model uncertainty $p(s \mid \theta)$ as well as noise and interference cancellation for $p(x \mid s)$, as depicted in Fig. 2.1. In the next chapter, we formulate an insightful example, the firefighting robot navigation problem, under the intelligent decision framework to demonstrate its application.
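To make the decision rule concrete, the following sketch evaluates (2.1)-(2.5) numerically for the firefighting robot example. It is an illustrative toy computation only: the angular discretization, the Gaussian forms chosen for $p(s \mid \theta)$ and $p(x \mid s)$, the cosine utility, and all parameter values are assumptions made for this example rather than models taken from this work.

import numpy as np

# Discretize event space (fire direction), state space (temperature-gradient
# direction) and action space (movement direction) on a common angular grid.
theta_grid = np.linspace(-np.pi, np.pi, 181)   # event parameter theta
s_grid = theta_grid.copy()                     # state s
actions = theta_grid.copy()                    # action a

sigma_m, sigma_n = 0.2, 0.5   # assumed model-uncertainty and observation-noise levels

def ang_diff(a, b):
    """Smallest signed angular difference a - b, wrapped to [-pi, pi]."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

# p(s | theta): model uncertainty (Proposition 2-4), assumed Gaussian around theta.
p_s_given_theta = np.exp(-0.5 * (ang_diff(s_grid[None, :], theta_grid[:, None]) / sigma_m) ** 2)
p_s_given_theta /= p_s_given_theta.sum(axis=1, keepdims=True)

# p(x | s): observation noise introduced by the sensors (mapping Phi).
def p_x_given_s(x):
    return np.exp(-0.5 * (ang_diff(x, s_grid) / sigma_n) ** 2)

# Utility U(a, theta): maximal when the movement direction matches the fire direction.
utility = np.cos(ang_diff(actions[None, :], theta_grid[:, None]))   # shape (theta, action)

p_theta = np.full_like(theta_grid, 1.0 / theta_grid.size)           # uniform prior p(theta)

def optimal_action(x):
    """Evaluate (2.5): argmax_a sum_theta sum_s U(a,theta) p(x|s) p(s|theta) p(theta)."""
    lik = p_s_given_theta @ p_x_given_s(x)   # p(x | theta), cf. (2.3)
    weights = lik * p_theta                  # unnormalized posterior, cf. (2.2)
    scores = weights @ utility               # posterior expected utility per action, cf. (2.1)
    return actions[np.argmax(scores)]

# A noisy gradient-direction reading pointing roughly 45 degrees to the left.
print(np.degrees(optimal_action(np.radians(45.0))))

With the assumed parameters the selected movement direction stays close to the noisy reading; widening $\sigma_m$ or $\sigma_n$ lets the prior and the utility shape pull the decision away from the raw observation, which is the behavior the unified framework is meant to capture.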

Chapter 3

Sensor Network Navigation
