
CHAPTER 4. LOCATION AWARE SYSTEM DESIGN

4.3 Mobile Robot System

4.3.4 Human Detection and Tracking

The current design of the location aware system is based on the received signal strength indicator (RSSI) of the ZigBee wireless sensor network. However, due to the limited accuracy of RSSI-based localization, the robot cannot rely on RF information alone. A second method is required as the robot approaches the user. In this implementation, a vision-based human detection and tracking system is integrated with the RSSI-based localization system to allow the robot to approach the user more precisely.
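As a rough illustration of this two-stage strategy, the sketch below selects the coarse RSSI estimate as the navigation target until a vision-based detection of the user becomes available at close range. The function and the hand-off distance are illustrative assumptions, not parameters from this thesis.

```python
# A minimal sketch, under assumed parameters, of handing over from RSSI-based
# guidance to vision-based guidance as the robot closes in on the user.
import math

def select_target(robot_xy, rssi_xy, vision_xy, vision_range_m=2.0):
    """Pick the waypoint the robot should steer towards.

    robot_xy:  current robot position estimate (x, y) in meters.
    rssi_xy:   coarse user position from the ZigBee RSSI localization.
    vision_xy: user position from body/face detection, or None if not detected.
    """
    # Trust the camera only once the RSSI estimate says the user is nearby,
    # since the detector is unreliable at long range.
    if vision_xy is not None and math.dist(robot_xy, rssi_xy) < vision_range_m:
        return vision_xy
    return rssi_xy
```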

In the current design, the human detection system includes two schemes: body detection and face detection. Both methods are exploited to search for and track the user as the robot approaches, as shown in Fig. 4-6. If a user is detected, the location of the user can be estimated by combining the detection result with the localization method proposed in Chapter 2. The robot is therefore able to track the user. In particular, face detection can ensure that the user is facing the robot. For a robot with an RGB-D camera, the body detection method in this design is based on the implementation of OpenNI [55]. The body detection system not only identifies the body, but also tracks the skeleton of the body from the depth image of the Kinect. The posture of the human can therefore be recognized, and the robot can adapt its pose to fit the user while delivering an object. The face detection method, on the other hand, is adopted from the functions in OpenCV [56]. Skin color segmentation is applied before the detection step to further increase the robustness.
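The following sketch shows one way such a face detector with a skin-color pre-segmentation step could be realized with OpenCV. It is not the thesis implementation; the stock Haar cascade and the YCrCb skin thresholds are illustrative choices.

```python
# A minimal sketch of OpenCV face detection preceded by skin-color
# segmentation; thresholds and parameters are illustrative, not tuned values.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_frame):
    # Rough skin segmentation in YCrCb space to suppress non-skin regions.
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_only = cv2.bitwise_and(bgr_frame, bgr_frame, mask=skin_mask)

    # Haar-cascade face detection on the masked image.
    gray = cv2.cvtColor(skin_only, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```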

4.4 Summary

This chapter describes the hardware and software modules of the location aware system developed and implemented in this thesis. The developed intelligent environment consists of three main sub-systems: the ZigBee-based intrusion detection system, the multimedia human-machine interface, and the OSA platform, which allows users to be alerted and to monitor the house via the camera or the robot from their mobile devices. The implemented mobile robot system is able to autonomously localize itself, navigate the environment, and detect humans during navigation to provide services.

Fig. 4-6: Examples of the human detection system: (a) human body detection; (b) face detection.

Chapter 5. Experimental Results

5.1 Introduction

In this chapter, several experimental results using the methods proposed and implemented in Chapters 2 to 4 are presented and discussed. In Section 5.2, the experiment on the human-machine interface in the intelligent environment shows how a user can monitor the intelligent environment via speech commands. The monocular visual navigation system is demonstrated in Section 5.3. The experiment on the intruder detection application, which integrates the location aware system and the intelligent environment, is shown in Section 5.4. Finally, the experiment on the robot-on-demand service is presented in Section 5.5.

5.2 Human-Machine-Interface in Intelligent Environment

In order to evaluate the capability and practicability of the system, it was tested in a sample room. The pan-tilt camera DCS-5300G from D-Link was installed in the middle of the room. Nine ZigBee modules were placed around the room, while three users carried ZigBee beacons. The integrated system was tested against the scenarios described in the first section: location request and "asking for help". In the first scenario, several people with ZigBee beacons moved around the room simultaneously. A user then requested to watch these people from a cell phone. The camera successfully moved to the target person, and the user retrieved the target image for each request. In the second scenario, several people wore a headset microphone, as shown in Fig. 5-1. They took turns saying their names, to pretend they were in a situation requiring help. The speech recognition system successfully recognized these people, relayed the messages to the location server, and commanded the camera to rotate and capture an image of the person. The images were successfully transmitted to other users' cell phones and PCs via e-mail.

5.3 Mobile Robot Obstacle Avoidance

This experiment aims to test whether the proposed system can replace a common laser range finder and accomplish an autonomous navigation task. The proposed system was implemented on an experimental mobile robot, as shown in Fig. 5-2. An industrial PC (IPC) was used for image processing and robot control. All the motors are velocity-servoed by a DSP-based motion control board. The navigation system described in Section 4.3.3 was adopted in this experiment, as shown in Fig. 5-3. The robot localized itself using odometry.

The navigation task was achieved by behavior fusion of two navigation behaviors, namely obstacle avoidance and goal seeking. The goal seeking behavior drives the mobile robot toward the direction of the target. The obstacle avoidance behavior guides the mobile robot to move away from any obstacle in order to prevent possible collisions on the way to the target. The main challenge for the proposed vision-based system is that the original laser scanner provides a 240-degree distance scan, while the adopted web camera can provide only a 60-degree scan, as shown in Fig. 5-4, due to the camera's field of view. Therefore, the robot must not move too fast, or it may collide with obstacles that appear in this blind zone.
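A simplified sketch of such a fusion is given below: the goal-seeking term heads toward the target bearing, while the avoidance term steers away from the closest reading in the 60-degree scan. The gains and ranges are illustrative assumptions; the actual fusion scheme of this thesis is the behavior-fusion design of Section 4.3.3.

```python
# A minimal sketch of blending goal seeking with obstacle avoidance.
# Gains and the maximum range are assumed values, not thesis parameters.
import math

def fuse_behaviors(goal_bearing_rad, scan, max_range_m=4.0, k_avoid=1.5):
    """goal_bearing_rad: bearing of the target in the robot frame (rad).
    scan: list of (bearing_rad, range_m) pairs covering the ~60-degree view.
    Returns a steering angle in the robot frame."""
    # Goal seeking: head straight toward the target.
    steer = goal_bearing_rad

    # Obstacle avoidance: push away from the closest reading, more strongly
    # the closer the obstacle is.
    bearing, rng = min(scan, key=lambda s: s[1], default=(0.0, max_range_m))
    steer -= math.copysign(1.0, bearing) * k_avoid * max(0.0, 1.0 - rng / max_range_m)
    return steer
```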

Fig. 5-1: An example of the "asking-for-help" scene (labels in the figure: acquired image, microphone, pan-tilt camera).

In the experiments, the threshold value in (3-4) was set to 2 pixels, considering the current implementation in the laboratory environment.

The experimental results are shown in Fig. 5-5 and Fig. 5-6. In the experiment, the robot was programmed to navigate from coordinates in meters (0, 0) to (13, 1) via (2, 2.5) and (12, 2.5).

Fig. 5-2: The mobile robot used in the navigation experiment.

Fig. 5-3: The navigation design of the experimental mobile robot


The initial speed of the robot was 25 cm per second. The recorded trajectory in Fig. 5-5 shows that the robot successfully avoided several static and moving obstacles and reached the destination. Note that the robot was able to detect a moving pedestrian, as shown in Fig. 5-6 (f), Fig. 5-5 (f), and Fig. 5-7 (a), and also to detect the homogeneous ground region, as shown in Fig. 5-6 (h), Fig. 5-5 (h), and Fig. 5-7 (b). These two scenarios validate the effectiveness of the proposed obstacle detection method described in Chapter 3.

Fig. 5-4: The scale of the view from the mobile robot after IPT.

Fig. 5-5: The recorded trajectories of the navigation experiments using the proposed MonoVNS and the URG-04LX-UG01 laser range finder. Labels (a)-(l) represent the positions of the robot in Fig. 5-6 (a)-(l).


Fig. 5-6: Snapshots (a)-(l) from the navigation experiment.

Fig. 5-7: Experimental results of ground detection and pedestrian detection.

In theory, the robot could achieve speeds higher than 25 cm/s with the current implementation (with 190 ms processing time on average). However, the narrow view angle would make turning dangerous, and therefore 25 cm/s was set for safety. To compare with a common laser range finder, the experiment was repeated using a URG-04LX-UG01 in the same environment with the same navigation algorithm. The range of the laser range finder is 20 to 5600 mm. The scanning angle of the laser range finder was limited to 60 degrees (the same condition as the monocular camera) so that the results could be compared fairly. The trajectory of this experiment is also shown in Fig. 5-5. The navigation time using the laser range finder was 38 seconds, while that of the proposed method was 43 seconds. The distance travelled using the laser range finder was 16.9 m, while that of the proposed method was 17.3 m. The experimental results show that both sensors give similar performance under the same view angle.

5.4 Intruder Detection Application

The experiment aimed to demonstrate the application of a WSN-based location aware system and a mobile robot for intruder detection. Fig. 5-8 shows the system architecture of the experiment. Three intruder sensor modules were placed in the environment, including one microphone and two pyro sensors, as well as a 3-axis accelerometer. The sensing data from the intruder sensor modules were first collected and processed on the modules independently. A location aware system was formed using the sensor modules and other ZigBee nodes deployed in the environment. The system can provide the locations of the nodes installed on the sensor modules and the robot. If an intruder is detected, the ID of the module and the related sensor data will be transmitted to the mobile robot via the WSN. The current positions of the robot and the triggered module are also determined by the location aware system at the same time.

Meanwhile, the mobile robot will receive the detection result from the WSN, navigate to each of the alarm locations, and take real-time images using the robot's webcam.

Fig. 5-8: The hardware arrangement of the intruder detection system.

The security agency and users can easily access both the detection result and the real-time video transmitted by the robot on their portable devices via WiFi and 3G networks. Users can also remotely control the robot to confirm the circumstances in detail.
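As a sketch of the kind of payload a triggered module might push to the robot over the WSN, the structure below uses assumed field names and a JSON encoding; the actual packet format of the ZigBee modules is not specified here.

```python
# A minimal sketch of an alarm payload sent from a triggered sensor module to
# the robot via the WSN. Field names and the JSON encoding are assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class AlarmMessage:
    module_id: int     # ID of the triggered intruder sensor module
    sensor: str        # e.g. "microphone", "pyro", or "accelerometer"
    value: float       # locally processed reading that crossed the threshold
    timestamp: float   # time of the trigger on the module

def encode_alarm(msg: AlarmMessage) -> bytes:
    """Serialize the alarm for transmission over the ZigBee network."""
    return json.dumps(asdict(msg)).encode("utf-8")
```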

Fig. 5-9 shows the execution flowchart of the intrusion detection system. Once a sensor module is triggered by the detection of a possible intruder, the ID of that module will be logged and added to the patrol schedule. Additional redundant triggers will then be ignored.
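A minimal sketch of this logging-and-deduplication logic is shown below; the class and method names are hypothetical, and navigation and reporting are left to the robot's own subsystems.

```python
# A minimal sketch of the patrol schedule: each triggered module is logged
# once, redundant triggers are ignored, and alarm locations are visited in
# the order they were raised. Names are hypothetical.
from collections import OrderedDict

class PatrolSchedule:
    def __init__(self, module_positions):
        self.module_positions = module_positions  # module_id -> (x, y)
        self.pending = OrderedDict()              # preserves trigger order

    def on_alarm(self, module_id):
        # Redundant triggers for an already-scheduled module are ignored.
        if module_id not in self.pending:
            self.pending[module_id] = self.module_positions[module_id]

    def next_alarm_location(self):
        # Oldest pending alarm first; None when the schedule is empty.
        if not self.pending:
            return None
        return self.pending.popitem(last=False)  # (module_id, (x, y))
```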

Once the robot arrives at the alarm location, it will send both a short message and real-time images to the security agency and end-users.

Fig. 5-9: Flowchart of the proposed intruder detection system.

The operators can then take control of the robot if necessary. Otherwise, the robot will move to the next alarm position. In the experiment, three different cases were tested in one script, including situations that mimicked a broken window by hand clapping (Event 1), a person entering the room (Event 2), and a thief sneaking into the room (Event 3); see Fig. 5-10. Fig. 5-11 illustrates the experiment using snapshots. Fig. 5-12 shows the actual trajectory of the robot during the experiment. During Event 1, a person clapped his hands and triggered the alarm, as shown in Fig. 5-10 (a) and Fig. 5-11 (a). The robot received the alarm signal and moved to where the corresponding surveillance location had been designated by the ZigBee sensor module (Fig. 5-10 (b)-(c), Fig. 5-11 (b)-(c), and Fig. 5-12 (a)-(c)). During Event 2, a person entered the room, which triggered the pyro sensor near the door. The robot reacted by moving toward the door (Fig. 5-10 (d)-(f), Fig. 5-11 (d)-(f), and Fig. 5-12 (d)-(f)). Finally, during Event 3, a person acting as a thief sneaked inside to steal a notebook. He triggered the pyro sensor, and the robot once again approached the notebook and took pictures of the thief (Fig. 5-10 (g)-(i), Fig. 5-11 (g)-(i), and Fig. 5-12 (g)-(i)). Fig. 5-10 and Fig. 5-11 also show the images received by a remote user. Since these images were taken without any recognition or tracking mechanism, the system cannot guarantee that any particular part of the intruder is captured.

5.5 Robot-on-Demand Experiment

This experiment aims to demonstrate how the robot-on-demand service is accomplished using the location aware system and the mobile robot. In order to provide practical services in the demonstration, several adaptive behaviors, such as autonomous navigation, mobile manipulation and grasping, and object delivery, are added to the robot control system. The architecture of the robot control system is shown in Fig. 5-13. These behaviors are designed to safely manipulate the robot according to the status of the user and the environment.

Fig. 5-10: The process and the images acquired by the robot in the intruder detection experiment: (a) a person clapped his hands near the microphone; (b) the robot received the alert and moved to the person; (c) the robot took the person's image and sent it to the user; (d) a person walked into the room from the doorway; (e) the robot received the alert and moved to the person; (f) the robot took the person's image and sent it to the user; (g) a person (acting as a thief) intruded into the room; (h) the robot received the alert and moved to the person; (i) the robot took the image and sent it to the user.


Fig. 5-11: Snapshots of the intruder detection experiment corresponding to Fig. 5-10.

Fig. 5-12: The robot trajectory in the intruder detection experiment. The marks (a)-(i) represent the corresponding locations of the robot in Fig. 5-10 (a)-(i).


Fig. 5-13: System architecture of the dual-arm mobile manipulator.

All the outputs of these behaviors are collected and fused to determine the ultimate motion of the robot using a fuzzy neural network. The behavior-fusion design of this work extends the method presented in Section 4.3.3. The robot-on-demand behavior integrates the location aware system, face detection, and the autonomous navigation system of the mobile robot [57].
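The snippet below sketches the fusion step in its simplest form: each behavior proposes a velocity command and the proposals are blended with weights. In the actual design the weights come from the fuzzy neural network; here a fixed weight vector stands in purely for illustration.

```python
# A minimal sketch of weighted behavior fusion. In the thesis the weights are
# produced by a fuzzy neural network; fixed weights are used here only to
# illustrate the blending step.
def fuse_commands(commands, weights):
    """commands: list of (v, w) proposals, one per behavior (m/s, rad/s).
    weights:  matching list of non-negative weights.
    Returns the fused (v, w) command."""
    total = sum(weights) or 1.0
    v = sum(wt * c[0] for wt, c in zip(weights, commands)) / total
    w = sum(wt * c[1] for wt, c in zip(weights, commands)) / total
    return v, w

# Example: the avoidance proposal dominates when an obstacle is close.
v_cmd, w_cmd = fuse_commands([(0.25, 0.0), (0.05, 0.6)], [0.3, 0.7])
```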

To demonstrate various complex tasks for daily-life assistance in a home setting, a multi-purpose mobile robot is also required. In this experiment, a dual-arm manipulator was installed on an omni-directional mobile platform, as shown in Fig. 5-14 (a). A recent photo and the hardware configuration of the mobile manipulator are shown in Fig. 5-14 (b). A Kinect camera is mounted on the pan-tilt platform for vision-based grasping and human detection. An on-board industrial PC (IPC) is responsible for analyzing the sensory data and for task planning and control of the robot behaviors. A laser scanner is mounted at the front of the robot for obstacle avoidance. The semi-humanoid design not only provides a human-friendly appearance, but also makes the robot more suitable for working in an environment originally structured for humans.


Harmonic drives are employed at the shoulder and elbow joints for smoother motion. EPOS positioning controllers from Maxon are used to control the motors of each joint. The position commands are converted into qc values as inputs to the controllers, which drive each arm joint to the assigned position. The current gripper design aims to mimic the function of the human palm and fingers, making it capable of handling various objects. The gripper is driven by a motor and reduction gears in order to provide an acceptable grip force. The mobile platform is equipped with four omni-directional wheels, each driven by a self-contained motor.
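As an illustration of this command conversion, the sketch below maps a desired joint angle to a qc position value, assuming qc denotes quadrature counts; the encoder resolution and gear ratio are placeholder numbers, not the manipulator's actual parameters.

```python
# A minimal sketch of converting a joint angle into an EPOS position command
# in qc. Encoder resolution and gear ratio are assumed placeholder values.
ENCODER_CPR = 500   # encoder counts per motor revolution (assumed)
QUAD_FACTOR = 4     # quadrature decoding yields 4 counts per encoder line
GEAR_RATIO = 100    # joint-side reduction of the harmonic drive (assumed)

def angle_to_qc(joint_angle_deg: float) -> int:
    """Map a joint angle in degrees to a position command in qc."""
    motor_revs = (joint_angle_deg / 360.0) * GEAR_RATIO
    return int(round(motor_revs * ENCODER_CPR * QUAD_FACTOR))
```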

Fig. 5-14: The dual-arm mobile manipulator: (a) the mechanical design of the mobile manipulator; (b) a recent photo of the robot and its key components (Kinect camera, two 6-DOF arms, laser scanner, omni-directional mobile platform, and on-board industrial PC).

The platform gives the robot omni-directional degrees of freedom for navigation and vision-based grasping in narrow spaces. The detailed motion model of the platform was presented in [57].
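For reference, a generic inverse-kinematics sketch for a four-wheel omni-directional platform is given below; it only illustrates how body velocities map to wheel speeds under an assumed symmetric layout and is not the motion model of [57].

```python
# A generic sketch of four-wheel omni-drive inverse kinematics, with assumed
# wheel layout, wheel radius, and platform size; not the model from [57].
import math

WHEEL_ANGLES = [math.radians(a) for a in (45, 135, 225, 315)]  # assumed layout
WHEEL_OFFSET_M = 0.25   # distance from platform center to each wheel (assumed)
WHEEL_RADIUS_M = 0.06   # wheel radius (assumed)

def wheel_speeds(vx, vy, omega):
    """Map a body-frame command (vx, vy) in m/s and yaw rate omega in rad/s
    to the four wheel angular velocities in rad/s."""
    return [(-math.sin(a) * vx + math.cos(a) * vy + WHEEL_OFFSET_M * omega)
            / WHEEL_RADIUS_M for a in WHEEL_ANGLES]
```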

The effectiveness of the proposed methods is demonstrated by performing practical daily assistive tasks in which the robot interacts with a user to handle objects in the environment.

There are two phases in the experiment. In phase 1, the robot shows how it can deliver an object to a user. In phase 2, it is demonstrated that the robot can detect whether the user is in a critical situation, in this case a heart attack. The robot can then come to help the user by bringing him medicine. In order to monitor both the location and the pose of the user, the user carries two ZigBee wireless sensor modules during the experiment, as shown in Fig. 5-15. A human pose recognition module [11] is tied to his waist, and a pulse oximeter (SpO2) is mounted on his finger [12]. With these two sensors, the robot can monitor the pose, heart rate, and blood oxygen saturation of the user. Six additional ZigBee modules are installed in the environment to form the ZigBee sensor network for transferring the sensor data and localizing the user. The positions of these modules and the estimated user positions are shown in Fig. 5-17 and Fig. 5-19.
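A sketch of the kind of check the robot could run on these readings is given below; the threshold values and the pose label are assumptions for illustration, not the settings used in the experiment.

```python
# A minimal sketch of checking the wearable-sensor readings for an emergency.
# Thresholds and labels are assumed values, not the experimental settings.
HR_MAX_BPM = 180     # heart rate treated as an alarm (assumed)
SPO2_MIN_PCT = 90.0  # blood oxygen saturation treated as an alarm (assumed)

def needs_assistance(heart_rate_bpm, spo2_percent, pose_label):
    """Return True when the robot should start the emergency-assistance task."""
    if heart_rate_bpm >= HR_MAX_BPM or spo2_percent < SPO2_MIN_PCT:
        return True
    # A fall reported by the pose recognition module also counts as critical.
    return pose_label == "lying"
```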

The experiment is described in the following.

Phase 1: Object delivery

At the beginning of the experiment, the user was sitting in the living room reading the newspaper, as shown in Fig. 5-15. The robot was in standby mode, as shown in Fig. 5-16 (a). Meanwhile, a parcel delivery man came in, notifying that he had a package to deliver. Since the user was reading the newspaper, he commanded the robot to take the package for him. The robot therefore moved to the front door and received the package from the delivery man, as shown in Fig. 5-16 (b). It should be noted that although the delivery man moved the object during the grasping process, the robot could still grasp the object by visual servoing. After that, the robot moved toward the user, according to the estimated location of the user, to hand over the parcel. On its way to the user, the human detection system recognized that the user was doing exercises, so the robot gradually adjusted its movement and stopped in front of the user. The robot then delivered the object to the user, as shown in Fig. 5-16 (c). Finally, the robot headed back to its rest zone, as shown in Fig. 5-16 (d), where it corrected its position estimate using a landmark in the environment. Fig. 5-17 shows the trajectories of the robot and the user during phase 1 of the experiment.

Phase 2: Emergency handling by fetching a drug can

At the beginning of phase 2 of the experiment, the user was sitting on the sofa unpacking the package. The robot was in standby mode at the rest zone. Suddenly, the user had a heart attack, as shown in Fig. 5-18 (a). This situation was simulated by removing the pulse oximeter from the user's finger, which drove the heart-rate reading to its maximum value. The robot noticed the situation right away via the ZigBee sensor network and moved toward the user to assist. When the robot arrived, it offered both of its hands to comfort the user, as shown in Fig. 5-18 (b). Although the user tried to react, he eventually lay down again due to severe illness. The robot therefore turned back to take the medicine on the cupboard, as shown in Fig. 5-18 (c), and handed it to the user, as shown in Fig. 5-18 (d).

Fig. 5-15: The user wearing the two sensor modules (pose recognition module and pulse oximeter) in the experiment.

Finally, the heart rate of the user returned to normal, and the robot headed back to its rest zone again. Fig. 5-19 shows the trajectories of the robot and the user during phase 2 of the experiment.

The experiment shows that the ZigBee WSN successfully provides the approximate location of the user, allowing the robot to approach and search for the exact location of the user. Furthermore, the experiment shows that the robot can accomplish two object-grasping-and-delivery tasks despite the very different situations of the object and the user.

5.6 Summary

This chapter first evaluates the basic components of the location aware system: the intelligent environment and the mobile robot. The experiments show that the intelligent environment can locate the user via the ZigBee localization design. The multimedia HMI allows a user to visualize the located target and to command the system by speech. The experiment on the monocular visual navigation system shows that the robot is able to reach the desired goal using a single camera. The results show that the proposed method is able to distinguish obstacles from the traversable ground region.

