
6.5. Experimental Results

Before 2001, experimental data was collected with the Navlab8 vehicle (see Figure 1.7). A SICK PLS100 laser scanner was mounted on the right side of the Navlab8 vehicle, performing horizontal profiling. The preliminary experimental results showed that it is feasible to accomplish localization, mapping and moving object tracking without using measurements from motion sensors. However, the algorithms failed when large portions of stationary objects were occluded by moving objects around the Navlab8 vehicle.

Currently, the Navlab11 vehicle (see Figure 1.8) is used to collect data. The Navlab11 vehicle is equipped with motion sensors (IMU, GPS, differential odometry, compass, inclinometer, angular gyro) and perception sensors (video sensors, a light-stripe rangefinder, three SICK single-axis scanning rangefinders). The SICK scanners, one SICK LMS221 and two SICK LMS291, were mounted in various positions on the Navlab11 vehicle, performing horizontal or vertical profiling. The Navlab11 vehicle was driven through the Carnegie Mellon University campus and around nearby streets. The range data were collected at 37.5 Hz with 0.5° resolution. The maximum measurement range of the scanners is 81 m. Table 6.1 shows some features of SICK laser scanners. In this section, we show a number of representative results.

Table 6.1. Features of SICK laser scanners. The measurement points are interlaced with 0.25° and 0.5° resolution.

SICK Laser Scanner   | PLS 100                    | LMS 211/221/291
Scanning Angle       | 180°                       | 100°, 180°
Angular Resolution   | 0.5°, 1°                   | 0.25°, 0.5°, 1°
Maximum Range        | ∼ 51 m                     | ∼ 81 m
Collection Rate      | 6 Hz with 0.5° resolution  | 37.5 Hz with 0.5° resolution

Detection and Data Association

Figure 6.7 shows a result of multiple vehicle detection and data association. Five different cars were detected and associated over 11 consecutive scans. This result demonstrates that our detection and data association algorithms are reliable even with moving objects 60 meters away. Additionally, the visual image from the tri-camera system illustrates the difficulties of detection using cameras.

Figure 6.8 and Figure 6.9 show results of pedestrian detection and data association.

In Figure 6.8, objects 19, 40, and 43 are detected pedestrians, object 17 is a detected car, and object 21 is a false detection. Without using features or appearances, our algorithms detect moving objects based on motion.

Figure 6.7. Multiple vehicle detection and data association (scan 1140). Rectangles denote the detected moving objects. The segment numbers of the moving objects are shown.

In Figure 6.9, the visual image shows several stationary pedestrians that are not detected. Although our approaches cannot classify stationary cars and pedestrians, these temporarily stationary objects do not have to be dealt with: while stationary, they pose no critical threat that the driver/robot must be aware of, so this drawback is tolerable.
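To make the motion-based criterion concrete, the sketch below illustrates one simple way such a detector can be realized: each new scan point is tested against an occupancy grid built from earlier scans, and returns that land in previously observed free space are flagged as moving. This is only a minimal Python illustration under stated assumptions (the grid resolution, threshold, and function names are ours), not the actual implementation.

    import numpy as np

    CELL = 0.2          # grid resolution in meters (assumed)
    FREE_THRESH = 0.3   # occupancy probability below which a cell counts as free

    def detect_moving_points(scan_xy, occ_prob, origin):
        """Flag scan points that land in space previously observed as free.

        scan_xy  : (N, 2) scan points in world coordinates (meters)
        occ_prob : 2-D array of occupancy probabilities from earlier scans
        origin   : world (x, y) of grid cell (0, 0)
        """
        ij = np.floor((scan_xy - np.asarray(origin)) / CELL).astype(int)
        h, w = occ_prob.shape
        ok = (ij[:, 0] >= 0) & (ij[:, 0] < w) & (ij[:, 1] >= 0) & (ij[:, 1] < h)
        moving = np.zeros(len(scan_xy), dtype=bool)
        # A return from a cell that was free before is evidence of motion;
        # returns from unknown or occupied cells are treated as stationary here.
        moving[ok] = occ_prob[ij[ok, 1], ij[ok, 0]] < FREE_THRESH
        return moving

Because such a test uses only motion evidence, temporarily stationary pedestrians produce no free-space violations and are missed, which matches the behavior seen in Figure 6.9.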

Figure 6.10 shows a result of bus detection and data association. For comparison, Figure 6.11 shows a temporarily stationary bus. These large, temporarily stationary objects can significantly degrade data association. Approaches for dealing with these temporarily stationary objects were addressed in the previous chapter.

Tracking

In this section, we show several tracking results of different objects in the real world.

IMM with the CV and CA models. Figure 6.12 shows the tracking results of the example in Figure 3.20. The IMM algorithm with the CV and CA models performed well in this case. The distributions of the state estimates described the uncertainty properly.
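For readers who want the mechanics, the sketch below shows the CV and CA transition matrices and the IMM model-probability update. It is a minimal Python illustration assuming a planar state ([x, y, vx, vy] for CV, plus [ax, ay] for CA) and an assumed model-switching matrix; the state mixing and the elemental Kalman filters themselves are omitted.

    import numpy as np

    def F_cv(dt):
        """Constant-velocity transition for state [x, y, vx, vy]."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        return F

    def F_ca(dt):
        """Constant-acceleration transition for state [x, y, vx, vy, ax, ay]."""
        F = np.eye(6)
        F[0, 2] = F[1, 3] = F[2, 4] = F[3, 5] = dt
        F[0, 4] = F[1, 5] = 0.5 * dt**2
        return F

    # Markov model-switching matrix (assumed values): rows = from, cols = to.
    PI = np.array([[0.95, 0.05],
                   [0.05, 0.95]])

    def imm_model_probs(mu, likelihoods):
        """One IMM update of the model probabilities.

        mu          : previous model probabilities, shape (2,)
        likelihoods : measurement likelihood of each elemental filter, shape (2,)
        """
        c_pred = PI.T @ mu              # predicted model probabilities
        mu_new = likelihoods * c_pred   # Bayes update with filter likelihoods
        return mu_new / mu_new.sum()

Model-probability curves such as those in Figures 6.23 and 6.25 correspond to the mu values of such an update cycle over time.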

Figure 6.8. Pedestrian detection and data association (scan 9700). See the text for details.

Figure 6.9. Pedestrian detection and data association. The visual image shows several stationary pedestrians, which are not detected by our motion-based detector.

Figure 6.10. Bus detection and data association (scan 150).

Figure 6.11. Temporary stationary objects (scan 13760). A temporarily stationary bus is shown.

Figure 6.12. Tracking results of the example in Figure 3.20: (a) measurements; (b) location estimates; (c) the enlargement of (b). In (c), the distributions of the state estimates are shown by 1σ ellipses.

Ground Vehicle Tracking. The previous example showed very short-duration tracking in which data association was easy because the tracked object was never occluded. Figures 6.13-6.17 illustrate an example of tracking over about 6 seconds. Figure 6.13 shows the detection and data association results, and Figure 6.14 shows the partial image from the tri-camera system. Figure 6.15 shows the raw data of the 201 scans, in which object B was occluded during the tracking process. Figure 6.16 shows the tracking results. The occlusion did not affect tracking because the learned motion models provide reliable predictions of the object states, and the association was established correctly when object B reappeared. Figure 6.17 shows the speed estimates of these four tracked objects from the IMM algorithm.
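The way occlusion is survived can be sketched in a few lines: when no measurement is associated, the filter simply skips the update and coasts on the motion-model prediction. The snippet below is a schematic Python fragment assuming a filter object with predict/update methods and x/P attributes; it is not the thesis code.

    def track_step(kf, z):
        """One tracking cycle that coasts through occlusion.

        kf : a Kalman/IMM filter exposing predict() and update() (assumed API)
        z  : the associated measurement, or None while the object is occluded
        """
        kf.predict()          # always propagate with the learned motion model
        if z is not None:
            kf.update(z)      # correct only when a measurement is associated
        # Without updates the covariance grows, widening the association gate
        # so the track can be re-acquired when the object reappears.
        return kf.x, kf.P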

Pedestrian Tracking. Figures 6.18-6.25 illustrate an example of pedestrian tracking.

Figure 6.18 shows the scene, in which there are three pedestrians.

Figure 6.13. Detection and data association results. The solid box denotes the robot.

Figure 6.14. The partial image from the tri-camera system. Four lines indicate the detected vehicles.

Figure 6.15. Raw data of 201 scans. Measurements associated with stationary objects are filtered out. Measurements are denoted by × every 20 scans. Object B was occluded during the tracking process.

Figure 6.16. Results of multiple ground vehicle tracking. The trajectory of the robot is denoted by the red line and the trajectories of the moving objects by blue lines. × denotes state estimates taken from the prediction stage rather than the update stage, because of occlusion.

Figure 6.19 shows the visual images from the tri-camera system, and Figure 6.20 shows the 141 raw scans. Because of the selected distance criterion in segmentation, object B consists of two pedestrians. Figure 6.21 shows the tracking result, which demonstrates the ability to deal with occlusion.
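The distance criterion mentioned above can be stated precisely: an ordered scan is broken into segments wherever consecutive points are farther apart than a threshold. The sketch below is a minimal Python version with an assumed threshold value; two pedestrians walking closer together than the threshold merge into one segment, as happened with object B.

    import numpy as np

    SEG_GAP = 0.8  # segmentation distance threshold in meters (assumed)

    def segment_scan(points):
        """Split an ordered laser scan at large point-to-point gaps.

        points : (N, 2) array of scan points ordered by bearing
        """
        gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
        breaks = np.where(gaps > SEG_GAP)[0] + 1
        return np.split(points, breaks)

Choosing the threshold trades over-segmentation of single objects against merging of nearby ones.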

Figure 6.17. Speed estimates.

Figures 6.22 and 6.24 show the speed estimates of objects A and B, respectively. Figures 6.23 and 6.25 show the probabilities of the CV and CA models of objects A and B, respectively.

Figure 6.18. An intersection. Pedestrians are pointed out by the arrow.

Figure 6.19. Visual images from the tri-camera system. Black boxes indicate the detected and tracked pedestrians.

Move-Stop-Move Object Tracking. Figures 6.26-6.30 illustrate an example of move-stop-move object tracking. Figure 6.26 and Figure 6.28 show the scan from the laser scanner and the visual image from the camera, and Figure 6.27 shows the 201 raw scans and the robot trajectory.

Figure 6.29 shows the tracking results using IMM with the CV and CA models and Figure 6.30 shows the speed estimates. As described in Chapter 4, the speed estimates did not converge to zero.

Figure 6.31 shows the result of using the move-stop hypothesis tracking algorithm, in which the stationary motions were identified.
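A simplified version of the move-stop decision is a statistical test of whether zero velocity is consistent with the current estimate. The full hypothesis tracking algorithm of Chapter 4 maintains explicit move and stop hypotheses, but the sketch below (Python, with an assumed 95% chi-square gate) conveys the idea.

    import numpy as np

    def move_stop_decision(v_est, P_vv, chi2_gate=5.99):
        """Test whether zero velocity is statistically plausible.

        v_est     : estimated velocity vector, shape (2,)
        P_vv      : velocity covariance block of the state covariance, (2, 2)
        chi2_gate : 95% chi-square gate for 2 degrees of freedom
        """
        # Mahalanobis distance of the velocity estimate from zero. If zero
        # lies inside the gate, switch to the stop hypothesis and clamp the
        # state instead of letting CV/CA speed estimates hover above zero.
        d2 = float(v_est @ np.linalg.solve(P_vv, v_est))
        return "stopped" if d2 < chi2_gate else "moving"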

Figure 6.20. Raw data of 201 scans. Measurements are denoted by × every 20 scans.

Figure 6.21. Results of multiple pedestrian tracking. The final scan points are denoted by magenta × and the estimates by blue +.

Figure 6.22. Speed estimates of object A.

Figure 6.23. Probabilities of the CV and CA models of object A.

Figure 6.24. Speed estimates of object B.

Figure 6.25. Probabilities of the CV and CA models of object B.

3-D (2.5-D) City-Sized SLAM

In Chapter 3 we demonstrated that it is feasible to accomplish city-sized SLAM, and Figure 3.28 shows a convincing 2-D map of a very large urban area.

Figure 6.26. The scene.

Figure 6.27. 201 raw scans and the robot trajectory. Measurements are denoted by red × every 20 scans.

Figure 6.28. The visual image from the tri-camera system. The move-stop object is indicated by a box.

Figure 6.29. The result of move-stop object tracking using IMM with the CV and CA models. On the left: the tracking result. On the right: the enlargement of the left figure. The measurement-estimate pairs are shown by black lines.

In order to build 3-D (2.5-D) maps, we mounted another scanner on top of the Navlab11 vehicle to perform vertical profiling. Accordingly, high-quality 3-D models can be produced in a minute. Figure 6.32 shows a 3-D map of several street blocks.

Figure 6.30. Speed estimates from IMM. On the right: the enlargement of the left figure. Note that the speed estimates did not converge to zero.

Figure 6.31. The result of tracking using the move-stop hypothesis tracking algorithm. On the left: location estimates. On the right: velocity estimates. Zero-velocity estimates are denoted by red ×.

Figure 6.33 shows the 3-D model of the Carnegie Museum of Natural History. Figure 6.34, Figure 6.35 and Figure 6.36 show the 3-D models of different objects, which may be very useful in applications such as civil engineering, architecture, landscape architecture, and city planning.
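The push-broom construction behind these models is straightforward: each vertical scan is a planar slice, and the slices are placed along the trajectory estimated by the horizontal 2-D SLAM. The sketch below is a minimal Python version assuming the vertical scan plane is perpendicular to the vehicle heading; the real mounting geometry and calibration are not reproduced here.

    import numpy as np

    def vertical_scan_to_3d(ranges, angles, pose):
        """Project one vertical scan into world coordinates using the 2-D pose.

        ranges : (N,) range readings from the vertically mounted scanner (m)
        angles : (N,) beam angles within the vertical scan plane (rad)
        pose   : (x, y, heading) of the vehicle from the horizontal SLAM solution
        """
        x, y, th = pose
        lateral = ranges * np.cos(angles)    # horizontal offset within the plane
        z = ranges * np.sin(angles)          # height above the scanner
        nx, ny = -np.sin(th), np.cos(th)     # unit normal to the heading (left side)
        return np.column_stack((x + lateral * nx,
                                y + lateral * ny,
                                z))

Accumulating these slices over the trajectory yields the 2.5-D point clouds rendered in Figures 6.32-6.36.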

6.6. 2-D Environment Assumption in 3-D Environments

Although the formulations derived in Chapter 2 are not restricted to two-dimensional applications, it is more practical and easier to solve the problem in real time by assuming that the ground is flat. But can algorithms based on the 2-D environment assumption survive in 3-D environments? For most indoor applications this assumption is fair, but for applications in urban, suburban or highway environments it is not always valid. False measurements due to this assumption are often observed in our experiments.

One source is the roll and pitch motion of the robot, which is unavoidable during high-speed turns or sudden stops and starts (see Figure 6.37).

Figure 6.32. A 3-D map of several street blocks.

Figure 6.33. A 3-D model of the Carnegie Museum of Natural History.

These motions may cause false measurements, such as scan data returned from the ground instead of from other objects. Additionally, since the vehicle moves in 3-D environments, uphill environments may cause the laser beam to hit the ground as well (see Figure 6.38). Compared with most metropolitan areas, Pittsburgh has more hills; Table 6.2 shows the steepness grades of some Pittsburgh hills.

In order to accomplish 2-D SLAM with DATMO and SLAM with GO in 3-D environments, it is critical to detect and filter out these false measurements.


Figure 6.34. 3-D models of buildings on Filmore street.

Figure 6.35. 3-D models of parked cars in front of the Carnegie Museum of Art.

Figure 6.36. 3-D models of trees on S. Bellefield avenue.

Our algorithms can detect these false measurements implicitly, without using additional pitch and roll measurements. First, the false measurements are detected and initialized as new moving objects by our moving object detector.

Figure 6.37. Dramatic changes between consecutive scans (Nos. 5475, 5480 and 5485) due to a sudden start.

Figure 6.38. False measurements from an uphill environment.

After data association and tracking are applied to these measurements, their shape and motion inconsistency quickly reveals that they are false measurements. Moreover, these false measurements disappear immediately once the motion of the vehicle returns to normal. The results using data from Navlab11 show that our 2-D algorithms can survive in urban and suburban environments. However, these big and
