

Chapter 6 Simulations and Experiments

6.3 Simulation Results and Experiment Demonstrations

This section demonstrates several different scenarios in which the robot arm-hand system grasps objects ranging from basic to complex geometry. The experimental environment is shown in Fig. 6.11. The objects are placed on the blue table, and the robot arm-hand system is located at one end of the table. Two depth sensors, one at the end of the table and one at the side of the robot arm-hand system, capture the point cloud of the environment.

Fig. 6.11 The experimental environment

The flowchart of the experiment is given in Fig. 6.12. First, we collect the point clouds from the depth sensors and reconstruct the object surface through point cloud processing. The reconstructed surface is then imported into the simulator, which generates the grasping and placing paths through grasp planning and path planning. These paths are sent to the robot arm-hand system, which is driven accordingly to grasp the objects.
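The data flow of Fig. 6.12 can be summarized by the short sketch below. All of the function names (capture_point_clouds, reconstruct_surface, plan_grasp, plan_paths, execute) are hypothetical placeholders for the corresponding blocks, not the actual implementation.

# Hypothetical skeleton of the experiment pipeline in Fig. 6.12.
# Every function below is a stub standing in for the corresponding block.

def capture_point_clouds():
    # Point cloud block, step 1: two Kinects return points in their own frames.
    return [(0.0, 0.0, 0.8)]                        # dummy point for illustration

def reconstruct_surface(points):
    # Point cloud block, step 2: surface reconstruction from the merged cloud.
    return {"vertices": points, "faces": []}

def plan_grasp(surface):
    # Robot simulator block: Chapter 3 grasp planning (hand pose + Q1 score).
    return {"hand_pose": (0.0, 0.0, 0.0), "Q1": 0.0}

def plan_paths(grasp):
    # Robot simulator block: RRT-Connect grasping and placing paths.
    return ["grasp_path"], ["place_path"]

def execute(robot, path):
    # Robot arm-hand system: drive the arm along the planned path.
    print(robot, "executes", path)

if __name__ == "__main__":
    points = capture_point_clouds()
    surface = reconstruct_surface(points)
    grasp = plan_grasp(surface)
    grasp_path, place_path = plan_paths(grasp)
    execute("arm-hand", grasp_path)                 # move to the grasp configuration
    execute("arm-hand", place_path)                 # place the object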

Fig. 6.12 The flowchart of the experiment

The point cloud block contains all of the point cloud processing techniques, as shown in Fig. 6.13. In this scenario, two Kinects are used to capture the point clouds, and each point cloud is described in its own Kinect frame, so a transformation matrix between the two Kinects is needed. RANSAC finds the coefficients of the table plane from each Kinect, and from these coefficients a rough transformation can be obtained. The points from the sub Kinect are mapped into the main Kinect frame, and then the point cloud processing is applied. The reconstructed surface is therefore described in the main Kinect frame, and the transformation between the main Kinect and the simulator maps the surface to the correct location in the simulator. These results are shown in Fig. 6.14. In Fig. 6.14(b), the green points are the point cloud from the main Kinect and the white points are the point cloud from the sub Kinect.
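A minimal sketch of this plane-based rough alignment, assuming the Open3D library, is given below. The RANSAC threshold and the file names in the usage comment are assumptions, and aligning a single plane only constrains the tilt and height with respect to the table; the remaining in-plane offset comes from the known sensor placement or later refinement.

# Rough alignment of the sub Kinect cloud to the main Kinect frame using the
# RANSAC table plane, sketched with Open3D. Thresholds are assumed values.
import numpy as np
import open3d as o3d

def fit_table_plane(pcd, dist=0.01):
    """RANSAC plane fit; returns a unit normal n and offset d with n.x + d = 0."""
    (a, b, c, d), _ = pcd.segment_plane(distance_threshold=dist,
                                        ransac_n=3, num_iterations=1000)
    n = np.array([a, b, c])
    norm = np.linalg.norm(n)
    n, d = n / norm, d / norm
    if d < 0:                     # keep the sensor (at the origin) on the + side
        n, d = -n, -d
    return n, d

def rough_align(sub_pcd, main_pcd):
    """Map the sub Kinect cloud into the main Kinect frame via the table plane."""
    n_main, d_main = fit_table_plane(main_pcd)
    n_sub, d_sub = fit_table_plane(sub_pcd)

    # Rotation taking the sub plane normal onto the main plane normal
    # (Rodrigues formula; the degenerate antiparallel case is not handled here).
    v = np.cross(n_sub, n_main)
    s, c = np.linalg.norm(v), float(np.dot(n_sub, n_main))
    V = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + V + V @ V * ((1 - c) / (s ** 2 + 1e-12))

    # Translation along the main normal so that the two table planes coincide.
    t = (d_sub - d_main) * n_main

    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    sub_pcd.transform(T)          # in-place transform into the main Kinect frame
    return sub_pcd, T

# Hypothetical usage (file names are placeholders):
#   main = o3d.io.read_point_cloud("main_kinect.pcd")
#   sub  = o3d.io.read_point_cloud("sub_kinect.pcd")
#   merged = main + rough_align(sub, main)[0]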

Fig. 6.13 The detail of point cloud block

Fig. 6.14 Objects: (a) on the real table, (b) point cloud, (c) in the simulator.

In the robot simulator block, the simulator chooses one target from the objects detected by the depth sensors. The grasp planning described in Chapter 3 then finds the grasp configuration of the robot arm-hand, and the path planning uses the RRT-Connect algorithm to find collision-free grasping and placing paths. This block is shown in Fig. 6.15.
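The sketch below is a compact, NumPy-only version of the RRT-Connect bidirectional tree growth used for the grasping and placing paths. The step size, joint bounds, and the collision_free placeholder are assumptions; in the real system the collision queries come from the simulator.

# Compact RRT-Connect sketch (NumPy only). The collision checker, step size,
# and joint bounds below are placeholders for the simulator's actual queries.
import numpy as np

STEP = 0.1                      # extension step in configuration space [rad]

def collision_free(q):
    """Placeholder: the real check queries the simulator for arm/object collisions."""
    return True

class Tree:
    def __init__(self, root):
        self.nodes = [np.asarray(root, float)]
        self.parent = [None]

    def nearest(self, q):
        return int(np.argmin([np.linalg.norm(n - q) for n in self.nodes]))

    def add(self, q, parent):
        self.nodes.append(q)
        self.parent.append(parent)
        return len(self.nodes) - 1

    def path_to(self, i):
        path = []
        while i is not None:
            path.append(self.nodes[i])
            i = self.parent[i]
        return path[::-1]        # ordered from the tree root outward

def extend(tree, q_target):
    """EXTEND: take one step from the nearest node toward q_target."""
    i = tree.nearest(q_target)
    diff = q_target - tree.nodes[i]
    dist = np.linalg.norm(diff)
    q_new = q_target if dist <= STEP else tree.nodes[i] + STEP * diff / dist
    if not collision_free(q_new):
        return None, "trapped"
    return tree.add(q_new, i), ("reached" if dist <= STEP else "advanced")

def connect(tree, q_target):
    """CONNECT: keep extending toward q_target until reached or trapped."""
    j, status = None, "advanced"
    while status == "advanced":
        j, status = extend(tree, q_target)
    return j, status

def rrt_connect(q_start, q_goal, lower, upper, max_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    ta, tb = Tree(q_start), Tree(q_goal)
    a_is_start = True
    for _ in range(max_iter):
        q_rand = rng.uniform(lower, upper)
        j, status = extend(ta, q_rand)
        if status != "trapped":
            k, status = connect(tb, ta.nodes[j])
            if status == "reached":              # the two trees meet
                pa, pb = ta.path_to(j), tb.path_to(k)
                return pa + pb[::-1] if a_is_start else pb + pa[::-1]
        ta, tb = tb, ta                          # swap the roles of the trees
        a_is_start = not a_is_start
    return None

if __name__ == "__main__":
    lo, hi = -np.pi * np.ones(6), np.pi * np.ones(6)   # assumed 6-DOF joint limits
    path = rrt_connect(np.zeros(6), 0.5 * np.ones(6), lo, hi)
    print("waypoints:", len(path) if path else "no path found")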

Fig. 6.15 The detail of the robot simulator block

In the robot arm-hand system, the robot arm is responsible for the position and orientation of the robot hand, and the robot hand uses a two-stage control scheme. In the pre-grasp stage, the robot hand is under position control; the information from the simulator contains the grasp configuration of the robot hand. In the grasping stage, the robot hand switches to position-based force control. From the simulator, we know which fingers of the robot hand will contact the object, and the force control is triggered on those contact fingers. According to the feedback of the tactile sensors and the ad-hoc reference force, the robot hand can grasp the object successfully.
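A minimal sketch of this two-stage scheme is given below. The admittance gain, the reference force, and the SimulatedHand stand-in are illustrative assumptions and do not reflect the actual hand hardware or controller parameters.

# Two-stage hand control sketch: position control to the pre-grasp configuration,
# then position-based force control on the expected contact fingers.

K_F = 0.002        # admittance gain: joint increment per unit force error [rad/N]
F_REF = 2.0        # ad-hoc reference grasping force per contact finger [N]

class SimulatedHand:
    """Toy stand-in for the real hand: tactile force grows once past contact."""
    def __init__(self, contact_angle=0.6, stiffness=40.0):
        self.q, self.contact_angle, self.stiffness = {}, contact_angle, stiffness

    def command_joint(self, finger, q):
        self.q[finger] = q

    def read_tactile(self, finger):
        return max(0.0, (self.q.get(finger, 0.0) - self.contact_angle) * self.stiffness)

def pre_grasp(hand, q_grasp):
    """Stage 1: pure position control to the grasp configuration from the simulator."""
    for finger, q in q_grasp.items():
        hand.command_joint(finger, q)

def grasp(hand, q_grasp, contact_fingers, steps=200):
    """Stage 2: position-based force control on the fingers expected to contact."""
    q_cmd = dict(q_grasp)
    for _ in range(steps):
        for finger in contact_fingers:
            force = hand.read_tactile(finger)          # tactile feedback [N]
            q_cmd[finger] += K_F * (F_REF - force)     # close more if force too low
            hand.command_joint(finger, q_cmd[finger])
    return q_cmd

if __name__ == "__main__":
    hand = SimulatedHand()
    q_grasp = {"thumb": 0.5, "index": 0.5, "middle": 0.5}   # pre-grasp joint angles
    pre_grasp(hand, q_grasp)
    grasp(hand, q_grasp, contact_fingers=["thumb", "index"])
    print({f: round(hand.read_tactile(f), 2) for f in ["thumb", "index"]})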

6.3.1 Grasping Results: Ball

The first experiment is to grasp a simple ball. The ball is placed on the table where the two Kinects can capture the point cloud of the object surface. The reconstructed surface is shown in Fig. 6.16.

Fig. 6.16 The ball and the reconstructed surface

The grasp planner randomly generates hand poses and computes the Q1 quality measure for each. If a suitable pose is found, the planner returns the score to the main program and runs the path planner to generate the path. The Q1 quality measure is 0.0787, and the final grasp configuration is shown in Fig. 6.17.
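The Q1 measure itself is defined in Chapter 3; the sketch below shows one common way to evaluate such a measure (the Ferrari-Canny radius of the largest ball contained in the grasp wrench space), assuming hard contacts with linearized friction cones, a unit total force bound, and a unit torque scale, which may differ in detail from the formulation used here.

# Q1 sketch: convex hull of the primitive contact wrenches; Q1 is the distance
# from the wrench-space origin to the nearest hull facet (0 if no force closure).
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrenches(p, n, mu=0.5, n_edges=8, torque_scale=1.0):
    """Primitive wrenches of one hard contact: friction cone edges at point p."""
    n = n / np.linalg.norm(n)
    # Build a tangent basis (t1, t2) orthogonal to the contact normal.
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    wrenches = []
    for k in range(n_edges):
        phi = 2.0 * np.pi * k / n_edges
        f = n + mu * (np.cos(phi) * t1 + np.sin(phi) * t2)   # friction cone edge
        tau = np.cross(p, f) / torque_scale                  # induced torque
        wrenches.append(np.hstack([f, tau]))
    return wrenches

def q1_quality(contacts, mu=0.5):
    """Q1 = radius of the largest origin-centred ball inside the grasp wrench space."""
    ws = []
    for p, n in contacts:
        ws.extend(contact_wrenches(np.asarray(p, float), np.asarray(n, float), mu))
    hull = ConvexHull(np.array(ws))
    offsets = hull.equations[:, -1]       # facets satisfy normal . x + offset <= 0
    if np.any(offsets > 0):               # origin outside the hull: no force closure
        return 0.0
    return float(np.min(-offsets))        # distance from origin to the nearest facet

if __name__ == "__main__":
    # Three frictional contacts spread around a unit sphere, normals pointing inward.
    contacts = [(( 1.0,  0.00, 0.0), (-1.0,  0.00, 0.0)),
                ((-0.5,  0.87, 0.0), ( 0.5, -0.87, 0.0)),
                ((-0.5, -0.87, 0.0), ( 0.5,  0.87, 0.0))]
    print("Q1 =", round(q1_quality(contacts), 4))

In this sketch a value of zero means the sampled pose does not achieve force closure, so such poses would be rejected and sampling would continue.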

Fig. 6.17 The Q1 quality measure and final pose of robot in experiment 1

The procedure of the real robot arm-hand and the simulation is shown in Fig. 6.18.

Fig. 6.18 The procedure of experiment 1

6.3.2 Grasping Results: Bottle

The second experiment is to grasp a bottle; the bottle and its reconstructed surface are shown in Fig. 6.19. The Q1 quality measure is 0.0332, and the final grasp configuration is shown in Fig. 6.20. According to the height of the AABB of the object, the simulator generates a left-side grasping pose to grasp the object. The result of path planning is shown in Fig. 6.21, where the gray points are the nodes of the RRT and the red points are the final path obtained from the path pruning algorithm. The procedure of the real robot arm-hand and the simulation is shown in Fig. 6.22.
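The specific path pruning algorithm is part of the planner; the sketch below shows a common greedy shortcut-style pruning pass for illustration, with segment_collision_free standing in for the simulator's edge collision check.

# Greedy shortcut-style pruning of an RRT path (generic sketch, not necessarily
# the exact algorithm used here). The collision check is a placeholder.
import numpy as np

def segment_collision_free(q_a, q_b, check, resolution=0.05):
    """Densely sample the straight segment and test each configuration."""
    q_a, q_b = np.asarray(q_a, float), np.asarray(q_b, float)
    steps = max(2, int(np.ceil(np.linalg.norm(q_b - q_a) / resolution)))
    return all(check(q_a + t * (q_b - q_a)) for t in np.linspace(0.0, 1.0, steps))

def prune_path(path, check):
    """Greedily skip ahead to the farthest waypoint reachable in a straight line."""
    pruned = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not segment_collision_free(path[i], path[j], check):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned

if __name__ == "__main__":
    free = lambda q: True                      # placeholder collision check
    zigzag = [np.array([x, (-1.0) ** x]) for x in range(6)]
    print(len(prune_path(zigzag, free)), "waypoints after pruning")

Because the pass keeps the start and goal configurations and only removes intermediate waypoints whose bypass segment passes the collision check, the pruned path remains collision-free.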

Fig. 6.19 The bottle and the reconstructed surface

Fig. 6.20 The Q1 quality measure and final pose of robot in experiment 2

Fig. 6.21 The path planning: (a) to grasp (b) to place

Fig. 6.22 The procedure of experiment 2

6.3.3 Grasping Results: Joystick

The third experiment is to grasp a joystick. The Q1 quality measure is 0.0926, and the final grasp configuration is shown in Fig. 6.24. As shown in Fig. 6.23, the reconstructed surface cannot capture all of the geometry of the original object. Although the reconstructed surface is rough, the grasp planner can still choose a grasp configuration, as shown in Fig. 6.24.

The procedure of the real robot arm-hand and the simulation is shown in Fig. 6.25.

Fig. 6.23 The joystick and the reconstructed surface

Fig. 6.24 The Q1 quality measure and final pose of robot in experiment 3

Fig. 6.25 The procedure of experiment 3
