
Figure 4-1 illustrates the overall configuration of an eye-to-hand system in which a line laser is attached to the end-effector. The robot base coordinate system is defined as the world coordinate system.

Figure 4-1 Overview of an eye-to-hand system with a line laser module.

The camera model is slightly different from the one introduced in Section 1.2.1, since the distortion model used here must yield an effective cost function for the optimization problem. This camera model is the one introduced by Willson [80],

but the definitions of some parameters are slightly different. Since the effective focal length and the center-to-center distance between sensor elements can be combined into a pixel-unit focal length for simplicity, this camera model merges the first and fourth transformations in [80] without loss of generality, and the skew coefficient and additional distortion parameters are considered. A position in the Cartesian coordinate system is denoted ${}^{C}\mathbf{p} = [{}^{C}x\ \ {}^{C}y\ \ {}^{C}z]^{T}$, and the direction of the ray from the center of the camera to ${}^{C}\mathbf{p}$ is $\mathbf{x}_u = [{}^{C}x/{}^{C}z\ \ {}^{C}y/{}^{C}z]^{T}$. $[\mathbf{x}_u^{T}\ 1]^{T}$ is also the undistorted position of the point at which the ray intersects the unit-$z$ plane. Based on Brown's distortion model [48], the transformation from the distorted position $\mathbf{x}_d = [x_d\ \ y_d]^{T}$ to the undistorted position is

$$\mathbf{x}_u = \left(1 + \kappa_1 r^2 + \kappa_2 r^4\right)\mathbf{x}_d + \begin{bmatrix} 2\rho_1 x_d y_d + \rho_2\left(r^2 + 2x_d^2\right) \\ \rho_1\left(r^2 + 2y_d^2\right) + 2\rho_2 x_d y_d \end{bmatrix}, \qquad r^2 = x_d^2 + y_d^2,$$

where $\kappa_1$ and $\kappa_2$ are radial distortion parameters and $\rho_1$ and $\rho_2$ are two tangential distortion parameters. When distortion is neglected, all distortion parameters are zero and $\mathbf{x}_u = \mathbf{x}_d$. The camera pin-hole model is a transformation between the image coordinate system and the 3D coordinate system. The transformation between the distorted ray direction $\mathbf{x}_d$ and the observed image position $\mathbf{x}_n = [u\ \ v]^{T}$ is

$$\begin{bmatrix} \mathbf{x}_n \\ 1 \end{bmatrix} = K \begin{bmatrix} \mathbf{x}_d \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_u & \alpha_c f_u & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix},$$

where $K$, the intrinsic matrix, includes five intrinsic parameters: $f_u$ and $f_v$ are the focal lengths in pixels, so the aspect ratio is $f_v/f_u$; $\alpha_c$ is the skew coefficient; and $[u_0\ \ v_0]^{T}$ denotes the principal point, which is assumed to be the center of distortion.
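The camera model above can be sketched in a few lines of Python/NumPy. This is an illustrative implementation, not the thesis code; the function names, the choice of exactly two radial parameters, and the $\alpha_c f_u$ skew entry follow common conventions and are assumptions here.

```python
import numpy as np

def undistorted_from_distorted(x_d, kappa, rho):
    """Brown's model: map the distorted unit-z-plane position x_d = (x, y)
    to the undistorted position x_u (two radial terms assumed)."""
    x, y = x_d
    r2 = x * x + y * y
    radial = 1.0 + kappa[0] * r2 + kappa[1] * r2 * r2
    tangential = np.array([2.0 * rho[0] * x * y + rho[1] * (r2 + 2.0 * x * x),
                           rho[0] * (r2 + 2.0 * y * y) + 2.0 * rho[1] * x * y])
    return radial * np.array([x, y]) + tangential

def intrinsic_matrix(f_u, f_v, alpha_c, u0, v0):
    """Five-parameter intrinsic matrix K; the skew entry alpha_c * f_u
    follows the common convention and is an assumption here."""
    return np.array([[f_u, alpha_c * f_u, u0],
                     [0.0, f_v,           v0],
                     [0.0, 0.0,           1.0]])

def pixel_from_distorted(x_d, K):
    """Pin-hole mapping from the distorted ray direction to the image
    position x_n = (u, v)."""
    u, v, w = K @ np.array([x_d[0], x_d[1], 1.0])
    return np.array([u / w, v / w])
```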

Figure 4-2 shows the geometrical relationship between the line laser module and the end-effector. In the end-effector coordinate system, the laser plane is specified by a vector ${}^{E}\boldsymbol{\Lambda}_a = \left({}^{E}\boldsymbol{\Lambda}_n^{T}\,{}^{E}\boldsymbol{\Lambda}_t\right){}^{E}\boldsymbol{\Lambda}_n$, where ${}^{E}\boldsymbol{\Lambda}_n$ is the unit normal of the laser plane and ${}^{E}\boldsymbol{\Lambda}_t$ is a point on it; this vector is perpendicular to the laser plane and ends on the laser plane. This laser vector is an extra calibration target in the proposed method.

Since the laser module is rigidly installed, the parameters do not change under normal operations.

Figure 4-2 The laser plane with respect to the end-effector.

This section describes in detail the method for calibrating an eye-to-hand system.

Calibration is performed to reduce systematic errors by identifying and correcting the system parameters.

The line of intersection of two planes in 3D space is described by its direction and any position on the line. Any point $\mathbf{x}$ on the line of intersection of plane $a$ and plane $b$ satisfies the equations of both planes. Without loss of generality, let $\mathbf{x}_0$ be the point on the line that is closest to the origin of the world coordinate system, such that $\mathbf{u}^{T}\mathbf{x}_0 = 0$, where $\mathbf{u}$ is the unit direction of the line. The term $d$ represents the signed distance from $\mathbf{x}_0$ to $\mathbf{x}$, so that $\mathbf{x} = \mathbf{x}_0 + d\,\mathbf{u}$.
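As an illustration of this parametrization, the following Python sketch computes the direction $\mathbf{u}$ and the closest point $\mathbf{x}_0$ for two planes given in the form $\mathbf{n}^{T}\mathbf{x} = c$; the helper name and plane representation are illustrative assumptions, not the thesis notation.

```python
import numpy as np

def plane_intersection_line(n_a, c_a, n_b, c_b):
    """Line of intersection of the planes n_a.x = c_a and n_b.x = c_b.
    Returns (u, x0): the unit direction and the point on the line that
    is closest to the origin, which satisfies u.x0 = 0 as in the text."""
    u = np.cross(n_a, n_b)
    norm = np.linalg.norm(u)
    if norm < 1e-12:
        raise ValueError("planes are parallel; no unique intersection line")
    u = u / norm
    # x0 lies on both planes and is perpendicular to u.
    A = np.vstack([n_a, n_b, u])
    x0 = np.linalg.solve(A, np.array([c_a, c_b, 0.0]))
    return u, x0
```

Any point on the line is then recovered as $\mathbf{x} = \mathbf{x}_0 + d\,\mathbf{u}$.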

A laser line stripe is formed by projecting the laser onto the working plane, and it is then captured by the camera. The line stripe lies on three planes (the laser plane, the working plane, and the camera-to-stripe plane). The line can be specified as the intersection of any two of the three planes, indicating that this system configuration contains redundant information. Differences between specifications of a single line derived using different plane pairs are caused by systematic errors and/or measurement noise. Restated, the systematic parameters can be estimated by tuning them to reduce the differences between the lines that are computed from different intersections. Additionally, forming a batch of projected points enables the effects of noise to be reduced by various optimization methods.

4.3 Closed-Form Solution

The optimization problem described in the next section is nonlinear. A good initial guess is required to avoid local minima and to accelerate convergence. A closed-form solution is derived in this section by exploiting a particular pattern of hand movements.

Since the parameters need not be guessed in advance, the proposed method can be implemented automatically.

Manipulating the end-effector without rotation generates parallel laser planes and, therefore, parallel laser line stripes projected on the working plane. Two non-parallel line stripes on the same working plane virtually cross each other both in 3D space and in the image plane. Two sets of parallel line stripes generate a set of virtual crossing points, which are distributed in a grid-like pattern. The closed-form solution is derived from these geometrical relationships.

The laser line stripes on the working plane under a single plane pose provide only 2D information in 3D space. Hence, the working plane pose must be changed one or more times to generate sufficient 3D information for calibrating the intrinsic parameters of the camera. If the working plane pose cannot be changed, then another plane with a different pose can be placed on it.

Figure 4-3 shows the relationships among the projected laser stripes and their virtual crossing points. For the $p$-th working plane pose, the end-effector maintains the $(p,m_1)$-th orientation ${}^{B}_{E}R_{p,m_1}$ and is translated from the position ${}^{B}_{E}\mathbf{t}_{p,m_1}$ to several positions in the $(p,m_1,n_1)$-th direction (vector $\mathbf{k}_{p,m_1,n_1}$). These translations generate parallel line stripes. This set of parallel line stripes virtually crosses another set of parallel line stripes, which are generated by moving the end-effector from ${}^{B}_{E}\mathbf{t}_{p,m_2}$ in the $(p,m_2,n_2)$-th direction (vector $\mathbf{k}_{p,m_2,n_2}$) at hand orientation ${}^{B}_{E}R_{p,m_2}$. The working plane, the laser plane at the $(p,m_1)$-th hand pose, and the laser plane at the $(p,m_2)$-th hand pose all intersect at $\mathbf{t}_{p,m_1,m_2}$. The translation $\mathbf{k}_{p,m_1,n_1}$ moves the crossing point to $\mathbf{t}_{p,m_1,m_2} + \mathbf{i}_{p,m_1,n_1,m_2}$, and the translation $\mathbf{k}_{p,m_2,n_2}$ moves the crossing point to $\mathbf{t}_{p,m_1,m_2} + \mathbf{j}_{p,m_1,m_2,n_2}$. The $(p,m_1,m_2,k)$-th crossing point projected onto the camera is at image position $\mathbf{x}_n$, and its homogeneous coordinate satisfies

$$z \begin{bmatrix} \mathbf{x}_n \\ 1 \end{bmatrix} = H_{p,m_1,m_2} \begin{bmatrix} w_i \\ w_j \\ 1 \end{bmatrix},$$

where $w_i$ and $w_j$ denote the scales of the combination according to the end-effector translation distances, $z$ is a related scalar, and $H_{p,m_1,m_2}$ is a homogeneous matrix.

Suppose that the end-effector is translated in $N$ directions at each hand orientation. $H_{p,m_1,m_2}$ then becomes a $3\times(2N+1)$ matrix, which includes $N$ vectors $\mathbf{i}$, $N$ vectors $\mathbf{j}$, and one starting position $\mathbf{t}$.
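The forward relation can be sketched as follows; the camera extrinsics `R_cb`, `t_cb` (the pose ${}^{C}_{B}R$, ${}^{C}_{B}\mathbf{t}$ of the base frame in the camera frame) and the helper names are illustrative assumptions. For clarity, the sketch builds the $3\times 3$ case with one $\mathbf{i}$ and one $\mathbf{j}$ vector.

```python
import numpy as np

def crossing_point_homography(K, R_cb, t_cb, i_vec, j_vec, t0):
    """Build the 3x3 homogeneous matrix H that maps the combination
    weights [w_i, w_j, 1] to the homogeneous image coordinates of a
    virtual crossing point t0 + w_i * i_vec + w_j * j_vec (base frame)."""
    return np.column_stack([K @ (R_cb @ i_vec),
                            K @ (R_cb @ j_vec),
                            K @ (R_cb @ t0 + t_cb)])

def crossing_point_image(H, w_i, w_j):
    """Image position x_n of the crossing point for weights (w_i, w_j)."""
    h = H @ np.array([w_i, w_j, 1.0])
    return h[:2] / h[2]
```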

Figure 4-3 Projected laser stripes and virtual crossing points.

The closed-form solution is derived by relating the camera measurements to the movements of the robot. Pure hand translations eliminate the position terms of the parameters from the equations. The relationships can then be rearranged into several linear forms to find the angle-related terms of the parameters. After all directions and orientations have been obtained, the position terms and the scaling factors can be recovered. The step-by-step calculations of the closed-form solution are as follows.

1) Finding homography matrices with unknown scales:

The entries of a homography matrix can be obtained by eliminating the unknown scale according to the direct linear transformation (DLT) method [21, 22]. Let $\mathbf{h}_a$ be the $a$-th row of $H$. Stacking the DLT constraints from all crossing points yields a linear system $Q_H\,\hat{\mathbf{h}} = \mathbf{0}$ in the entries of $H$, whose solution is the eigenvector that corresponds to the smallest eigenvalue of the matrix $Q_H^{T}Q_H$. The state-of-the-art eigen-decomposition approach is based on singular value decomposition. The estimated matrix $\hat{H}_{p,m_1,m_2}$, with an unknown scaling factor $\lambda_{p,m_1,m_2}$, is equivalent to the homography:

$$H_{p,m_1,m_2} = \lambda_{p,m_1,m_2}\,\hat{H}_{p,m_1,m_2}.$$

For a given working plane pose, all crossing points should lie on that working plane, even under various hand orientations. Thus, the ratios between the values of $\lambda_{p,m_1,m_2}$, $m_1 = 1\sim M$, where $m_2$ ranges over the hand orientations whose stripes cross those of $m_1$, can be obtained. Then, only one scaling factor remains undetermined for each working plane pose. The $P$ unknown scaling factors are obtained in the following steps.
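A minimal sketch of the DLT estimation step, under the assumption that each crossing point pairs the weights $(w_i, w_j)$ with a measured image position, is:

```python
import numpy as np

def estimate_homography_dlt(src, dst):
    """DLT estimate of a 3x3 homography up to scale.
    src: (N, 2) array of weight pairs (w_i, w_j); dst: (N, 2) image
    points; N >= 4.  The null-space vector of Q_H is taken from the
    SVD, i.e., the eigenvector of Q_H^T Q_H with smallest eigenvalue."""
    rows = []
    for (a, b), (u, v) in zip(src, dst):
        x = np.array([a, b, 1.0])
        rows.append(np.concatenate([x, np.zeros(3), -u * x]))
        rows.append(np.concatenate([np.zeros(3), x, -v * x]))
    Q = np.vstack(rows)                  # Q_H, size 2N x 9
    _, _, Vt = np.linalg.svd(Q)
    return Vt[-1].reshape(3, 3)          # rows h_1, h_2, h_3 of H-hat
```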

2) Finding the normal vector of the laser plane relative to the robot:

The translations of the end-effector are related to the movements of the laser line stripe on the working plane by projecting the hand translation vectors onto the working plane along the laser plane (Figure 4-4).

Figure 4-4 Projection of an end-effector translation vector along a laser plane onto a working plane.

The projection relationship satisfies

$$\left(\mathbf{i}_{p,m_1,n_1,m_2} - \mathbf{k}_{p,m_1,n_1}\right)^{T}\,{}^{B}\boldsymbol{\Lambda}_{n,p,m_1} = 0. \qquad (4\text{-}8)$$

The vectors $\mathbf{i}_{p,m_1,n_1,m_2}$, $n_1 = 1\sim N$, all share the same direction, since they lie on the parallel line stripes that are generated with the $(p,m_2)$-th hand orientation. Therefore, the vectors can be written as $\mathbf{i}_{p,m_1,n_1,m_2} = d_{p,m_1,n_1,m_2}\,\mathbf{i}_{p,m_1,m_2}$, where $\mathbf{i}_{p,m_1,m_2}$ is a unit vector.

Then, the ratio of the $n_1$-th distance to the first distance is

$$\frac{d_{p,m_1,n_1,m_2}}{d_{p,m_1,1,m_2}} = \frac{\mathbf{k}_{p,m_1,n_1}^{T}\,{}^{B}\boldsymbol{\Lambda}_{n,p,m_1}}{\mathbf{k}_{p,m_1,1}^{T}\,{}^{B}\boldsymbol{\Lambda}_{n,p,m_1}}.$$

This ratio can also be determined from the estimated homography $\hat{H}_{p,m_1,m_2}$, because the columns associated with $\mathbf{i}_{p,m_1,n_1,m_2}$ and $\mathbf{i}_{p,m_1,1,m_2}$ are parallel and differ only by the factor $d_{p,m_1,n_1,m_2}/d_{p,m_1,1,m_2}$. Equating the two expressions for the ratio yields linear equations in the unknown normal vector ${}^{B}\boldsymbol{\Lambda}_{n,p,m_1}$.
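Under this formulation, each translation direction contributes one homogeneous linear constraint on ${}^{B}\boldsymbol{\Lambda}_{n,p,m_1}$, which the following sketch solves as a null-space problem (illustrative helper, not the thesis code):

```python
import numpy as np

def laser_plane_normal(k_vecs, ratios):
    """Estimate the laser plane normal in the base frame (up to sign)
    from hand translations k_vecs[n] and the measured distance ratios
    ratios[n] = d_n / d_1 (so ratios[0] == 1).  Each n >= 1 yields the
    constraint (k_n - r_n * k_1)^T n = 0."""
    k1 = k_vecs[0]
    A = np.vstack([k - r * k1 for k, r in zip(k_vecs[1:], ratios[1:])])
    _, _, Vt = np.linalg.svd(A)
    n = Vt[-1]
    return n / np.linalg.norm(n)
```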

From all normal vectors of the laser plane in the robot base coordinate system, ${}^{B}\boldsymbol{\Lambda}_{n,p,m}$, and the corresponding hand orientations, ${}^{B}_{E}R_{p,m}$, $p = 1\sim P$, $m = 1\sim M$, the laser normal vector ${}^{E}\boldsymbol{\Lambda}_n$ with respect to the end-effector coordinate system can be obtained by solving the simultaneous equations

$$\begin{bmatrix} {}^{B}_{E}R_{1,1} \\ \vdots \\ {}^{B}_{E}R_{P,M} \end{bmatrix} {}^{E}\boldsymbol{\Lambda}_n = \begin{bmatrix} {}^{B}\boldsymbol{\Lambda}_{n,1,1} \\ \vdots \\ {}^{B}\boldsymbol{\Lambda}_{n,P,M} \end{bmatrix}$$

using the least squares method and normalizing the solution to $\|{}^{E}\boldsymbol{\Lambda}_n\| = 1$. According to Eq. (3-9), only one orientation is required to calculate a unique ${}^{E}\boldsymbol{\Lambda}_n$. However, to find the unique laser position ${}^{E}\boldsymbol{\Lambda}_t$ in the following step, at least two orientations are required.
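A sketch of this stacked least-squares step (assumed helper names):

```python
import numpy as np

def laser_normal_in_end_effector(R_list, n_base_list):
    """Solve R_{p,m} @ n_E = n_B,{p,m} over all hand orientations in
    the least-squares sense, then normalize to |n_E| = 1."""
    A = np.vstack(R_list)              # stacked (3PM) x 3 rotations
    b = np.concatenate(n_base_list)    # stacked base-frame normals
    n_E, *_ = np.linalg.lstsq(A, b, rcond=None)
    return n_E / np.linalg.norm(n_E)
```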

3) Finding the intrinsic matrix and the hand-to-eye rotation matrix:

A line stripe lies on its corresponding laser plane. Hence,

$${}^{B}\boldsymbol{\Lambda}_{n,p,m_2}^{T}\,G^{-1}\,\hat{\mathbf{i}}_{p,m_1,n_1,m_2} = 0,$$

where $G = K\,{}^{C}_{B}R$. Stacking these constraints yields a linear system $Q_G\,\hat{\mathbf{x}} = \mathbf{0}$ in the nine unknown entries $\hat{\mathbf{x}}$ of the matrix, where $M$ is the number of hand-orientation pairs that have a crossing relationship in each working plane pose. For all $P$ plane poses and $N$ translations under each hand orientation, $Q_G$ becomes a $(2PMN)\times 9$ matrix. The solution $\hat{\mathbf{x}}$ lies in the null space of the matrix $Q_G$, and can be derived by calculating the eigenvector that corresponds to the smallest eigenvalue of the matrix $Q_G^{T}Q_G$. Since each row of the rotation matrix ${}^{C}_{B}R$ is a unit vector and the last entry of $K$ is 1, the solution is scaled to satisfy $g_{31}^2 + g_{32}^2 + g_{33}^2 = 1$. RQ-decomposition is applied to separate the homogeneous matrix $G$ into the intrinsic matrix $K$ and the rotation matrix ${}^{C}_{B}R$.
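The scaling and RQ-decomposition step can be sketched with SciPy's `scipy.linalg.rq`; the sign-fixing convention (positive diagonal of $K$) is a common practical choice and an assumption here:

```python
import numpy as np
from scipy.linalg import rq

def split_intrinsics_rotation(G):
    """Scale G so that g31^2 + g32^2 + g33^2 = 1, then separate
    G = K @ R by RQ-decomposition (K upper triangular, R orthogonal)."""
    G = G / np.linalg.norm(G[2, :])
    K, R = rq(G)
    S = np.diag(np.sign(np.diag(K)))   # resolve the sign ambiguity
    K, R = K @ S, S @ R                # S @ S = I, so K @ R is unchanged
    return K / K[2, 2], R              # enforce the last entry of K = 1
```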

4) Finding the scaling factors and the laser position and the camera position relative to the robot:

The $p$-th working plane, the laser plane associated with the $(p,m_1,k_1)$-th hand pose, and the laser plane associated with the $(p,m_2,k_2)$-th hand pose intersect at the $(p,m_1,m_2,k)$-th virtual crossing point. The remaining unknowns, including the $P$ scaling factors, are solved using the least squares method. The laser position in the end-effector frame and the camera position in the robot base frame are then obtained. The homography matrices related to the robot dimension are scaled as $H_{p,m} = \lambda_p\,\hat{H}_{p,m}$. All crossing points lie on the $p$-th plane, and the position of the plane can be determined from the position of one of these crossing points or by averaging them.

4.4 Nonlinear Optimization

The above closed-form solution provides a good estimate of the parameters of the system. However, errors can propagate through the procedure. A nonlinear optimization problem is therefore introduced to refine the estimate. The projected laser line is constrained within the image plane under various arm poses, yielding the following optimization problem:

$$\hat{S} = \arg\min_{S}\ \sum_{p,m,k}\ \sum_{q}\ \mathrm{dist}\!\left(\mathbf{x}_u\!\left(\mathbf{x}_{n,p,m,k,q}\right),\ \mathrm{line}\!\left(\left\{{}^{B}_{E}R_{p,m}\,\middle|\,{}^{B}_{E}\mathbf{t}_{p,m,k}\right\},\,S\right)\right)^{2}, \qquad (4\text{-}23)$$

where $\mathbf{x}_{n,p,m,k,q}$ denotes the $q$-th image position on the observed laser stripe in the $(p,m,k)$-th image; $\mathbf{x}_u(\cdot)$ denotes the inverse of the projection from the image position to the unit-$z$ plane in the camera frame, given by (1-1) and (1-2); $\mathrm{line}(\cdot)$ denotes the projection of a line onto the unit-$z$ plane in the camera frame, where the line is the laser line calculated using the robot command $\{{}^{B}_{E}R_{p,m}\,|\,{}^{B}_{E}\mathbf{t}_{p,m,k}\}$ and the parameter set $S$; and $\mathrm{dist}(a,b)$ denotes the distance from a point $a$ to a line $b$.

Equation (4-23) can be solved by the Levenberg-Marquardt method. An initial guess for $S$ is required and can be obtained from the closed-form solution described above. Alternatively, if all of the system parameters are available from specifications or design settings, these values should be close to the real values and can be used as the initial guess in the optimization.
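A sketch of this refinement with SciPy's Levenberg-Marquardt solver; `residuals` is an assumed user-supplied callback that stacks the point-to-line distances of Eq. (4-23) for a candidate parameter vector:

```python
from scipy.optimize import least_squares

def refine_parameters(s0, residuals):
    """Refine the parameter set S starting from the closed-form (or
    specification-based) initial guess s0.  residuals(s) must return
    the vector of dist(x_u(x_n), line(...)) terms of Eq. (4-23)."""
    result = least_squares(residuals, s0, method="lm")
    return result.x
```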

4.5 Summary

This work develops a novel method for simultaneously calibrating the intrinsic parameters of a camera and the hand-eye-workspace relationships of an eye-to-hand system using a line laser module. Errors in the parameters of a hand-eye coordination system lead to position-targeting errors in robot control. To solve these problems, the proposed method utilizes a line laser module mounted on the hand to project laser stripes onto the working plane. As well as calibrating the system parameters, the proposed method is effective when the eye cannot see the hand, and it eliminates the need for a precise calibration pattern or object. The laser stripes collected in the images must satisfy nonlinear constraints at each hand pose. A closed-form solution is derived by decoupling the nonlinear relationships based on the homogeneous transform and parallel plane/line constraints. A nonlinear optimization, which considers all parameters simultaneously and avoids the error-propagation problem, then refines the closed-form solution. This two-stage process can be executed automatically without manual intervention. The effectiveness of the proposed method is verified via computer simulations and experiments in Chapter 6, where the simulations also reveal that a line laser is more efficient than a single-point laser.

Chapter 5

Robot Kinematic Calibration using a Laser Pointer, a Camera and a Plane

5.1 Introduction

This chapter proposes a robot kinematic calibration method. The calibration utilizes a laser pointer installed on the manipulator, a stationary camera, and a planar surface.

The laser pointer points at the planar surface, and the camera observes the projected laser spot. The position of the laser spot in the camera frame is computed from the geometrical relationship of the line-plane intersection. The laser spot position is sensitive to slight differences in the end-effector pose owing to the extensibility of the laser beam. Inaccurate kinematic parameters cause inaccurate calculation of the end-effector pose, so the forward estimate of the laser spot position deviates from the camera observation. To calibrate the robot kinematics, the optimal kinematic parameters are obtained by minimizing the differences between the forward-estimated and camera-observed laser spot positions via a nonlinear optimization method. The proposed kinematic calibration method is cost-effective and flexible, and it is validated by simulation and by experiments using real data.
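The line-plane intersection underlying the forward estimate can be sketched as follows; the beam origin `p0` and unit direction `d` would come from forward kinematics and the hand-laser transform, and the plane is written as $\mathbf{n}^{T}\mathbf{x} = c$ in the camera frame (names are illustrative assumptions):

```python
import numpy as np

def laser_spot_position(p0, d, n, c):
    """Predicted laser spot: intersection of the beam p(s) = p0 + s*d
    with the plane n.x = c."""
    denom = n @ d
    if abs(denom) < 1e-12:
        raise ValueError("beam is parallel to the plane")
    s = (c - n @ p0) / denom
    return p0 + s * d
```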

This work is a step forward from the work in [68], reducing the number of laser pointers to one. Moreover, a more general relationship between the world coordinate system and the robot coordinate system is derived to facilitate position-based calibration.