
This work develops a new calibration method for accurately estimating the configuration between a camera and a robot. The technique relies on a laser pointer. Since the laser is rigidly installed and the plane is fixed, camera measurements of the light spots that the laser beam projects onto the plane must obey geometrical constraints. Based on these constraints, the proposed closed-form solution obtains initial values, and nonlinear optimization is then applied to refine them. Computer simulations in Chapter 6 validate the calibration method and analyze its performance under different conditions. Experimental results using real data verify that the proposed method is effective when the hand cannot be viewed by the camera, and the good agreement between the results obtained from simulation data and real data in Chapter 6 further validates the proposed method.

Chapter 3

Simultaneous Hand-Eye-Workspace and Camera Calibration using a Single Beam Laser

3.1 Introduction

In this chapter, a calibration procedure that considers an un-calibrated camera is developed. Since the laser is mounted rigidly and the plane is fixed at each orientation, the geometrical parameters and measurement data must comply with certain nonlinear constraints, and the parameter solutions can be estimated accordingly. Closed-form solutions are developed by decoupling the nonlinear equations into linear forms to estimate all initial values. Consequently, the proposed calibration method does not require manual initial guesses for the unknown parameters. To achieve high accuracy, a nonlinear optimization method is implemented to refine the estimates.

This study represents a significant improvement over the previous results in [46]. The methods proposed here relax the requirement in [46] of projecting the same laser beam onto multiple planes, which complicates data collection since the laser spot may run outside the field of view of the camera. Moreover, the accuracy achieved in this work is better than that in [46]. The hand-eye-workspace calibration method developed in Chapter 2 relies on a pre-calibrated camera, whereas the methods proposed here can calibrate the hand-eye-workspace relationships and the intrinsic parameters of the camera simultaneously.

The rest of this chapter is organized as follows. Section 3.2 describes the parameters and notation used for calibration and introduces in detail how the calibration problem is solved with a laser beam. Next, Section 3.3 presents the closed-form solution for the multiple-plane case. The closed-form solution for the single-plane case is then presented in Section 3.4. Section 3.5 introduces the nonlinear optimization based on the constraints. Section 3.6 describes the overall calibration procedure. Finally, Section 3.7 summarizes this chapter.

3.2 Preliminaries

Calibration attempts to reduce systematic errors by correcting the parameters introduced in Section 1.2. The additional parameters of the laser beam pose introduced in Section 2.2 are also carefully considered here. Two closed-form solutions, one for multiple plane poses and one for a single plane pose, are proposed in the next two sections, respectively.

The single-plane calibration assumes that the principal point and the aspect ratio are known in advance. This assumption is reasonable since most cameras with lenses are made such that their principal points are located at the center of the image sensor; in addition, the two image axes are perpendicular to each other, i.e., the skew coefficient is approximately zero.
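As a concrete illustration of these assumptions, the following sketch (Python with NumPy; the image size, focal length, and aspect ratio are hypothetical values, not the thesis's camera) builds a pin-hole intrinsic matrix of the form in (1-1) with the principal point fixed at the sensor center and the skew set to zero, leaving a single free focal length once the aspect ratio is known.

```python
import numpy as np

def intrinsic_matrix(focal_u, image_width, image_height, aspect_ratio=1.0, skew=0.0):
    """Pin-hole intrinsic matrix under the single-plane assumptions:
    principal point at the image center, negligible skew, and a known
    aspect ratio fixing the second focal length (all values hypothetical)."""
    u0 = image_width / 2.0             # principal point at the sensor center
    v0 = image_height / 2.0
    focal_v = focal_u * aspect_ratio   # second focal length fixed by the known ratio
    return np.array([[focal_u, skew,    u0],
                     [0.0,     focal_v, v0],
                     [0.0,     0.0,     1.0]])

# Example: a 640x480 sensor with square pixels leaves one free intrinsic (focal_u).
K = intrinsic_matrix(focal_u=800.0, image_width=640, image_height=480)
```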

Figure 3-1 illustrates the overall configuration of an eye-to-hand system, in which a laser pointer is attached to the end-effector and the working plane is in front of the camera. The robot base coordinate system is defined as the world coordinate system, as shown in Figure 3-1.

Figure 3-1 Overview of an eye-to-hand system with a laser pointer.

Analysis results in Section 2.2 indicate that this system configuration contains redundant information. Positional differences of the same point derived via different methods are caused by systematic errors and/or measurement noise. Restated, the systematic parameters can be estimated by tuning them to reduce the positional differences between the points computed from any two of the three methods.

Moreover, creating a batch of projected points allows one to reduce the effects of noise via various optimization methods.

The optimization problem, as described later, is nonlinear. A good initial guess is necessary to avoid local minima and accelerate convergence. Closed-form solutions are derived in the next two sections by exploiting the particular arrangements of the setup. Since no prior guess of the parameters is needed, the proposed method can run automatically.
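As a minimal sketch of this two-stage strategy, the following Python/SciPy example applies it to a toy sub-problem, fitting a working-plane pose to noisy 3-D laser-spot positions: a closed-form SVD fit supplies the initial guess, and a nonlinear least-squares refinement over the whole batch polishes it. The data, parametrization, and residual are placeholders, not the constraint equations of this chapter.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy sub-problem with synthetic data: estimate a working-plane pose
# (unit normal n and offset d, with n . p = d) from noisy 3-D laser spots.
rng = np.random.default_rng(0)
true_n = np.array([0.2, -0.3, 0.933]); true_n /= np.linalg.norm(true_n)
true_d = 0.8
in_plane = np.linalg.svd(true_n.reshape(1, 3))[2][1:]     # two in-plane directions
coords = rng.uniform(-0.5, 0.5, size=(50, 2))
spots = true_d * true_n + coords @ in_plane + 0.005 * rng.standard_normal((50, 3))

# Stage 1: closed-form initial guess (plane fit via SVD of the centered spots),
# so no manual initial values are required.
centroid = spots.mean(axis=0)
n0 = np.linalg.svd(spots - centroid)[2][-1]               # smallest singular vector
params0 = np.append(n0, n0 @ centroid)

# Stage 2: nonlinear refinement minimizing point-to-plane distances over the batch.
def residuals(params, pts):
    n, d = params[:3], params[3]
    return pts @ (n / np.linalg.norm(n)) - d

result = least_squares(residuals, params0, args=(spots,))
n_refined = result.x[:3] / np.linalg.norm(result.x[:3])
d_refined = result.x[3]
```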

The two cases considered here are a single plane pose and multiple plane poses.

The laser spots lying on the plane under two or more different poses provide rich information in 3-D space, but only 2-D information under a single plane pose. Hence, recovering all camera intrinsic parameters in the single-plane case without additional assumptions is impossible. In practice, reasonable prior knowledge is that the skew coefficient is extremely small, i.e., the two axes of the image sensor are perpendicular to each other; that the principal point is at the center of the image sensor, i.e., the lens optical axis passes through the image center; and/or that the two focal lengths along the two axes are in the same ratio as the pixel width and length.

Manipulating the end-effector without rotation generates constraint equations from parallel laser beams. Under the p-th plane pose, assume that the end-effector holds the (p,m)-th orientation and moves along a direction (without rotation) that is a linear combination of N translating vectors (the j_{p,m,n} vectors). The laser spot then moves along a vector that is the same linear combination of the corresponding vectors (the i_{p,m,n} vectors). In particular, for a fixed hand orientation, the end-effector moves along several directions (Figure 3-2). The effect of lens distortion is ignored in the closed-form solution; that is, the pin-hole camera model of (1-1) without distortion is adopted. Under the p-th plane pose, the (p,m)-th rotation, and the (p,m,k)-th translation, the (p,m,k)-th laser spot is at the position where the laser ray intersects the working plane.
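The linearity exploited here can be checked numerically. The sketch below uses a synthetic, hypothetical geometry (fixed laser direction, fixed plane, all in the base frame; none of these values come from the thesis) and verifies that the 3-D displacement of the laser spot is the same linear combination of the per-direction spot displacements (the i vectors) as the hand translation is of the hand direction vectors (the j vectors).

```python
import numpy as np

# Synthetic, hypothetical geometry: a fixed laser direction u, a laser origin o
# on the end-effector, and a fixed working plane n . x = d, all in the base frame.
u = np.array([0.1, -0.05, 1.0]); u /= np.linalg.norm(u)
o = np.array([0.30, 0.20, 0.10])
n = np.array([0.0, 0.2, -1.0]);  n /= np.linalg.norm(n)
d = -0.9

def laser_spot(origin):
    """Intersect the (parallel) laser ray origin + t*u with the plane n . x = d."""
    t = (d - n @ origin) / (n @ u)
    return origin + t * u

# Two hand translation directions (j vectors) and their spot displacements (i vectors).
j1, j2 = np.array([0.05, 0.0, 0.0]), np.array([0.0, 0.05, 0.0])
i1 = laser_spot(o + j1) - laser_spot(o)
i2 = laser_spot(o + j2) - laser_spot(o)

# A hand translation that is a linear combination of j1 and j2 ...
a, b = 1.7, -0.6
spot_moved = laser_spot(o + a * j1 + b * j2)
# ... moves the spot by the same linear combination of i1 and i2.
assert np.allclose(spot_moved - laser_spot(o), a * i1 + b * i2)
```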

The closed-form solutions are derived by working from the camera measurements back to the robot movements. Pure hand translations eliminate the translation terms of the parameters in the equations. The resulting relations can then be rearranged into several linear forms to solve for the angle-related terms of the parameters. After all directions and orientations are obtained, the translation terms and the scale factors that restore the true metric dimensions can be recovered. The step-by-step calculations of the two closed-form solutions are introduced as follows.

Figure 3-2 Projection of laser beams on one plane.
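These steps repeatedly reduce to solving over-determined linear systems. Purely as a generic linear-algebra illustration (the coefficient matrices below are random placeholders, not the actual constraint equations of Sections 3.3 and 3.4), the sketch shows the two building blocks involved: a homogeneous system solved via SVD for the direction-type (angle-related) unknowns, and an ordinary least-squares solve for the translation terms and scale factors once the directions are fixed.

```python
import numpy as np

# Random placeholder systems standing in for the stacked constraint equations.
rng = np.random.default_rng(1)

# Homogeneous form M x = 0: the direction-type (angle-related) unknowns are
# recovered as the right singular vector of the smallest singular value.
M = rng.standard_normal((20, 4))
x = np.linalg.svd(M)[2][-1]          # unit-norm minimizer of ||M x||

# Inhomogeneous form A y = b: once the directions are fixed, the translation
# terms and the metric scale factors follow from ordinary least squares.
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
y, *_ = np.linalg.lstsq(A, b, rcond=None)
```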