3-D polyhedral face computation from two perspective views with the aid of a calibration plate
Jui-Man Chiu, Zen Chen, and Chao-Ming Wang

Abstract— The 3-D reconstruction of visible polyhedral faces from a pair of general perspective views with the aid of a calibration plate is addressed. A polyhedron is placed on a planar calibration plate and two side views of both the polyhedron and the calibration plate are taken. Through proper arrangements we may assume that in the two views a number of polyhedral edges lying on the calibration plate and the whole calibration plate boundary are visible. We present an on-line camera calibration technique with the aid of the calibration plate and a two-stage process to find the vertex/edge correspondences without encountering the ambiguity problem of the conventional epipolar line technique. Then we give a closed form solution to the 3-D polyhedral vertices visible in both images. We also describe other advantages of using our method for the 3-D polyhedron reconstruction. Experimental results show that the obtained 3-D polyhedral vertex coordinates are rather accurate.

Index Terms—Calibration plate, feature correspondence, on-line camera calibration, polyhedron reconstruction, stereo vision.

I. INTRODUCTION

Three-dimensional (3-D) object reconstruction from two or more images is an important problem in computer vision. It can be used in applications such as robotics, industrial automation, and object recognition. Two major approaches can be used to reconstruct the 3-D information. One approach uses active sensing vision techniques [1]–[4], and the other uses passive sensing stereo vision techniques [5]–[11]. The passive sensing stereo vision approach does not need an extra light source (e.g., a laser) beyond the ambient light, so it has been used in many applications. Reconstructing the 3-D object using the conventional stereo vision technique requires finding the corresponding object features (e.g., points or edges) in the two images; this is called the feature correspondence problem. Once the correspondences are found, one can use a triangulation procedure to obtain the 3-D information.

It is well known that point correspondence ambiguity often arises in the epipolar line based stereo vision techniques [7]–[10]. In what follows, we briefly describe how the ambiguity happens and how to solve it. First of all, the epipolar line constructed in view 2 of the stereo images for an image point p1 in view 1 is the intersection of the epipolar plane, which is defined by the image point p1 and the two lens centers O1 and O2, with the image plane of view 2. Quite often, this epipolar plane contains other feature points in addition to the object point P whose image point p1 is being matched.

Manuscript received December 12, 1994; revised December 30, 1995. This work was supported by the National Science Council of the R.O.C. under Contract Grant NSC83-0408-E-009-018. This paper was recommended for publication by Associate Editor K.-M. Lee and Editor A. Desrochers upon evaluation of the reviewers’ comments.

J.-M. Chiu and Z. Chen are with the Institute of Computer Science and Information Engineering, National Chiao Tung University, Hsinchu, Taiwan, R.O.C.

C.-M. Wang is with the Chung Shan Institute of Science and Technology, Tao-Yuan, Taiwan, R.O.C.

Publisher Item Identifier S 1042-296X(97)01027-6.

The epipolar line in view 2 therefore passes through the image points of these feature points, which may or may not include the image point of the particular object point P, depending on whether point P is visible to O2. In order to decide the corresponding point for image point p1 in view 1, some heuristic information such as the average intensity of the neighborhood [10] or geometric constraints [7]–[9] is used to resolve the ambiguity. However, there is no guarantee that the heuristics will always work. Therefore, incorrect correspondence pairs may be produced. The triangulation procedure applied to these incorrect correspondence pairs results in superfluous or false 3-D object points. Obviously, these 3-D object points do not fall on the intended location of the object point P. Based on the above facts about the ambiguity problem, we are motivated to find the correspondences between certain designated feature points in the two views using their 3-D coordinates. We call this the first-stage correspondence finding, as explained later. In doing so, we use an auxiliary plane, also called the calibration plate because it can be used for camera calibration. The object is placed on the auxiliary plane so that the object base face lies on the auxiliary plane. We shall use the visible vertices of the object base face as the designated feature points for finding the first-stage correspondences. We take two pictures of both the object and the auxiliary plane in such a way that some boundary part of the object base face and the whole boundary of the auxiliary plane are visible. Since the 3-D auxiliary plane equation can be found through the on-line camera calibration process described later, we can derive the 3-D coordinates of the visible feature points of the base face by backprojecting these feature points in each image onto the 3-D auxiliary plane. Then, by comparing the 3-D coordinates, we can avoid the ambiguity in the point correspondence finding. Fig. 1 illustrates the difference between our proposed method and the epipolar line based method. Here two views of the polyhedron and the calibration plate are shown. In view 1 some vertices of the polyhedron base face, G, H, I, J, together with some other vertices of the top face, are visible, while for view 2 the lens center O2 lies on the extension of the line from vertex I to vertex D so that only D is visible to lens center O2. The points O'1 and O'2 are the vertical projections of lens centers O1 and O2 on the calibration plate (these points will be used later to find the image points of some vertices of the object base face in each view). The epipolar line associated with image point i1 of view 1 is shown in Fig. 1(c). The correspondence between image point i1 (of view 1) and image point d2 (of view 2), as suggested by the epipolar line, is obviously incorrect. By using the proposed technique described later, it can be shown that the image point d2 is not the projection of a vertex of the object base face, so it is ruled out as a possible candidate for the corresponding point of image point i1.

Furthermore, in the conventional stereo vision system the corresponding coordinate axes of the two cameras are parallel to each other, and the two camera origins are located on the baseline, which lies along the two aligned x-axes. The accuracy of the reconstructed 3-D information of the object depends on the length of the baseline. When the baseline is short, the accuracy of the obtained 3-D information is low. A longer baseline yields a higher accuracy; however, the common visible part of the object in the two images diminishes as the baseline grows [6], [7], [9], [11]. Once the cameras are calibrated, their positions and orientations cannot be changed. Thus, the camera setup is fixed and cannot be adjusted in accordance with the shape and size of the object to be reconstructed.

Fig. 1. Illustration of the difference between our method and the epipolar line based method. (a) The setup of the lateral stereo imaging system. (b) View 1. (c) View 2.

In this paper, we present a new stereo vision technique for reconstructing the 3-D information of the visible faces of a polyhedron. We place a polyhedron on a planar calibration plate (i.e., the auxiliary plane), and two pictures are taken from two different viewing angles. The major advantages of the proposed method include: 1) camera calibration can be performed on-line, so one camera can be moved around to take the stereo images; 2) the feature correspondence ambiguity in the image pair can be completely avoided; and 3) the 3-D reconstruction obtained by the method can be made accurate, since the method allows us to shoot the two images with the two optical axes of the cameras nearly perpendicular.

The remainder of this paper is organized as follows. Section II describes an on-line camera calibration process and a method for computing the 3-D coordinates of a polyhedral vertex of the base face from a single image. Section III describes the two-stage method to solve the feature correspondence problem for the stereo vision. Section IV gives the method for computing the polyhedral 3-D information. Section V describes the implementation of our method and presents the experimental results. Section VI is the conclusion.

Fig. 2. The geometry of the calibration plate and camera coordinate systems.

II. ON-LINE CAMERA CALIBRATION

A. Assumption

We place the polyhedron to be reconstructed on a calibration plate preferably with a larger polyhedral face (called the base face) resting on the calibration plate. When photographing the polyhedron, the camera is in a side looking position so that one or more polyhedral faces adjacent to the base face are visible in the image. In addition, we also assume that the calibration plate is sufficiently large so that its boundary is not blocked by the polyhedron and, therefore, is visible in the image.

We can move the camera around to any advantageous position to photograph the polyhedron, so the camera setup is very flexible. In addition, we move the camera so that the common visible object surface part of the two views can be made large. In contrast, the two cameras in the conventional stereo vision system are calibrated in advance, so the overlapping field of view of the two cameras is predetermined.

B. On-Line Calibration

Given an image of a calibration plate whose shape is polygonal or any closed curve, we can use the camera calibration method developed in our laboratory [12], [13] to obtain the six extrinsic camera parameters. The camera calibration is done for each image in which both the calibration plate and the polyhedron are visible, and the same stereo images are used later for the 3-D polyhedral reconstruction. Therefore, the camera calibration is said to be done on line. To keep the paper self-contained, we briefly introduce the notations and fundamental equations for the camera calibration, which are borrowed from [12] and [13].

First, the relevant coordinate systems are defined below.

i) (X_R, Y_R, Z_R): the right-handed reference coordinate system defined with respect to the calibration plate, in which the X_R and Y_R axes lie on the plane of the calibration plate, the Z_R axis is perpendicular to the calibration plate and points upward, and the origin is located at the centroid of the calibration plate (refer to Fig. 2).

ii) (X_C, Y_C, Z_C): the left-handed camera coordinate system, in which the Z_C axis is perpendicular to the image plane and points toward the calibration plate, and the X_C and Y_C axes are parallel to the image plane. The origin of the (X_C, Y_C, Z_C) coordinate system is located at the lens center of the camera.

iii) (U_C, V_C, Z_C): the image coordinate system, which is identical to the camera coordinate system except that its origin is located at the point (X_C, Y_C, Z_C) = (0, 0, f), where f is the camera focal length.

Next, the six camera parameters and the necessary coordinate system transformations are given as follows.

a) θ: the tilt angle between the image plane and the calibration plate.

b) R_Z(ψ): the rotation matrix that rotates the (X_C, Y_C, Z_C) coordinate system about the Z_C axis by a counterclockwise swing angle ψ to obtain the (X'_C, Y'_C, Z'_C) coordinate system. Also, R_Z(φ) is used to rotate the (X_R, Y_R, Z_R) reference coordinate system about the Z_R axis by a counterclockwise pan angle φ to obtain the (X'_R, Y'_R, Z'_R) coordinate system.

c) R_X(θ): the rotation matrix that rotates a coordinate system about the X axis by the tilt angle θ, and F_Z: the reflection matrix that reverses the Z axis to account for the left-handed camera coordinate system.

d) T(dx, dy, dz): the translation matrix used to translate the reference coordinate system to coincide with the (X'_C, Y'_C, Z'_C) coordinate system; that is, (dx, dy, dz) is the corresponding translation vector.

The basic relationships between the reference coordinate system and the camera coordinate system are

(X'_C, Y'_C, Z'_C, 1) = (X_C, Y_C, Z_C, 1) R_Z(ψ)   (1)

(X'_R, Y'_R, Z'_R, 1) = (X_R, Y_R, Z_R, 1) R_Z(φ)   (2)

(X'_C, Y'_C, Z'_C, 1) = (X'_R, Y'_R, Z'_R, 1) R_X(θ) F_Z T(dx, dy, dz).   (3)

Also, the image point (U'_C, V'_C) projected by a calibration plate boundary point (X'_R, Y'_R, Z'_R) with Z'_R = 0 is given by

U'_C = f X'_C / Z'_C = f (X'_R + dx) / (Y'_R sin θ + dz)   (4)

V'_C = f Y'_C / Z'_C = f (Y'_R cos θ + dy) / (Y'_R sin θ + dz).   (5)

Under the mild assumptions that |X'_R/dz| ≪ 1, |Y'_R/dz| ≪ 1, |dx/dz| ≪ 1, and |dy/dz| ≪ 1, the six extrinsic camera parameters relating the camera to the calibration plate can be efficiently estimated [12], [13]. We shall use the above equations and the estimated camera parameters as our basis for deriving the 3-D information of the visible polyhedral vertices below.
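To make the imaging model concrete, the following Python sketch implements the forward projection of (1)-(5) for a plate point. It is our illustration rather than the authors' code; the rotation sign conventions and the numeric values in the example are assumptions.

```python
import numpy as np

def project_plate_point(XR, YR, theta, psi, phi, dx, dy, dz, f):
    """Project a calibration plate point (XR, YR, 0) into the image.

    Follows (1)-(5); the rotation sign conventions are assumed here.
    """
    # Equation (2): pan the reference frame about Z_R by phi.
    XRp = XR * np.cos(phi) - YR * np.sin(phi)
    YRp = XR * np.sin(phi) + YR * np.cos(phi)
    # Equations (4) and (5): tilt by theta, translate, divide by depth.
    denom = YRp * np.sin(theta) + dz
    Up = f * (XRp + dx) / denom
    Vp = f * (YRp * np.cos(theta) + dy) / denom
    # Equation (1) inverted: undo the camera swing psi to get (U_C, V_C).
    UC = Up * np.cos(psi) + Vp * np.sin(psi)
    VC = -Up * np.sin(psi) + Vp * np.cos(psi)
    return UC, VC

# Example with assumed parameters: f = 25 mm, 60 degree tilt, no swing/pan.
print(project_plate_point(10.0, 20.0, np.deg2rad(60.0), 0.0, 0.0,
                          5.0, -3.0, 400.0, 25.0))
```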

C. Derivation of the 3-D Polyhedral Vertices of the Base Face

In each image, if a polyhedral vertex lies on the calibration plate, then we can find its 3-D coordinates by backprojecting the 2-D image point under consideration onto the calibration plate. Formally, let the image point be represented in the homogeneous coordinates (U_C, V_C, f, 1). Then, based on (1), we have

(U'_C, V'_C, f, 1) = (U_C, V_C, f, 1) R_Z(ψ).   (6)

Then, from (4) and (5), we have

X'_R = {[(f dy − V'_C dz)/(V'_C sin θ − f cos θ)] U'_C sin θ + U'_C dz − f dx}/f   (7)

and

Y'_R = (f dy − V'_C dz)/(V'_C sin θ − f cos θ).   (8)

Finally, the 3-D polyhedral vertex of the base face expressed in the (X_R, Y_R, Z_R) coordinate system is given by

(X_R, Y_R, 0, 1) = (X'_R, Y'_R, 0, 1) R_Z(−φ).   (9)
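A direct transcription of (6)-(9) into code might look as follows. This is a minimal sketch; the rotation sign conventions are our assumption and must match the forward model sketched above.

```python
import numpy as np

def backproject_to_plate(UC, VC, theta, psi, phi, dx, dy, dz, f):
    """Backproject an image point onto the calibration plate (Z_R = 0)."""
    # Equation (6): apply the swing to get (U'_C, V'_C).
    Up = UC * np.cos(psi) - VC * np.sin(psi)
    Vp = UC * np.sin(psi) + VC * np.cos(psi)
    # Equation (8): Y'_R from the tilted-plane geometry.
    YRp = (f * dy - Vp * dz) / (Vp * np.sin(theta) - f * np.cos(theta))
    # Equation (7): X'_R, reusing Y'_R as the bracketed term.
    XRp = (YRp * Up * np.sin(theta) + Up * dz - f * dx) / f
    # Equation (9): undo the pan phi to return to (X_R, Y_R, Z_R).
    XR = XRp * np.cos(phi) + YRp * np.sin(phi)
    YR = -XRp * np.sin(phi) + YRp * np.cos(phi)
    return XR, YR, 0.0
```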

III. FINDING THE VERTEX/EDGE CORRESPONDENCES IN THE TWO IMAGES

We shall solve the vertex/edge correspondence problem in two stages. In the first stage, we deal with the correspondences between the polyhedral vertices/edges (called the polyhedral base vertices/edges), which lie on the calibration plate, in the two images. In the second stage, we use a pair of corresponding polyhedral base edges found in the first stage as the basis to find the other pairs of corresponding polyhedral edges in the two images. The first stage is done in the 3-D space with the aid of the 3-D calibration plate, and the second stage is done in the 2-D image space.

Fig. 3. The projective geometry for illustrating the backprojection of the polyhedral base edges onto the calibration plate.

A. The First Stage of the Vertex/Edge Correspondence Finding

We use the on-line calibration method described in Section II to find the six camera parameters relating the camera to the calibration plate. If a polyhedral base edge in an image is given, we can backproject each vertex of the polyhedral base edge in the image onto the plane of the calibration plate in the 3-D space. We then find the 3-D coordinates for the polyhedral base vertex, as given in (6)–(9). In this way, the 3-D polyhedral base vertices visible in each image can be found. Next, we compare any two 3-D polyhedral base vertices derived from the two images, one from each, to find if their 3-D coordinates are equal or nearly equal. If so, the existence of such a vertex pair indicates that the two associated image points in the stereo images are in correspondence. The detailed description of the first stage procedure is given below.

First, we only need to consider the polyhedral boundary edges in each image when we search for the possible polyhedral base vertices, since the internal edges of the polyhedral image obviously cannot be polyhedral edges that lie on the calibration plate. Next, for these polyhedral boundary edges, we can use (6)–(9) to find their backprojected 3-D coordinates on the calibration plate. Under our assumption about the polyhedron image, some of the found 3-D backprojected points are real polyhedral base vertices, while some are fake polyhedral base vertices that are backprojected through the vertices of the polyhedral occluding edges. To distinguish between the real and fake base vertices, we shall use the vertical projection point (VPP) of the camera lens center on the calibration plate (or its extended plane). To make this idea clearer, consider Fig. 3. Vertices b, c, and d happen to be, as shall be shown later, the image points associated with the polyhedral base vertices, while vertices e, f, g, h, and a are not associated with the polyhedral base vertices. The 3-D backprojected points B, C, and D are the real polyhedral 3-D base vertices. However, the 3-D backprojected points I, J, K, L, and M obtained from image points e, f, g, h, and a are not real 3-D polyhedral vertices. To decide which of these backprojected points are the real 3-D polyhedral base vertices, we connect the points B, C, D, I, J, K, L, and M to form a polygon, as shown in Fig. 3. Also, we project the lens center vertically onto the plane of the calibration plate to obtain the vertical projection point Og. The triangle formed by point Og and the backprojected edge BC does not overlap with the interior of the polygon BCDIJKLM. Neither does the triangle formed by point Og and the edge CD. This indicates that edges BC and CD are closer to the center of projection than the other backprojected edges. According to our assumption, they are the visible polyhedral base edges. On the other hand, the triangle constructed by point Og and the backprojected edge DI overlaps with the interior of the polygon BCDIJKLM, so edge DI is not a polyhedral base edge visible to the camera lens center. Therefore, the associated edge de will not be considered in the first stage of the correspondence finding process. Similarly, the other 2-D edges, including ef, fg, gh, ha, and ab, will not be considered either. The above polygon overlapping check done on the calibration plate can also be done on the image plane. In order to do so, let point O'g be the projection of point Og on the image plane. Then, for the previous polygon overlapping check, replace each backprojected edge on the calibration plate by its edge projection on the image plane and Og by O'g. We shall use the check done on the image plane in the implementation.

On the other hand, the method to compute the 3-D coordinates of the VPP is based on (1)–(3). The 3-D coordinates of the camera lens center in the camera coordinate system are (X_C, Y_C, Z_C) = (0, 0, 0). The 3-D coordinates of the VPP in the calibration plate coordinate system are denoted by (X_R, Y_R, 0), where

X_R = −dx cos φ + (dy cos θ + dz sin θ) sin φ   (10)

Y_R = −dx sin φ − (dy cos θ + dz sin θ) cos φ.   (11)
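To illustrate the check, the sketch below computes the VPP from (10) and (11) and classifies each backprojected boundary edge by the triangle overlap test. It is our illustration, not the authors' code; the shapely package is used purely as one convenient polygon intersection routine, not something the paper prescribes.

```python
import numpy as np
from shapely.geometry import Polygon  # assumed convenience dependency

def vertical_projection_point(theta, phi, dx, dy, dz):
    """Equations (10) and (11): the VPP of the lens center on the plate."""
    s = dy * np.cos(theta) + dz * np.sin(theta)
    return (-dx * np.cos(phi) + s * np.sin(phi),
            -dx * np.sin(phi) - s * np.cos(phi))

def visible_base_edges(poly_pts, vpp, eps=1e-9):
    """Keep a backprojected boundary edge only if the triangle it forms
    with the VPP does not overlap the interior of the backprojected
    polygon (e.g. the polygon BCDIJKLM of Fig. 3)."""
    poly = Polygon(poly_pts)
    visible = []
    n = len(poly_pts)
    for k in range(n):
        p, q = poly_pts[k], poly_pts[(k + 1) % n]
        tri = Polygon([vpp, p, q])
        # A shared boundary has zero area; a real overlap has positive area.
        if tri.intersection(poly).area < eps:
            visible.append((p, q))
    return visible
```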

B. The Second Stage of the Vertex/Edge Correspondence Finding

Once we have found a pair of corresponding polyhedral base edges in the first stage of the correspondence finding procedure, it is a simple matter to find the other vertex/edge correspondences in the two images. Since each edge of the first correspondence pair is a base edge in its own image, it is contained by a single (side) face. For instance, the edge bc is contained in the face specified by vertices a, b, and c in Fig. 3. Therefore, we can find two faces, one from each image, that are in correspondence. All the edges in these two corresponding faces can be matched by using their edge sequences arranged in the clockwise (or counterclockwise) order. The matching of two faces is considered successful if the following two conditions hold: i) the lengths of the two edge sequences are equal, and ii) the new pairs of matched edges do not conflict with any existing corresponding edge pairs. After a successful matching of two faces, mark the two faces "matched," record all the new matching pairs of edges, and put them at the end of the "matching edge pair" (MEP) queue. Fetch the first matching edge pair out of the MEP queue if the queue is nonempty. Next, check whether the two fetched edges belong to two new faces. If the two faces are new, do the above face matching for the two faces and record any new pairs of matched edges in the MEP queue; if the two faces are already matched, discard the fetched pair of matching edges. The above process continues until the MEP queue becomes empty, at which point the correspondence finding process is completed. A sketch of this queue-driven process is given below.
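The following Python sketch illustrates the MEP-queue propagation under assumed data structures: each face is stored as a cyclic tuple of edge ids, and face_of_edge1/face_of_edge2 are precomputed maps from an edge to the single face containing it. None of these names come from the paper.

```python
from collections import deque

def match_edges(seed_pair, face_of_edge1, face_of_edge2):
    """Propagate edge correspondences from one matched base-edge pair.

    seed_pair: a corresponding base-edge pair (e1, e2) from stage one.
    Returns a dict mapping view-1 edges to view-2 edges.
    """
    matches = {seed_pair[0]: seed_pair[1]}
    matched_faces = set()
    mep = deque([seed_pair])          # the "matching edge pair" queue
    while mep:
        e1, e2 = mep.popleft()
        f1, f2 = face_of_edge1[e1], face_of_edge2[e2]
        if (f1, f2) in matched_faces:
            continue                  # both faces already matched; discard
        if len(f1) != len(f2):
            continue                  # condition i): equal sequence lengths
        i1, i2 = f1.index(e1), f2.index(e2)
        new_pairs, ok = [], True
        for k in range(len(f1)):      # walk both edge cycles in step
            a = f1[(i1 + k) % len(f1)]
            b = f2[(i2 + k) % len(f2)]
            if matches.get(a, b) != b:
                ok = False            # condition ii): conflict detected
                break
            if a not in matches:
                new_pairs.append((a, b))
        if not ok:
            continue
        matched_faces.add((f1, f2))
        for a, b in new_pairs:
            matches[a] = b
            mep.append((a, b))
    return matches
```

A full implementation would also walk the second cycle in the reverse direction when the two views store face boundaries in opposite rotational orders, per the paper's clockwise/counterclockwise note.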

IV. 3-D POLYHEDRON RECONSTRUCTION FROM TWO GENERAL IMAGES

Fig. 4. The bilevel reference view of the calibration plate.

After each pair of corresponding vertices is found from the stereo images, we can find the 3-D coordinates of the corresponding polyhedral vertex as follows. First, from (1)–(3), the relationship between the camera #i (i = 1, 2) coordinate system and the reference calibration plate coordinate system can be derived as

(X_R, Y_R, Z_R, 1) = (X'_R, Y'_R, Z'_R, 1) R_Z(−φ_i)
  = (X'_{C_i}, Y'_{C_i}, Z'_{C_i}, 1) T(−dx_i, −dy_i, −dz_i) (F_Z)^{−1} R_X(−θ_i) R_Z(−φ_i)
  = (X_{C_i}, Y_{C_i}, Z_{C_i}, 1) R_Z(ψ_i) T(−dx_i, −dy_i, −dz_i) (F_Z)^{−1} R_X(−θ_i) R_Z(−φ_i).   (12)

So, the relationship between the camera #1 coordinate system and the camera #2 coordinate system is

(X_{C_2}, Y_{C_2}, Z_{C_2}, 1) = (X_{C_1}, Y_{C_1}, Z_{C_1}, 1) R_Z(ψ_1) T(−dx_1, −dy_1, −dz_1) (F_Z)^{−1} R_X(−θ_1) R_Z(φ_2 − φ_1) R_X(θ_2) F_Z T(dx_2, dy_2, dz_2) R_Z(−ψ_2)
  = (X_{C_1}, Y_{C_1}, Z_{C_1}, 1) M   (13)

where M is the resulting 4 × 4 transformation matrix.
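For concreteness, M can be composed numerically as in (13). The sketch below uses row-vector homogeneous coordinates and our assumed forms for R_Z, R_X, F_Z, and T; each parameter tuple (theta_i, psi_i, phi_i, dx_i, dy_i, dz_i) would come from the on-line calibration of view i.

```python
import numpy as np

def RZ(a):
    """4x4 rotation about Z for row-vector homogeneous coordinates."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0, 0], [-s, c, 0, 0],
                     [0, 0, 1, 0], [0, 0, 0, 1.0]])

def RX(a):
    """4x4 rotation about X for row-vector homogeneous coordinates."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, s, 0],
                     [0, -s, c, 0], [0, 0, 0, 1.0]])

def T(dx, dy, dz):
    """4x4 translation for row-vector homogeneous coordinates."""
    m = np.eye(4)
    m[3, :3] = [dx, dy, dz]
    return m

# Assumed Z reflection for the left-handed camera frame; it is its own inverse.
FZ = np.diag([1.0, 1.0, -1.0, 1.0])

def compose_M(p1, p2):
    """Equation (13); p_i = (theta_i, psi_i, phi_i, dx_i, dy_i, dz_i)."""
    th1, ps1, ph1, dx1, dy1, dz1 = p1
    th2, ps2, ph2, dx2, dy2, dz2 = p2
    return (RZ(ps1) @ T(-dx1, -dy1, -dz1) @ np.linalg.inv(FZ) @ RX(-th1)
            @ RZ(ph2 - ph1) @ RX(th2) @ FZ @ T(dx2, dy2, dz2) @ RZ(-ps2))
```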

Suppose we have obtained n corresponding pairs of image points in the two views. Let p^j_{C_1} and p^j_{C_2}, j = 1, 2, ..., n, denote the pairs of image points, and let their coordinates be (U^j_{C_1}, V^j_{C_1}, f) and (U^j_{C_2}, V^j_{C_2}, f). Also, p^j_{C_1} and p^j_{C_2} are the projections of the 3-D physical point P_j onto the two image planes, respectively. A closed form solution to the 3-D coordinates (X_{C_1}, Y_{C_1}, Z_{C_1}) of object point P_j will be derived in the camera #1 coordinate system as follows.

Let (X_{C_i}, Y_{C_i}, Z_{C_i}) be the 3-D coordinates of the 3-D point P_j in the camera #i coordinate system, i = 1, 2. From the perspective projection, we can obtain the relationships between (U_{C_i}, V_{C_i}, f) and (X_{C_i}, Y_{C_i}, Z_{C_i}) as

U_{C_i} = X_{C_i} f / Z_{C_i},   for i = 1, 2

V_{C_i} = Y_{C_i} f / Z_{C_i},   for i = 1, 2.   (14)

By representing (X_{C_2}, Y_{C_2}, Z_{C_2}) in the camera #1 frame based on (13), we can rewrite (14) as A x = b, where

A = [ 1                       0                       −U_{C_1}/f
      0                       1                       −V_{C_1}/f
      m_13 U_{C_2} − m_11 f   m_23 U_{C_2} − m_21 f   m_33 U_{C_2} − m_31 f
      m_13 V_{C_2} − m_12 f   m_23 V_{C_2} − m_22 f   m_33 V_{C_2} − m_32 f ]

x = (X_{C_1}, Y_{C_1}, Z_{C_1})^T, and b = (0, 0, m_41 f − m_43 U_{C_2}, m_42 f − m_43 V_{C_2})^T, where m_kl denotes the (k, l) entry of M.

Fig. 5. Three views of both the polyhedron and the calibration plate and their extraction results. (a) View 1. (b) View 2. (c) View 3. (d) The extracted image and the vertex numbering for view 1. (e) The extracted image and the vertex numbering for view 2. (f) The extracted image and the vertex numbering for view 3.

The least squares solution to x is given by

x = (A^T A)^{−1} A^T b.
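Given M and a matched point pair, the least squares triangulation takes only a few lines. This is again a sketch under the conventions assumed above, reusing compose_M from the previous sketch.

```python
import numpy as np

def triangulate(uv1, uv2, M, f):
    """Solve A x = b in the least squares sense for one vertex.

    uv1, uv2: image coordinates (U, V) of a corresponding point pair.
    M: the 4x4 matrix of equation (13), row-vector convention.
    Returns (X, Y, Z) in the camera #1 coordinate system.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    m = M  # m[k-1, l-1] corresponds to m_kl in the text
    A = np.array([
        [1.0, 0.0, -u1 / f],
        [0.0, 1.0, -v1 / f],
        [m[0, 2] * u2 - m[0, 0] * f, m[1, 2] * u2 - m[1, 0] * f,
         m[2, 2] * u2 - m[2, 0] * f],
        [m[0, 2] * v2 - m[0, 1] * f, m[1, 2] * v2 - m[1, 1] * f,
         m[2, 2] * v2 - m[2, 1] * f],
    ])
    b = np.array([0.0, 0.0,
                  m[3, 0] * f - m[3, 2] * u2,
                  m[3, 1] * f - m[3, 2] * v2])
    # x = (A^T A)^{-1} A^T b, computed via a numerically stabler solver.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

For example, triangulate((u1, v1), (u2, v2), compose_M(params1, params2), f) returns one reconstructed vertex in the camera #1 frame.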

V. EXPERIMENTAL RESULTS

A. On-Line Camera Calibration

We measured the intrinsic camera parameters of our CCD camera: the image center = (256, 240), the focal length = 25 mm, the x:y aspect ratio = 1:1.198, and 1 mm = 90 pixels. Next, we estimated the two sets of six extrinsic parameters relating the two camera positions to the calibration plate. The reference view of the binary calibration plate used for estimating the six camera parameters is shown in Fig. 4. Next, we placed the polyhedron on the calibration plate and moved the camera to three different positions to take three pictures of the polyhedron and the calibration plate. The three views are called view 1, view 2, and view 3, as shown in Fig. 5(a)–(c). Table I lists the estimated camera parameters for the cameras in the three setups with respect to the calibration plate coordinate system.

TABLE I. Six extrinsic camera parameters for the three views.

TABLE II. The base edge correspondences for the two-view combinations of view 1, view 2, and view 3.

B. Vertex/Edge Correspondences

In the experiment, we first determined the image coordinates of each visible polyhedral vertex in the three views separately and assigned a unique identification number to each vertex. The extracted images and the vertex numberings of views 1, 2, and 3 are shown in Fig. 5(d)–(f). Table II shows the first-stage correspondence result of the polyhedral base edges for the two-view combinations of view 1, view 2, and view 3.

TABLE III. The other edge correspondences for the two-view combinations of view 1, view 2, and view 3.

TABLE IV. Estimated edge error rates for the two-view combinations of view 1, view 2, and view 3.

Based on any corresponding edge pair from the first-stage correspondence result, we can find the other corresponding pairs of polyhedral edges in the two images. Table III lists the second-stage edge correspondences found from all two-view combinations.

C. Polyhedral Face Reconstruction

For the two-view combinations, we computed the 3-D vertex coordinates for each corresponding vertex pair using the least squares solution given previously. The estimation results of the reconstructed edges for the two-view combinations are shown in Table IV. Note that view 1 and view 3 produce a more accurate result. This is because the angle between the two associated optical axes is larger and closer to 90°. This fact indicates the advantage of our dynamic stereo camera setup as compared with the conventional stereo vision system, in which the cameras are fixed after the camera calibration.

VI. CONCLUSION

In this paper we have presented a new stereo vision method for reconstructing the 3-D information of a polyhedron. We place the polyhedron on a calibration plate and take two pictures of them such that the calibration plate boundary and some of the polyhedral base edges are visible. We show that the camera calibration can be done on-line. Also, we propose a two-stage 3-D edge correspondence finding process that avoids the ambiguity problem. Finally, we use the corresponding pairs of image points in the stereo views to find the 3-D coordinates of the polyhedral vertices. Since we can set up the two cameras such that the angle between the two optical axes is nearly 90°, the polyhedron thus reconstructed is very accurate, as reflected by the experimental results. Furthermore, it is possible to apply our method to the reconstruction of a smooth curved object if the object has a flat face to rest on the calibration plate and a proper surface marking is available. The surface marking can be obtained by casting a grid pattern on the curved surface using a structured light projector [4], or by drawing or pasting a grid pattern on the curved surface.

REFERENCES

[1] P. M. Will and K. S. Pennington, "Grid coding: A novel technique for image processing," Proc. IEEE, vol. 60, pp. 669–680, June 1972.

[2] Y. F. Wang, A. Mitiche, and J. K. Aggarwal, "Computation of surface orientation and structure of objects using grid coding," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp. 129–137, Jan. 1987.

[3] G. Hu and G. Stockman, "3-D surface solution using structured light and constraint propagation," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 390–402, Apr. 1989.

[4] Z. Chen, S. Y. Ho, and D. C. Tseng, "Polyhedral face reconstruction and modeling from a single image with structured light," IEEE Trans. Syst. Man Cybern., vol. 23, pp. 864–872, May/June 1993.

[5] U. R. Dhond and J. K. Aggarwal, "Structure from stereo: A review," IEEE Trans. Syst. Man Cybern., vol. 19, pp. 1489–1510, Nov./Dec. 1989.

[6] S. T. Barnard and M. A. Fischler, "Computational stereo," Computing Surveys, vol. 14, no. 4, pp. 553–572, Dec. 1982.

[7] N. H. Kim and A. C. Bovik, "A contour-based stereo matching algorithm using disparity continuity," Pattern Recog., vol. 21, no. 5, pp. 505–514, 1988.

[8] R. Horaud and T. Skordas, "Stereo correspondence through feature grouping and maximal cliques," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 1168–1180, Nov. 1989.

[9] X. W. Tu and B. Dubuisson, "3-D information derivation from a pair of binocular images," Pattern Recog., vol. 23, no. 3/4, pp. 223–235, 1990.

[10] M. T. Boraie and M. A. Sid-Ahmed, "Points of correspondence in stereo images with no specific geometrical constraints using mathematical morphology," Computers in Industry, vol. 20, pp. 295–310, 1992.

[11] M. Okutomi and T. Kanade, "A multiple-baseline stereo," IEEE Trans. Pattern Anal. Machine Intell., vol. 15, pp. 353–363, Apr. 1993.

[12] Z. Chen, C. M. Wang, and S. Y. Ho, "An efficient search approach to camera parameter estimation using an arbitrary planar calibration object," Pattern Recog., vol. 26, no. 5, 1993.

[13] Z. Chen and C. M. Wang, "A two-parameter generate-and-test method for camera parameter estimation with any planar calibration object," Int.
