
Polyhedral Face Reconstruction and Modeling from a Single Image with Structured Light

Zen Chen, Shinn-Ying Ho, and Din-Chang Tseng

Abstract - The determination of the 3-D geometric model of visible polyhedral faces from a single view is addressed. The method applies a grid coding technique to derive the normal vector of each visible polyhedral face and the depth parameter of the face equation based on the given dimensions of the grid on the code plate. It is shown that the method is correspondenceless. Furthermore, it also determines the 3-D face model for all visible polyhedral faces, including the occlusion relations between faces. Two edge reconstruction methods are given and their goodness is compared. For the determination of the final object model, three different integration methods are given. Some experiments are reported to illustrate the method.

I. INTRODUCTION

The determination of the 3-D object structure from a single image or multiple images is of fundamental importance in computer vision. The 3-D structure information is useful for object recognition, part inspection, robot guidance tasks, etc. There are four major approaches to acquiring the 3-D information [1], [2]:

1) passive monocular approaches,
2) passive binocular approaches,
3) active monocular approaches, and
4) active binocular approaches.

Generally speaking, in the first two approaches the object features (e.g., points, lines, or contours) used for the 3-D object structure recovery are embedded in a rather complicated scene and are not easy to extract; quite often, additional information about the objects is required. For instance, an object reflectance model or an object surface with a regular pattern is assumed in some passive monocular methods, and the correspondence information between object features and image features is needed in most of the passive binocular methods. This additional information is either not available or hard to obtain [1], [4]-[9]. In the active monocular methods using the "time-of-flight" technique, the depth determination is generally not very accurate if the returned beam is weak or noisy. The active binocular methods using triangulation to compute the depth information also suffer from possible depth inaccuracy [3]. Besides, the triangulation technique determines the range data on a point-by-point basis, so it is a slow process; the object must remain still during the whole process.

Manuscript received February 16, 1991; revised November 22, 1991 and July 29, 1992.

Z. Chen and S.-Y. Ho are with the Department of Computer Science and Information Engineering, National Chiao-Tung University, Hsinchu, Taiwan, R.O.C.

D.-C. Tseng is with the Department of Electronic Engineering, National Central University, Chungli, Taiwan, R.O.C.

IEEE Log Number 9206195.

To reduce the processing time, some active sensing methods apply the grid coding technique [4]-[16]. In these methods a set of parallel stripes or a crossing grid pattern is projected onto the object to produce a spatially encoded image for analysis. Again, the correspondence problem needs to be solved in most of these methods, using techniques such as spatial labeling [4], relaxation [5], and color encoding [6], etc. However, under the assumption of a parallel projection model, Aggarwal and his co-workers [7]-[9] inferred the object surface orientation and its relative depth without using correspondences. Nevertheless, the absolute depth cannot be determined by their methods.

On the other hand, since only the bright (or dark) stripes on the object surface are perceived by the above active sensing methods, the complete object boundary (or contour) is not obtainable and, therefore, the object boundary cannot be precisely determined; jump edges or false edges between the object faces are judged subjectively by comparing the relative depth change with a specified threshold value [17]. Wang and Aggarwal [9] proposed an integration of active and passive sensing techniques in order to extract the complete object boundary. Shirai and his co-workers [18] used a vertical slit projector to generate a set of parallel stripes and were able to detect occlusion at the endpoints of the stripes. However, a single set of parallel stripes will miss the edges parallel to the stripes. Therefore, the occlusion detection problem has not been effectively solved. This defect results in an inability to obtain the face adjacency relationship, which, in turn, makes the object recognition or object registration problem more difficult to tackle.

A new active sensing method, also using the grid coding technique, is proposed here; it aims to remove most of the difficulties mentioned previously. In particular:

1) It can determine the absolute depth and the normal vectors of all visible polyhedral faces from a single image, assuming the faces are not too small.

2) It does not require or need to solve the correspondences between the object features (e.g., points or lines) and image features.

3) It can determine the 3-D structure of the visible polyhedral faces in the scene, including a) the face equations, b) the angles between adjacent face pairs, c) the edge lengths, d) the angles between edge pairs, e) a unique set of visible polyhedral vertices, and f) the occlusion relations between faces.

In addition, in order to develop the multiple face equations for the object model, we need to determine on line which set of parallel laser planes produces the grid lines on each visible polyhedral face.

Furthermore, a sensitivity analysis of the face equation parameters under a small variation in the extracted projected grid lines is given to indicate how to design a robust estimation method. Also, several methods for determining polyhedral edges and vertices are given and their goodness measures are compared. The sensitivity analysis provides the guideline for better system performance.

In Section II, the plane equations of all visible faces are individually estimated from the associated projected grid lines based on the grid coding technique. In Section III, the sensitivity analysis of the face equation parameters is given to indicate how to design a robust estimation method. In Section IV, two methods are given for the polyhedral edge estimation and their goodness measures are compared. In Section V, three different methods for determining a unique set of visible polyhedral vertices are given and compared. In Section VI, experimental results are reported to illustrate the performance of our method. Section VII is the conclusion.

II. FACE EQUATION DETERMINATION BY GRID CODING

An active sensing system using a grid coding technique is now used to derive the plane equations of all visible polyhedral faces contained in a scene. The basic operating principle of this system was first described in [19]; a brief description is given here. The system contains a laser projector which produces an expanded beam of parallel laser light, like a collimator. The parallel laser beam passes perpendicularly through a code plate with a grid pattern of two sets of parallel lines perpendicular to each other. The spacings of the two sets of parallel grid lines are given as $s_1$ and $s_2$. Thus two sets of parallel light planes are produced that travel in space. When they run across a polyhedral surface, a grid pattern is seen on the surface. The pattern is the result of the parallel projection of the original grid pattern on the code plate. However, the dimensions of the grid on a tilted polyhedral face will differ from those on the code plate due to the foreshortening effect; we shall infer the grid dimensions later. The normal vectors of the two perpendicular sets of laser planes, with respect to the camera coordinate system, are obtained via a camera calibration method [19]. Assume they have been determined as $\vec{N}_1$ and $\vec{N}_2$. In [19] only a single polyhedral face was reconstructed, and its normal vector was determined using the vanishing point technique, which is rather sensitive to errors in the extracted projected grid lines. Here we develop a new method that uses the geometric relations to determine the normal vector. This technique is more reliable, as indicated by a formal sensitivity analysis. More importantly, it is more general in the sense that it is applicable even when the face is only approximately planar. Moreover, we determine all visible polyhedral faces for face modeling, together with a face occlusion analysis. In order to find multiple faces, we need to determine on line the identifications of the two sets of parallel laser planes on all visible faces.

A. The Face Normal Vector Determination

Let a backprojection plane be defined by the lens center and a projected grid line, as shown in Fig. 1(a). Let the projected grid line be indicated by $l_i$ in the image and the corresponding grid line on the polyhedral face by $L_i$. Also assume that the normal vectors of the backprojection plane and the laser plane are given by $\vec{N}_{pi}$ and $\vec{N}_i$, respectively. Now we shall use two crossing projected grid lines, produced by the two perpendicular laser planes, to determine the normal vector of the polyhedral face which the laser planes run across.

From projective geometry, the unit direction vector of the polyhedral grid line $L_i$, $i \in \{1, 2\}$, denoted $\vec{U}_{L_i}$, is given by

$$\vec{U}_{L_i} = \frac{\vec{N}_i \times \vec{N}_{pi}}{|\vec{N}_i \times \vec{N}_{pi}|}, \qquad i = 1, 2. \qquad (1)$$

Then the unit normal vector of the polyhedral face is expressed by

$$\vec{N}_0 = \pm \frac{\vec{U}_{L_1} \times \vec{U}_{L_2}}{|\vec{U}_{L_1} \times \vec{U}_{L_2}|}. \qquad (2)$$

Here $\vec{N}_0$ is selected to point outward, with a negative $Z$ component, since the face is supposed to be visible or partially visible. The dimensions of a grid cell on a tilted polyhedral face differ from the dimensions of a grid cell on the code plate due to the foreshortening effect of the parallel projection transformation. It can be readily shown that the unit grid length on the polyhedral face, measured along $\vec{U}_{L_i}$ between consecutive grid lines of the other set, is equal to $s_{3-i}/|\vec{N}_{3-i} \cdot \vec{U}_{L_i}|$, $i = 1, 2$, where $s_1$ and $s_2$ are the dimensions of one grid cell on the code plate.

Fig. 1. The geometry of the vision system.
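The two cross products above translate directly into code. The following is a minimal numpy sketch of (1) and (2), assuming the calibrated laser plane normals and the extracted backprojection plane normals are available as 3-vectors in the camera frame; the function and variable names are ours, not the authors'.

import numpy as np

def unit(v):
    """Normalize a 3-vector."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def grid_line_direction(n_laser, n_backproj):
    """Eq. (1): the 3-D grid line lies in both the laser plane and the
    backprojection plane, so its direction is parallel to the cross
    product of the two plane normals."""
    return unit(np.cross(n_laser, n_backproj))

def face_normal(u_l1, u_l2):
    """Eq. (2): the face normal is perpendicular to both grid line
    directions on the face; the sign is chosen so that the Z component
    is negative, since a visible face must point toward the camera."""
    n0 = unit(np.cross(u_l1, u_l2))
    return -n0 if n0[2] > 0 else n0

Among the two antipodal unit normals produced by (2), only the one with a negative Z component can belong to a visible face, which is what the sign flip enforces.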

B. The Depth Parameter Determination

After the determination of the face normal vector, the next step is to determine the distance from the polyhedral face to the origin (i.e., the lens center), namely, the value of $D$ in the face equation

$$aX + bY + cZ = D$$

where the parameter vector $(a, b, c)$ represents the unit normal vector of the plane and $(X, Y, Z)$ is a 3-D point on the polyhedral face. To determine the parameter $D$, two suitable points on a projected grid line with a spacing of, say, $k$ grid units are selected. Denote these two points as $p = (x, y)$ and $q = (x', y')$ in Fig. 1(b), and let the corresponding points on the polyhedral face be $P = (X, Y, Z)$ and $Q = (X', Y', Z')$. Let the vector from $P$ to $Q$, whose direction is determined by (1), be denoted $(U_i, V_i, W_i)$, $i \in \{1, 2\}$; its length, equal to $k$ unit grid lengths, is given by

$$|PQ| = k\, s_{3-i}/|\vec{N}_{3-i} \cdot \vec{U}_{L_i}|. \qquad (3)$$

Let

$$x' = fX'/Z', \qquad y' = fY'/Z'$$

where $f$ is the focal length of the lens. Furthermore, $P$ and $Q$ are related by one of the following two relations:

1) $Q = P + (U_i, V_i, W_i)$
2) $Q = P - (U_i, V_i, W_i)$.

It can be shown that

$$X = xZ/f, \qquad Y = yZ/f$$

and similarly for the primed quantities. Substituting these into the two relations above and eliminating the unknowns $X$, $Y$, $X'$, and $Y'$ yields, for the first relation,

$$Z = \frac{fU_i - W_i x'}{x' - x} \ \ (x' \neq x) \qquad \text{or} \qquad Z = \frac{fV_i - W_i y'}{y' - y} \ \ (y' \neq y) \qquad (5)$$

and the same expressions with $(U_i, V_i, W_i)$ negated for the second relation. The correct solution for $P$ is the one with a nonnegative value of $Z$, because the face is visible. From (5) we obtain the different values of $Z$ for the different image points $p = (x, y)$ and $q = (x', y')$. Furthermore, the depth parameter $D$ can be calculated as

$$D = aX + bY + cZ = (ax/f + by/f + c)Z.$$

In this determination of the polyhedral face equation, no actual correspondences between the polyhedral grid points (or lines) and the projected grid points (or lines) in the image are required. Instead, in (1), (2), and (5) we only use the common normal vectors $\vec{N}_1$ and $\vec{N}_2$ of the two sets of parallel laser planes and the normal vectors $\vec{N}_{p1}$ and $\vec{N}_{p2}$ of the backprojection planes, each defined directly by a selected projected grid line $\overline{pq}$ and the lens center. Furthermore, we need to know which of the two sets of laser planes produces the projected grid line $\overline{pq}$. This is determined as follows (please refer to Fig. 1). On the projected grid line $\overline{pq}$ there are $k - 1$ intersecting grid lines, which are produced by $k - 1$ parallel laser sheets with the same normal vector $\vec{N}_{3-i}$, $i = 1$ or 2. Based on the vanishing point property [21], let the $k - 1$ intersecting grid lines be extended to yield a vanishing point, say $V = (x_v, y_v)$; then the vector $(x_v, y_v, f)$ is perpendicular to $\vec{N}_{3-i}$. In this way we know on line that the set of parallel laser planes with the normal vector $\vec{N}_i$ produces grid line $\overline{pq}$, where $\vec{N}_{3-i} \cdot (x_v, y_v, f) \approx 0$. There are other methods to determine the identification of the projected grid lines, such as the one in [12]; however, our method is different from them and simpler. Thus, our reconstruction method is a correspondenceless method.
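The depth recovery of (5) and the vanishing point test can be summarized in a short sketch. All names are illustrative: pq_vec is assumed to be the 3-D vector (U, V, W) from P to Q obtained from (1) and (3), and n0 = (a, b, c) is the unit face normal from (2).

import numpy as np

def depth_and_D(p, q, pq_vec, f, n0):
    """Recover Z at image point p and the depth parameter D in
    aX + bY + cZ = D, following eq. (5).  p and q are the projected
    grid points (x, y) and (x', y')."""
    (x, y), (xp, yp) = p, q
    a, b, c = n0
    for sign in (1.0, -1.0):            # the two cases Q = P +/- (U, V, W)
        U, V, W = sign * np.asarray(pq_vec, dtype=float)
        if abs(xp - x) >= abs(yp - y):  # pick the better-conditioned form of (5)
            Z = (f * U - W * xp) / (xp - x)
        else:
            Z = (f * V - W * yp) / (yp - y)
        if Z >= 0:                      # the visible face must have Z >= 0
            return Z, (a * x / f + b * y / f + c) * Z
    raise ValueError("no nonnegative-depth solution; check the inputs")

def laser_set_of_line(v, f, N1, N2):
    """Vanishing point test: the grid lines crossing pq vanish at
    v = (xv, yv), and (xv, yv, f) is perpendicular to the normal of
    the *other* laser set; this identifies the set producing pq."""
    d = np.array([v[0], v[1], f], dtype=float)
    d = d / np.linalg.norm(d)
    # crossing lines perpendicular to N2 means pq comes from set 1
    return 1 if abs(d @ np.asarray(N2)) < abs(d @ np.asarray(N1)) else 2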

III. SENSITIVITY ANALYSIS OF FACE PARAMETER ESTIMATION

The estimation of the polyhedral face equation $aX + bY + cZ = D$ involves two components: the normal vector $\vec{N}_0 = (a, b, c)$ and the depth parameter $D$. We shall consider the sensitivity of these parameters under small perturbations of the extracted projected grid lines due to minor image processing or system errors.

A. The Sensitivity of the Normal Vector (a,b,c) Estimation

From (1) and (2), the estimation accuracy of the normal vector $(a, b, c)$ depends on the two projected grid line directions $\vec{U}_{L_1}$ and $\vec{U}_{L_2}$, which, in turn, are determined by the normal vectors $\vec{N}_1$ and $\vec{N}_2$ of the two laser planes and the normal vectors $\vec{N}_{p1}$ and $\vec{N}_{p2}$ of the two backprojection planes. Since $\vec{N}_1$ and $\vec{N}_2$ are computed in the camera calibration process, which can be carefully calibrated once and for all, the accuracy of the determination of $\vec{N}_{p1}$ and $\vec{N}_{p2}$ is the main concern here.

In the method, two projected grid lines, $l_1$ and $l_2$, are used to determine the vectors $\vec{N}_{p1}$ and $\vec{N}_{p2}$. Their locations and directions are obtained by an image processing method based on a rotating line mask. To examine the error model of either of these two lines, assume the line has length $l$ and slope angle $\alpha$, with its two endpoints at $(x, y)$ and $(x', y')$. Assume the line has small perturbations $\Delta x$ and $\Delta y$ in the horizontal and vertical directions. The perturbations are mainly due to the thickness of the projected grid lines, i.e., the line width.

Fig. 2. The two methods for polyhedral edge length estimation. (a) The overall view of the configuration. (b) The auxiliary view of the configuration for the backprojection method. (c) The auxiliary view of the configuration for the plane-intersection method.

Let the line after perturbation have a new slope angle $\beta$, namely,

$$\tan\beta = \frac{l\sin\alpha + \Delta y}{l\cos\alpha + \Delta x}.$$

It can be readily shown that $\beta$ is nearly equal to $\alpha$ under the following mild condition: $\max\{|l\sin\alpha|, |l\cos\alpha|\} \gg \max\{|\Delta x|, |\Delta y|\}$. Since $\max\{|l\sin\alpha|, |l\cos\alpha|\} \geq l/\sqrt{2}$, the condition can also be given by $l/\sqrt{2} \gg \max\{|\Delta x|, |\Delta y|\}$. In a typical case, $l \geq 100$, $|\Delta x| \leq 2$, and $|\Delta y| \leq 2$. Thus, as far as the accuracy of the line slope is concerned, choose either the longest or a sufficiently long projected grid line whose length is much greater than its width. Under this condition the normal vector of the polyhedral face equation can be estimated rather accurately.
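A quick numeric check of this condition, with made-up but typical values (a 100-pixel grid line at a 30 degree slope, one endpoint perturbed by about one line width):

import math

l, alpha = 100.0, math.radians(30.0)   # line length (pixels) and slope angle
dx, dy = 2.0, 2.0                      # endpoint perturbation
beta = math.atan2(l * math.sin(alpha) + dy, l * math.cos(alpha) + dx)
print(math.degrees(beta) - 30.0)       # about 0.41 deg: the slope barely moves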


Fig. 3. The extraction of projected grid lines on a face from a polyhedral scene. (a) The scene. (b) The segmentation result by thresholding. (c) Some extracted projected grid lines. (d) The obtained projected edges of the face.

B. The Sensitivity of the Depth Parameter D Estimation

In our method two projected grid points on a sufficiently long projected grid line are used to infer the actual depth of the polyhedral face. From the foregoing discussion, a small endpoint perturbation of the projected grid line does not affect the line slope if the line length is sufficiently larger than its width. Therefore, the small perturbation is represented by a change in the line length; namely, it only causes the endpoints to drift along the line direction. To simplify the analysis, consider the perturbations of the endpoints one at a time. Let the projected grid point $(x, y)$ be the one unchanged and the other grid point $(x', y')$ drift along the line, so that the line segment has a resultant length change $dl$ along the line direction. From (5), since $x' = x + l\cos\alpha$ and $y' = y + l\sin\alpha$, we have

$$Z = \frac{fU - Wx'}{x' - x} = \frac{fU - Wx - Wl\cos\alpha}{l\cos\alpha}, \quad \text{if } x' \neq x$$

or

$$Z = \frac{fV - Wy'}{y' - y} = \frac{fV - Wy - Wl\sin\alpha}{l\sin\alpha}, \quad \text{if } y' \neq y.$$

Under the perturbation of grid point $(x', y')$, let $x'' = x + (l + dl)\cos\alpha$ and $y'' = y + (l + dl)\sin\alpha$; the new value of $Z$ becomes

$$Z + dZ = \frac{fU - Wx - W(l + dl)\cos\alpha}{(l + dl)\cos\alpha}, \quad \text{if } x'' \neq x$$

or

$$Z + dZ = \frac{fV - Wy - W(l + dl)\sin\alpha}{(l + dl)\sin\alpha}, \quad \text{if } y'' \neq y.$$

Therefore, the depth parameter $D = (ax/f + by/f + c)Z$ changes from its old value to a new value in the same proportion as $Z$. It can be readily shown that

$$\frac{dD}{D} = -\left(\frac{dl}{l + dl}\right)\frac{fU - Wx}{fU - Wx'}, \quad \text{if } x' \neq x \qquad (6a)$$

or

$$\frac{dD}{D} = -\left(\frac{dl}{l + dl}\right)\frac{fV - Wy}{fV - Wy'}, \quad \text{if } y' \neq y. \qquad (6b)$$

Furthermore, assume the two grid points $(X, Y, Z)$ and $(X', Y', Z')$ are on a distant polyhedral face, i.e., $|X/Z| \ll 1$ and $|X'/Z'| \ll 1$; then we can further approximate (6) as follows.

Case 1: if $W \neq 0$, then $fU - Wx \approx fU$ and $fU - Wx' \approx fU$, so (6) can be approximated by

$$\frac{dD}{D} \approx -\frac{dl}{l + dl} \approx -\frac{dl}{l}. \qquad (7)$$

Case 2: if $W = 0$, it can be shown by the trigonometric relations that (7) holds exactly.

From (7), the sensitivity $dD/D$ of the depth parameter is almost equal to $dl/l$ in magnitude. Thus a robust estimation of the depth parameter $D$ can be made if a long projected grid line is used.
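A small numeric check of (6a) against the dl/l approximation, with made-up values chosen so that the distant-face assumption holds (W small relative to fU):

import math

f, alpha = 25.0, math.radians(20.0)    # focal length and line slope
U, W = 40.0, 0.2                       # components of the 3-D vector from P to Q
x, l, dl = 3.0, 120.0, 2.0             # fixed endpoint, line length, length change
xp = x + l * math.cos(alpha)           # the drifting endpoint before perturbation
exact = -(dl / (l + dl)) * (f * U - W * x) / (f * U - W * xp)
print(exact, -dl / l)                  # about -0.0168 vs -0.0167: dD/D ~ -dl/l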

IV. POLYHEDRAL EDGE DETERMINATION

A. Two Edge Construction Methods

After the polyhedral face equation is determined, we need to find the face boundary in terms of its edges. There are two possible methods to determine the face edges.

1) The Plane-Intersection Method: If the two faces on both sides of an edge are simultaneously visible in the scene, then the edge can be found as the intersection of these two faces. This edge construction method is called the plane-intersection method. When the two faces along an edge are not both visible in the scene, this method does not work; in that case, we use the backprojection method to construct the edge.

2) The Backprojection Method: The real 3-D edge of the face is found as the intersection of a backprojection plane and the face itself. The backprojection plane is constructed from the lens center and an extracted boundary edge of the projected face in the image.
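In either case the edge is the intersection of two planes: two face planes in the plane-intersection method, or a face plane and a backprojection plane (which passes through the lens center, i.e., the origin, and hence has depth parameter zero) in the backprojection method. A minimal numpy sketch of this shared computation follows; the name and error handling are ours, not the paper's.

import numpy as np

def plane_plane_edge(n1, d1, n2, d2):
    """Return (point, unit direction) of the line where the planes
    n1 . X = d1 and n2 . X = d2 meet.  For the backprojection method,
    pass the backprojection plane as (n2, 0).  The resulting infinite
    line must still be clipped to the face boundary, as described in
    the text."""
    n1, n2 = np.asarray(n1, dtype=float), np.asarray(n2, dtype=float)
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        raise ValueError("parallel planes: no unique edge")
    # point on the line: solve n1.X = d1, n2.X = d2, direction.X = 0
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return point, direction / norm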

To find the boundary edges of the projected face, the face equation and two of its grid points are first determined from the image as before. To detect the rest of the projected grid lines, we can interpolate and extrapolate to obtain all hypothetical polyhedral grid lines based on the two grid points used in the face equation determination. Then we apply the known perspective transformation to these generated polyhedral grid lines to obtain the would-be perceived grid lines. These would-be perceived grid lines are then superimposed onto the real perceived grid lines in the image. In this way we can trace along each would-be perceived grid line to see whether the real perceived grid line coincides with it. The point where the perceived grid line breaks away is one endpoint of the real perceived grid line. A line fitted to these endpoints on the same face will produce the wanted projected edges of the face in the image. Notice that if the wanted projected edge is sufficiently long, the line fitting to the endpoints always produces a rather accurate result.

B. The Influence of the Depth Parameter Variation on the Edge Length Estimation

In theory, both the plane-intersection method and the backprojection method should produce the same polyhedral edges. But, due to small variations of the depth parameter, the two methods do not produce the same result, as described below.

1) The Influence in the Backprojection Method: In Fig. 2(a), let the backprojection plane be defined by the lens center $o$ and two rays, where points $a$ and $b$ are on polyhedral face $F_0$ and points $c$ and $d$ are on another face $F_0'$ that is obtained by translating face $F_0$ by a quantity $\Delta D$ of the depth parameter. The quantity $\Delta D$ is denoted in Fig. 2(b) by the distance between points $p_1$ and $p_1'$. It can be shown, by similar triangles, that the edge reconstructed on the translated face is a scaled copy of the edge $ab$, so that the relative change in the reconstructed edge length is on the order of $\Delta D/D$. (8)

Fig. 4. The reference labels of the geometric entities of the polyhedral objects used in (a) Experiment 1 and (b) Experiment 2.

TABLE I
THE ESTIMATION RESULT OF ALL VISIBLE POLYHEDRAL FACES IN EXPERIMENT 1

A. The Face Equation Estimation

Face  Estimated Face Equation
F1    0.189X + 0.860Y - 0.474Z = -243.0
F2    -0.661X - 0.281Y - 0.696Z = -368.1
F3    0.751X - 0.422Y - 0.508Z = -272.2
F4    0.755X - 0.417Y - 0.507Z = -298.4

B. The Estimation of Angles Between Adjacent Face Pairs

Face Pair  True Value (deg.)  Measured Value (deg.)
(F1, F3)   90.0               91.4
(F2, F3)   90.0               91.1
(F2, F4)   90.0               91.4
Average Error (deg.): 1.50


TABLE II
THE ESTIMATION RESULT OF POLYHEDRAL EDGE LENGTHS IN EXPERIMENT 1

Edge  Type of         True Length  Method 1 (Error%)  Method 2 (Error%)  Method 3 (Error%)
      Projected Edge  (mm)
e1    occluding       70.0         71.6 (2.28%)       69.8 (0.28%)       73.3 (4.71%)
e2    internal        70.0         64.5 (7.85%)       67.7 (3.28%)       58.9 (15.86%)
e3    internal        70.0         72.2 (3.14%)       69.4 (0.86%)       72.8 (4.00%)
e4    occluding       70.0         66.3 (5.28%)       70.3 (0.43%)       62.4 (10.86%)
e5    occluding       110.0        110.7 (0.64%)      113.2 (2.91%)      107.7 (2.09%)
e6    occluding       40.0         36.1 (9.75%)       39.9 (0.25%)       33.4 (16.50%)
e7    internal        40.0         41.3 (3.25%)       41.2 (3.00%)       45.6 (14.00%)
e8    occluding       42.4         40.4 (4.72%)       43.5 (2.59%)       34.3 (19.10%)
e9    internal        40.0         41.8 (4.50%)       40.2 (0.50%)       54.0 (35.00%)
e10   occluding       40.0         46.9 (17.25%)      39.6 (1.00%)       52.2 (30.50%)
e11   occluding       70.0         75.3 (7.57%)       68.7 (1.86%)       80.2 (14.57%)
e12   occluding       70.0         71.9 (2.71%)       67.9 (3.00%)       74.9 (7.00%)
Average Length Error               5.75%              1.66%              14.52%

TABLE III
THE ESTIMATION RESULT OF ANGLES BETWEEN POLYHEDRAL EDGE PAIRS IN EXPERIMENT 1

Angle  True Value (deg.)  Method 1  Method 2  Method 3
a1     90.0               89.9      91.7      89.4
a2     90.0               90.5      88.0      88.0
a3     90.0               88.0      89.8      91.6
a4     90.0               91.5      90.4      91.1
a5     90.0               96.3      84.8      110.5
a6     90.0               83.5      94.0      69.9
a7     225.0              224.3     223.6     231.9
a8     135.0              136.7     136.4     127.9
a9     90.0               91.6      87.7      88.6
a10    90.0               87.6      93.3      91.2
a11    90.0               98.0      89.3      100.2
a12    90.0               85.1      89.6      87.9
a13    90.0               98.5      89.9      90.2
a14    90.0               78.4      91.3      81.6
a15    90.0               95.8      90.3      90.9
a16    90.0               84.7      90.2      90.1
Average Error (deg.)      4.21      1.56      5.28

2) The Influence in the Plane-Intersection Method: In Fig. 2(a), assume plane $F_0$ translates to plane $F_0'$ by $\Delta D$ as before, while the other planes $F_1$, $F_2$, and $F_3$ stand still. By the plane-intersection method, the polyhedral edge $ab$ on plane $F_0$ will become the new edge $a'b'$ after the translation. This portion of the figure is redrawn as Fig. 2(c), in which point $v$ is the intersection of the edges formed with faces $F_2$ and $F_3$ on face $F_1$, and lines $vp_2$ and $vp_2'$ are the perpendiculars from point $v$ to faces $F_0$ and $F_0'$. It can be shown that $|p_2 p_2'| = \Delta D$. Let the length of $vp_2$ be denoted by $L$; then the relative change in the edge produced by the translation is on the order of $\Delta D/L$. (9)

From (8) and (9), the polyhedral edge changes due to a translation $\Delta D$ in the previous two methods are related by the ratio $D : L$. In general, $D \gg L$, which implies that the backprojection method is better than the plane-intersection method. However, when polyhedral faces $F_2$ and $F_3$ are nearly parallel, the ratio $D : L$ becomes less than one; therefore, the plane-intersection method may become better. Notice that if faces $F_2$ and $F_3$ are exactly parallel, then only one of them is visible. In this case the plane-intersection method is not applicable, and the polyhedral edge can only be determined by the backprojection method.

TABLE IV
THE ESTIMATION RESULT OF ALL VISIBLE POLYHEDRAL FACES IN EXPERIMENT 2

A. The Face Equation Estimation

Face  Estimated Face Equation
F1    0.200X + 0.364Y - 0.910Z = -526.3
F2    -0.690X - 0.168Y - 0.704Z = -392.6
F3    0.734X - 0.378Y - 0.564Z = -312.9
F4    0.165X + 0.882Y - 0.441Z = -238.5

B. The Estimation of Angles Between Adjacent Face Pairs

Face Pair  True Value (deg.)  Measured Value (deg.)
(F1, F2)   116.2              116.2
(F1, F3)   123.5              121.5
(F1, F4)   135.0              139.0
(F2, F3)   90.0               92.6
(F2, F4)   90.0               92.8
(F3, F4)   90.0               92.1
Average Error (deg.): 2.25

V. THE PARTIAL GEOMETRIC MODELING OF THE POLYHEDRON

A. Face Adjacency Relation

The face adjacency relation is an important description of the geometric model of the polyhedron. This relation can be determined by an occlusion check conducted on the projected grid lines as follows (a sketch of the decision rule is given after the list). For each projected edge of a face found by an image processing technique, check whether the projected edge is completely adjacent to the background area of the polyhedral scene.

If yes, then the projected edge is an occluding edge and the face has no adjacent face along the edge.

If no, then find any two touching or almost touching projected grid lines on the two sides of the projected edge and check that they are produced by the same set of laser sheets. Next, identify the two laser planes that produce these two projected grid lines and check whether these two planes are coplanar:

a) if they are coplanar, then the projected edge is an internal edge of the polyhedron in the scene;


TABLE V
THE ESTIMATION RESULT OF POLYHEDRAL EDGE LENGTHS IN EXPERIMENT 2

Edge  Type of         True Length  Method 1 (Error%)  Method 2 (Error%)  Method 3 (Error%)
      Projected Edge  (mm)
e1    internal        81.2         82.3 (1.35%)       82.7 (1.85%)       61.3 (24.51%)
e2    internal        102.4        104.0 (1.56%)      103.3 (0.88%)      74.4 (27.34%)
e3    internal        94.3         96.2 (2.01%)       97.0 (2.86%)       69.9 (25.87%)
e4    occluding       70.0         72.9 (4.14%)       70.3 (0.43%)       76.2 (8.86%)
e5    internal        20.0         25.0 (25.00%)      20.9 (4.50%)       41.5 (107.50%)
e6    internal        20.0         24.5 (22.50%)      20.4 (2.00%)       37.6 (88.00%)
e7    occluding       84.0         89.7 (6.78%)       89.1 (6.07%)       89.9 (7.02%)
e8    occluding       100.0        103.8 (3.80%)      102.9 (2.90%)      103.5 (3.50%)
e9    occluding       70.0         72.5 (3.57%)       69.9 (0.14%)       74.7 (6.71%)
e10   internal        20.0         24.7 (23.50%)      19.6 (2.00%)       44.2 (121.00%)
e11   occluding       100.0        95.5 (4.50%)       98.3 (1.70%)       90.5 (9.50%)
e12   occluding       84.0         81.5 (2.97%)       83.7 (0.36%)       78.8 (6.19%)
Average Length Error               8.47%              2.14%              36.33%

b) if they are not coplanar, find the $Z$ coordinate values of the two 3-D points corresponding to the intersection points of the projected edge with the two projected grid lines. The face containing the 3-D point with the smaller $Z$ coordinate value occludes the other face. The 2-D projected edge associated with the visible face is an occluding edge, and this same edge is a false edge of the occluded face.
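The decision rule above can be packaged as follows. The inputs stand in for the paper's image processing steps (the background adjacency test, the laser plane identification, and the Z values from (5)), so this is a schematic of the logic only, not the authors' implementation.

def classify_projected_edge(borders_background, same_laser_plane, z1, z2):
    """Classify a projected edge per the occlusion check of Sec. V-A.
    borders_background: the edge is completely adjacent to the background;
    same_laser_plane: the touching grid lines on the two sides come from
    coplanar laser planes; z1, z2: Z coordinates of the 3-D points at the
    edge/grid-line intersections on the two sides."""
    if borders_background:
        return "occluding edge; no adjacent face along it"
    if same_laser_plane:
        return "internal edge"
    # The nearer 3-D point (smaller Z) belongs to the occluding face;
    # the same 2-D edge is a false edge of the occluded face.
    return "side 1 occludes side 2" if z1 < z2 else "side 2 occludes side 1"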

B. The Computation of a Unique Set of Polyhedral Vertices

The 3-D edges of each visible polyhedral face are computed by the backprojection method or the plane-intersection method, and the 3-D vertices are the intersections of the edges. Notice that there may be redundant versions of a 3-D vertex that lies on two or three visible polyhedral faces; we need to merge them into one common vertex in order to derive a unique set of visible polyhedral vertices. There are three possible methods for the final vertex determination, as given below.

Method 1: The Direct Averaging Method. Each final vertex of the polyhedron is the direct average of its different versions on the corresponding polyhedral faces, which are determined separately based on the backprojection method for edge construction.

Method 2: The Averaging-After-Face-Assembly Method. Each final vertex of the polyhedron is obtained in the same way as in Method 1, except that the averaging takes place after the faces are assembled. Here one of the involved faces is selected as the core face according to the depth parameter sensitivity measure dD/D given in (6); the remaining faces are then translated to merge with the core one.

Method 3: The Three-Face-Intersection Method. Each final vertex of the polyhedron is computed as the intersection of the three involved faces, whenever applicable. The other vertices are computed as in Method 1.

A remark is in order here. The direct averaging method is an intuitive method which is commonly used [20]. The averaging-after-face-assembly method is similar in nature to the first method, but takes the object boundedness into consideration. The three-face-intersection method is a theoretically sound method in which the final vertices associated with each polyhedral face are coplanar; notice that the first two methods may not satisfy this strict coplanarity condition of the face vertices. Theoretically speaking, these three methods are equivalent, but in practice they are somewhat different, depending on their sensitivity to the depth parameter. This phenomenon is reported in the experiments presented next.
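For method 3, a vertex shared by three faces is the solution of a 3 x 3 linear system assembled from the face equations. A minimal sketch (our naming; the sample coefficients below are the estimated equations of F1, F2, and F3 from Table I, used purely to exercise the function):

import numpy as np

def three_face_vertex(faces):
    """Method 3: solve N X = D for the point common to three faces,
    each given as (a, b, c, D) with unit normal (a, b, c).  The solve
    fails when the three faces are (near-)parallel, in which case
    methods 1 or 2 must be used instead."""
    N = np.array([face[:3] for face in faces], dtype=float)
    D = np.array([face[3] for face in faces], dtype=float)
    return np.linalg.solve(N, D)

v = three_face_vertex([(0.189, 0.860, -0.474, -243.0),
                       (-0.661, -0.281, -0.696, -368.1),
                       (0.751, -0.422, -0.508, -272.2)])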

TABLE VI
THE ESTIMATION RESULT OF ANGLES BETWEEN POLYHEDRAL EDGE PAIRS IN EXPERIMENT 2

Angle  True Value (deg.)  Method 1  Method 2  Method 3
a1     60.5               60.9      61.8      61.0
a2     48.5               48.3      48.6      50.1
a3     71.0               70.8      69.6      68.8
a4     90.0               97.7      88.9      87.5
a5     142.0              136.9     145.3     145.0
a6     128.0              118.0     128.5     127.3
a7     90.0               99.5      88.1      89.9
a8     90.0               86.6      89.1      90.4
a9     122.0              110.0     122.5     123.1
a10    148.0              141.8     150.5     149.8
a11    90.0               97.6      88.1      87.7
a12    90.0               87.8      90.9      91.3
a13    90.0               101.8     88.0      88.2
a14    90.0               96.4      88.4      85.2
a15    90.0               96.8      92.2      103.9
a16    90.0               95.0      90.2      83.6
a17    141.3              131.5     140.0     139.1
a18    128.7              119.4     129.6     128.2
Average Error (deg.)      6.32      1.36      2.62


VI. EXPERIMENTAL RESULTS

In this section we shall report the following experimental results: 1) the face equations, 2) the angles between adjacent face pairs, 3) the occlusion relations between the visible faces, 4) the lengths of the visible polyhedral edges, and 5) the angles between the edge pairs on each face.

We use two different polyhedra in the experiments. In each experiment a grid-coded polyhedral picture is digitized into an array of 512x512 pixels. The projected grid lines in the image are extracted and two sufficiently long grid lines on each visible polyhedral face are used to derive the face equation. A typical image processing result obtained from a polyhedral scene is given in Fig. 3.

Experiment 1. In this experiment a single view of a polyhedral object containing an occluded (i.e., partially visible) face and three completely visible faces is analyzed.

Fig. 5. The reprojection of the polyhedral edges between the vertices found by the three given methods to the image plane for Experiments 1 and 2. (a) and (b) By method 1. (c) and (d) By method 2. (e) and (f) By method 3.

Fig. 4(a) shows the reference labels of the geometric entities of the polyhedral object used in this experiment. The estimation result of all visible polyhedral faces, including the face equations and the angles between adjacent face pairs, is listed in Table I. Table II gives the true and measured edge lengths between the final polyhedral vertices found by the three different methods given in the last section; the edge types of all projected polyhedral edges are also shown. Table III gives the estimation result of the angles between polyhedral edge pairs.

Experiment 2. In this experiment a different polyhedral object is analyzed, as shown in Fig. 4(b). The face equations and the angles between adjacent face pairs are given in Table IV. Table V gives the true and measured edge lengths between the final vertices found by the three different methods for vertex determination. Table VI gives the estimation result of the angles between polyhedral edge pairs.

In Fig. 5, the edges between the final vertices of the above two polyhedra found by the three methods are reprojected back onto the image plane, as indicated by the white line segments. We find that method 2 gives the best overall estimation results, method 1 is next, and method 3 finishes last. These results indicate that method 3 is more sensitive to the possible small variation in the depth parameter of the polyhedral face estimation, while method 2 is very stable. On the other hand, the normal vector of the face equation of all visible polyhedral faces is rather accurate, since the overall error in the measurement of angles


between adjacent face pairs is very small. We also find that the edges of each face constructed by the backprojection method always give a good individual face shape, because the small variation in the depth parameter of the face equation affects the face shape only a little. This is very useful for individual face recognition.

VII. CONCLUSION

A correspondenceless method has been presented to determine the plane equations of all visible polyhedral faces from a single view. It applies a grid coding technique using two perpendicular sets of laser planes. The normal vector of each visible polyhedral face is first determined through the geometric relations, and the depth parameter of the face equation is then calculated based on the given dimensions of the grid on the code plate. A sensitivity analysis indicates that the estimation is rather robust if two sufficiently long projected grid line segments are used to infer the face equation parameters. To delimit the face boundary, we use two different methods for edge construction: the backprojection method and the plane-intersection method. The goodness measures of these two methods are given. Finally, the unique set of polyhedral vertices is computed in three different ways. The experimental results indicate that the vertex determination by the averaging-after-face-assembly method gives the best result of the three possible methods. From the experimental results, we can see that our method is capable of producing remarkably accurate results for the recovery of the 3-D structure of the visible polyhedral faces from a single image. At present we are extending the method to infer cylindrical objects and have obtained some encouraging results, which will be reported in a forthcoming paper. Furthermore, for an image in which a planar face appears together with a curved surface to form an object such as a cone, half sphere, or cylinder, we can first reconstruct the planar face and then determine the curved contour that is the intersection between the curved surface and the planar face. From this curved contour, e.g., a circle, we can infer some shape parameters of the curved surface, too.

ACKNOWLEDGMENT

The authors would like to thank the referees for their constructive suggestions, which led to a better presentation of this paper in its current version.

REFERENCES

[1] Y. F. Wang and J. K. Aggarwal, "An overview of geometric modeling using active sensing," IEEE Control Syst. Mag., vol. 3, no. 2, pp. 5-13, June 1988.

[2] R. A. Jarvis, "A perspective on range finding techniques for computer vision," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-5, pp. 122-139, Mar. 1983.

[3] D. G. Lowe, "Three-dimensional object recognition from single two-dimensional images," Artificial Intell., vol. 31, pp. 355-395, 1987.

[4] J. Le Moigne and A. M. Waxman, "Projected light grids for short range navigation of autonomous robots," in Proc. 7th Int. Conf. Pattern Recog., July 1984, pp. 203-206.

[5] G. Stockman and G. Hu, "Sensing 3-D surface patches using a projected grid," in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, Miami Beach, FL, 1986, pp. 602-607.

[6] K. L. Boyer and A. C. Kak, "Color-encoded structured light for rapid range sensing," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp. 14-28, Jan. 1987.

[7] Y. F. Wang, A. Mitiche, and J. K. Aggarwal, "Inferring local surface orientation with the aid of grid coding," in Proc. Third Workshop on Computer Vision: Representation and Control, Bellaire, MI, Oct. 1985, pp. 96-104.

[8] Y. F. Wang, A. Mitiche, and J. K. Aggarwal, "Computation of surface orientation and structure of objects using grid coding," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, no. 1, pp. 129-137, Jan. 1987.

[9] Y. F. Wang and J. K. Aggarwal, "Integration of active and passive sensing techniques for representing three-dimensional objects," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 460-471, Aug. 1989.

[10] M. Asada, H. Ichikawa, and S. Tsuji, "Determining surface orientation by projecting a stripe pattern," IEEE Trans. Pattern Anal. Machine Intell., vol. 10, pp. 749-754, Sept. 1988.

[11] N. Shrikhande and G. Stockman, "Surface orientation from a projected grid," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 650-655, June 1989.

[12] G. Hu and G. Stockman, "3-D surface solution using structured light and constraint propagation," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 390-402, Apr. 1989.

[13] S. M. Dunn and R. L. Keizer, "Measuring the area and volume of the human body with structured light," IEEE Trans. Syst., Man, Cybern., vol. 19, pp. 1350-1364, Nov./Dec. 1989.

[14] P. M. Will and K. S. Pennington, "Grid coding: A preprocessing technique for robot and machine vision," Artificial Intell., vol. 2, pp. 319-329, 1971.

[15] P. Vuylsteke and A. Oosterlinck, "Range image acquisition with a single binary-encoded light pattern," IEEE Trans. Pattern Anal. Machine Intell., vol. 12, pp. 148-164, 1990.

[16] S. M. Dunn, R. L. Keizer, and J. Yu, "Measuring the area and volume of the human body with structured light," IEEE Trans. Syst., Man, Cybern., vol. 19, pp. 1350-1364, Nov./Dec. 1989.

[17] S. Umeyama, T. Kasvand, and M. Hospital, "Recognition and positioning of three-dimensional objects by combining matchings of primitive local patterns," Computer Vision, Graphics, and Image Processing, vol. 44, pp. 58-76, 1988.

[18] M. Oshima and Y. Shirai, "Object recognition using three-dimensional information," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-5, no. 4, pp. 353-361, July 1983.

[19] D.-C. Tseng and Z. Chen, "Computing location and orientation of polyhedral surfaces using a laser-based vision system," IEEE Trans. Robotics Automat., vol. 7, pp. 842-848, Dec. 1991.

[20] E. L. Walker and M. Herman, "Geometric reasoning for constructing 3D scene descriptions from images," Artificial Intell., vol. 37, pp. 275-290, 1988.

[21] Z. Chen, D.-C. Tseng, and J.-Y. Lin, "A simple vision algorithm for 3-D position determination using a single calibration object," Pattern Recog., vol. 22, no. 2, pp. 173-187, 1989.

