
Printed in Great Britain. All rights reserved. 0031-3203/95 $9.50+.00
0031-3203(94)00107-3

WHEN SHOULD WE CONSIDER LENS DISTORTION IN CAMERA CALIBRATION

SHENG-WEN SHIH,†‡ YI-PING HUNG†* and WEI-SONG LIN‡
† Institute of Information Science, Academia Sinica, Nankang, Taipei, Taiwan, 11529 R.O.C.

‡ Institute of Electrical Engineering, National Taiwan University, Taipei, Taiwan, R.O.C.

(Received 23 November 1993; in revised form 3 August 1994; received for publication 22 August 1994)

Abstract--This work investigates the effect of neglecting lens distortion in camera calibration and presents a theoretical analysis of the calibration accuracy. We derive an approximate upper envelope for the 2D prediction error as a function of a few factors, including the number of calibration points, the observation error of the 2D image points, the radial lens distortion coefficient, and the image size and resolution. This error envelope provides a guideline for selecting both a proper camera calibration configuration and an appropriate camera model while satisfying the desired accuracy. Experimental results from both computer simulations and real experiments are included in this paper.

Camera calibration Lens distortion Accuracy assessment Error bound Camera model

1. INTRODUCTION

To infer 3D objects using two or more images, it is essential to know the relationship between the 2D image coordinate system and the 3D object coordinate system. This relationship can be described by the following two transformations:

(i) Perspective projection of a 3D object point onto a 2D image point--given an estimate of a 3D object point and its error covariance, we can predict its projection (mean and covariance) on the 2D image. This is useful for reducing the search space when matching features between two images, or for hypothesis verification in scene analysis.

(ii) Back projection of a 2D image point to a 3D ray--given a 2D image point, there is a ray in 3D space on which the corresponding 3D object point must lie. If two (or more) views are available, an estimate of the 3D point location can be obtained by triangulation. This is useful for inferring 3D information from 2D image features.
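As a concrete illustration of (ii), two back-projected rays from two views can be triangulated with the classical midpoint method. This sketch is our own illustration, not part of the paper; the function name and the assumption of known ray origins and unit directions are ours:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Least-squares midpoint of two 3D rays c_i + t_i * d_i.

    c1, c2: ray origins (camera centers); d1, d2: unit direction vectors.
    Solves for the ray parameters t1, t2 minimizing the gap between the
    rays, then returns the midpoint of the shortest connecting segment.
    """
    # Normal equations: the residual (c1 + t1 d1) - (c2 + t2 d2) must be
    # perpendicular to both d1 and d2 at the optimum.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

When the two rays intersect exactly, the midpoint coincides with the intersection; with noisy rays it is the point closest to both in the least-squares sense.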

The above 3D-2D relationship can be specified by a column vector β, which contains the geometric camera parameters specifying camera orientation and position, focal length, lens distortion, optical-axis misalignment, and pixel size. Determining this 3D-2D relationship, or equivalently, estimating β, is called (geometric) camera calibration.

The techniques for camera calibration can be classified into two categories: those that consider lens distortion(1-4) and those that neglect lens distortion.(5-9) A typical linear technique that does not consider lens distortion is the one estimating the perspective transformation matrix H.(6,8) The estimated H can be used directly for forward and backward 3D-2D projection. If necessary, given the estimated H, the geometric camera parameters β can easily be determined.(6,7,9)

* Author to whom correspondence should be addressed.

The Faig method(1) is a good representative of those considering lens distortion. For methods of this type, equations are established that relate the camera parameters to the 3D object coordinates and 2D image coordinates of the calibration points. Nonlinear optimization techniques are then used to search for the camera parameters with the objective of minimizing the residual errors of those equations. One disadvantage of this kind of method is that a good initial guess is required to start the nonlinear search.

A few years ago, Tsai proposed an efficient two-stage technique using the 'radial alignment constraint'.(3) His method involves a direct solution for most of the calibration parameters and an iterative solution for the remaining ones. Some drawbacks of the Tsai method are pointed out in reference (4). Our experience(10) also showed that the Tsai method can be worse than the simple linear method of reference (9) if the lens distortion is relatively small.

Recently, Weng showed some experimental results using a two-step method.(4) The first step involves a closed-form solution based on a distortion-free camera model, and the second step improves the camera parameters estimated in the first step by taking lens distortion into account. This method overcomes the initial-guess problem of nonlinear optimization, and is more accurate than the Tsai method according to our experiments.

We have also developed a fast and accurate technique for calibrating a camera, with lens distortion, by


solving linear equations.(2) Instead of using nonlinear optimization techniques, the estimation of the radial lens distortion coefficient is transformed into an eigenvalue problem of an 8 × 8 matrix. This method provides an efficient and accurate solution for calibrating a practical camera and, according to our experiments, is more accurate than the Tsai method.

However, considering lens distortion not only complicates the camera calibration procedure, but also complicates the subsequent on-line processing (though not formidably), such as feature-point correspondence (in stereo) and camera re-calibration (in the case of a moving camera). Notice that an epipolar line is no longer a straight line if lens distortion is taken into account. Moreover, when the lens distortion is small, if the noise in the 2D feature extraction is relatively large or the number of calibration points is relatively small, the calibration results based on a distortion camera model can be worse than those based on a linear camera model. The question is then, 'when should we consider lens distortion in camera calibration?' or 'when is it worth all the trouble to consider lens distortion?' This work represents an effort towards answering this question.

2. CAMERA MODEL

Consider the pinhole camera model with lens distortion, as shown in Fig. 1. Let P be an object point in 3D space, and r_o = (x_o, y_o, z_o)^t be its coordinates, in millimeters, with respect to a fixed object coordinate system (OCS). Let the camera coordinate system (CCS), also in millimeters, have its x-y plane parallel to the image plane (such that the x axis is parallel to the horizontal direction of the image and the y axis is parallel to the vertical one), with its origin located at the lens center and its z axis aligned with the optical axis of the lens (see Fig. 1). Let r_c = (x_c, y_c, z_c)^t be the coordinates of the 3D point P with respect to the CCS. If there were no lens distortion, the corresponding image point of P on the image plane would be Q (see Fig. 1). However, due to the effect of lens distortion, the actual image point is Q'. Let s_I = (u_I, v_I)^t denote the 2D image coordinates (in pixels), with respect to the computer image coordinate system (ICS), of the actual image point Q', where the origin of the ICS is set at the center of the frame memory [e.g. the origin of the ICS is set at (256, 256) for a 512 by 512 image].

As shown in Fig. 2, the 3D-2D transformation from r_o to s_I can be divided into the following four steps.

2.1. Translation and rotation from the OCS to the CCS

The transformation from r_o to r_c can be expressed as

$$\tilde{r}_c = T_c^o \tilde{r}_o \quad\text{with}\quad T_c^o = \begin{bmatrix} R_c^o & t_c^o \\ 0\;0\;0 & 1 \end{bmatrix}, \tag{1}$$

i.e.

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} r_1 & r_2 & r_3 & t_1 \\ r_4 & r_5 & r_6 & t_2 \\ r_7 & r_8 & r_9 & t_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_o \\ y_o \\ z_o \\ 1 \end{bmatrix}, \tag{2}$$

where a tilde (~) denotes homogeneous coordinates,(11) t_c^o = (t_1, t_2, t_3)^t is a translation vector, and R_c^o is a 3 × 3 rotation matrix determined by the three Euler angles φ, θ, ψ, rotating about the z, y, z axes sequentially.

2.2. Perspective projection from a 3D object point in the CCS to a 2D image point on the image plane

Let f be the distance between the lens center and the image plane, as shown in Fig. 1, referred to as the 'effective focal length'. Let s_F = (u_F, v_F)^t be the 2D coordinates (in millimeters) of the undistorted image point Q lying on the image plane. Then we have

$$u_F = f\,\frac{x_c}{z_c}, \qquad v_F = f\,\frac{y_c}{z_c}. \tag{3a}$$

Alternatively, we can express this perspective projection in homogeneous coordinates as

$$\tilde{s}_F = H_F^c \tilde{r}_c \quad\text{with}\quad H_F^c = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1/f & 0 \end{bmatrix}. \tag{3b}$$

Fig. 1. Pinhole camera model with lens distortion, where P is a 3D object point, and Q and Q' are its undistorted and distorted image points, respectively. OCS--Object Coordinate System (3D). CCS--Camera Coordinate System.

Fig. 2. Relation between the different transformation matrices.

2.3. Lens distortion from Q to Q'

For practical reasons, as mentioned in Tsai,(3) we consider only the first term of the radial lens distortion, i.e.

$$s_F = (1 - \kappa \|s_F'\|^2)\, s_F', \tag{4}$$

where s_F' = (u_F', v_F')^t is the coordinates of the distorted 2D image point (in millimeters). In this paper, κ has units of mm⁻².

2.4. Scaling and translation of 2D image coordinates

The transformation from s_F' (in millimeters) to s_I (in pixels) involves: (i) scaling from millimeters to pixels, and (ii) translation due to misalignment of the sensor array with the optical axis of the lens. Hence,

$$\tilde{s}_I = T_I^F \tilde{s}_F' \quad\text{with}\quad T_I^F = \begin{bmatrix} 1/\delta_u & 0 & u_0 \\ 0 & 1/\delta_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{5}$$

where δ_u and δ_v are the horizontal and vertical pixel spacing (millimeters/pixel), and (u_0, v_0) are the coordinates (in pixels), in the computer image coordinate system, of the piercing point of the optical axis. For convenience, we will call (u_0, v_0) the principal point.
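The four steps of Section 2 can be summarized in a short numerical sketch. This is our own illustration with made-up parameter values, and the rotation is supplied directly as a matrix rather than built from the z-y-z Euler angles; since eq. (4) maps the distorted point to the undistorted one, the forward imaging step inverts it with a fixed-point iteration:

```python
import numpy as np

def project(r_o, R, t, f, kappa, du, dv, u0, v0):
    """Project a 3D object point to distorted pixel coordinates.

    Steps follow Section 2: (2.1) OCS -> CCS, (2.2) perspective projection,
    (2.3) first-order radial distortion, (2.4) mm -> pixel conversion.
    """
    # 2.1 rigid transformation, eq. (2)
    x_c, y_c, z_c = R @ r_o + t
    # 2.2 perspective projection (undistorted, in mm), eq. (3a)
    s = np.array([f * x_c / z_c, f * y_c / z_c])
    # 2.3 invert eq. (4), s_F = (1 - kappa ||s'_F||^2) s'_F, for the
    # distorted point s'_F; a few fixed-point iterations suffice when
    # kappa * ||s_F||^2 << 1
    s_d = s.copy()
    for _ in range(10):
        s_d = s / (1.0 - kappa * (s_d @ s_d))
    # 2.4 scaling and principal-point offset, eq. (5)
    return np.array([s_d[0] / du + u0, s_d[1] / dv + v0])
```

Setting kappa to 0 reduces the sketch to the distortion-free pinhole model of eq. (7).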

Using the above notation, the camera parameters are

β = [t_1 t_2 t_3 φ θ ψ f δ_u κ u_0 v_0]^t.

The vertical pixel spacing δ_v is not included here because it is a known parameter when a solid-state camera is used--otherwise, only the ratios f/δ_u and f/δ_v can be determined. Combining (2), (3), (4) and (5), we have

$$(1 - \kappa\rho^2)(u_I - u_0)\,\delta_u = f\,\frac{x_o r_1 + y_o r_2 + z_o r_3 + t_1}{x_o r_7 + y_o r_8 + z_o r_9 + t_3}, \tag{6a}$$

$$(1 - \kappa\rho^2)(v_I - v_0)\,\delta_v = f\,\frac{x_o r_4 + y_o r_5 + z_o r_6 + t_2}{x_o r_7 + y_o r_8 + z_o r_9 + t_3}, \tag{6b}$$

where

$$\rho^2 = \delta_u^2 (u_I - u_0)^2 + \delta_v^2 (v_I - v_0)^2 = \|s_F'\|^2.$$

Notice that if there is no lens distortion (i.e. κ = 0 and the distortion step g is the identity operator, see Fig. 2), the relationship between r_o and s_I can be expressed as a linear transformation by combining (1), (3b) and (5):

$$\tilde{s}_I = H \tilde{r}_o, \tag{7}$$

where H = T_I^F H_F^c T_c^o.

Hereafter, for simplicity, we will use u, v, x, y, z to denote u_I, v_I, x_o, y_o, z_o, respectively.
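The composition in (7) can be checked numerically: build the three matrices and verify that the composite 3 × 4 matrix H reproduces the step-by-step distortion-free projection. A sketch with made-up parameter values (not the paper's):

```python
import numpy as np

# Illustrative parameter values (not the paper's)
f, du, dv, u0, v0 = 25.0, 0.01566, 0.013, 256.0, 240.0
R = np.eye(3)                              # rotation from OCS to CCS
t = np.array([10.0, 20.0, 1500.0])         # translation (mm)

T_oc = np.vstack([np.column_stack([R, t]),
                  [0.0, 0.0, 0.0, 1.0]])   # eq. (1): 4 x 4 rigid transform
H_cF = np.array([[1.0, 0.0, 0.0,   0.0],
                 [0.0, 1.0, 0.0,   0.0],
                 [0.0, 0.0, 1.0/f, 0.0]])  # eq. (3b): perspective projection
T_FI = np.array([[1.0/du, 0.0,    u0],
                 [0.0,    1.0/dv, v0],
                 [0.0,    0.0,    1.0]])   # eq. (5): mm -> pixels

H = T_FI @ H_cF @ T_oc                     # eq. (7): composite 3 x 4 matrix

r_o = np.array([100.0, -50.0, 200.0, 1.0]) # homogeneous object point
s = H @ r_o
u, v = s[:2] / s[2]                        # pixel coordinates
print(u, v)
```

Dividing by the third homogeneous coordinate recovers exactly f·x_c/(δ_u z_c) + u_0 and f·y_c/(δ_v z_c) + v_0.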

3. A LINEAR METHOD FOR CAMERA CALIBRATION

Given a set of 3D calibration points and their corresponding 2D image coordinates, the problem is to estimate β, the parameters of our camera model. Instead of estimating β directly, we first estimate the composite parameters h [as described following equation (10)]; the composite parameters can then be decomposed into β by the methods described in Ganapathy,(9) Hung(6) and Strat.(7)

From equation (7), the derivation of a linear method for camera calibration is quite simple [refer to references (6), (7) and (9)]. However, to observe the effects of lens distortion on camera calibration, it is necessary to derive the linear method in a different way.

Assuming that we have N_calib pairs of 2D-3D calibration points, from (6a) and (6b) it can be shown that (see Appendix A)

$$A \begin{bmatrix} P_1 \\ P_2 \end{bmatrix} + B P_3 = -\kappa\, C P_3, \tag{8}$$

where

$$A = \begin{bmatrix} \vdots & & & & & & & \vdots \\ x_j & y_j & z_j & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & x_j & y_j & z_j & 1 \\ \vdots & & & & & & & \vdots \end{bmatrix}_{2N_{calib} \times 8}, \qquad B = \begin{bmatrix} \vdots & & & \vdots \\ -u_j x_j & -u_j y_j & -u_j z_j & -u_j \\ -v_j x_j & -v_j y_j & -v_j z_j & -v_j \\ \vdots & & & \vdots \end{bmatrix}_{2N_{calib} \times 4},$$

$$C = \begin{bmatrix} \vdots & & & \vdots \\ (u_j - u_0)\rho_j^2 x_j & (u_j - u_0)\rho_j^2 y_j & (u_j - u_0)\rho_j^2 z_j & (u_j - u_0)\rho_j^2 \\ (v_j - v_0)\rho_j^2 x_j & (v_j - v_0)\rho_j^2 y_j & (v_j - v_0)\rho_j^2 z_j & (v_j - v_0)\rho_j^2 \\ \vdots & & & \vdots \end{bmatrix}_{2N_{calib} \times 4},$$

with ρ_j² = δ_u²(u_j − u_0)² + δ_v²(v_j − v_0)², and

$$P_1 = \begin{bmatrix} r_1 f/\delta_u + r_7 u_0 \\ r_2 f/\delta_u + r_8 u_0 \\ r_3 f/\delta_u + r_9 u_0 \\ t_1 f/\delta_u + t_3 u_0 \end{bmatrix}, \qquad P_2 = \begin{bmatrix} r_4 f/\delta_v + r_7 v_0 \\ r_5 f/\delta_v + r_8 v_0 \\ r_6 f/\delta_v + r_9 v_0 \\ t_2 f/\delta_v + t_3 v_0 \end{bmatrix}, \qquad P_3 = \begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix}.$$

From the above definition of P_3, we have, using equation (2),

$$[x_j \;\; y_j \;\; z_j \;\; 1]\, P_3 = z_{cj}, \tag{9}$$

where z_cj is the z-component of the coordinates of the jth calibration point in the camera coordinate system.

Suppose we have a distortionless lens, i.e. κ = 0; then equation (8) can be simplified to

$$A p + B P_3 = [A \;\; B]\begin{bmatrix} p \\ P_3 \end{bmatrix} = [A \;\; B' \;\; -b]\begin{bmatrix} h \\ 1 \end{bmatrix} = e \approx 0, \tag{10}$$

where B' is the matrix obtained by removing the last column of B, b ≡ [... u_j v_j ...]^t (i.e. [B' | −b] = B), and h = [h_1 h_2 h_3 h_4 h_5 h_6 h_7 h_8 h_9 h_10 h_11]^t. Notice that the small error vector e in equation (10) is due to the measurement error of the 3D and 2D coordinates of the calibration points.

Hence, the parameters to be estimated, h, can be computed by minimizing the following error function with respect to h:

$$\|e\|^2 = \|\bar{A} h - b\|^2, \tag{11}$$

where Ā = [A B'].

The optimal solution of (11) is well known to be

$$\hat{h} = (\bar{A}^t \bar{A})^{-1} \bar{A}^t b, \tag{12}$$

provided there are more than six noncoplanar calibration points. The estimated composite parameters can be further decomposed into β when necessary.
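The linear method above admits a compact numerical sketch: build Ā and b from the 2D-3D pairs and solve (12) by least squares. This is our own minimal reconstruction for illustration, not the authors' code:

```python
import numpy as np

def calibrate_linear(pts3d, pts2d):
    """Estimate the 11 composite parameters h of eq. (12).

    pts3d: (N, 3) object coordinates; pts2d: (N, 2) pixel coordinates.
    Builds the 2N x 11 matrix A_bar = [A  B'] and the vector b of eq. (10),
    then solves min ||A_bar h - b||^2. Needs N >= 6 noncoplanar points.
    """
    x, y, z = pts3d.T
    u, v = pts2d.T
    n = len(x)
    one, zero = np.ones(n), np.zeros(n)
    rows_u = np.column_stack([x, y, z, one, zero, zero, zero, zero,
                              -u * x, -u * y, -u * z])
    rows_v = np.column_stack([zero, zero, zero, zero, x, y, z, one,
                              -v * x, -v * y, -v * z])
    A_bar = np.vstack([rows_u, rows_v])
    b = np.concatenate([u, v])
    h, *_ = np.linalg.lstsq(A_bar, b, rcond=None)
    return h  # h1..h11

def predict(h, pts3d):
    """Forward-project 3D points with the composite parameters h [eq. (14)]."""
    H = np.array([h[0:4], h[4:8], [h[8], h[9], h[10], 1.0]])
    hom = np.column_stack([pts3d, np.ones(len(pts3d))]) @ H.T
    return hom[:, :2] / hom[:, 2:3]
```

With noise-free, noncoplanar synthetic points, the estimated h matches the generating composite parameters to numerical precision.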

4. ACCURACY ASSESSMENT

In this section we derive an approximate error envelope for the linear calibration method of Section 3, which does not consider lens distortion. The effects of both the measurement noise and the modeling error (the neglect of lens distortion) are considered.

The error envelope is based on the following assumptions.

Assumption 1. The 3D positions of the calibration points are known exactly. In practice, the positions of control points can be determined with high accuracy, much higher than that represented by the size of a back-projected pixel at the corresponding depth. Furthermore, a 3D position error can always be transformed into an equivalent 2D measurement error.

Assumption 2. The only source of measurement noise is the error in estimating the image coordinates of the calibration points, i.e. the 2D observation noise (in pixels). In both the horizontal and vertical directions, we assume the 2D observation noise is independent and identically distributed Gaussian with zero mean and variance σ².


Assumption 3. The depth components, z_c, of both the calibration and test points can be approximately replaced by a constant (i.e. the depth of field is small relative to the object distance). This assumption holds in many computer vision applications, since the depth of field of a practical camera is usually limited to a small range compared to the object distance.

4.1. Definition of error measure

To evaluate the accuracy of camera calibration for 3D vision applications, it is necessary to define an error measure. Consider Fig. 3: Ô is the estimated optical center, and Q is the observed 2D image point corresponding to the 3D test point P. Q̂ is obtained by projecting the 3D test point P onto the image plane using the estimated camera parameters β̂. Because of the estimation error in β̂, the predicted 2D image point Q̂ will not be at the same location as the observed image point Q. Using the estimated camera parameters β̂, we can also compute the 3D ray ÔQ by back-projecting a 3D line from Ô through Q into 3D space. In general, ÔQ does not pass through the test point P. Based on the above notation, we can define two error measures as follows.

(i) The 3D angular error, i.e. the angle (in degrees) ∠PÔQ.

(ii) The 2D prediction error, i.e. the image distance (in pixels) between Q and Q̂.

There is a relationship between the 2D prediction error and the 3D angular error (see Fig. 3), which can be approximated by

$$\text{3D angular error} \approx \text{2D prediction error} \times \frac{\bar{\delta}}{\bar{d}}, \tag{13}$$

where d̄ denotes the average distance from the image point to the estimated lens center, and δ̄ stands for the average pixel spacing (see Appendix B).

In the following, we give the intuition behind the error function (11). Hereafter, for convenience, we will use (û, v̂) to denote the predicted 2D image coordinates of the 3D calibration point (x, y, z), i.e.

$$\hat{u} = \frac{h_1 x + h_2 y + h_3 z + h_4}{h_9 x + h_{10} y + h_{11} z + 1}, \tag{14a}$$

and

$$\hat{v} = \frac{h_5 x + h_6 y + h_7 z + h_8}{h_9 x + h_{10} y + h_{11} z + 1}. \tag{14b}$$

Notice that the error function (11) is equivalent to the following (see Appendix C):

$$\|e\|^2 = \|\bar{A}h - b\|^2 = \sum_{j=1}^{N_{calib}} \left[ (u_j - \hat{u}_j)^2 + (v_j - \hat{v}_j)^2 \right] \left( \frac{z_{cj}}{t_3} \right)^2, \tag{15}$$

where

$$\frac{z_{cj}}{t_3} = \hat{h}_9 x_j + \hat{h}_{10} y_j + \hat{h}_{11} z_j + 1.$$

In equation (15), the 2D prediction error of each calibration point is weighted by the factor z_cj/t_3, which means that the linear calibration method tends to favor minimizing the 2D prediction error of those points far away from the camera. However, from Assumption 3 and equation (15), we have

$$\|e\|^2 \approx \left( \frac{z_c}{t_3} \right)^2 \sum_{j=1}^{N_{calib}} \left[ (u_j - \hat{u}_j)^2 + (v_j - \hat{v}_j)^2 \right], \tag{16}$$

which amounts to saying that equation (12) is the optimal solution minimizing the 2D prediction error.
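The equivalence in (15) can be checked numerically: each residual row of (11) equals the corresponding 2D prediction error component scaled by the denominator h₉x + h₁₀y + h₁₁z + 1. A quick check with arbitrary made-up values (our illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.normal(size=11)
h[8:] *= 1e-3                                # keep the denominator near 1
pts = rng.uniform(0.0, 10.0, size=(25, 3))
uv = rng.uniform(0.0, 100.0, size=(25, 2))   # arbitrary "observed" points

x, y, z = pts.T
u, v = uv.T
one, zero = np.ones(25), np.zeros(25)
A_bar = np.vstack([
    np.column_stack([x, y, z, one, zero, zero, zero, zero, -u*x, -u*y, -u*z]),
    np.column_stack([zero, zero, zero, zero, x, y, z, one, -v*x, -v*y, -v*z]),
])
b = np.concatenate([u, v])
lhs = np.sum((A_bar @ h - b) ** 2)              # ||A_bar h - b||^2, eq. (11)

d = h[8]*x + h[9]*y + h[10]*z + 1.0             # z_cj / t_3 in eq. (15)
u_hat = (h[0]*x + h[1]*y + h[2]*z + h[3]) / d   # eq. (14a)
v_hat = (h[4]*x + h[5]*y + h[6]*z + h[7]) / d   # eq. (14b)
rhs = np.sum(((u - u_hat)**2 + (v - v_hat)**2) * d**2)   # eq. (15)

print(abs(lhs - rhs))                           # agrees to round-off
```

The identity is exact, holding for any h and any point set, not just the least-squares solution.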

4.2. The 2D prediction error as a function of the number of calibration points and the 2D observation noise

In the ideal case where both the 2D and 3D coordinates are noise free, equation (10) can be written as

Fig. 3. The error measures used in this paper: the 3D angular error (in degrees) at the estimated lens center, and the 2D prediction error (in pixels) on the image plane.


$$A \begin{bmatrix} P_1 \\ P_2 \end{bmatrix} + B_{true} P_3 = 0, \tag{17}$$

where B_true is the matrix B of equation (8) built from the true (noise-free) image coordinates, with rows

$$[-u_{true,j} x_j \;\; -u_{true,j} y_j \;\; -u_{true,j} z_j \;\; -u_{true,j}] \quad\text{and}\quad [-v_{true,j} x_j \;\; -v_{true,j} y_j \;\; -v_{true,j} z_j \;\; -v_{true,j}].$$

Now suppose the observation noise along the u-axis (v-axis) is n_u (n_v), i.e.

$$u = u_{true} + n_u \quad\text{and}\quad v = v_{true} + n_v, \tag{18}$$

where (u_true, v_true) are the true image coordinates (noise free) and (u, v) are the measured ones (noisy). Substituting u_true and v_true from (18) into (17), we have

$$A \begin{bmatrix} P_1 \\ P_2 \end{bmatrix} + B P_3 + N P_3 = 0, \tag{19}$$

where N is the 2N_calib × 4 noise matrix with rows

$$[n_{uj} x_j \;\; n_{uj} y_j \;\; n_{uj} z_j \;\; n_{uj}] \quad\text{and}\quad [n_{vj} x_j \;\; n_{vj} y_j \;\; n_{vj} z_j \;\; n_{vj}].$$

Substituting (9) into (19) and dividing (19) by t_3, it follows that

$$[A \;\; B' \;\; -b]\begin{bmatrix} h \\ 1 \end{bmatrix} + \begin{bmatrix} \vdots \\ n_{uj}\, z_{cj}/t_3 \\ n_{vj}\, z_{cj}/t_3 \\ \vdots \end{bmatrix} = 0. \tag{20}$$

Comparing equations (10) and (20), we have

$$e = -\begin{bmatrix} \vdots \\ n_{uj}\, z_{cj}/t_3 \\ n_{vj}\, z_{cj}/t_3 \\ \vdots \end{bmatrix}. \tag{21}$$

Using equations (10) and (11), we have

$$\bar{A} h = b + e \equiv b_T, \tag{22}$$

where b_T is the true (noise-free) value of the noise-corrupted vector b. Ideally, the test points are noise free, and in practice both the calibration and the test points are selected from the same working volume. Thus the 2D prediction error calculated using the test points can be approximated by the one using the noise-free version of the calibration points. The remaining work is therefore to find the 2D prediction error tested on the noise-free calibration points. Denoting the expected root-mean-square 2D prediction error by e_n, we have

$$e_n^2 \equiv \frac{1}{N_{calib}}\, E\!\left[ \sum_{j=1}^{N_{calib}} (u_{true,j} - \hat{u}_j)^2 + (v_{true,j} - \hat{v}_j)^2 \right]. \tag{23}$$

By Assumption 3 (z_cj ≈ z_c), we have

$$e_n^2 \approx \frac{1}{N_{calib}} \left(\frac{t_3}{z_c}\right)^2 E\!\left[ \sum_{j=1}^{N_{calib}} \left(\frac{z_c}{t_3}\right)^2 \left[ (u_{true,j} - \hat{u}_j)^2 + (v_{true,j} - \hat{v}_j)^2 \right] \right]. \tag{24}$$

From equations (22) and (15), equation (24) can be further simplified to

$$e_n^2 \approx \frac{1}{N_{calib}} \left(\frac{t_3}{z_c}\right)^2 E\!\left[ \|b_T - \bar{A}\hat{h}\|^2 \right]. \tag{25}$$

Since all the measurements in equation (22) are noise free, the solution of (22) obtained by the pseudo-inverse equals the true value of h and thus has zero residual error, i.e.

$$b_T - \bar{A}(\bar{A}^t\bar{A})^{-1}\bar{A}^t b_T = (b + e) - \bar{A}(\bar{A}^t\bar{A})^{-1}\bar{A}^t (b + e) = 0. \tag{26}$$

From equations (12) and (26), we have

$$\|b_T - \bar{A}\hat{h}\|^2 = \|b_T - \bar{A}(\bar{A}^t\bar{A})^{-1}\bar{A}^t b\|^2 = \|\bar{A}(\bar{A}^t\bar{A})^{-1}\bar{A}^t e\|^2. \tag{27}$$

The expectation of equation (27) is (see Appendix D)

$$E\!\left[ \|b_T - \bar{A}\hat{h}\|^2 \right] = 11\,\sigma^2 \left(\frac{z_c}{t_3}\right)^2. \tag{28}$$
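The key step behind (27)-(28) is that Ā(ĀᵗĀ)⁻¹Āᵗ is an orthogonal projection onto an 11-dimensional column space, so white noise of variance σ² retains only 11σ² of its expected squared norm, regardless of the number of rows. A quick Monte Carlo check (our own illustration, using an arbitrary well-conditioned stand-in for Ā):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols, sigma, trials = 400, 11, 0.3, 2000

A = rng.standard_normal((n_rows, n_cols))   # stand-in for the 2N x 11 matrix A_bar
P = A @ np.linalg.inv(A.T @ A) @ A.T        # orthogonal projection onto col(A)

E = sigma * rng.standard_normal((trials, n_rows))   # white observation noise
mean_sq = np.mean(np.sum((E @ P) ** 2, axis=1))     # average ||P e||^2

# E||P e||^2 = rank(P) * sigma^2 = 11 sigma^2, independent of n_rows
print(mean_sq, 11 * sigma**2)
```

Averaged over trials, the squared norm concentrates around 11σ², which after the scaling by (z_c/t₃)² gives exactly the form of (28).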

Averaging the above equation over the N_calib points and dividing by the constant (z_c/t_3)² [see equation (25)], we obtain the expected average-square 2D prediction error

$$e_n^2 \approx \frac{11\,\sigma^2}{N_{calib}} \quad \text{(in pixels}^2\text{)}. \tag{29}$$

4.3. The modeling error

Suppose the 2D-3D pairs of calibration points are noiseless, but the lens has a certain amount of lens distortion, i.e. κ ≠ 0. If a linear calibration method which does not consider lens distortion is used, then the resulting 2D prediction error is called the modeling error. In general, the 2D prediction error


has two sources: one is the effect of the measurement noise, which was discussed in the previous subsection; the other is due to improper modeling of the camera, which is dealt with in this subsection.

Notice that the linear calibration method minimizes the 2D prediction error [see equation (16)], subject to the distortion-free camera model and provided that Assumption 3 holds. The idea of the following derivation is to find an approximate solution that is close to the optimal solution and convenient for computing its 2D prediction error. The 2D prediction error of the approximate solution is then used as the upper envelope, since the optimal solution always has a smaller error. The more approximations used in deriving the approximate solution, the more conservative the envelope, because the approximate solution deviates further from the optimal one. The derivation of the upper envelope of the modeling error is described below.

4.3.1. Find an approximate relation between the estimated composite parameters and the true ones. Rewrite (8) as follows:

$$\bar{A} h = b - \kappa \begin{bmatrix} \vdots \\ (u_j - u_0)\rho_j^2\, [x_j \;\; y_j \;\; z_j \;\; 1] \\ (v_j - v_0)\rho_j^2\, [x_j \;\; y_j \;\; z_j \;\; 1] \\ \vdots \end{bmatrix} \frac{P_3}{t_3}. \tag{30}$$

Since z_cj = [x_j y_j z_j 1] P_3 and z_c1 ≈ z_c2 ≈ ... ≈ z_cN ≈ z_c (see Assumption 3), we have

$$\bar{A} h \approx b - \kappa\, \frac{z_c}{t_3} \begin{bmatrix} \vdots \\ (u_j - u_0)\rho_j^2 \\ (v_j - v_0)\rho_j^2 \\ \vdots \end{bmatrix}. \tag{31}$$

In general, the principal point (u_0, v_0) is negligible compared with the image points (u_j, v_j) (see the definition of the ICS), so the bracketed vector can be approximated by [... u_j ρ_j², v_j ρ_j² ...]^t. Now suppose that ρ_j² can be approximated by a constant, i.e. ρ_j² ≈ M for all j, where M is a constant to be determined later. Substituting these approximations into (31) yields

$$\bar{A} h \approx (1 - \kappa M z_c / t_3)\, b. \tag{32}$$

From (32), we have

$$h_{true} \approx (1 - \kappa M z_c / t_3)(\bar{A}^t\bar{A})^{-1}\bar{A}^t b = (1 - \kappa M z_c / t_3)\,\hat{h}, \tag{33}$$

where h_true is an exact solution of equation (30).

4.3.2. Find the relation between the true image point (u_true, v_true) and its undistorted image point (u_ud, v_ud). [Refer to Fig. 4(a).] Using the estimated parameters ĥ, the predicted image point (û, v̂) (in pixels) is defined in equation (14). If instead the true parameters h_true were used, the undistorted image point (u_ud, v_ud) (in pixels) would be

$$u_{ud} = \frac{h_1 x + h_2 y + h_3 z + h_4}{h_9 x + h_{10} y + h_{11} z + 1}, \tag{34a}$$

$$v_{ud} = \frac{h_5 x + h_6 y + h_7 z + h_8}{h_9 x + h_{10} y + h_{11} z + 1}, \tag{34b}$$

where the h_i are the components of h_true, and the subscript 'ud' denotes that the coordinates are undistorted, i.e. to obtain the correct coordinates, one further step is necessary to compensate for the effect of lens distortion. From equation (5), we have

$$s_F' = \begin{bmatrix} (u - u_0)\,\delta_u \\ (v - v_0)\,\delta_v \end{bmatrix} \quad\text{and}\quad s_F = \begin{bmatrix} (u_{ud} - u_0)\,\delta_u \\ (v_{ud} - v_0)\,\delta_v \end{bmatrix}. \tag{35}$$

In practice, κρ² << 1 (e.g. κ = 0.00035 mm⁻², ρ²_max = 25.80 mm², and κρ²_max ≈ 0.01). Therefore, from equation (4), it follows that

$$\begin{bmatrix} u_{true} - u_0 \\ v_{true} - v_0 \end{bmatrix} = \frac{1}{1 - \kappa\rho^2} \begin{bmatrix} u_{ud} - u_0 \\ v_{ud} - v_0 \end{bmatrix} \approx (1 + \kappa\rho^2) \begin{bmatrix} u_{ud} - u_0 \\ v_{ud} - v_0 \end{bmatrix}. \tag{36}$$

4.3.3. Find the relation between the predicted 2D image point (û, v̂) and the true undistorted image point (u_ud, v_ud). [Refer to Fig. 4(a) and (b).] From equations (14), (33) and (34), it can be shown that (see Appendix E)

$$\begin{bmatrix} \hat{u} - u_0 \\ \hat{v} - v_0 \end{bmatrix} \approx (1 + \kappa M) \begin{bmatrix} u_{ud} - u_0 \\ v_{ud} - v_0 \end{bmatrix}. \tag{37}$$

4.3.4. Find the error of the predicted 2D image point (û, v̂) with respect to the true image point (u_true, v_true). [Refer to Fig. 4(a) and (b).] By (36) and (37), the error of the predicted 2D image point (û, v̂) with respect to

Fig. 4. (a) Relation between the true image point [u_true, v_true]^t and its undistorted image point [u_ud, v_ud]^t when the true parameters are known. (b) The 2D predicted image point [û, v̂]^t is computed using the camera parameters estimated by the linear calibration method without considering lens distortion.

the corresponding true image point (u_true, v_true) is:

$$\begin{bmatrix} u_{true} - \hat{u} \\ v_{true} - \hat{v} \end{bmatrix} = \begin{bmatrix} (u_{true} - u_0) - (\hat{u} - u_0) \\ (v_{true} - v_0) - (\hat{v} - v_0) \end{bmatrix} \approx \begin{bmatrix} (u_{ud} - u_0)(1 + \kappa\rho^2) - (u_{ud} - u_0)(1 + \kappa M) \\ (v_{ud} - v_0)(1 + \kappa\rho^2) - (v_{ud} - v_0)(1 + \kappa M) \end{bmatrix} = \begin{bmatrix} \kappa (u_{ud} - u_0)(\rho^2 - M) \\ \kappa (v_{ud} - v_0)(\rho^2 - M) \end{bmatrix}. \tag{38}$$

For convenience, we calculate the mean-square 2D prediction error in millimeters by multiplying the errors in the two directions by their scale factors δ_u and δ_v, respectively. Let U and V denote (u_ud − u_0) and (v_ud − v_0), respectively. As explained before, κρ² << 1. Hence, from (38), the 2D square error becomes

$$\left\| \begin{bmatrix} \delta_u (u_{true} - \hat{u}) \\ \delta_v (v_{true} - \hat{v}) \end{bmatrix} \right\|^2 \approx \left\| \begin{bmatrix} \kappa\, \delta_u U (\rho^2 - M) \\ \kappa\, \delta_v V (\rho^2 - M) \end{bmatrix} \right\|^2 = \kappa^2 \rho_{ud}^2\, (\rho^2 - M)^2, \tag{39}$$

where ρ_ud² ≡ (Uδ_u)² + (Vδ_v)² and ρ² = ((u − u_0)δ_u)² + ((v − v_0)δ_v)² ≈ ρ_ud².

Now, the mean-square 2D error is calculated using

$$e_M^2 = \frac{1}{u_{max}\, v_{max}} \int_0^{u_{max}} \!\! \int_0^{v_{max}} \kappa^2 (\rho_{ud}^2 - M)^2\, \rho_{ud}^2 \; dU\, dV, \tag{40}$$

where e_M² is the mean square of the modeling error, and u_max and v_max are half the maximal size (in pixels) of the image in the horizontal and vertical directions, respectively (see Fig. 3). If we instead integrate the error over the disk whose radius R = √((δ_u u_max)² + (δ_v v_max)²) equals half the diagonal size of the image sensor, then we have

$$e_M^2 = \frac{1}{\pi R^2} \int_0^{2\pi} \!\! \int_0^R \kappa^2 (\rho^2 - M)^2\, \rho^2 \; \rho\, d\rho\, d\theta = \kappa^2 \left( \frac{R^6}{4} - \frac{2 M R^4}{3} + \frac{M^2 R^2}{2} \right) \quad \text{(in millimeters}^2\text{)}. \tag{41}$$

Minimizing (41) with respect to M, we have

$$M = 2R^2/3 \quad\text{and}\quad e_M^2 = \kappa^2 R^6 / 36. \tag{42}$$

Recall our claim that ρ² can be approximated by a constant, which yields equation (33); in practice this is usually not a good approximation. Besides, since ĥ is the optimal solution that minimizes e_M, and (1 + κMz_c/t_3) h_true (an approximation) is used in place of ĥ (the optimal solution) to calculate the 2D error, the obtained result is an upper envelope of the 2D error (in millimeters).
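The optimization in (42) can be verified numerically: average κ²(ρ² − M)²ρ² over a disk of radius R and compare the minimizer and minimum with M = 2R²/3 and κ²R⁶/36. A sketch with illustrative values of κ and R (our own choices):

```python
import numpy as np

kappa, R = 3.5e-4, 5.0   # illustrative values: mm^-2 and mm

def mean_sq_modeling_error(M, n=200000):
    """Average kappa^2 (rho^2 - M)^2 rho^2 over a disk of radius R (midpoint rule)."""
    dr = R / n
    rho = (np.arange(n) + 0.5) * dr
    # area-weighted average: (1 / (pi R^2)) * integral over the disk,
    # i.e. radial density 2 rho / R^2 on [0, R]
    return np.sum(kappa**2 * (rho**2 - M)**2 * rho**2 * 2.0 * rho / R**2) * dr

M_star = 2.0 * R**2 / 3.0
e2_min = kappa**2 * R**6 / 36.0
print(mean_sq_modeling_error(M_star), e2_min)   # should agree closely
```

The numerical average at M = 2R²/3 matches the closed form, and any other choice of M gives a larger value.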

With the average pixel spacing δ̄ (see Appendix B), the 2D error envelope can be expressed in pixels, which yields

$$e_M^2 = \kappa^2\, \frac{R^6}{36\,\bar{\delta}^2} \quad \text{(in pixels}^2\text{)}. \tag{43}$$

Many techniques can be used to determine the value of κ, but we recommend the method we proposed in Shih,(2) since the least effort is needed to adapt the method described in Section 3 to estimate κ.

4.4. The envelope of the total 2D prediction error

Assume that the interaction between the measurement noise and the modeling error is small. Then we obtain the approximate total mean-square 2D prediction error by combining (29) and (43):

$$e_{Envelope}^2 \approx e_M^2 + e_n^2. \tag{44}$$

Notice that the second term, e_n², of equation (44) is an expectation value, which means that violation of the approximate upper envelope e_Envelope is possible.
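Combining (29) and (43) gives a simple decision aid: evaluate both terms for a candidate setup, and if e_M is well below e_n, the linear (distortion-free) model is adequate. The sketch below uses the Section 5 camera parameters; note that the paper defines the average pixel spacing δ̄ in Appendix B, which is not reproduced here, so we assume it is the root-mean-square of δ_u and δ_v:

```python
import math

def envelope(kappa, R, delta_bar, sigma, n_calib):
    """Approximate upper envelope of the total 2D prediction error, eq. (44).

    kappa: radial distortion coefficient (mm^-2); R: half image diagonal (mm);
    delta_bar: average pixel spacing (mm/pixel); sigma: 2D noise std (pixels);
    n_calib: number of calibration points. Returns (e_M, e_n, e_Envelope) in pixels.
    """
    e_m = abs(kappa) * R**3 / (6.0 * delta_bar)   # modeling term, from eq. (43)
    e_n = math.sqrt(11.0 * sigma**2 / n_calib)    # noise term, eq. (29)
    return e_m, e_n, math.sqrt(e_m**2 + e_n**2)

# Section 5 camera: 480 x 512 image, delta_u = 0.01566 mm, delta_v = 0.013 mm
du, dv = 0.01566, 0.013
R = math.hypot(256 * du, 240 * dv)              # half image diagonal in mm
delta_bar = math.sqrt((du**2 + dv**2) / 2.0)    # assumed rms pixel spacing
print(envelope(kappa=3.5e-4, R=R, delta_bar=delta_bar, sigma=0.1, n_calib=60))
```

Under these assumptions the modeling term dominates the noise term for the Section 5 camera, consistent with the visible κ dependence in Fig. 5.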


5. EXPERIMENTAL RESULTS

In this section, we show experimental results obtained from both computer simulations and real experiments. In the simulations, we assume the 3D positions of the calibration points are known exactly, and the only source of measurement noise is the error in estimating the image coordinates of the calibration points, i.e. the 2D observation noise. The reason for doing so in the simulations is that, for our applications, it is easier to control the 3D measurement noise such that it has a much smaller effect than the 2D observation noise. Let σ denote the standard deviation of the 2D observation noise. Unless specified explicitly, the following parameters are used in the simulations (most of these parameters were obtained from a real experiment using the nonlinear calibration method of Weng(4)). The images are 480 × 512 pixels. The synthetic camera is assumed to have an effective focal length of f = 25.2847 mm and pixel sizes of δ_u = 0.01566 mm and δ_v = 0.013 mm in the horizontal and vertical directions, respectively. The radial lens distortion coefficient is 0.00035 mm⁻². The extrinsic camera parameters include the three Euler angles, 45.22, 0.95 and 45.52 degrees, rotating about the z-, y- and z-axes successively, and the translation vector (138.82, 136.81, 1811.11) mm. The calibration and test points are selected from a volume having a depth (in the direction of the optical axis) of 500 mm.

Fig. 5. The simulated 2D prediction error (N_calib = 60; horizontal axis: radial lens distortion coefficient κ, ×10⁻⁴ mm⁻²).

Fig. 6. The predicted error envelope (N_calib = 60; horizontal axis: radial lens distortion coefficient κ, ×10⁻⁴ mm⁻²).

The first experiment observed the effects of both the 2D observation noise and the lens distortion in camera calibration using the distortionless model. Each simulated data point shown in Fig. 5 is the average of ten random trials, with the number of calibration points set to 60. As shown by the V-shaped curves in Fig. 5, the 2D prediction error is proportional to the amount of lens distortion, i.e. e_Exp ∝ κ, as expected, where e_Exp denotes the 2D prediction error evaluated in experiments. It also increases as the 2D observation noise σ increases, i.e. e_Exp ∝ σ. The curves shown in Fig. 5, from bottom to top, are obtained using 2D observation noise with standard deviation σ = 0.0, 0.1, 0.2, ..., 1.0 pixels, respectively. Figure 6 shows the error envelope obtained using equation (44). Although the basic Assumption 3 does not hold (the z-components can vary in the range 1300-1800 mm), our error envelope still predicts the actual 2D prediction error quite precisely.

This envelope was tested further in the next experiment. Here, four of the intrinsic parameters were generated randomly (see Table 1). The calibration and test points were generated from 20 planes, equally spaced Z_inc mm apart. Hence, we were using a working volume having a depth of 20 × Z_inc mm. On each plane, we generated N_p random points for calibration, which yielded a total of N_calib = 20 × N_p calibration points. The reason we set up such a configuration is to simulate the real equipment we have. Both Z_inc and N_p in this experiment were also randomly selected (as shown in Table 1), but the number of test points was fixed at 200.

In total, 10,000 trials were simulated. For each random trial, the computed 2D prediction error was normalized by its theoretical envelope. Figure 7 shows the histogram of the normalized error, which shows that in most trials the 2D prediction error is close to and less than the theoretical envelope, i.e. the normalized error is < 1 with high probability. Still, some points exceed the theoretical envelope. This is partly because e_n² is an expectation value, not an upper bound. Figures 8-14 show the distribution of the random trials, with the normalized 2D prediction error as the vertical axis and the parameter of interest as the horizontal axis, where darkness represents the occurrence frequency of the random trials. Some parameters show no strong relation to the predic-


Table 1. The intervals of the parameters tested in the second experiment

    Parameter                                       Interval of the uniform distribution
    Focal length, f                                 12.5 mm ~ 75 mm
    Principal point, u_0                            -20 pixels ~ +20 pixels
    Principal point, v_0                            -20 pixels ~ +20 pixels
    Lens distortion, κ                              -0.0005 mm⁻² ~ +0.0005 mm⁻²
    2D noise, σ                                     0.0 pixel ~ 1.0 pixel
    Distance between successive planes, Z_inc       1 mm ~ 25 mm
    No. of calibration points on each plane, N_p    1 point ~ 10 points*

    * Integer random number.

tion error: the effective focal length f, the principal point (u_0, v_0), and the depth of the working volume; see Figs 8-11. In Fig. 14, we can see that when the lens distortion is very small, i.e. |κ| ≈ 0, the effects

Fig. 7. Histogram of the normalized 2D prediction error with 10,000 trials.

Fig. 8. Distribution of the normalized 2D prediction error with respect to the effective focal length.

Fig. 9. Distribution of the normalized 2D prediction error with respect to u₀.

Fig. 10. Distribution of the normalized 2D prediction error with respect to v₀.

Fig. 11. Distribution of the normalized 2D prediction error with respect to the depth of the working volume.


Fig. 12. Distribution of the normalized 2D prediction error with respect to the standard deviation of the 2D observation noise.

Fig. 13. Distribution of the normalized 2D prediction error with respect to the number of calibration points.

Fig. 14. Distribution of the normalized 2D prediction error with respect to the lens distortion coefficient.

Fig. 15. A typical image of the calibration plate containing 25 calibration points used in the real experiments.

Fig. 16. Comparison of the calibration error obtained in the real experiments with our predicted upper envelope (envelopes shown for image sizes 480 × 512 and 355 × 300; horizontal axis: N_calib, the number of calibration points).

of the 2D observation noise dominate, and ε_Envelope is more of an error expectation than an upper envelope; therefore, the normalized error varies over a larger range. Figures 12 and 13 show that when the number of calibration points is small or the 2D observation noise is large, the approximate envelope tends to be violated. At the beginning we expected that the smaller the depth of the working volume (so that Assumption 3 holds), the more correct the envelope would be. But owing to the effects of the random 2D observation noise (recall that noncoplanar points are needed for calibration, and the smaller the depth of the working volume, the more singular the calibration problem tends to be, since the calibration points tend to lie on the same plane), we do not see this phenomenon in Fig. 11.
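The Monte Carlo setup described above can be sketched as follows. This is only an illustration: the lateral extent and distance of the working volume (`half_width`, `z0`) are hypothetical values not specified in the paper, and the calibration and envelope computations themselves are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trial_config():
    """Draw one random trial configuration from the Table 1 intervals."""
    return {
        "f":     rng.uniform(12.5, 75.0),    # effective focal length (mm)
        "u0":    rng.uniform(-20.0, 20.0),   # principal point (pixels)
        "v0":    rng.uniform(-20.0, 20.0),
        "kappa": rng.uniform(-5e-4, 5e-4),   # radial lens distortion (mm^-2)
        "sigma": rng.uniform(0.0, 1.0),      # 2D observation noise (pixels)
        "z_inc": rng.uniform(1.0, 25.0),     # spacing of successive planes (mm)
        "n_p":   int(rng.integers(1, 11)),   # calibration points per plane
    }

def generate_calibration_points(cfg, half_width=100.0, z0=500.0):
    """20 equally spaced planes with n_p random points on each; half_width
    and z0 (hypothetical, not from the paper) fix the lateral extent and
    the distance of the working volume."""
    planes = []
    for k in range(20):
        z = z0 + k * cfg["z_inc"]
        xy = rng.uniform(-half_width, half_width, size=(cfg["n_p"], 2))
        planes.append(np.column_stack([xy, np.full(cfg["n_p"], z)]))
    return np.vstack(planes)        # shape (N_calib, 3), N_calib = 20 * n_p

cfg = sample_trial_config()
pts = generate_calibration_points(cfg)
assert pts.shape == (20 * cfg["n_p"], 3)
```

Each of the 10,000 trials would then calibrate with these points, evaluate the 2D prediction error on 200 test points, and divide by the theoretic envelope.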

The third experiment tested the envelope with a real experiment. With a PULNiX TM-745E camera and an ITI Series 151 frame grabber, we took 21 images of a moving calibration plate having 25 calibration points on it, which was mounted on a translation stage. One image was taken each time the translation stage was moved toward the camera by 25 mm. A typical image is shown in Fig. 15. Thus we have 21 × 25 = 525 pairs of 2D-3D coordinates of points. The image coordinates of the center of each circle were estimated, with an error of about 0.1 pixel. For N_calib = 10, 20, 30, ..., 200, we randomly chose N_calib points from the 525 2D-3D pairs to calibrate the camera and used all remaining points to test the calibrated parameters. The above random trials were repeated ten times to obtain ten sets of the 2D prediction error. Figure 16 shows the ten sets of data and two predicted envelopes based on two different effective image sizes (here κ = 0.00035 mm⁻², which corresponds to roughly 2-3 pixels of distortion near the four image corners). Since all the calibration and test points are distributed in the central part of the image, whose size is roughly 355 by 300 pixels (see Fig. 15), the envelope calculated with this image size is much closer to the experimental results. If every pixel in the 480 × 512 image were used, the error envelope would be approximately three times the experimental results.
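The random-subset testing procedure above can be sketched as follows. Here `calibrate` and `predict` are stand-ins for the actual linear calibration and reprojection routines (replaced by a plain affine least-squares fit for illustration), and the 525 2D-3D pairs are synthetic rather than the measured data:

```python
import numpy as np

rng = np.random.default_rng(1)

# 525 hypothetical 2D-3D pairs standing in for the measured data.
pts3d = rng.uniform(0.0, 500.0, size=(525, 3))
pts2d = rng.uniform(0.0, 512.0, size=(525, 2))

def calibrate(p3, p2):
    """Placeholder: fit a predictor from 3D points to 2D points.
    Here an affine least-squares fit; the paper uses the linear
    camera model instead."""
    A = np.column_stack([p3, np.ones(len(p3))])
    coef, *_ = np.linalg.lstsq(A, p2, rcond=None)
    return coef

def predict(coef, p3):
    return np.column_stack([p3, np.ones(len(p3))]) @ coef

for n_calib in range(10, 201, 10):
    errors = []
    for _ in range(10):                  # ten random trials per N_calib
        idx = rng.permutation(525)
        train, test = idx[:n_calib], idx[n_calib:]
        coef = calibrate(pts3d[train], pts2d[train])
        resid = predict(coef, pts3d[test]) - pts2d[test]
        errors.append(np.sqrt((resid ** 2).sum(axis=1)).mean())
    # errors now holds the ten 2D prediction errors for this N_calib
```

The mean 2D prediction error per trial is what gets compared against the predicted envelope in Fig. 16.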

6. CONCLUSIONS

In this paper, we have derived an approximate upper envelope for the 2D prediction error. The effects of both the radial lens distortion and the 2D observation noise are considered. This envelope was tested by computer simulations and real experiments, which show that the upper envelope is quite tight, i.e. it is close to the experimental results and still envelopes almost all of them from above. For 3D applications, e.g. stereo vision, it is of great importance to determine the accuracy of 3D position estimation. Knowing the 2D prediction error, the 3D position error can be derived as in Blostein.(12) Thus, the error envelope can be used as a criterion for deciding whether the linear camera model is sufficient or not for a specific application. In the following, a general guideline is provided for using this error envelope:

(1) Determine the acceptable 2D prediction error and 3D angular error. If the specified error envelope is given as the 3D angular error, then equation (13) is used to translate it to the 2D prediction error. For convenience, let us denote this specified 2D error envelope as ε_spec.

(2) Calculate the approximate error envelope, ε_Envelope, by equation (44) according to the parameters of the equipment to be used.

(3) If ε_spec > ε_Envelope, then it is good enough to use the linear camera model.

(4) If ε_spec < ε_Envelope, then try to reduce σ in equation (44) as much as possible, by making the feature extraction more accurate and by increasing the number of calibration points. Check whether this process brings the theoretic envelope, ε_Envelope, below the specified value, ε_spec.

(5) If ε_Envelope still cannot meet the requirement after the reduction of σ in step (4), then try to reduce the effective size of the image to an acceptable level [see equation (43)].

(6) If the efforts in steps (4) and (5) fail to reduce ε_Envelope such that ε_spec > ε_Envelope, then a nonlinear camera model should be considered in the camera calibration procedure, as in Faig,(1) Shih,(2) Tsai(3) and Weng.(4)

A linear camera model is always the first consideration of engineers. Not only will it simplify the camera calibration procedure, but it will also make the subsequent processing easier (e.g. eliminating the need of geometric correction). This paper provides a tool for making decisions based on the trade-off between accuracy and efficiency.

REFERENCES

1. W. Faig, Calibration of close-range photogrammetry systems: mathematical formulation, Photogrammetric Engineering and Remote Sensing 41(12), 1479-1486 (1975).
2. S. W. Shih, Y. P. Hung and W. S. Lin, Accurate linear technique for camera calibration considering lens distortion by solving an eigenvalue problem, Optical Engineering 32(1), 138-149 (1993).
3. R. Y. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE Journal of Robotics and Automation RA-3(4), 323-344 (1987).
4. J. Weng, P. Cohen and M. Herniou, Camera calibration with distortion models and accuracy evaluation, IEEE Transactions on Pattern Analysis and Machine Intelligence 14(10), 965-980 (1992).
5. O. D. Faugeras and G. Toscani, The calibration problem for stereo, Proceedings Conf. on Computer Vision and Pattern Recognition, 15-20 (1986).
6. Y. P. Hung, Three-dimensional surface reconstruction using a moving camera: a model-based probabilistic approach, Ph.D. dissertation, Division of Engineering, Brown University (1990).
7. T. M. Strat, Recovering the camera parameters from a transformation matrix, DARPA Image Understanding Workshop, 264-271 (1984).
8. I. Sutherland, Three-dimensional data input by tablet, Proceedings of the IEEE 62(4), 453-461 (1974).
9. S. Ganapathy, Decomposition of transformation matrices for robot vision, Proceedings Int. Conf. on Robotics and Automation, 130-139 (1984).
10. Y. P. Hung and S. W. Shih, When should we consider lens distortion in camera calibration, IAPR Workshop on Machine Vision Applications, Tokyo, 367-370 (1990).
11. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley, New York (1973).
12. S. D. Blostein and T. S. Huang, Error analysis in stereo determination of 3-D point positions, IEEE Trans. Pattern Anal. Machine Intell. 9(6), 752-765 (1987).

About the Author--SHENG-WEN SHIH received the M.S. degree in electrical engineering from the National Taiwan University in 1990. Since then, he has been with the Institute of Information Science, Academia Sinica. He is now also a Ph.D. student in the Institute of Electrical Engineering at the National Taiwan University. His current research interests are in active vision, image sequence analysis and robotics.


About the Author--YI-PING HUNG received the B.S.E.E. degree from the National Taiwan University in 1982, the M.Sc. degree in electrical engineering, the M.Sc. degree in applied mathematics, and the Ph.D. degree in electrical engineering, all from Brown University, in 1987, 1988 and 1990, respectively. He is currently an associate research fellow in the Institute of Information Science, Academia Sinica, and an adjunct associate professor at the National Taiwan University. His research interests include computer vision and robotics.

About the Author--WEI-SONG LIN was born in Taiwan, R.O.C. in 1951. He received the B.S. degree in engineering science and the M.S. degree in electrical engineering from National Cheng Kung University, Taiwan, R.O.C. in 1973 and 1975, respectively, and received a distinguished paper award from the Association of Chinese Electrical Engineers at that time. He received the Ph.D. degree from the Institute of Electrical Engineering of National Taiwan University in 1982, where he became an Associate Professor during 1983-1987. From 1982 to 1984 he was also the head of the Electronic Instrument Division of Ching Ling Industrial Research Center, where he worked mainly on the design and implementation of electronic instruments. Since 1987 he has been a Professor at National Taiwan University. He is a member of the International Association of Science and Technology for Development (IASTED) and of the National Committee of the International Union of Radio Science (URSI). He received an excellent research award of the National Science Council in 1991. His current research interests are in the fields of computer control, computer-based sensing and instrumentation, analysis and design of dynamic control systems, system engineering and automation.

APPENDIX A

Rewrite equations (6a) and (6b) as

$$(1 - \kappa\rho^2)(u - u_0)\,[x\;\; y\;\; z\;\; 1]\begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix} = [x\;\; y\;\; z\;\; 1]\begin{bmatrix} r_1 f/\delta_u \\ r_2 f/\delta_u \\ r_3 f/\delta_u \\ t_1 f/\delta_u \end{bmatrix}, \tag{A1}$$

$$(1 - \kappa\rho^2)(v - v_0)\,[x\;\; y\;\; z\;\; 1]\begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix} = [x\;\; y\;\; z\;\; 1]\begin{bmatrix} r_4 f/\delta_v \\ r_5 f/\delta_v \\ r_6 f/\delta_v \\ t_2 f/\delta_v \end{bmatrix}. \tag{A2}$$

From equation (A1) we have

$$(u - u_0)[x\; y\; z\; 1]\begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix} - \kappa\rho^2(u - u_0)[x\; y\; z\; 1]\begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix} = [x\; y\; z\; 1]\begin{bmatrix} r_1 f/\delta_u \\ r_2 f/\delta_u \\ r_3 f/\delta_u \\ t_1 f/\delta_u \end{bmatrix}, \tag{A3}$$

which leads to

$$[x\; y\; z\; 1]\begin{bmatrix} r_1 f/\delta_u + r_7 u_0 \\ r_2 f/\delta_u + r_8 u_0 \\ r_3 f/\delta_u + r_9 u_0 \\ t_1 f/\delta_u + t_3 u_0 \end{bmatrix} + [-ux\;\; -uy\;\; -uz\;\; -u]\begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix} + \kappa\rho^2(u - u_0)[x\; y\; z\; 1]\begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix} = 0. \tag{A4}$$

Similarly, from (A2) we have

$$[x\; y\; z\; 1]\begin{bmatrix} r_4 f/\delta_v + r_7 v_0 \\ r_5 f/\delta_v + r_8 v_0 \\ r_6 f/\delta_v + r_9 v_0 \\ t_2 f/\delta_v + t_3 v_0 \end{bmatrix} + [-vx\;\; -vy\;\; -vz\;\; -v]\begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix} + \kappa\rho^2(v - v_0)[x\; y\; z\; 1]\begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix} = 0. \tag{A5}$$

For convenience, let us define the coefficient matrices A, B and C for camera calibration, whose rows for the jth 2D-3D pair are

$$A_{2j-1:2j} \equiv \begin{bmatrix} x_j & y_j & z_j & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & x_j & y_j & z_j & 1 \end{bmatrix},$$

$$B_{2j-1:2j} \equiv \begin{bmatrix} -u_j x_j & -u_j y_j & -u_j z_j & -u_j \\ -v_j x_j & -v_j y_j & -v_j z_j & -v_j \end{bmatrix},$$

$$C_{2j-1:2j} \equiv \begin{bmatrix} (u_j - u_0)\rho_j^2 x_j & (u_j - u_0)\rho_j^2 y_j & (u_j - u_0)\rho_j^2 z_j & (u_j - u_0)\rho_j^2 \\ (v_j - v_0)\rho_j^2 x_j & (v_j - v_0)\rho_j^2 y_j & (v_j - v_0)\rho_j^2 z_j & (v_j - v_0)\rho_j^2 \end{bmatrix},$$

with $\rho_j^2 \equiv \delta_u^2 (u_j - u_0)^2 + \delta_v^2 (v_j - v_0)^2$, and some composite parameters $p_1$, $p_2$, $p_3$, $p$ and $q$ as

$$p_1 \equiv \begin{bmatrix} r_1 f/\delta_u + r_7 u_0 \\ r_2 f/\delta_u + r_8 u_0 \\ r_3 f/\delta_u + r_9 u_0 \\ t_1 f/\delta_u + t_3 u_0 \end{bmatrix}, \quad p_2 \equiv \begin{bmatrix} r_4 f/\delta_v + r_7 v_0 \\ r_5 f/\delta_v + r_8 v_0 \\ r_6 f/\delta_v + r_9 v_0 \\ t_2 f/\delta_v + t_3 v_0 \end{bmatrix}, \quad p_3 \equiv \begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix},$$

$$p \equiv \frac{1}{t_3}\begin{bmatrix} p_1 \\ p_2 \end{bmatrix}, \quad q \equiv \frac{1}{t_3}\,p_3 = \begin{bmatrix} h_9 \\ h_{10} \\ h_{11} \\ 1 \end{bmatrix}.$$

With the definitions of $p_1$, $p_2$ and $p_3$, applying (A4) and (A5) to all 2D-3D pairs and stacking the resulting equations, we have

$$A\begin{bmatrix} p_1 \\ p_2 \end{bmatrix} + B\,p_3 + \kappa\,C\,p_3 = 0. \tag{A6}$$

Dividing equation (A6) by $t_3$ and substituting the definitions of A, B, C, $p$ and $q$ into it, we have

$$A p + B q + \kappa C q = 0. \tag{A7}$$
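As a numerical sanity check on (A7), the sketch below generates noise-free 2D-3D pairs with a hypothetical camera (all numeric values are illustrative and not taken from the paper), assembles A, B and C, and verifies that the true composite parameters give a zero residual:

```python
import numpy as np

rng = np.random.default_rng(2)

# --- hypothetical camera (illustrative values only) ---
f, du, dv = 25.0, 0.01, 0.01           # focal length, pixel pitch (mm)
u0, v0, kappa = 5.0, -3.0, 3.5e-4      # principal point (px), distortion (mm^-2)
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])              # simple rotation about the optical axis
t = np.array([10.0, -20.0, 800.0])

def project(pt):
    """Distorted pixels satisfying (1 - kappa*rho^2)(u - u0) zc = (f/du) xc."""
    xc, yc, zc = R @ pt + t
    xu, xv = (f / du) * xc / zc, (f / dv) * yc / zc  # undistorted, centered
    uu, vv = xu, xv
    for _ in range(50):                # fixed-point iteration for the distortion
        rho2 = (du * uu) ** 2 + (dv * vv) ** 2
        uu, vv = xu / (1 - kappa * rho2), xv / (1 - kappa * rho2)
    return uu + u0, vv + v0

pts = rng.uniform([-100, -100, 0], [100, 100, 200], size=(30, 3))
uv = np.array([project(p) for p in pts])

# --- assemble A, B, C with interleaved u/v rows, as in (A6)/(A7) ---
A = np.zeros((60, 8)); B = np.zeros((60, 4)); C = np.zeros((60, 4))
for j, ((x, y, z), (u, v)) in enumerate(zip(pts, uv)):
    rho2 = (du * (u - u0)) ** 2 + (dv * (v - v0)) ** 2
    A[2*j, :4] = [x, y, z, 1]; A[2*j+1, 4:] = [x, y, z, 1]
    B[2*j] = [-u*x, -u*y, -u*z, -u]; B[2*j+1] = [-v*x, -v*y, -v*z, -v]
    C[2*j] = rho2 * np.array([(u-u0)*x, (u-u0)*y, (u-u0)*z, u-u0])
    C[2*j+1] = rho2 * np.array([(v-v0)*x, (v-v0)*y, (v-v0)*z, v-v0])

# --- composite parameters p and q from the true camera ---
r = R.ravel(); t1, t2, t3 = t
p1 = np.array([r[0]*f/du + r[6]*u0, r[1]*f/du + r[7]*u0,
               r[2]*f/du + r[8]*u0, t1*f/du + t3*u0])
p2 = np.array([r[3]*f/dv + r[6]*v0, r[4]*f/dv + r[7]*v0,
               r[5]*f/dv + r[8]*v0, t2*f/dv + t3*v0])
p3 = np.array([r[6], r[7], r[8], t3])
p, q = np.concatenate([p1, p2]) / t3, p3 / t3

residual = A @ p + B @ q + kappa * (C @ q)
assert np.abs(residual).max() < 1e-6   # (A7) holds for noise-free data
```

With observation noise added to (u, v), the residual is no longer zero, and the calibration problem becomes the least-squares (or eigenvalue) problem treated in the main text.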

APPENDIX B

As shown in Fig. B1(a) and (b), the average distance from the image point to the estimated lens center, $d_a$, and the average pixel spacing, $\delta_a$, can be computed through the following equations:

$$d_a = \frac{1}{\eta_r}\int_0^{\eta_r} r(\eta)\,d\eta = \frac{f}{\eta_r}\ln|\sec(\eta_r) + \tan(\eta_r)|, \tag{B1}$$

$$\delta_a = \frac{2}{\pi}\int_0^{\pi/2} \delta(\xi)\,d\xi = \frac{2}{\pi}\big[\delta_u \ln|\sec(\xi_u) + \tan(\xi_u)| + \delta_v \ln|\sec(\xi_v) + \tan(\xi_v)|\big], \tag{B2}$$

where $\eta_r \equiv \tan^{-1}(R/f)$, $R = \sqrt{(\delta_u u_{\max})^2 + (\delta_v v_{\max})^2}$, $\xi_u \equiv \tan^{-1}(\delta_v/\delta_u)$ and $\xi_v \equiv \tan^{-1}(\delta_u/\delta_v)$.

APPENDIX C

Fig. B1(a). The average pixel spacing.

Fig. B1(b). The average distance to the estimated lens center.

We first show that $z_{cj}/t_3 = h_9 x_j + h_{10} y_j + h_{11} z_j + 1$. By direct computation, since

$$z_{cj} = [x_j\;\; y_j\;\; z_j\;\; 1]\,p_3 = [x_j\;\; y_j\;\; z_j\;\; 1]\begin{bmatrix} r_7 \\ r_8 \\ r_9 \\ t_3 \end{bmatrix} = [x_j\;\; y_j\;\; z_j\;\; 1]\,q\,t_3$$

and $q = [h_9\;\; h_{10}\;\; h_{11}\;\; 1]^t$, it is easy to see that

$$\frac{z_{cj}}{t_3} = h_9 x_j + h_{10} y_j + h_{11} z_j + 1, \tag{C1}$$

and thus, for an arbitrary 3D point $(x, y, z)$,

$$\frac{z_c}{t_3} = h_9 x + h_{10} y + h_{11} z + 1. \tag{C2}$$

Now we shall prove that

$$\|A h - b\|^2 = \sum_j \Big(\frac{z_{cj}}{t_3}\Big)^2 \big[(\hat u_j - u_j)^2 + (\hat v_j - v_j)^2\big]. \tag{C3}$$

By the definition of $\hat u_j$ and $z_{cj}/t_3$, we have

$$(\hat u_j - u_j)\Big(\frac{z_{cj}}{t_3}\Big) = \Big(\frac{h_1 x_j + h_2 y_j + h_3 z_j + h_4}{h_9 x_j + h_{10} y_j + h_{11} z_j + 1} - u_j\Big)(h_9 x_j + h_{10} y_j + h_{11} z_j + 1)$$
$$= [x_j\;\; y_j\;\; z_j\;\; 1\;\; 0\;\; 0\;\; 0\;\; 0\;\; {-u_j x_j}\;\; {-u_j y_j}\;\; {-u_j z_j}]\,h - u_j = A_{2j-1}\,h - b_{2j-1}, \tag{C4}$$

where $A_{2j-1}$ and $b_{2j-1}$ denote the $(2j-1)$th row of $A$ and $b$, respectively. Similarly, for the v-direction, we have

$$(\hat v_j - v_j)\Big(\frac{z_{cj}}{t_3}\Big) = [0\;\; 0\;\; 0\;\; 0\;\; x_j\;\; y_j\;\; z_j\;\; 1\;\; {-v_j x_j}\;\; {-v_j y_j}\;\; {-v_j z_j}]\,h - v_j = A_{2j}\,h - b_{2j}. \tag{C5}$$

From (C4) and (C5), it is obvious that (C3) holds.
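The identity (C3) can be checked numerically. The sketch below uses a hypothetical 11-parameter linear camera h and synthetic noisy observations (all numeric ranges are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 11-parameter linear camera h = (h1, ..., h11); (h9, h10, h11)
# play the role of (r7, r8, r9)/t3, so zc/t3 = h9*x + h10*y + h11*z + 1.
h = np.empty(11)
h[:8] = rng.uniform(-1.0, 1.0, size=8)
h[8:] = rng.uniform(1e-4, 1e-3, size=3)  # keeps zc/t3 positive in the volume

pts = rng.uniform([-100, -100, 500], [100, 100, 900], size=(20, 3))
homog = np.column_stack([pts, np.ones(len(pts))])

den = pts @ h[8:] + 1                    # zc/t3 for every point
u_hat = (homog @ h[:4]) / den            # noise-free model predictions
v_hat = (homog @ h[4:8]) / den

u = u_hat + rng.normal(0, 0.5, size=len(pts))   # noisy observations
v = v_hat + rng.normal(0, 0.5, size=len(pts))

# Interleaved rows of A and b, as in (C4) and (C5):
A = np.zeros((2 * len(pts), 11))
b = np.zeros(2 * len(pts))
for j, (x, y, z) in enumerate(pts):
    A[2*j, :4] = [x, y, z, 1]
    A[2*j, 8:] = [-u[j]*x, -u[j]*y, -u[j]*z]
    A[2*j+1, 4:8] = [x, y, z, 1]
    A[2*j+1, 8:] = [-v[j]*x, -v[j]*y, -v[j]*z]
    b[2*j], b[2*j+1] = u[j], v[j]

lhs = np.sum((A @ h - b) ** 2)
rhs = np.sum(den ** 2 * ((u_hat - u) ** 2 + (v_hat - v) ** 2))
assert np.isclose(lhs, rhs)              # identity (C3)
```

The identity holds exactly (up to floating-point rounding), since each row residual of Ah - b is just the 2D prediction error scaled by the depth factor z_c/t_3.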

APPENDIX D

Substituting equation (12) into (26), we have

$$E[\|A\tilde h - A h\|^2] = E\{\|A(A^t A)^{-1} A^t e\|^2\} = E\{e^t A (A^t A)^{-1} A^t e\}. \tag{D1}$$

Since equation (D1) is a scalar equation, taking the trace of both sides will not affect the result. Therefore, we have

$$E\{e^t A (A^t A)^{-1} A^t e\} = E\{\mathrm{trace}\,[e^t A (A^t A)^{-1} A^t e]\}, \tag{D2}$$

which yields

$$E\{\mathrm{trace}\,[e^t A (A^t A)^{-1} A^t e]\} = E\{\mathrm{trace}\,[A (A^t A)^{-1} A^t (e e^t)]\}. \tag{D3}$$

From (D3), we have

$$E\{\mathrm{trace}\,[A (A^t A)^{-1} A^t (e e^t)]\} = \mathrm{trace}\,\{A (A^t A)^{-1} A^t \, E[e e^t]\}. \tag{D4}$$

According to the i.i.d. assumption (see Assumption 2), the covariance matrix $E[e e^t]$ is diagonal with identical diagonal elements, i.e.

$$E[e e^t] = \frac{\sigma^2 z_c^2}{t_3^2}\,I. \tag{D5}$$

Substituting (D5) into (D4), we have

$$\mathrm{trace}\,\{A (A^t A)^{-1} A^t\,E[e e^t]\} = \frac{\sigma^2 z_c^2}{t_3^2}\,\mathrm{trace}\,\{A (A^t A)^{-1} A^t\} = \frac{\sigma^2 z_c^2}{t_3^2}\,\mathrm{trace}\,\{(A^t A)^{-1} A^t A\} = \frac{\sigma^2 z_c^2}{t_3^2}\,\mathrm{trace}\,\{I_{11\times 11}\}.$$

Therefore, we have

$$E[\|A\tilde h - A h\|^2] = \frac{11\,\sigma^2 z_c^2}{t_3^2}. \tag{D6}$$
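The result (D6) is essentially the statement that E{eᵗA(AᵗA)⁻¹Aᵗe} = σ² · trace(I₁₁) for i.i.d. noise, which can be verified by simulation. In the sketch below the scale factor z_c/t₃ is set to 1 and a 200 × 11 design matrix is drawn at random (both choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical design matrix with 11 columns, the number of unknowns in h.
A = rng.normal(size=(200, 11))
P = A @ np.linalg.inv(A.T @ A) @ A.T   # projection onto the column space of A

sigma = 0.3
vals = []
for _ in range(5000):
    e = rng.normal(0, sigma, size=200) # i.i.d. noise, covariance sigma^2 I
    vals.append(e @ P @ e)

# Analytically, E{e' A (A'A)^{-1} A' e} = sigma^2 * trace(I_11) = 11 * sigma^2
print(np.mean(vals))
```

The sample mean should cluster around 11σ² = 0.99, up to Monte Carlo error.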

APPENDIX E

From equation (14), we have

$$\hat u = \frac{\hat h_1 x + \hat h_2 y + \hat h_3 z + \hat h_4}{\hat h_9 x + \hat h_{10} y + \hat h_{11} z + 1}. \tag{E1}$$

Recall that from (33)

$$\hat h \approx (1 - \kappa M z_c/t_3)\,h, \quad \text{i.e.}\quad \hat h_i \approx h_i (1 - \kappa M z_c/t_3), \quad i = 1, 2, \ldots, 11;$$

we then have

$$\hat u \approx \frac{h_1 x + h_2 y + h_3 z + h_4}{h_9 x + h_{10} y + h_{11} z + 1 - \kappa M z_c/t_3}. \tag{E2}$$

Since $z_c = (h_9 x + h_{10} y + h_{11} z + 1)\,t_3$ (see Appendix C), equation (E2) can be rewritten as

$$\hat u \approx \frac{h_1 x + h_2 y + h_3 z + h_4}{h_9 x + h_{10} y + h_{11} z + 1} \times \frac{1}{1 - \kappa M}. \tag{E3}$$

In practice $\kappa M \approx \kappa R_e^2 \ll 1$, thus

$$\hat u \approx u_{ud} \times (1 + \kappa M). \tag{E4}$$

Similarly, it can be shown that

$$\hat v \approx v_{ud} \times (1 + \kappa M). \tag{E5}$$

Since $\kappa M \approx \kappa R_e^2 \ll 1$, and the principal point $(u_0, v_0)$ is small compared with most of the image points, $(\kappa M u_0, \kappa M v_0)$ is negligible, which yields [from (E4) and (E5)]

$$\begin{bmatrix} \hat u - u_0 \\ \hat v - v_0 \end{bmatrix} \approx (1 + \kappa M)\begin{bmatrix} u_{ud} - u_0 \\ v_{ud} - v_0 \end{bmatrix}. \tag{E6}$$
