
Camera calibration

Digital Visual Effects, Yung-Yu Chuang

with slides by Richard Szeliski, Steve Seitz, Fred Pighin and Marc Pollefeys

Outline

• Camera projection models

• Camera calibration

• Nonlinear least square methods

• A camera calibration tool

• Applications

Camera projection models

Pinhole camera


Pinhole camera model

[Figure: a world point P = (X, Y, Z) projects through the origin to an image point p = (x, y); the principal point (optical center) is where the optical axis meets the image plane.]

Pinhole camera model

$$x = \frac{fX}{Z}, \qquad y = \frac{fY}{Z}$$

In homogeneous coordinates, with the principal axis along Z:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \sim \begin{bmatrix} fX \\ fY \\ Z \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

Pinhole camera model

The projection matrix factors into a 3×3 part and the canonical projection [I | 0]:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \sim \begin{bmatrix} fX \\ fY \\ Z \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

Principal point offset

With the principal point at (x0, y0) rather than at the image origin:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \sim \begin{bmatrix} fX + Z x_0 \\ fY + Z y_0 \\ Z \end{bmatrix} = \begin{bmatrix} f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

$$\tilde{x} \sim K\,[\,I \mid 0\,]\,X$$

K is the intrinsic matrix; it is only related to camera projection.


Intrinsic matrix

$$K = \begin{bmatrix} f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{bmatrix}$$

Is this form of K good enough?

• non-square pixels (digital video)
• skew
• radial distortion

With aspect ratio a and skew s:

$$K = \begin{bmatrix} fa & s & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{bmatrix}$$

Distortion

[Figure: no distortion, pincushion, and barrel distortion.]

• Radial distortion of the image
  – Caused by imperfect lenses
  – Deviations are most noticeable for rays that pass through the edge of the lens
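As a sketch, a common two-coefficient polynomial model for radial distortion in normalized image coordinates (the coefficients k1, k2, the sample values, and the sign convention are illustrative assumptions; the slides do not give the formula):

```python
import numpy as np

def radial_distort(xn, yn, k1, k2):
    """Apply polynomial radial distortion to normalized image
    coordinates: points are scaled by 1 + k1*r^2 + k2*r^4, so the
    effect grows toward the image edge (large r), as on the slide."""
    r2 = xn ** 2 + yn ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return xn * factor, yn * factor

# A point near the edge moves much more than one near the center.
xs = np.array([0.05, 0.8])
ys = np.array([0.0, 0.6])
print(radial_distort(xs, ys, k1=-0.25, k2=0.05))  # barrel-like for k1 < 0
```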

Camera rotation and translation

World coordinates are first mapped into the camera frame:

$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = R_{3\times 3} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \mathbf{t}$$

so the full projection becomes

$$\tilde{x} \sim K\,[\,R \mid \mathbf{t}\,]\,X$$

where [R | t] is the extrinsic matrix.

Two kinds of parameters

• internal or intrinsic parameters such as focal length, optical center, aspect ratio:

what kind of camera?

• external or extrinsic (pose) parameters including rotation and translation:

where is the camera?


Other projection models

Orthographic projection

• Special case of perspective projection
  – Distance from the COP (center of projection) to the PP (projection plane) is infinite
  – Also called “parallel projection”: (x, y, z) → (x, y)

Other types of projections

• Scaled orthographic
  – Also called “weak perspective”
• Affine projection
  – Also called “paraperspective”

Illusion


Fun with perspective

Perspective cues


Fun with perspective

Ames room

Ames video, BBC story

Forced perspective in LOTR

Camera calibration

Camera calibration

• Estimate both intrinsic and extrinsic parameters. Two main categories:

1. Photometric calibration: uses reference objects with known geometry

2. Self-calibration: only assumes a static scene, e.g. structure from motion


Camera calibration approaches

1. linear regression (least squares)


2. nonlinear optimization

Chromaglyphs (HP research)

Camera calibration

Linear regression

$$\tilde{x} \sim K\,[\,R \mid \mathbf{t}\,]\,X = MX$$


Linear regression

• Directly estimate the 11 unknowns in the M matrix using known 3D points (Xi, Yi, Zi) and measured feature positions (ui, vi)

Linear regression

Stack the equations from all correspondences into one linear system, and solve for the projection matrix M using least-square techniques.
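Below is a minimal sketch of this step, assuming the standard direct linear transform (DLT) formulation; the function and variable names are mine, not from the slides. Each correspondence contributes two rows to a homogeneous system Am = 0, and m is recovered up to scale via the SVD:

```python
import numpy as np

def estimate_projection_matrix(X, uv):
    """DLT sketch: X is (n, 3) world points, uv is (n, 2) measured
    pixels, n >= 6. Returns the 3x4 matrix M up to scale."""
    n = X.shape[0]
    Xh = np.hstack([X, np.ones((n, 1))])       # homogeneous world points
    A = np.zeros((2 * n, 12))
    for i in range(n):
        u, v = uv[i]
        A[2 * i, 4:8] = -Xh[i]                 # -m2.X + v (m3.X) = 0
        A[2 * i, 8:12] = v * Xh[i]
        A[2 * i + 1, 0:4] = Xh[i]              #  m1.X - u (m3.X) = 0
        A[2 * i + 1, 8:12] = -u * Xh[i]
    # Least-squares solution of A m = 0 with ||m|| = 1: the right
    # singular vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```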


Normal equation

Given an overdetermined system

$$A\mathbf{x} = \mathbf{b},$$

the normal equation is the one whose solution minimizes the sum of the squared differences between the left and right sides:

$$A^T A\,\mathbf{x} = A^T \mathbf{b}$$
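As a small illustration (the synthetic data here is my own, not from the slides), solving the normal equation directly agrees with numpy's built-in least-squares solver, which minimizes the same objective via a more numerically stable factorization:

```python
import numpy as np

# Overdetermined system Ax = b: 100 equations, 3 unknowns.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=100)

# Normal equation: A^T A x = A^T b.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Same minimizer via numpy's SVD-based least-squares routine.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_ne, x_ls))  # True
```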

Linear regression

• Advantages:
  – All specifics of the camera are summarized in one matrix
  – Can predict where any world point will map to in the image
• Disadvantages:
  – Doesn’t tell us about particular parameters
  – Mixes up internal and external parameters
    • pose specific: move the camera and everything breaks
  – More unknowns than true degrees of freedom

Nonlinear optimization

• A probabilistic view of least square
• Feature measurement equations
• Probability of M given {(ui, vi)}

Optimal estimation

• Likelihood of M given {(ui, vi)}
• It is a least square problem (but not necessarily linear least square)
• How do we minimize L?


Optimal estimation

• Non-linear regression (least squares), because the relations between ûi and ui are non-linear functions of M:

$$u_i \approx \hat{u}_i = K\,[\,R \mid \mathbf{t}\,]\,X_i$$

Here K, R, t are the unknown parameters (whose entries can involve terms like f cos θ), while Xi and ui are known constants.

• We can use the Levenberg-Marquardt method to minimize it

Nonlinear least square methods

Least square fitting

$$F(\mathbf{x}) = \frac{1}{2}\sum_{i=1}^{m} f_i(\mathbf{x})^2$$

where m is the number of data points and n the number of parameters in x.


Linear least square fitting

[Figure: data points (t_i, y_i) and a fitted line y(t).]

Model with parameters x = (x0, x1):

$$y(t) = M(t; \mathbf{x}) = x_0 + x_1 t$$

The residual is the difference between a measurement and the model prediction:

$$f_i(\mathbf{x}) = y_i - M(t_i; \mathbf{x})$$

The cubic model

$$M(t; \mathbf{x}) = x_0 + x_1 t + x_2 t^2 + x_3 t^3$$

is linear (in the parameters x), too.


Nonlinear least square fitting

Model with parameters x = [x1, x2, x3, x4]ᵀ:

$$M(t; \mathbf{x}) = x_3 e^{x_1 t} + x_4 e^{x_2 t}$$

Residuals:

$$f_i(\mathbf{x}) = y_i - M(t_i; \mathbf{x}) = y_i - x_3 e^{x_1 t_i} - x_4 e^{x_2 t_i}$$
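A hedged sketch of fitting this model with scipy's nonlinear least-squares solver; the data, true parameters, and starting point are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from the model M(t; x) = x3*exp(x1*t) + x4*exp(x2*t).
t = np.linspace(0, 2, 50)
x_true = np.array([-1.0, -3.0, 2.0, 1.0])
y = x_true[2] * np.exp(x_true[0] * t) + x_true[3] * np.exp(x_true[1] * t)

def residuals(x):
    # f_i(x) = y_i - M(t_i; x)
    return y - (x[2] * np.exp(x[0] * t) + x[3] * np.exp(x[1] * t))

fit = least_squares(residuals, x0=np.array([-0.5, -2.0, 1.0, 1.0]))
print(fit.x)  # close to x_true (up to swapping the two exponential terms)
```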

Function minimization

Least square is related to function minimization. It is very hard to solve in general. Here, we only consider the simpler problem of finding a local minimum.

Quadratic functions

Approximate the function with a quadratic function within a small neighborhood.


Quadratic functions

[Figure: quadratic surfaces for the four cases.]

• A is positive definite: all eigenvalues are positive; for all x, xᵀAx > 0.
• A is negative definite.
• A is indefinite.
• A is singular.

Function minimization

Why? By definition, if x* is a local minimizer, then F(x* + h) ≥ F(x*) for h small enough:

$$F(\mathbf{x}^* + \mathbf{h}) = F(\mathbf{x}^*) + \mathbf{h}^T F'(\mathbf{x}^*) + O(\|\mathbf{h}\|^2)$$

Function minimization


Descent methods

Descent direction

Steepest descent method

The direction h_sd = −F′(x) gives the largest decrease of F(x) per unit step along h. It is a descent direction because

$$\mathbf{h}_{sd}^T F'(\mathbf{x}) = -\|F'(\mathbf{x})\|^2 < 0$$

Line search

Find α so that φ(α) = F(x0 + αh) is a minimum:

$$\varphi'(\alpha) = \mathbf{h}^T F'(\mathbf{x}_0 + \alpha \mathbf{h}) = 0$$


Line search

With the quadratic approximation F′(x0 + αh) ≈ F′(x0) + αHh:

$$\mathbf{h}^T F'(\mathbf{x}_0 + \alpha \mathbf{h}) = \mathbf{h}^T \left( F'(\mathbf{x}_0) + \alpha H \mathbf{h} \right) = 0$$

$$\alpha = -\frac{\mathbf{h}^T F'(\mathbf{x}_0)}{\mathbf{h}^T H \mathbf{h}} = \frac{\mathbf{h}^T \mathbf{h}}{\mathbf{h}^T H \mathbf{h}} \qquad (\text{for } \mathbf{h} = -F'(\mathbf{x}_0))$$
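A small numpy sketch of steepest descent with this exact line search on a quadratic F(x) = ½xᵀHx − bᵀx; H and b are arbitrary example values:

```python
import numpy as np

# Quadratic F(x) = 0.5 x^T H x - b^T x with H positive definite.
H = np.array([[4.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

x = np.zeros(2)
for _ in range(20):
    g = H @ x - b                          # F'(x)
    h = -g                                 # steepest descent direction
    alpha = (h @ h) / (h @ (H @ h))        # exact line search step
    x = x + alpha * h
print(x, np.linalg.solve(H, b))            # converges to the minimizer H^{-1} b
```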

Steepest descent method

[Figure: isocontours and gradient directions.]

It has good performance in the initial stage of the iterative process, but it converges very slowly, at a linear rate.

Newton’s method


Newton’s method

• Another view: approximate F near x by a quadratic model

$$E(\mathbf{h}) = F(\mathbf{x} + \mathbf{h}) \approx F(\mathbf{x}) + \mathbf{h}^T \mathbf{g} + \frac{1}{2} \mathbf{h}^T H \mathbf{h}$$

• The minimizer satisfies E′(h*) = 0:

$$E'(\mathbf{h}) = \mathbf{g} + H\mathbf{h} = 0 \quad\Rightarrow\quad \mathbf{h} = -H^{-1}\mathbf{g}$$

Newton’s method

$$\mathbf{h} = -H^{-1}\mathbf{g}$$

• It requires solving a linear system, and H is not always positive definite.
• It has good performance in the final stage of the iterative process, where x is close to x*.

Gauss-Newton method

• Use the approximate Hessian

$$H \approx J^T J$$

• No need for second derivatives
• H is positive semi-definite
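A minimal Gauss-Newton sketch under these assumptions (names and the fixed iteration count are mine; a real implementation would add a convergence test and safeguards). The gradient of ½‖f‖² is g = Jᵀf, so each step solves (JᵀJ)h = −Jᵀf:

```python
import numpy as np

def gauss_newton(residuals, jac, x, iters=20):
    """Illustrative Gauss-Newton: solve (J^T J) h = -J^T f, update x."""
    for _ in range(iters):
        f, J = residuals(x), jac(x)
        h = np.linalg.solve(J.T @ J, -J.T @ f)   # approximate Hessian J^T J
        x = x + h
    return x

# Example: fitting y = a*t + b has linear residuals, so Gauss-Newton
# reaches the least-squares solution in a single step.
t = np.linspace(0, 1, 10); y = 3 * t + 1
res = lambda x: x[0] * t + x[1] - y
J = lambda x: np.stack([t, np.ones_like(t)], axis=1)
print(gauss_newton(res, J, np.zeros(2)))          # ~ [3, 1]
```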

Hybrid method

This needs to calculate the second-order derivative, which might not be available.


Levenberg-Marquardt method

• LM can be thought of as a combination of steepest descent and the Newton method. When the current solution is far from the correct one, the algorithm behaves like a steepest descent method: slow, but guaranteed to converge. When the current solution is close to the correct one, it becomes a Newton’s method.

Nonlinear least square

Given a set of measurements x, try to find the best parameter vector p so that the squared distance εᵀε is minimal. Here, ε = x − x̂, with x̂ = f(p).

Levenberg-Marquardt method

$$(J^T J + \mu I)\,\mathbf{h} = -\mathbf{g}$$

• μ = 0 → Newton’s method
• μ → ∞ → steepest descent method
• Strategy for choosing μ (implemented in the sketch below):
  – Start with some small μ
  – If F is not reduced, keep trying larger μ until it is
  – If F is reduced, accept it and reduce μ for the next iteration
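A minimal LM sketch following this strategy; the initial μ, the growth/shrink factors, and the iteration count are arbitrary choices of mine, not prescribed by the slides:

```python
import numpy as np

def levenberg_marquardt(f, jac, x, mu=1e-3, iters=50):
    """Sketch of LM: grow mu while the cost does not decrease
    (steepest-descent-like), shrink it when it does (Gauss-Newton-like)."""
    cost = 0.5 * np.sum(f(x) ** 2)
    for _ in range(iters):
        J, r = jac(x), f(x)
        g = J.T @ r                                       # gradient of 0.5*||f||^2
        h = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), -g)
        new_cost = 0.5 * np.sum(f(x + h) ** 2)
        if new_cost < cost:
            x, cost, mu = x + h, new_cost, mu * 0.5       # accept step, reduce mu
        else:
            mu *= 10.0                                    # reject step, increase mu
    return x
```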


Recap (the Rosenbrock function)

$$f(x, y) = (1 - x)^2 + 100\,(y - x^2)^2$$

Global minimum at (1, 1).
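For illustration, Rosenbrock can be posed as a nonlinear least-squares problem with residuals r1 = 1 − x and r2 = 10(y − x²) and handed to scipy's MINPACK-backed LM solver; the starting point is a common benchmark choice, not from the slides:

```python
import numpy as np
from scipy.optimize import least_squares

# Rosenbrock as least squares: f = r1^2 + r2^2.
def residuals(p):
    x, y = p
    return np.array([1 - x, 10 * (y - x ** 2)])

fit = least_squares(residuals, x0=np.array([-1.2, 1.0]), method='lm')
print(fit.x)  # converges to the global minimum (1, 1)
```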

Steepest descent

$$\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha \mathbf{g}, \qquad \mathbf{g} = F'(\mathbf{x}_k), \qquad \alpha = \frac{\mathbf{h}^T \mathbf{h}}{\mathbf{h}^T H \mathbf{h}}$$

[Figure: steepest descent iterates x_k zig-zagging toward x_min in the (x1, x2) plane.]


In the plane of the steepest descent direction

$$\alpha = \frac{\mathbf{h}^T \mathbf{h}}{\mathbf{h}^T H \mathbf{h}}$$

[Figure: the quadratic model along the descent direction between x_k and x_{k+1}.]

Steepest descent (1000 iterations)


Gauss-Newton method

$$\mathbf{x}_{k+1} = \mathbf{x}_k - H^{-1} \mathbf{g}$$

• With the approximate Hessian

$$H \approx J^T J$$

• No need for second derivatives
• H is positive semi-definite

[Figure: Gauss-Newton iterates stepping toward x_min.]


Newton’s method (48 evaluations)


Levenberg-Marquardt

• Blends steepest descent and Gauss-Newton
• At each step, solve for the descent direction h:

$$(J^T J + \mu I)\,\mathbf{h} = -\mathbf{g}$$

• If μ is large, h ≈ −(1/μ)g: steepest descent
• If μ is small, h ≈ −(JᵀJ)⁻¹g: Gauss-Newton

Levenberg-Marquardt (90 evaluations)


A popular calibration tool



Multi-plane calibration

Images courtesy Jean-Yves Bouguet, Intel Corp.

Advantage

• Only requires a plane

• Don’t have to know positions/orientations

• Good code available online!

Intel’s OpenCV library: http://www.intel.com/research/mrl/research/opencv/

Matlab version by Jean-Yves Bouguet:

http://www.vision.caltech.edu/bouguetj/calib_doc/index.html

Zhengyou Zhang’s web site: http://research.microsoft.com/~zhang/Calib/

Step 1: data acquisition

Step 2: specify corner order

Step 3: corner extraction

Step 4: camera calibration (minimize projection error)

Step 5: refinement; optimized parameters
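A condensed sketch of this pipeline with the modern OpenCV Python API; the board size, corner-refinement parameters, and image folder are assumptions, and the slides themselves point to the earlier C library and Bouguet's Matlab toolbox:

```python
import glob
import cv2
import numpy as np

# Checkerboard with 9x6 inner corners (board size is an assumption);
# world points lie on the Z = 0 plane of the board.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob('calib_images/*.jpg'):   # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Sub-pixel corner refinement (step 3 above).
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# Steps 4-5: returns the RMS reprojection error, the intrinsic matrix K,
# distortion coefficients, and per-view extrinsics (R_i, t_i).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(rms)
print(K)
```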

Applications

How is calibration used?

• Good for recovering intrinsic parameters; it is thus useful for many vision applications

• Since it requires a calibration pattern, it is often necessary to remove or replace the pattern in the footage, or to utilize it in some way…


Example of calibration

• Videos from GaTech
• DasTatoo, MakeOf
• P!NG, MakeOf
• Work, MakeOf
• LifeInPaints, MakeOf
• PhotoBook, MakeOf
