### Camera calibration

Digital Visual Effects
*Yung-Yu Chuang*

*with slides by Richard Szeliski, Steve Seitz, Fred Pighin and Marc Pollefeys*

**Outline**

• Camera projection models

• Camera calibration

• Nonlinear least square methods

• A camera calibration tool

• Applications

**Camera projection models**

**Pinhole camera**

**Pinhole camera model**

A 3D point **P** = *(X, Y, Z)* projects through the origin (the center of projection) to an image point **p** = *(x, y)*. The principal point is the optical center, where the optical axis meets the image plane.

**Pinhole camera model**

By similar triangles, $x = \dfrac{fX}{Z}$ and $y = \dfrac{fY}{Z}$. In homogeneous coordinates:

$$
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \sim
\begin{pmatrix} fX \\ fY \\ Z \end{pmatrix} =
\begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
$$

**Pinhole camera model**

The projection matrix factors into an intrinsic part and a canonical projection:

$$
\begin{pmatrix} fX \\ fY \\ Z \end{pmatrix} =
\begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
$$

**Principal point offset**

When the principal point sits at $(x_0, y_0)$ rather than at the image origin:

$$
\mathbf{x} \sim
\begin{pmatrix} fX + Z x_0 \\ fY + Z y_0 \\ Z \end{pmatrix} =
\begin{pmatrix} f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
= \mathbf{K} \begin{pmatrix} \mathbf{I} & \mathbf{0} \end{pmatrix} \mathbf{X}
$$

**K** is the **intrinsic matrix**; it is related only to the camera projection.

**Intrinsic matrix**

$$
\mathbf{K} = \begin{pmatrix} f & 0 & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{pmatrix}
$$

**Is this form of K good enough?**

• non-square pixels (digital video)

• skew

• radial distortion

Accounting for non-square pixels (aspect ratio $a$) and skew $s$:

$$
\mathbf{K} = \begin{pmatrix} fa & s & x_0 \\ 0 & f & y_0 \\ 0 & 0 & 1 \end{pmatrix}
$$

**Distortion**

No distortion / pin cushion / barrel

• Radial distortion of the image

– Caused by imperfect lenses

– Deviations are most noticeable for rays that pass through the edge of the lens
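Radial distortion is commonly modeled, to first approximation, by a polynomial in the squared radius. A minimal sketch (the pure-polynomial form and the coefficients `k1`, `k2` are assumptions for illustration; practical calibration models often add tangential terms):

```python
import numpy as np

def radial_distort(x, y, k1, k2=0.0):
    """Apply a polynomial radial distortion model. (x, y) are normalized
    image coordinates measured from the principal point; k1, k2 are
    assumed distortion coefficients."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return scale * x, scale * y

# k1 < 0 pulls edge points inward (barrel); k1 > 0 pushes them out (pin cushion).
print(radial_distort(0.5, 0.0, k1=-0.2))   # -> (0.475, 0.0)
```

The distortion grows with $r^2$, which matches the observation above that deviations are largest near the edge of the lens.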

**Camera rotation and translation**

A world point $\mathbf{X}$ is first transformed into camera coordinates by a rotation **R** and a translation **t**, then projected:

$$
\mathbf{x} \sim \mathbf{K} \begin{pmatrix} \mathbf{R} & \mathbf{t} \end{pmatrix} \mathbf{X}
$$

$\begin{pmatrix} \mathbf{R} & \mathbf{t} \end{pmatrix}$ is the **extrinsic matrix**.
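The whole pipeline $\mathbf{x} \sim \mathbf{K}[\mathbf{R}\ \mathbf{t}]\mathbf{X}$ fits in a few lines of NumPy. All numbers here (focal length, principal point, pose) are hypothetical example values:

```python
import numpy as np

# Hypothetical example values for the intrinsics and the pose.
f, x0, y0 = 800.0, 320.0, 240.0
K = np.array([[f, 0.0, x0],
              [0.0, f, y0],
              [0.0, 0.0, 1.0]])               # intrinsic matrix
R = np.eye(3)                                 # extrinsic rotation
t = np.array([[0.0], [0.0], [5.0]])           # extrinsic translation

def project(X_world):
    """Project a 3D world point to pixels via x ~ K [R t] X."""
    X = np.append(X_world, 1.0)               # homogeneous 3D point
    x = K @ np.hstack([R, t]) @ X             # (u*w, v*w, w)
    return x[:2] / x[2]                       # perspective divide

# A point on the optical axis projects to the principal point.
print(project(np.array([0.0, 0.0, 5.0])))     # -> [320. 240.]
```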

**Two kinds of parameters**

*• internal or intrinsic parameters, such as focal length, optical center, aspect ratio: what kind of camera?*

*• external or extrinsic (pose) parameters, including rotation and translation: where is the camera?*

**Other projection models**

**Orthographic projection**

• Special case of perspective projection

– Distance from the COP to the PP is infinite

– Also called “parallel projection”: (x, y, z) → (x, y)

**Other types of projections**

• Scaled orthographic

– Also called “weak perspective”

• Affine projection

– Also called “paraperspective”

**Illusion**

**Fun with perspective**

**Perspective cues**

Ames room

Ames video, BBC story

**Forced perspective in LOTR**

**Camera calibration**

• Estimate both intrinsic and extrinsic parameters.

Two main categories:

1. Photometric calibration: uses reference objects with known geometry

2. Self calibration: only assumes a static scene, e.g. structure from motion

**Camera calibration approaches**

1. linear regression (least squares)

2. nonlinear optimization

**Chromaglyphs (HP research)**

**Linear regression**

$$
\mathbf{x} \sim \mathbf{K} \begin{pmatrix} \mathbf{R} & \mathbf{t} \end{pmatrix} \mathbf{X} = \mathbf{M}\mathbf{X}
$$

**• Directly estimate the 11 unknowns in the M matrix using known 3D points (X_i, Y_i, Z_i) and measured feature positions (u_i, v_i)**

Solve for the projection matrix M using least-square techniques.
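One common way to carry out this linear estimate is the direct linear transform: stack two equations per correspondence and take the SVD null vector. This sketch solves for all 12 entries of M up to scale, a slight variant of the slides' 11-unknown formulation (which instead fixes one entry); the camera and points below are made up for a sanity check:

```python
import numpy as np

def estimate_projection_matrix(X, uv):
    """Estimate the 3x4 projection matrix M from n >= 6 correspondences
    (X: n x 3 world points, uv: n x 2 image points) by stacking two
    linear equations per point and taking the SVD null vector."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, uv):
        P = [Xw, Yw, Zw, 1.0]
        A.append(P + [0.0] * 4 + [-u * p for p in P])
        A.append([0.0] * 4 + P + [-v * p for p in P])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)            # defined only up to scale

# Sanity check with a synthetic camera and made-up points.
np.random.seed(0)
M_true = np.array([[800.0,   0.0, 320.0, 0.0],
                   [  0.0, 800.0, 240.0, 0.0],
                   [  0.0,   0.0,   1.0, 5.0]])
X = np.random.rand(8, 3) + np.array([0.0, 0.0, 3.0])  # points in front of camera
Xh = np.hstack([X, np.ones((8, 1))])
x = Xh @ M_true.T
uv = x[:, :2] / x[:, 2:]

M = estimate_projection_matrix(X, uv)
xh = Xh @ M.T
uv_hat = xh[:, :2] / xh[:, 2:]
print(np.abs(uv_hat - uv).max())           # reprojection error ~ 0
```

Because M is recovered only up to scale, the check compares reprojected pixel positions, which are scale-invariant.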

**Normal equation**

Given an overdetermined system

$$\mathbf{A}\mathbf{x} = \mathbf{b}$$

the normal equation is the one whose solution minimizes the sum of squared differences between the left and right sides:

$$\mathbf{A}^T\mathbf{A}\,\mathbf{x} = \mathbf{A}^T\mathbf{b}$$
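A quick numerical check of the normal equation, with made-up noise-free data so the true solution is recovered exactly:

```python
import numpy as np

# Made-up overdetermined system: 10 equations, 3 unknowns.
np.random.seed(0)
A = np.random.rand(10, 3)
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                           # consistent right-hand side

x = np.linalg.solve(A.T @ A, A.T @ b)    # normal equation: A^T A x = A^T b
print(np.allclose(x, x_true))            # -> True (data is noise-free)
```

In practice `np.linalg.lstsq(A, b)` is numerically preferable, since explicitly forming $\mathbf{A}^T\mathbf{A}$ squares the condition number.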

**Linear regression**

• Advantages:

– All specifics of the camera are summarized in one matrix

– Can predict where any world point will map to in the image

• Disadvantages:

– Doesn’t tell us about particular parameters

– Mixes up internal and external parameters

• pose specific: move the camera and everything breaks

– More unknowns than true degrees of freedom

**Nonlinear optimization**

• A probabilistic view of least square

• Feature measurement equations

**• Probability of M given {(u_i, v_i)}:** $P(\mathbf{M} \mid \{(u_i, v_i)\}) \propto P(\{(u_i, v_i)\} \mid \mathbf{M})\,P(\mathbf{M})$

**Optimal estimation**

**• Likelihood of M given {(u_i, v_i)}:** maximizing the likelihood amounts to minimizing

$$L = \sum_i \left( (u_i - \hat{u}_i)^2 + (v_i - \hat{v}_i)^2 \right)$$

• It is a least square problem (but not necessarily linear least square)

*• How do we minimize L?*

**Optimal estimation**

• Non-linear regression (least squares), because the relations between $\hat{u}_i$ and $u_i$ are non-linear functions of **M**:

$$\hat{\mathbf{u}}_i \sim \mathbf{K}\begin{pmatrix}\mathbf{R} & \mathbf{t}\end{pmatrix}\mathbf{X}_i$$

Here **K**, **R**, **t** are the unknown parameters (we could have terms like $f\cos\theta$ in this), while $\mathbf{X}_i$ and the measured $\mathbf{u}_i$ are known constants.

• We can use the Levenberg-Marquardt method to minimize it

**Nonlinear least square methods**

**Least square fitting**

• number of data points

• number of parameters

**Linear** **least square fitting**

Given data points $(t_i, y_i)$, fit the model parameters $\mathbf{x}$ of $y = M(t; \mathbf{x})$. For a straight line,

$$M(t; \mathbf{x}) = x_0 + x_1 t$$

The prediction residual for each data point is

$$f_i(\mathbf{x}) = y_i - M(t_i; \mathbf{x})$$

A polynomial model such as

$$M(t; \mathbf{x}) = x_0 + x_1 t + x_2 t^2 + x_3 t^3$$

is linear (in $\mathbf{x}$), too.
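For the straight-line model, the least-square fit reduces to a small linear solve; a sketch with made-up data points:

```python
import numpy as np

# Fit M(t; x) = x0 + x1*t to made-up data: the residuals
# f_i(x) = y_i - M(t_i; x) are linear in the parameters x.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

A = np.column_stack([np.ones_like(t), t])   # rows are [1, t_i]
x, *_ = np.linalg.lstsq(A, y, rcond=None)   # minimizes sum_i f_i(x)^2
print(x)                                    # intercept ~1.09, slope ~1.94
```

The same code handles the cubic model: just add `t**2` and `t**3` columns to the design matrix.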
**Nonlinear least square fitting**

model:

$$M(t; \mathbf{x}) = x_3 e^{x_1 t} + x_4 e^{x_2 t}$$

parameters:

$$\mathbf{x} = [x_1, x_2, x_3, x_4]^T$$

residuals:

$$f_i(\mathbf{x}) = y_i - M(t_i; \mathbf{x}) = y_i - x_3 e^{x_1 t_i} - x_4 e^{x_2 t_i}$$
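A sketch of these residuals in code, on synthetic noiseless data generated from assumed "true" parameters (so the residuals vanish exactly there, while any other parameter vector gives nonzero residuals):

```python
import numpy as np

# Residuals of the model M(t; x) = x3*e^(x1*t) + x4*e^(x2*t); nonlinear
# in x1 and x2. Parameters and data below are made up for illustration.
def residuals(x, t, y):
    x1, x2, x3, x4 = x
    return y - (x3 * np.exp(x1 * t) + x4 * np.exp(x2 * t))

t = np.linspace(0.0, 1.0, 20)
x_true = np.array([-1.0, -3.0, 2.0, 1.0])
y = 2.0 * np.exp(-1.0 * t) + 1.0 * np.exp(-3.0 * t)   # noiseless data

print(np.abs(residuals(x_true, t, y)).max())          # 0 at the true parameters
```

Because $x_1$ and $x_2$ sit inside the exponentials, no design matrix exists and an iterative method (such as Levenberg-Marquardt, below) is needed.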

**Function minimization**

Least square is related to function minimization. It is very hard to solve in general. Here, we only consider the simpler problem of finding a local minimum.

**Quadratic functions**

Approximate the function with a quadratic function within a small neighborhood.

**Quadratic functions**

• **A** is positive definite: all eigenvalues are positive; equivalently, for all **x**, $\mathbf{x}^T\mathbf{A}\mathbf{x} > 0$.

• **A** is negative definite.

• **A** is indefinite.

• **A** is singular.

**Function minimization**

Why? By definition, if $\mathbf{x}^*$ is a local minimizer, then for every $\mathbf{h}$ small enough

$$F(\mathbf{x}^* + \mathbf{h}) \geq F(\mathbf{x}^*)$$

Expanding,

$$F(\mathbf{x}^* + \mathbf{h}) = F(\mathbf{x}^*) + \mathbf{h}^T F'(\mathbf{x}^*) + O(\|\mathbf{h}\|^2)$$

which can only hold for all small $\mathbf{h}$ if $F'(\mathbf{x}^*) = \mathbf{0}$.

**Descent methods**

**Descent direction**

**Steepest descent method**

$\mathbf{h}^T F'(\mathbf{x})$ measures the decrease of $F(\mathbf{x})$ per unit along the direction $\mathbf{h}$. The steepest descent direction is $\mathbf{h}_{sd} = -F'(\mathbf{x})$; it is a descent direction because $\mathbf{h}_{sd}^T F'(\mathbf{x}) = -\|F'(\mathbf{x})\|^2 < 0$.

**Line search**

Find $\alpha$ so that $\varphi(\alpha) = F(\mathbf{x}_0 + \alpha\mathbf{h})$ is minimum:

$$\varphi'(\alpha) = \mathbf{h}^T F'(\mathbf{x}_0 + \alpha\mathbf{h}) = 0$$

**Line search**

Using the second-order expansion $F(\mathbf{x}_0 + \alpha\mathbf{h}) \approx F(\mathbf{x}_0) + \alpha\,\mathbf{h}^T F'(\mathbf{x}_0) + \tfrac{1}{2}\alpha^2\,\mathbf{h}^T\mathbf{H}\mathbf{h}$ and setting the derivative to zero:

$$0 = \varphi'(\alpha) = \mathbf{h}^T F'(\mathbf{x}_0) + \alpha\,\mathbf{h}^T\mathbf{H}\mathbf{h}
\quad\Rightarrow\quad
\alpha = -\frac{\mathbf{h}^T F'(\mathbf{x}_0)}{\mathbf{h}^T\mathbf{H}\mathbf{h}}$$

**Steepest descent method**

(figure: isocontours and the gradient direction)

It has good performance in the initial stage of the iterative process, but converges very slowly, with a linear rate.

**Newton’s method**

• Another view: approximate $F$ near $\mathbf{x}$ by the quadratic model

$$E(\mathbf{h}) = F(\mathbf{x}) + \mathbf{h}^T\mathbf{g} + \tfrac{1}{2}\mathbf{h}^T\mathbf{H}\mathbf{h} \approx F(\mathbf{x} + \mathbf{h})$$

• The minimizer satisfies

$$E'(\mathbf{h}^*) = \mathbf{0} \quad\Rightarrow\quad \mathbf{g} + \mathbf{H}\mathbf{h}^* = \mathbf{0} \quad\Rightarrow\quad \mathbf{h} = -\mathbf{H}^{-1}\mathbf{g}$$
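A sketch of a single Newton step on a quadratic (H and b are made-up example values); since the quadratic model is then exact, one step lands on the minimizer:

```python
import numpy as np

# One Newton step h = -H^{-1} g on the quadratic F(x) = 1/2 x^T H x - b^T x,
# whose gradient is g = H x - b.
H = np.array([[10.0, 2.0],
              [2.0,  1.0]])               # positive definite Hessian
b = np.array([1.0, -1.0])
x = np.array([5.0, 5.0])                  # arbitrary start point
g = H @ x - b                             # gradient at x
h = np.linalg.solve(H, -g)                # Newton step (solve, don't invert)
x = x + h
print(np.allclose(H @ x, b))              # gradient is now zero: one step suffices
```

Note the step is computed with a linear solve rather than an explicit inverse, which is both cheaper and more stable.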

**Newton’s method**

$$\mathbf{h} = -\mathbf{H}^{-1}\mathbf{g}$$

• It requires solving a linear system, and H is not always positive definite.

• It has good performance in the final stage of the iterative process, where x is close to x*.

**Gauss-Newton method**

• Use the approximate Hessian $\mathbf{H} = \mathbf{J}^T\mathbf{J}$

• No need for second derivatives

• H is positive semi-definite

**Hybrid method**

This needs to calculate the second-order derivative, which might not be available.

**Levenberg-Marquardt method**

• LM can be thought of as a combination of steepest descent and the Newton method. When the current solution is far from the correct one, the algorithm behaves like a steepest descent method: slow, but guaranteed to converge. When the current solution is close to the correct solution, it becomes a Newton’s method.

**Nonlinear least square**

Given a set of measurements $\mathbf{x}$, try to find the best parameter vector $\mathbf{p}$ so that the squared distance $\boldsymbol{\varepsilon}^T\boldsymbol{\varepsilon}$ is minimal. Here, $\boldsymbol{\varepsilon} = \mathbf{x} - \hat{\mathbf{x}}$, with $\hat{\mathbf{x}} = f(\mathbf{p})$.

**Levenberg-Marquardt method**

$$(\mathbf{J}^T\mathbf{J} + \mu\mathbf{I})\,\mathbf{h} = -\mathbf{g}$$

• μ = 0 → Gauss-Newton method

• μ → ∞ → steepest descent method

• Strategy for choosing μ:

– Start with some small μ

– If F is not reduced, keep trying larger μ until it is

– If F is reduced, accept the step and reduce μ for the next iteration

**Recap (the Rosenbrock function)**

$$f(x, y) = (1 - x)^2 + 100\,(y - x^2)^2$$

*Global minimum at (1, 1).*
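The LM strategy above can be sketched directly on the Rosenbrock function, rewritten in least-squares form with residuals $r_1 = 1 - x$ and $r_2 = 10(y - x^2)$. The damping constants and update factors below are arbitrary illustrative choices, not the tuned values of a production implementation:

```python
import numpy as np

# Rosenbrock in least-squares form: f(x, y) = r1^2 + r2^2,
# with r1 = 1 - x and r2 = 10*(y - x^2).
def r(p):
    x, y = p
    return np.array([1.0 - x, 10.0 * (y - x * x)])

def J(p):                                    # Jacobian of the residuals
    x, _ = p
    return np.array([[-1.0,       0.0],
                     [-20.0 * x, 10.0]])

p = np.array([-1.2, 1.0])                    # classic starting point
mu = 1e-3
for _ in range(500):
    rp = r(p)
    if np.sum(rp ** 2) < 1e-16:              # converged
        break
    Jp = J(p)
    g = Jp.T @ rp                            # gradient of 1/2 ||r||^2
    h = np.linalg.solve(Jp.T @ Jp + mu * np.eye(2), -g)
    if np.sum(r(p + h) ** 2) < np.sum(rp ** 2):
        p, mu = p + h, mu * 0.5              # step reduced F: accept, relax damping
    else:
        mu *= 2.0                            # step failed: increase damping
print(p)                                     # approaches the global minimum (1, 1)
```

Early on, the undamped Gauss-Newton step overshoots the curved valley and gets rejected, so μ grows and the steps become gradient-like; near (1, 1) the damping shrinks and convergence is fast.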

**Steepest descent**

$$\mathbf{x}_{k+1} = \mathbf{x}_k + \alpha\,\mathbf{h}, \qquad
\mathbf{h} = -\mathbf{g} = -F'(\mathbf{x}_k), \qquad
\alpha = \frac{\mathbf{h}^T\mathbf{h}}{\mathbf{h}^T\mathbf{H}\mathbf{h}}$$

(figure: iterates $\mathbf{x}_k$ zig-zag toward $\mathbf{x}_{\min}$ in the $(x_1, x_2)$ plane)

**In the plane of the steepest descent direction**

(figure: cross-section of $F$ along the steepest descent direction through $\mathbf{x}_k$)

**Steepest descent (1000 iterations)**

**Gauss-Newton method**

$$\mathbf{x}_{k+1} = \mathbf{x}_k - \mathbf{H}^{-1}\mathbf{g}$$

• With the approximate Hessian $\mathbf{H} = \mathbf{J}^T\mathbf{J}$

• No need for second derivatives

• H is positive semi-definite

(figure: the step $-\mathbf{H}^{-1}\mathbf{g}$ from $\mathbf{x}_k$ points directly toward $\mathbf{x}_{\min}$)

**Newton’s method (48 evaluations)**

**Levenberg-Marquardt**

• Blends steepest descent and Gauss-Newton

• At each step, solve for the descent direction **h**:

$$(\mathbf{J}^T\mathbf{J} + \mu\mathbf{I})\,\mathbf{h} = -\mathbf{g}$$

• If μ is large, $\mathbf{h} \approx -\frac{1}{\mu}\mathbf{g}$: steepest descent

• If μ is small, $\mathbf{h} \approx -(\mathbf{J}^T\mathbf{J})^{-1}\mathbf{g}$: Gauss-Newton

**Levenberg-Marquardt (90 evaluations)**

**A popular calibration tool**

**Multi-plane calibration**

Images courtesy Jean-Yves Bouguet, Intel Corp.

Advantages

• Only requires a plane

• Don’t have to know positions/orientations

• Good code available online!

– Intel’s OpenCV library: http://www.intel.com/research/mrl/research/opencv/

– Matlab version by Jean-Yves Bouguet: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html

– Zhengyou Zhang’s web site: http://research.microsoft.com/~zhang/Calib/

**Step 1: data acquisition**

**Step 2: specify corner order**

**Step 3: corner extraction**

**Step 4: camera calibration (minimize projection error)**

**Step 5: refinement**

**Optimized parameters**

**Applications**

**How is calibration used?**

• Good for recovering intrinsic parameters; it is thus useful for many vision applications.

• Since it requires a calibration pattern, it is often necessary to remove or replace the pattern in the footage, or to utilize it in some way…

**Example of calibration**

• Videos from GaTech

• DasTatoo, MakeOf

• P!NG, MakeOf

• Work, MakeOf

• LifeInPaints, MakeOf

**PhotoBook**

**PhotoBook, MakeOf**