
Motion estimation

Digital Visual Effects, Yung-Yu Chuang

with slides by Michael Black and P. Anandan

Motion estimation

• Parametric motion (image alignment)

• Tracking

• Optical flow

Parametric motion

direct method for image stitching

Tracking


Optical flow

Three assumptions

• Brightness consistency

• Spatial coherence

• Temporal persistence

Brightness consistency

Image measurements (e.g. brightness) in a small region remain the same although their location may change.

Spatial coherence

• Neighboring points in the scene typically belong to the same surface and hence typically have similar motions.

• Since they also project to nearby pixels in the image, we expect spatial coherence in image flow.


Temporal persistence

The image motion of a surface patch changes gradually over time.

Image registration

Goal: register a template image T(x) and an input image I(x), where x = (x, y)^T. (warp I so that it matches T)

Image alignment: I(x) and T(x) are two images.

Tracking: T(x) is a small patch around a point p in the image at t; I(x) is the image at time t+1.

Optical flow: T(x) and I(x) are patches of images at t and t+1.

[Figure: I is warped to match the fixed template T]

Simple approach (for translation)

• Minimize brightness difference

E(u,v) = \sum_{x,y} \big[ I(x+u,\, y+v) - T(x,y) \big]^2

Simple SSD algorithm

For each offset (u, v), compute E(u,v);

Choose (u, v) which minimizes E(u,v);

Problems:

• Not efficient

• No sub-pixel accuracy
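The brute-force search above can be sketched as follows (a minimal NumPy sketch; the function name `ssd_search`, the search radius, and the padding convention are illustrative, not from the slides):

```python
import numpy as np

def ssd_search(I, T, max_disp=8):
    """Brute-force SSD: for each integer offset (u, v), compute
    E(u, v) = sum over (x, y) of [I(x+u, y+v) - T(x, y)]^2,
    then keep the offset minimizing E.  I is assumed to be T's size
    padded by max_disp pixels on every side."""
    h, w = T.shape
    best_E, best_uv = np.inf, (0, 0)
    for u in range(-max_disp, max_disp + 1):        # horizontal offset
        for v in range(-max_disp, max_disp + 1):    # vertical offset
            patch = I[max_disp + v:max_disp + v + h,
                      max_disp + u:max_disp + u + w]
            E = np.sum((patch - T) ** 2)
            if E < best_E:
                best_E, best_uv = E, (u, v)
    return best_uv  # integer offset only: no sub-pixel accuracy
```

As the slide notes, this is neither efficient (O(d^2) full-image comparisons) nor sub-pixel accurate, which motivates the Lucas-Kanade approach next.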


Lucas-Kanade algorithm

Newton’s method

• Root finding for f(x) = 0

• March x and test signs

• Determine Δx (small → slow; large → may miss the root)

Newton’s method

• Root finding for f(x) = 0

Taylor’s expansion:

 1 '' ( )

2

) ( ' ) ( )

(

f f f

f 0

 

0

0

 '' (

0

)

2

  ) 2

( ' ) ( )

(

x

f x f x

f x

f

 ) ( ) ' ( )

(

x f x f x

f

(

x0

  ) 

f

(

x0

) 

f

(

x0

) 

f

  

) (x

f

) ( '

) (

n n

n f x

x

f

 

) ( '

) (

1

n n

n f x

x x f

x

 

)

(

xn f


Newton’s method

• Root finding for f(x)=0

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

[Figure: successive iterates x_0, x_1, x_2 converging to the root]

Newton’s method

pick x = x_0

iterate

  compute \Delta x = -\frac{f(x)}{f'(x)}

  update x by x + \Delta x

until converge

Finding roots is useful for optimization: to minimize g(x), find a root of f(x) = g'(x) = 0.
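The loop above is easy to write down directly; here is a minimal sketch (the helper name `newton` and the tolerance are illustrative):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method for f(x) = 0: repeatedly apply
    x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        dx = -f(x) / fprime(x)
        x += dx
        if abs(dx) < tol:
            break
    return x

# Root finding doubles as minimization: to minimize g(x) = (x - 2)^2,
# find the root of f(x) = g'(x) = 2(x - 2).
x_min = newton(lambda x: 2 * (x - 2), lambda x: 2.0, x0=10.0)
```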

Lucas-Kanade algorithm

E(u,v) = \sum_{x,y} \big[ I(x+u,\, y+v) - T(x,y) \big]^2

First-order Taylor expansion of the shifted image:

I(x+u,\, y+v) \approx I(x,y) + u I_x + v I_y

E(u,v) \approx \sum_{x,y} \big[ I(x,y) + u I_x + v I_y - T(x,y) \big]^2

Setting the partial derivatives to zero:

\frac{\partial E}{\partial u} = \sum_{x,y} 2 I_x \big[ I(x,y) + u I_x + v I_y - T(x,y) \big] = 0

\frac{\partial E}{\partial v} = \sum_{x,y} 2 I_y \big[ I(x,y) + u I_x + v I_y - T(x,y) \big] = 0

Lucas-Kanade algorithm

Rearranging the two conditions into a linear system in (u, v):

\sum_{x,y} I_x^2 \, u + \sum_{x,y} I_x I_y \, v = \sum_{x,y} I_x \big[ T(x,y) - I(x,y) \big]

\sum_{x,y} I_x I_y \, u + \sum_{x,y} I_y^2 \, v = \sum_{x,y} I_y \big[ T(x,y) - I(x,y) \big]

\begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \sum I_x \,(T - I) \\ \sum I_y \,(T - I) \end{bmatrix}

(all sums over x, y)


Lucas-Kanade algorithm

iterate

  shift I(x,y) with (u,v)

  compute gradient images I_x, I_y

  compute error image T(x,y) - I(x,y)

  compute Hessian matrix

  solve the linear system

  (u,v) ← (u,v) + (Δu,Δv)

until converge

\begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix} \begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix} = \begin{bmatrix} \sum I_x \,\big( T(x,y) - I(x,y) \big) \\ \sum I_y \,\big( T(x,y) - I(x,y) \big) \end{bmatrix}

(all sums over x, y)
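One iteration of this scheme, for a pure translation, can be sketched with NumPy (the function name and the use of `np.gradient` for I_x, I_y are illustrative choices, not from the slides):

```python
import numpy as np

def lucas_kanade_step(I, T):
    """One Lucas-Kanade update for pure translation: linearize
    I(x+u, y+v) ~ I + u*Ix + v*Iy and solve the 2x2 normal equations
        [sum Ix^2    sum Ix*Iy] [u]   [sum Ix*(T - I)]
        [sum Ix*Iy   sum Iy^2 ] [v] = [sum Iy*(T - I)]."""
    Iy, Ix = np.gradient(I.astype(float))  # np.gradient: d/d(row), d/d(col)
    err = T.astype(float) - I.astype(float)
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * err), np.sum(Iy * err)])
    u, v = np.linalg.solve(A, b)  # fails if the 2x2 matrix is not invertible
    return u, v
```

In the full algorithm this step sits inside the iterate/shift loop above; a single step already recovers small sub-pixel translations.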

Parametric model

E(u,v) = \sum_{x,y} \big[ I(x+u,\, y+v) - T(x,y) \big]^2

Generalizing the translation to a parameterized warp W(x; p):

E(\mathbf{p}) = \sum_{\mathbf{x}} \big[ I(W(\mathbf{x}; \mathbf{p})) - T(\mathbf{x}) \big]^2

Our goal is to find p to minimize E(p) for all x in T's domain.

translation: W(\mathbf{x}; \mathbf{p}) = \begin{pmatrix} x + d_x \\ y + d_y \end{pmatrix}, \quad \mathbf{p} = (d_x, d_y)^T

affine: W(\mathbf{x}; \mathbf{p}) = \begin{pmatrix} (1 + d_{xx})\, x + d_{xy}\, y + d_x \\ d_{yx}\, x + (1 + d_{yy})\, y + d_y \end{pmatrix} = A\mathbf{x} + \mathbf{d}, \quad \mathbf{p} = (d_{xx}, d_{yx}, d_{xy}, d_{yy}, d_x, d_y)^T

Parametric model

minimize \sum_{\mathbf{x}} \big[ I(W(\mathbf{x}; \mathbf{p} + \Delta\mathbf{p})) - T(\mathbf{x}) \big]^2 with respect to \Delta\mathbf{p}

First-order Taylor expansions:

W(\mathbf{x}; \mathbf{p} + \Delta\mathbf{p}) \approx W(\mathbf{x}; \mathbf{p}) + \frac{\partial W}{\partial \mathbf{p}} \Delta\mathbf{p}

I(W(\mathbf{x}; \mathbf{p} + \Delta\mathbf{p})) \approx I(W(\mathbf{x}; \mathbf{p})) + \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \Delta\mathbf{p}

minimize \sum_{\mathbf{x}} \Big[ I(W(\mathbf{x}; \mathbf{p})) + \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \Delta\mathbf{p} - T(\mathbf{x}) \Big]^2

Parametric model

minimize \sum_{\mathbf{x}} \Big[ I(W(\mathbf{x}; \mathbf{p})) + \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \Delta\mathbf{p} - T(\mathbf{x}) \Big]^2

(warped image I(W(x;p)); image gradient \nabla I; Jacobian of the warp \partial W / \partial \mathbf{p}; target image T(x))

Jacobian of the warp:

\frac{\partial W}{\partial \mathbf{p}} = \begin{bmatrix} \frac{\partial W_x}{\partial p_1} & \frac{\partial W_x}{\partial p_2} & \cdots & \frac{\partial W_x}{\partial p_n} \\ \frac{\partial W_y}{\partial p_1} & \frac{\partial W_y}{\partial p_2} & \cdots & \frac{\partial W_y}{\partial p_n} \end{bmatrix}, \qquad \mathbf{p} = (p_1, p_2, \ldots, p_n)


Jacobian matrix

• The Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function.

F: \mathbb{R}^n \to \mathbb{R}^m

F(x_1, x_2, \ldots, x_n) = \big( f_1(x_1, \ldots, x_n),\; f_2(x_1, \ldots, x_n),\; \ldots,\; f_m(x_1, \ldots, x_n) \big)

J_F(x_1, \ldots, x_n) = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{bmatrix}

F(\mathbf{x} + \Delta\mathbf{x}) \approx F(\mathbf{x}) + J_F(\mathbf{x})\, \Delta\mathbf{x}

Jacobian matrix

Example (spherical coordinates):

F: [0, \infty) \times [0, \pi] \times [0, 2\pi) \to \mathbb{R}^3

F(r, \theta, \phi) = (r \sin\theta \cos\phi,\; r \sin\theta \sin\phi,\; r \cos\theta)

J_F(r, \theta, \phi) = \begin{bmatrix} \sin\theta\cos\phi & r\cos\theta\cos\phi & -r\sin\theta\sin\phi \\ \sin\theta\sin\phi & r\cos\theta\sin\phi & r\sin\theta\cos\phi \\ \cos\theta & -r\sin\theta & 0 \end{bmatrix}


Jacobian of the warp

\frac{\partial W}{\partial \mathbf{p}} = \begin{bmatrix} \frac{\partial W_x}{\partial p_1} & \cdots & \frac{\partial W_x}{\partial p_n} \\ \frac{\partial W_y}{\partial p_1} & \cdots & \frac{\partial W_y}{\partial p_n} \end{bmatrix}

For example, for the affine warp

W(\mathbf{x}; \mathbf{p}) = \begin{pmatrix} (1 + d_{xx})\, x + d_{xy}\, y + d_x \\ d_{yx}\, x + (1 + d_{yy})\, y + d_y \end{pmatrix}

with \mathbf{p} = (d_{xx}, d_{yx}, d_{xy}, d_{yy}, d_x, d_y), the Jacobian is

\frac{\partial W}{\partial \mathbf{p}} = \begin{bmatrix} x & 0 & y & 0 & 1 & 0 \\ 0 & x & 0 & y & 0 & 1 \end{bmatrix}
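A small sketch of this warp and its Jacobian (function names are illustrative; since the warp is affine in p, the Jacobian can be checked exactly against finite differences):

```python
import numpy as np

def affine_warp(x, y, p):
    """W(x;p) for the affine warp, p = (dxx, dyx, dxy, dyy, dx, dy)."""
    dxx, dyx, dxy, dyy, dx, dy = p
    return np.array([(1 + dxx) * x + dxy * y + dx,
                     dyx * x + (1 + dyy) * y + dy])

def affine_jacobian(x, y):
    """dW/dp at (x, y): a 2x6 matrix; for the affine warp it does not
    depend on p, only on the pixel position."""
    return np.array([[x, 0.0, y, 0.0, 1.0, 0.0],
                     [0.0, x, 0.0, y, 0.0, 1.0]])
```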


Parametric model

\arg\min_{\Delta\mathbf{p}} \sum_{\mathbf{x}} \Big[ I(W(\mathbf{x}; \mathbf{p})) + \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \Delta\mathbf{p} - T(\mathbf{x}) \Big]^2

Setting the derivative with respect to \Delta\mathbf{p} to zero:

\sum_{\mathbf{x}} \Big[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \Big]^T \Big[ I(W(\mathbf{x}; \mathbf{p})) + \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \Delta\mathbf{p} - T(\mathbf{x}) \Big] = 0

\Delta\mathbf{p} = H^{-1} \sum_{\mathbf{x}} \Big[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \Big]^T \big[ T(\mathbf{x}) - I(W(\mathbf{x}; \mathbf{p})) \big]

where H = \sum_{\mathbf{x}} \Big[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \Big]^T \Big[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \Big] is the (approximated) Hessian.

Lucas-Kanade algorithm

iterate

  1) warp I with W(x; p)

  2) compute error image T(x,y) - I(W(x; p))

  3) compute gradient image \nabla I at W(x; p)

  4) evaluate the Jacobian \frac{\partial W}{\partial \mathbf{p}} at (x; p)

  5) compute the steepest descent images \nabla I \, \frac{\partial W}{\partial \mathbf{p}}

  6) compute the Hessian H = \sum_{\mathbf{x}} \big[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \big]^T \big[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \big]

  7) compute \sum_{\mathbf{x}} \big[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \big]^T \big[ T(\mathbf{x}) - I(W(\mathbf{x}; \mathbf{p})) \big]

  8) solve \Delta\mathbf{p} = H^{-1} \sum_{\mathbf{x}} \big[ \nabla I \, \frac{\partial W}{\partial \mathbf{p}} \big]^T \big[ T(\mathbf{x}) - I(W(\mathbf{x}; \mathbf{p})) \big]

  9) update p by p + \Delta\mathbf{p}

until converge

Coarse-to-fine strategy

[Figure: coarse-to-fine estimation. Build pyramids for J and I; at each level, warp J with the current motion estimate a to get J_w, refine the estimate against I, add the update to a, and pass a down to the next finer level, yielding a_out.]
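The coarse-to-fine strategy can be sketched as follows (NumPy; the 2x2-block pyramid, the whole-pixel `np.roll` warp, and the pluggable `estimate_flow` callback are simplifying assumptions for illustration, not the slides' exact scheme):

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Image pyramid by 2x2 block averaging (a crude stand-in for
    Gaussian smoothing + downsampling)."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]
        pyr.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2]
                           + a[0::2, 1::2] + a[1::2, 1::2]))
    return pyr  # pyr[0] finest, pyr[-1] coarsest

def coarse_to_fine(J, I, estimate_flow, levels=3):
    """Estimate a translation from J to I coarse-to-fine: start at the
    coarsest level; at each level double the motion, warp J by it, and
    refine with the given incremental estimator."""
    pj, pi = build_pyramid(J, levels), build_pyramid(I, levels)
    u = v = 0.0
    for Jl, Il in zip(reversed(pj), reversed(pi)):
        u, v = 2 * u, 2 * v                      # motion scales with resolution
        Jw = np.roll(np.roll(Jl, int(round(v)), axis=0),
                     int(round(u)), axis=1)       # warp J by current estimate
        du, dv = estimate_flow(Jw, Il)            # refine at this level
        u, v = u + du, v + dv
    return u, v
```

`estimate_flow` could be a single Lucas-Kanade step or a small SSD search; the pyramid extends its small capture range to large motions.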


Application of image alignment

Direct vs. feature-based

• Direct methods use all information and can be very accurate, but they depend on the fragile "brightness constancy" assumption.

• Iterative approaches require initialization.

• Not robust to illumination change and noisy images.

• In the early days, direct methods were better.

• Feature-based methods are now more robust and potentially faster.

• Even better, they can recognize panoramas without initialization.

Tracking

[Figure: a point at I(x, y, t) moves by (u, v) to I(x+u, y+v, t+1)]


Tracking

I(x+u,\, y+v,\, t+1) - I(x, y, t) = 0 \qquad (brightness constancy)

First-order Taylor expansion:

I(x, y, t) + u I_x(x, y, t) + v I_y(x, y, t) + I_t(x, y, t) - I(x, y, t) = 0

I_x u + I_y v + I_t = 0 \qquad (optical flow constraint equation)

Optical flow constraint equation

Multiple constraints

Area-based method

• Assume spatial smoothness


Area-based method

• Assume spatial smoothness

E(u, v) = \sum_{x,y} \big[ I_x u + I_y v + I_t \big]^2

Area-based method

The 2×2 matrix \begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix} must be invertible.

Area-based method

• The eigenvalues tell us about the local image structure.

• They also tell us how well we can estimate the flow in both directions.

• Link to Harris corner detector.

[Figures: textured area (both eigenvalues large), edge (one large, one small), homogeneous area (both small)]
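The three cases can be checked numerically; a small sketch (function naming and patch handling are illustrative):

```python
import numpy as np

def gradient_matrix_eigenvalues(patch):
    """Eigenvalues (descending) of the 2x2 matrix
    [sum Ix^2, sum Ix*Iy; sum Ix*Iy, sum Iy^2] over a patch.
    Both large -> textured area, one large -> edge,
    both small -> homogeneous area (cf. Harris corner detector)."""
    Iy, Ix = np.gradient(patch.astype(float))
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.sort(np.linalg.eigvalsh(M))[::-1]
```

Flow is reliably estimable only where the smaller eigenvalue is large, which underlies KLT feature selection.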

KLT tracking

• Select features by \min(\lambda_1, \lambda_2) > \lambda

• Monitor features by measuring dissimilarity

Aperture problem


Aperture problem

Demo for aperture problem

• http://www.sandlotscience.com/Distortions/Breathing_Square.htm

• http://www.sandlotscience.com/Ambiguous/Barberpole_Illusion.htm

Aperture problem

• A larger window reduces ambiguity, but more easily violates the spatial smoothness assumption.


KLT tracking

http://www.ces.clemson.edu/~stb/klt/

KLT tracking

http://www.ces.clemson.edu/~stb/klt/


SIFT tracking (matching actually)

Frame 0  Frame 10

SIFT tracking

Frame 0  Frame 100

SIFT tracking

Frame 0  Frame 200

KLT vs SIFT tracking

• KLT has larger accumulated error, partly because our KLT implementation doesn't have affine transformation?

• SIFT is surprisingly robust

• Combination of SIFT and KLT (example)

http://www.frc.ri.cmu.edu/projects/buzzard/smalls/


Rotoscoping (Max Fleischer 1914)

1937

Tracking for rotoscoping

Waking Life (2001)


A Scanner Darkly (2006)

• Rotoshop, a proprietary software. Each minute of animation required 500 hours of work.

Optical flow

Single-motion assumption

Violated by

• Motion discontinuity

• Shadows

• Transparency

• Specular reflection

• …

Multiple motion


Simple problem: fit a line

Least-squares fit


Robust statistics

• Recover the best fit for the majority of the data

• Detect and reject outliers

Approach

Robust weighting

• Truncated quadratic

• Geman & McClure
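A sketch of the two robust functions and their use in iteratively reweighted least squares on the line-fitting example above (the IRLS routine, the σ/τ parameters, and the least-squares initialization are illustrative assumptions, not the slides' exact method):

```python
import numpy as np

def rho_truncated_quadratic(r, tau):
    """Truncated quadratic: quadratic for small residuals, capped at
    tau^2 so outliers stop increasing the penalty."""
    return np.minimum(r ** 2, tau ** 2)

def rho_geman_mcclure(r, sigma):
    """Geman & McClure: r^2 / (sigma^2 + r^2), saturating at 1."""
    return r ** 2 / (sigma ** 2 + r ** 2)

def robust_line_fit(x, y, sigma=2.0, iters=50):
    """Fit y ~ a*x + b with the Geman-McClure estimator via IRLS,
    using the weight w(r) = rho'(r)/r = 2*sigma^2 / (sigma^2 + r^2)^2."""
    A = np.vstack([x, np.ones_like(x)]).T
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]  # plain least-squares init
    for _ in range(iters):
        r = y - (a * x + b)
        w = 2 * sigma ** 2 / (sigma ** 2 + r ** 2) ** 2  # downweights outliers
        a, b = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return a, b
```

With a majority of inliers, the outliers' weights shrink toward zero and the fit recovers the line, whereas least squares alone is pulled off by them.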


Robust estimation

Fragmented occlusion


Regularization and dense optical flow

• Neighboring points in the scene typically belong to the same surface and hence typically have similar motions.

• Since they also project to nearby pixels in the image, we expect spatial coherence in image flow.


[Figure: input image, horizontal motion, vertical motion]


Application of optical flow

video matching


Input for the NPR algorithm

Brushes

Edge clipping

Gradient


Smooth gradient

Textured brush

Edge clipping

Temporal artifacts

Frame-by-frame application of the NPR algorithm


Temporal coherence

References

• B.D. Lucas and T. Kanade, An Iterative Image Registration Technique with an Application to Stereo Vision, Proceedings of the 1981 DARPA Image Understanding Workshop, 1981, pp. 121-130.

• J.R. Bergen, P. Anandan, K.J. Hanna and R. Hingorani, Hierarchical Model-Based Motion Estimation, ECCV 1992, pp. 237-252.

• J. Shi and C. Tomasi, Good Features to Track, CVPR 1994, pp. 593-600.

• Michael Black and P. Anandan, The Robust Estimation of Multiple Motions: Parametric and Piecewise-Smooth Flow Fields, Computer Vision and Image Understanding, 1996, pp. 75-104.

• S. Baker and I. Matthews, Lucas-Kanade 20 Years On: A Unifying Framework, International Journal of Computer Vision, 56(3), 2004, pp. 221-255.

• Peter Litwinowicz, Processing Images and Video for an Impressionist Effect, SIGGRAPH 1997.

• Aseem Agarwala, Aaron Hertzman, David Salesin and Steven Seitz, Keyframe-Based Tracking for Rotoscoping and Animation, SIGGRAPH 2004, pp. 584-591.
