Optical flow
(1)

Motion estimation

Digital Visual Effects, Yung-Yu Chuang

with slides by Michael Black and P. Anandan

(2)

Motion estimation

• Parametric motion (image alignment)

• Tracking

• Optical flow

(3)

Parametric motion

direct method for image stitching

(4)

Tracking

(5)

Optical flow

(6)

Three assumptions

• Brightness consistency

• Spatial coherence

• Temporal persistence

(7)

Brightness consistency

Image measurements (e.g. brightness) in a small region remain the same although their location may change.

(8)

Spatial coherence

• Neighboring points in the scene typically belong to the same surface and hence typically have similar motions.

• Since they also project to nearby pixels in the image, we expect spatial coherence in image flow.

(9)

Temporal persistence

The image motion of a surface patch changes gradually over time.

(10)

Image registration

Goal: register a template image T(x) and an input image I(x), where x = (x, y)^T (warp I so that it matches T).

Image alignment: I(x) and T(x) are two images.

Tracking: T(x) is a small patch around a point p in the image at time t; I(x) is the image at time t+1.

Optical flow: T(x) and I(x) are patches of images at t and t+1.

[Diagram: I is warped toward the fixed template T.]

(11)

Simple approach (for translation)

• Minimize brightness difference

E(u, v) = \sum_{x,y} \left[ I(x+u, y+v) - T(x, y) \right]^2
(12)

Simple SSD algorithm

For each offset (u, v):
    compute E(u, v);
Choose the (u, v) which minimizes E(u, v).

Problems:

• Not efficient

• No sub-pixel accuracy
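As an illustration of the exhaustive search above, a minimal sketch in Python/NumPy; the search range max_disp and the wrap-around border handling are simplifying assumptions:

import numpy as np

def ssd_search(T, I, max_disp=8):
    # Exhaustive SSD: try every integer offset (u, v) and keep the one
    # that minimizes E(u, v) = sum_{x,y} [I(x+u, y+v) - T(x, y)]^2.
    best_E, best_uv = np.inf, (0, 0)
    for u in range(-max_disp, max_disp + 1):
        for v in range(-max_disp, max_disp + 1):
            # np.roll samples I(x+u, y+v); borders wrap around (a simplification)
            shifted = np.roll(np.roll(I, -v, axis=0), -u, axis=1)
            E = np.sum((shifted - T) ** 2)
            if E < best_E:
                best_E, best_uv = E, (u, v)
    return best_uv  # integer offsets only, hence no sub-pixel accuracy

The two nested loops over offsets show directly why the method is not efficient, and returning integer offsets shows why it has no sub-pixel accuracy.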

(13)

Lucas-Kanade algorithm


(14)

Newton’s method

• Root finding for f(x) = 0

• March x and test signs

• Determine Δx (small → slow; large → miss)

(15)

Newton’s method

• Root finding for f(x)=0

(16)

Newton’s method

• Root finding for f(x) = 0

Taylor expansion:

f(x_0 + \Delta x) = f(x_0) + f'(x_0)\,\Delta x + \frac{1}{2} f''(x_0)\,\Delta x^2 + \cdots

Keeping the first-order term and requiring f(x + \Delta x) \approx f(x) + f'(x)\,\Delta x = 0 gives

\Delta x = -\frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

(17)

Newton’s method

• Root finding for f(x)=0

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

[Figure: successive iterates x_0, x_1, x_2 approaching the root.]

(18)

Newton’s method

pick x = x_0
iterate:
    compute \Delta x = -f(x) / f'(x)
    update x by x + \Delta x
until converge

Finding roots is useful for optimization: to minimize g(x), find a root of f(x) = g'(x) = 0.
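A minimal sketch of this iteration in Python (the tolerance and iteration cap are illustrative choices):

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    # Newton's method for f(x) = 0: repeat x <- x - f(x)/f'(x) until the update is tiny.
    x = x0
    for _ in range(max_iter):
        dx = -f(x) / fprime(x)
        x += dx
        if abs(dx) < tol:
            break
    return x

# Minimizing g(x) = (x - 2)^2 amounts to finding the root of g'(x) = 2(x - 2):
x_min = newton(lambda x: 2 * (x - 2), lambda x: 2.0, x0=0.0)   # converges to 2.0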

(19)

Lucas-Kanade algorithm

E(u, v) = \sum_{x,y} \left[ I(x+u, y+v) - T(x, y) \right]^2
        \approx \sum_{x,y} \left[ I(x, y) + u\,I_x(x, y) + v\,I_y(x, y) - T(x, y) \right]^2

\frac{\partial E}{\partial u} = \sum_{x,y} 2\,I_x \left[ I(x, y) - T(x, y) + u\,I_x + v\,I_y \right] = 0

\frac{\partial E}{\partial v} = \sum_{x,y} 2\,I_y \left[ I(x, y) - T(x, y) + u\,I_x + v\,I_y \right] = 0

(20)

Lucas-Kanade algorithm

Rearranging the two conditions \partial E / \partial u = 0 and \partial E / \partial v = 0 gives a 2×2 linear system:

u \sum_{x,y} I_x^2 + v \sum_{x,y} I_x I_y = \sum_{x,y} I_x \left[ T(x, y) - I(x, y) \right]

u \sum_{x,y} I_x I_y + v \sum_{x,y} I_y^2 = \sum_{x,y} I_y \left[ T(x, y) - I(x, y) \right]

\begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix} =
\begin{bmatrix} \sum I_x \left[ T(x, y) - I(x, y) \right] \\ \sum I_y \left[ T(x, y) - I(x, y) \right] \end{bmatrix}

(21)

Lucas-Kanade algorithm

iterate:
    shift I(x, y) with (u, v)
    compute gradient images I_x, I_y
    compute error image T(x, y) - I(x, y)
    compute the Hessian matrix
    solve the linear system
    (u, v) \leftarrow (u, v) + (\Delta u, \Delta v)
until converge

\begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix}
\begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix} =
\begin{bmatrix} \sum I_x \left[ T(x, y) - I(x, y) \right] \\ \sum I_y \left[ T(x, y) - I(x, y) \right] \end{bmatrix}
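Putting the iteration together, a minimal single-patch sketch in Python/NumPy; the sub-pixel shift via scipy.ndimage, the gradient from np.gradient, and the convergence threshold are assumptions made for this sketch:

import numpy as np
from scipy.ndimage import shift as subpixel_shift

def lucas_kanade_translation(T, I, num_iters=20):
    # Iteratively solve the 2x2 linear system above for (du, dv)
    # and accumulate the result into the translation (u, v).
    u = v = 0.0
    for _ in range(num_iters):
        Iw = subpixel_shift(I, (-v, -u), order=1, mode='nearest')  # shift I(x,y) with (u,v)
        Iy, Ix = np.gradient(Iw)                                   # gradient images Ix, Iy
        err = T - Iw                                               # error image T - I
        H = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],          # (approximated) Hessian
                      [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
        b = np.array([np.sum(Ix * err), np.sum(Iy * err)])
        du, dv = np.linalg.solve(H, b)                             # solve the linear system
        u, v = u + du, v + dv                                      # (u,v) = (u,v) + (du,dv)
        if du * du + dv * dv < 1e-8:                               # until converge
            break
    return u, v   # assumes a textured patch so that H is invertible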

(22)

Parametric model

The translational objective

E(u, v) = \sum_{x,y} \left[ I(x+u, y+v) - T(x, y) \right]^2

generalizes to a parametric warp W(x; p):

E(\mathbf{p}) = \sum_{\mathbf{x}} \left[ I(W(\mathbf{x}; \mathbf{p})) - T(\mathbf{x}) \right]^2

Our goal is to find p to minimize E(p) for all x in T's domain.

translation: W(\mathbf{x}; \mathbf{p}) = \begin{pmatrix} x + d_x \\ y + d_y \end{pmatrix}, \quad \mathbf{p} = (d_x, d_y)^T

affine: W(\mathbf{x}; \mathbf{p}) = \begin{pmatrix} (1 + d_{xx})\,x + d_{xy}\,y + d_x \\ d_{yx}\,x + (1 + d_{yy})\,y + d_y \end{pmatrix} = A\mathbf{x} + \mathbf{d}, \quad \mathbf{p} = (d_{xx}, d_{yx}, d_{xy}, d_{yy}, d_x, d_y)^T

(23)

Parametric model

minimize \sum_{\mathbf{x}} \left[ I(W(\mathbf{x}; \mathbf{p} + \Delta\mathbf{p})) - T(\mathbf{x}) \right]^2 with respect to \Delta\mathbf{p}

First-order Taylor expansion of the warp and of the warped image:

W(\mathbf{x}; \mathbf{p} + \Delta\mathbf{p}) \approx W(\mathbf{x}; \mathbf{p}) + \frac{\partial W}{\partial \mathbf{p}}\,\Delta\mathbf{p}

I(W(\mathbf{x}; \mathbf{p} + \Delta\mathbf{p})) \approx I(W(\mathbf{x}; \mathbf{p})) + \nabla I\,\frac{\partial W}{\partial \mathbf{p}}\,\Delta\mathbf{p}

minimize \sum_{\mathbf{x}} \left[ I(W(\mathbf{x}; \mathbf{p})) + \nabla I\,\frac{\partial W}{\partial \mathbf{p}}\,\Delta\mathbf{p} - T(\mathbf{x}) \right]^2

(24)

Parametric model

\sum_{\mathbf{x}} \left[ I(W(\mathbf{x}; \mathbf{p})) + \nabla I\,\frac{\partial W}{\partial \mathbf{p}}\,\Delta\mathbf{p} - T(\mathbf{x}) \right]^2

I(W(x; p)) is the warped image, \nabla I is the image gradient, T(x) is the target image, and \partial W / \partial \mathbf{p} is the Jacobian of the warp:

\frac{\partial W}{\partial \mathbf{p}} =
\begin{bmatrix}
\frac{\partial W_x}{\partial p_1} & \frac{\partial W_x}{\partial p_2} & \cdots & \frac{\partial W_x}{\partial p_n} \\
\frac{\partial W_y}{\partial p_1} & \frac{\partial W_y}{\partial p_2} & \cdots & \frac{\partial W_y}{\partial p_n}
\end{bmatrix}, \qquad \mathbf{p} = (p_1, p_2, \ldots, p_n)

(25)

Jacobian matrix

• The Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function.

F: \mathbb{R}^n \to \mathbb{R}^m, \quad F(x_1, \ldots, x_n) = \left( f_1(x_1, \ldots, x_n),\; f_2(x_1, \ldots, x_n),\; \ldots,\; f_m(x_1, \ldots, x_n) \right)

J_F(x_1, \ldots, x_n) =
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n}
\end{bmatrix}

First-order approximation: F(\mathbf{x} + \Delta\mathbf{x}) \approx F(\mathbf{x}) + J_F(\mathbf{x})\,\Delta\mathbf{x}

(26)

Jacobian matrix

Example: the spherical-coordinate map F: \mathbb{R}^{+} \times [0, \pi] \times [0, 2\pi) \to \mathbb{R}^3,

F(r, \theta, t) = (u, v, w) = \left( r \sin\theta \cos t,\; r \sin\theta \sin t,\; r \cos\theta \right)

J_F(r, \theta, t) =
\begin{bmatrix}
\frac{\partial u}{\partial r} & \frac{\partial u}{\partial \theta} & \frac{\partial u}{\partial t} \\
\frac{\partial v}{\partial r} & \frac{\partial v}{\partial \theta} & \frac{\partial v}{\partial t} \\
\frac{\partial w}{\partial r} & \frac{\partial w}{\partial \theta} & \frac{\partial w}{\partial t}
\end{bmatrix}
=
\begin{bmatrix}
\sin\theta \cos t & r \cos\theta \cos t & -r \sin\theta \sin t \\
\sin\theta \sin t & r \cos\theta \sin t & r \sin\theta \cos t \\
\cos\theta & -r \sin\theta & 0
\end{bmatrix}

(27)

Parametric model

\sum_{\mathbf{x}} \left[ I(W(\mathbf{x}; \mathbf{p})) + \nabla I\,\frac{\partial W}{\partial \mathbf{p}}\,\Delta\mathbf{p} - T(\mathbf{x}) \right]^2

I(W(x; p)) is the warped image, \nabla I is the image gradient, T(x) is the target image, and \partial W / \partial \mathbf{p} is the Jacobian of the warp:

\frac{\partial W}{\partial \mathbf{p}} =
\begin{bmatrix}
\frac{\partial W_x}{\partial p_1} & \frac{\partial W_x}{\partial p_2} & \cdots & \frac{\partial W_x}{\partial p_n} \\
\frac{\partial W_y}{\partial p_1} & \frac{\partial W_y}{\partial p_2} & \cdots & \frac{\partial W_y}{\partial p_n}
\end{bmatrix}, \qquad \mathbf{p} = (p_1, p_2, \ldots, p_n)

(28)

Jacobian of the warp

\frac{\partial W}{\partial \mathbf{p}} =
\begin{bmatrix}
\frac{\partial W_x}{\partial p_1} & \frac{\partial W_x}{\partial p_2} & \cdots & \frac{\partial W_x}{\partial p_n} \\
\frac{\partial W_y}{\partial p_1} & \frac{\partial W_y}{\partial p_2} & \cdots & \frac{\partial W_y}{\partial p_n}
\end{bmatrix}

For example, for the affine warp

W(\mathbf{x}; \mathbf{p}) = \begin{pmatrix} (1 + d_{xx})\,x + d_{xy}\,y + d_x \\ d_{yx}\,x + (1 + d_{yy})\,y + d_y \end{pmatrix}, \qquad \mathbf{p} = (d_{xx}, d_{yx}, d_{xy}, d_{yy}, d_x, d_y)^T

the Jacobian is

\frac{\partial W}{\partial \mathbf{p}} =
\begin{bmatrix}
x & 0 & y & 0 & 1 & 0 \\
0 & x & 0 & y & 0 & 1
\end{bmatrix}
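For concreteness, a small sketch that evaluates this 2×6 Jacobian at one pixel, using the parameter ordering p = (dxx, dyx, dxy, dyy, dx, dy) shown above:

import numpy as np

def affine_warp_jacobian(x, y):
    # dW/dp at pixel (x, y) for the affine warp, p = (dxx, dyx, dxy, dyy, dx, dy).
    return np.array([[x, 0.0, y, 0.0, 1.0, 0.0],
                     [0.0, x, 0.0, y, 0.0, 1.0]])

Multiplying the 1×2 image gradient at that pixel by this matrix gives the 1×6 steepest-descent row used in the Gauss-Newton step on the following slides.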

(29)

Parametric model

\arg\min_{\Delta\mathbf{p}} \sum_{\mathbf{x}} \left[ I(W(\mathbf{x}; \mathbf{p})) + \nabla I\,\frac{\partial W}{\partial \mathbf{p}}\,\Delta\mathbf{p} - T(\mathbf{x}) \right]^2

Setting the derivative with respect to \Delta\mathbf{p} to zero:

\sum_{\mathbf{x}} \left[ \nabla I\,\frac{\partial W}{\partial \mathbf{p}} \right]^T \left[ I(W(\mathbf{x}; \mathbf{p})) + \nabla I\,\frac{\partial W}{\partial \mathbf{p}}\,\Delta\mathbf{p} - T(\mathbf{x}) \right] = 0

\Delta\mathbf{p} = H^{-1} \sum_{\mathbf{x}} \left[ \nabla I\,\frac{\partial W}{\partial \mathbf{p}} \right]^T \left[ T(\mathbf{x}) - I(W(\mathbf{x}; \mathbf{p})) \right]

where H = \sum_{\mathbf{x}} \left[ \nabla I\,\frac{\partial W}{\partial \mathbf{p}} \right]^T \left[ \nabla I\,\frac{\partial W}{\partial \mathbf{p}} \right] is the (approximated) Hessian.

(30)

Lucas-Kanade algorithm

iterate:
    1) warp I with W(x; p)
    2) compute error image T(x, y) - I(W(x; p))
    3) compute the gradient image \nabla I warped with W(x; p)
    4) evaluate the Jacobian \partial W / \partial \mathbf{p} at (x; p)
    5) compute the steepest-descent images \nabla I\,\frac{\partial W}{\partial \mathbf{p}}
    6) compute the Hessian H = \sum_{\mathbf{x}} \left[ \nabla I\,\frac{\partial W}{\partial \mathbf{p}} \right]^T \left[ \nabla I\,\frac{\partial W}{\partial \mathbf{p}} \right]
    7) compute \sum_{\mathbf{x}} \left[ \nabla I\,\frac{\partial W}{\partial \mathbf{p}} \right]^T \left[ T(\mathbf{x}) - I(W(\mathbf{x}; \mathbf{p})) \right]
    8) solve \Delta\mathbf{p} = H^{-1} \sum_{\mathbf{x}} \left[ \nabla I\,\frac{\partial W}{\partial \mathbf{p}} \right]^T \left[ T(\mathbf{x}) - I(W(\mathbf{x}; \mathbf{p})) \right]
    9) update p by p + \Delta\mathbf{p}
until converge

(31)

\Delta\mathbf{p} = H^{-1} \sum_{\mathbf{x}} \left[ \nabla I\,\frac{\partial W}{\partial \mathbf{p}} \right]^T \left[ T(\mathbf{x}) - I(W(\mathbf{x}; \mathbf{p})) \right]

(32)

Coarse-to-fine strategy

Build image pyramids for J and I (pyramid construction). Starting at the coarsest level, warp J to J_w using the current estimate a and refine a against I; add the update, propagate a to the next finer level, and repeat the warp-and-refine step until the finest level produces the output.

[Diagram: pyramid construction of J and I; at each level J → warp → J_w → refine against I → update a → next level → out.]
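A minimal sketch of the coarse-to-fine loop for a single translational flow, assuming a refine(T, I) routine such as the Lucas-Kanade step sketched earlier; the plain 2× subsampling stands in for proper pyramid construction with pre-smoothing:

import numpy as np
from scipy.ndimage import shift as subpixel_shift

def coarse_to_fine_flow(T, I, refine, num_levels=4):
    # Pyramid construction (every other pixel; blurring before
    # subsampling would be better but is omitted for brevity).
    pyr_T, pyr_I = [T], [I]
    for _ in range(num_levels - 1):
        pyr_T.append(pyr_T[-1][::2, ::2])
        pyr_I.append(pyr_I[-1][::2, ::2])
    u = v = 0.0
    for level in reversed(range(num_levels)):
        u, v = 2 * u, 2 * v                                              # propagate estimate to finer level
        Iw = subpixel_shift(pyr_I[level], (-v, -u), order=1, mode='nearest')  # warp: J -> Jw
        du, dv = refine(pyr_T[level], Iw)                                # refine the increment
        u, v = u + du, v + dv
    return u, v

# usage sketch: coarse_to_fine_flow(T, I, refine=lucas_kanade_translation)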

(33)

Application of image alignment

(34)

Direct vs feature-based

• Direct methods use all information and can be very accurate, but they depend on the fragile "brightness constancy" assumption.

• Iterative approaches require initialization.

• Not robust to illumination change and noisy images.

• In the early days, direct methods were better.

• Feature-based methods are now more robust and potentially faster.

• Even better, they can recognize panoramas without initialization.

(35)

Tracking


(36)

Tracking

The patch at I(x, y, t) moves by (u, v) to I(x+u, y+v, t+1).

(37)

Tracking

Brightness constancy: I(x+u, y+v, t+1) - I(x, y, t) = 0

First-order Taylor expansion:

I(x+u, y+v, t+1) \approx I(x, y, t) + u\,I_x(x, y, t) + v\,I_y(x, y, t) + I_t(x, y, t)

\Rightarrow \; I_x\,u + I_y\,v + I_t = 0 \qquad (optical flow constraint equation)

(38)

Optical flow constraint equation

(39)

Multiple constraints

(40)

Area-based method

• Assume spatial smoothness

(41)

Area-based method

• Assume spatial smoothness

E(u, v) = \sum_{x,y} \left[ I_x\,u + I_y\,v + I_t \right]^2

(42)

Area-based method

The 2×2 matrix of the linear system must be invertible.
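Written out, the system whose matrix must be invertible (a reconstruction consistent with the Lucas-Kanade derivation earlier):

\begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix}
= -\begin{bmatrix} \sum I_x I_t \\ \sum I_y I_t \end{bmatrix}

The 2×2 matrix is the structure tensor whose eigenvalues are analyzed on the next slides.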

(43)

Area-based method

• The eigenvalues tell us about the local image structure.

• They also tell us how well we can estimate the flow in both directions.

• Link to the Harris corner detector.
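A small sketch of this eigenvalue analysis over one window; the threshold tau is an illustrative assumption, and the three cases mirror the textured / edge / homogeneous slides that follow:

import numpy as np

def classify_window(Ix, Iy, tau=1e-2):
    # Eigenvalues of the 2x2 gradient matrix of a window.
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    lam_min, lam_max = np.linalg.eigvalsh(M)   # ascending order
    if lam_min > tau:
        return "textured"      # both eigenvalues large: flow well constrained
    if lam_max > tau:
        return "edge"          # one large eigenvalue: aperture problem
    return "homogeneous"       # both small: flow unconstrained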

(44)

Textured area

(45)

Edge

(46)

Homogenous area

(47)

KLT tracking

• Select features by \min(\lambda_1, \lambda_2) > \lambda (a threshold)

• Monitor features by measuring dissimilarity

(48)

Aperture problem

(49)

Aperture problem

(50)

Aperture problem

(51)

Demo for aperture problem

• http://www.sandlotscience.com/Distortions/Breathing_Square.htm

• http://www.sandlotscience.com/Ambiguous/Barberpole_Illusion.htm

(52)

Aperture problem

• A larger window reduces ambiguity, but easily violates the spatial smoothness assumption.

(53)
(54)
(55)

KLT tracking

http://www.ces.clemson.edu/~stb/klt/

(56)

KLT tracking

http://www.ces.clemson.edu/~stb/klt/

(57)

SIFT tracking (matching actually)

Frame 0 → Frame 10

(58)

SIFT tracking

Frame 0 → Frame 100

(59)

SIFT tracking

Frame 0 → Frame 200

(60)

KLT vs SIFT tracking

• KLT has a larger accumulating error, partly because our KLT implementation doesn't have an affine transformation?

• SIFT is surprisingly robust

• Combination of SIFT and KLT (example)

http://www.frc.ri.cmu.edu/projects/buzzard/smalls/

(61)

Rotoscoping (Max Fleischer 1914)

1937

(62)

Tracking for rotoscoping

(63)

Tracking for rotoscoping

(64)

Waking life (2001)

(65)

A Scanner Darkly (2006)

• Rotoshop, a proprietary software. Each minute of animation required 500 hours of work.

(66)

Optical flow


(67)

Single-motion assumption

Violated by:

• Motion discontinuity

• Shadows

• Transparency

• Specular reflection

• …

(68)

Multiple motion

(69)

Multiple motion

(70)

Simple problem: fit a line

(71)

Least-square fit

(72)

Least-square fit

(73)

Robust statistics

• Recover the best fit for the majority of the data

• Detect and reject outliers

(74)

Approach

(75)

Robust weighting

Truncated quadratic

(76)

Robust weighting

Geman & McClure
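Illustrative forms of these two robust penalties; the scale parameter sigma is an assumption, and both bound the influence of large residuals, unlike the quadratic:

import numpy as np

def truncated_quadratic(r, sigma=1.0):
    # Quadratic for small residuals, constant beyond sigma:
    # outliers stop pulling on the fit.
    return np.minimum(r ** 2, sigma ** 2)

def geman_mcclure(r, sigma=1.0):
    # Geman & McClure penalty: saturates smoothly for large residuals.
    return r ** 2 / (sigma ** 2 + r ** 2)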

(77)

Robust estimation

(78)

Fragmented occlusion

(79)
(80)
(81)

Regularization and dense optical flow

• Neighboring points in the scene typically belong to the same surface and hence typically have similar motions.

• Since they also project to nearby pixels in the image, we expect spatial coherence in image flow.

(82)
(83)
(84)
(85)
(86)
(87)
(88)

[Figure: input image, horizontal motion, vertical motion.]

(89)
(90)
(91)

Application of optical flow

[Figure: matching between two videos.]

(92)
(93)

Input for the NPR algorithm

(94)

Brushes

(95)

Edge clipping

(96)

Gradient

(97)

Smooth gradient

(98)

Textured brush

(99)

Edge clipping

(100)

Temporal artifacts

Frame-by-frame application of the NPR algorithm

(101)

Temporal coherence
