Academic year: 2022

Faces and Image-Based Lighting

Digital Visual Effects, Yung-Yu Chuang

with slides by Richard Szeliski, Steve Seitz, Alex Efros, Li-Yi Wei and Paul Debevec

Outline

• Image-based lighting

• 3D acquisition for faces

• Statistical methods (with application to face super-resolution)

• 3D Face models from single images

• Image-based faces

• Relighting for faces

Image-based lighting

Rendering

• Rendering is a function of geometry, reflectance, lighting and viewing.

• To synthesize CGI into a real scene, we have to match the above four factors.

• Viewing can be obtained from calibration or structure from motion.

• Geometry can be captured using 3D photography or made by hand.

• How to capture lighting and reflectance?


Reflectance

• The Bidirectional Reflectance Distribution Function (BRDF)

– Given an incoming ray and an outgoing ray (relative to the surface normal), what proportion of the incoming light is reflected along the outgoing ray?

Answer given by the BRDF:
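The standard definition (my notation, chosen to be consistent with the rendering equation on the next slide) is:

$$f(\mathbf{p}, \omega_i, \omega_o) = \frac{dL_o(\mathbf{p}, \omega_o)}{L_i(\mathbf{p}, \omega_i)\, \cos\theta_i \, d\omega_i}$$

i.e. the ratio of reflected differential radiance to incident differential irradiance.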

Rendering equation

$$L_o(\mathbf{p}, \omega_o) = L_e(\mathbf{p}, \omega_o) + \int_{S^2} f(\mathbf{p}, \omega_o, \omega_i)\, L_i(\mathbf{p}, \omega_i)\, \cos\theta_i \, d\omega_i$$

Here $L_i(\mathbf{p}, \omega_i)$ is the incoming 5D light field and $L_o(\mathbf{p}, \omega_o)$ the outgoing 5D light field.
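As a concrete, hedged illustration, the integral can be estimated by Monte Carlo sampling. The sketch below assumes a Lambertian BRDF ($f = \text{albedo}/\pi$) and a caller-supplied environment function; the function names and the uniform hemisphere sampling are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def outgoing_radiance(albedo, incoming_radiance, n_samples=100_000):
    """Monte Carlo estimate of the rendering-equation integral for a
    Lambertian BRDF (f = albedo/pi), ignoring the emission term L_e.
    incoming_radiance(dirs) returns L_i for unit directions (z-up normal)."""
    u1 = rng.random(n_samples)
    u2 = rng.random(n_samples)
    cos_theta = u1                              # z-component, uniform in [0, 1)
    sin_theta = np.sqrt(1.0 - cos_theta**2)
    phi = 2.0 * np.pi * u2
    dirs = np.stack([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     cos_theta], axis=1)        # uniform hemisphere, pdf = 1/(2*pi)
    f = albedo / np.pi                          # Lambertian BRDF
    li = incoming_radiance(dirs)
    # Estimator: average of f * L_i * cos(theta) / pdf
    return np.mean(f * li * cos_theta) * 2.0 * np.pi

# Under a constant environment L_i = 1, the result should approach the albedo.
L = outgoing_radiance(0.5, lambda dirs: np.ones(len(dirs)))
```

For a perfectly diffuse surface under constant unit illumination the outgoing radiance equals the albedo, which makes a convenient correctness check.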

Complex illumination

$$L_o(\mathbf{p}, \omega_o) = L_e(\mathbf{p}, \omega_o) + \int_{S^2} f(\mathbf{p}, \omega_o, \omega_i)\, L_i(\mathbf{p}, \omega_i)\, \cos\theta_i \, d\omega_i$$

$$B(\mathbf{p}, \omega_o) = \int_{S^2} f(\mathbf{p}, \omega_o, \omega_i)\, L_d(\omega_i)\, \cos\theta_i \, d\omega_i$$

reflectance $f$, lighting $L_d$

Point lights

Classically, rendering is performed assuming point light sources.

directional source


Natural illumination

People perceive materials more easily under natural illumination than under simplified illumination.

Images courtesy Ron Dror and Ted Adelson

Natural illumination

Rendering with natural illumination is more expensive than with simplified illumination.

directional source natural illumination

Environment maps

Miller and Hoffman, 1984

Acquiring the Light Probe


HDRI Sky Probe Clipped Sky + Sun Source

Lit by sun only

Lit by sky only


Lit by sun and sky

Illuminating a Small Scene

Real Scene Example

• Goal: place synthetic objects on table


Light Probe / Calibration Grid

Modeling the Scene

light-based model

real scene

The Light-Based Room Model

Rendering into the Scene

• Background Plate


Rendering into the scene

• Objects and local scene matched to scene

Differential rendering

• Local scene w/o objects, illuminated by model

• The objects' effect is the difference between the local scene rendered with and without them; adding that difference to the background plate gives the composite:

final = background plate + (rendering with objects - rendering without objects)
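The differential-rendering composite can be sketched in a few lines of array code. This is a simplified reading of Debevec's method; the optional object mask and the clipping are my assumptions:

```python
import numpy as np

def differential_composite(background, render_with, render_without, mask=None):
    """Differential rendering (after Debevec, SIGGRAPH 1998): add the *change*
    the synthetic objects cause in the rendered local scene back onto the
    photographed background plate."""
    out = (background.astype(np.float64)
           + render_with.astype(np.float64)
           - render_without.astype(np.float64))
    if mask is not None:
        # Where a synthetic object fully covers a pixel, take the rendered
        # value directly instead of the differential estimate.
        out = np.where(mask, render_with, out)
    return np.clip(out, 0.0, 1.0)

bg = np.full((4, 4), 0.5)            # background plate
with_obj = np.full((4, 4), 0.6)      # rendered local scene with objects
without_obj = np.full((4, 4), 0.55)  # rendered local scene without objects
comp = differential_composite(bg, with_obj, without_obj)
```

Note that the correction can be negative, which is exactly how synthetic shadows darken the real plate.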


Differential Rendering

• Final Result

Environment map from single image?

Eye as light probe! (Nayar et al.)

Results


Application in “Superman Returns”

Capturing reflectance

Application in “The Matrix Reloaded”

3D acquisition for faces



Cyberware scanners

face & head scanner

whole body scanner

Making facial expressions from photos

• Similar to Façade, use a generic face model and view-dependent texture mapping

• Procedure

1. Take multiple photographs of a person
2. Establish corresponding feature points
3. Recover 3D points and camera parameters
4. Deform the generic face model to fit points
5. Extract textures from photos

Reconstruct a 3D model

[Figure: input photographs; generic 3D face model; pose estimation; feature matching; progressively deformed model]

Mesh deformation

– Compute displacement of feature points
– Apply scattered data interpolation

generic model displacement deformed model
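The deformation step above can be sketched as radial-basis-function scattered-data interpolation of the feature-point displacements. The Gaussian kernel and the regularization term below are illustrative assumptions, not the exact scheme of the paper:

```python
import numpy as np

def rbf_interpolate(feature_pts, displacements, query_pts, sigma=0.1):
    """Interpolate feature-point displacements onto arbitrary mesh vertices
    using Gaussian radial basis functions (a sketch of the idea)."""
    # Solve Phi w = d for the RBF weights at the feature points.
    d2 = np.sum((feature_pts[:, None, :] - feature_pts[None, :, :])**2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigma**2))
    w = np.linalg.solve(phi + 1e-9 * np.eye(len(feature_pts)), displacements)
    # Evaluate the interpolant at every query (mesh) vertex.
    d2q = np.sum((query_pts[:, None, :] - feature_pts[None, :, :])**2, axis=-1)
    return np.exp(-d2q / (2.0 * sigma**2)) @ w

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
disp = np.array([[0.1, 0.0], [0.0, 0.1], [0.0, 0.0]])
# The interpolant reproduces the prescribed displacements at the features.
out = rbf_interpolate(pts, disp, pts)
```

By construction the interpolant passes (up to the tiny regularizer) through the prescribed displacements and falls off smoothly between features.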


Texture extraction

• The color at each point is a weighted combination of the colors in the photos

• Texture can be:

– view-independent – view-dependent

• Considerations for weighting

– occlusion – smoothness

– positional certainty
– view similarity
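The weighted combination itself is straightforward once per-view weights (from occlusion, smoothness, positional certainty and view similarity) are available; the sketch below assumes those weights are precomputed scalars:

```python
import numpy as np

def blend_textures(colors, weights):
    """Blend per-view color samples for one surface point.
    colors:  (n_views, 3) colors sampled from the photographs
    weights: (n_views,) non-negative per-view weights (assumed precomputed)"""
    weights = np.asarray(weights, dtype=np.float64)
    total = weights.sum()
    if total == 0.0:
        raise ValueError("point not visible in any view")
    colors = np.asarray(colors, dtype=np.float64)
    return (weights[:, None] * colors).sum(axis=0) / total

# A view seen nearly head-on (weight 3) dominates an oblique one (weight 1).
c = blend_textures([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], [3.0, 1.0])
```

With view-independent texturing the weights are fixed per point; with view-dependent texturing they are recomputed for each rendered viewpoint.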

Texture extraction


view-independent view-dependent


Model reconstruction

Use images to adapt a generic face model.

Creating new expressions

• In addition to global blending we can use:
– Regional blending
– Painterly interface

Creating new expressions

New expressions are created with 3D morphing:

(expression A)/2 + (expression B)/2 = new expression

Applying a global blend

Creating new expressions

(expression A × region mask A) + (expression B × region mask B) = new expression

Applying a region-based blend
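A region-based blend can be sketched as masked interpolation of per-vertex offsets from the neutral face. The exact weighting scheme in Pighin et al. differs in detail, so treat the function below as an illustrative assumption:

```python
import numpy as np

def regional_blend(neutral, expressions, weights, masks):
    """Combine expression offsets from the neutral face, each modulated
    by a blend weight and a per-vertex region mask.
    neutral:     (n_verts, 3) neutral-face vertex positions
    expressions: list of (n_verts, 3) expression meshes
    weights:     list of scalar blend weights
    masks:       list of (n_verts,) region masks in [0, 1]"""
    out = neutral.astype(np.float64).copy()
    for expr, w, m in zip(expressions, weights, masks):
        out += w * m[:, None] * (expr - neutral)
    return out

neutral = np.zeros((4, 3))
smile = np.ones((4, 3))
mouth_mask = np.array([1.0, 1.0, 0.0, 0.0])   # blend only the mouth vertices
face = regional_blend(neutral, [smile], [0.5], [mouth_mask])
```

A global blend is the special case where every mask is all ones; a painterly interface amounts to letting the user paint the masks.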


Creating new expressions

weighted sum of several expressions = new expression

Using a painterly interface

Drunken smile

Animating between expressions

Morphing over time creates animation:

“neutral” “joy”

Video


Spacetime faces

black & white cameras

color cameras

video projectors

time time

Face surface


[Figure: matching windows over time for stereo, active stereo, and spacetime stereo]

Spacetime Stereo

[Figure: surface motion tracked across frames, time = 1 to 5]


Spacetime Stereo

[Figure: surface motion over time]

Better:
• spatial resolution
• temporal stability

Spacetime stereo matching

Video

Fitting


FaceIK

Animation

3D face applications: The One

3D face applications: Gladiator

extra 3M



Statistical methods

Statistical methods

[Diagram: parameters z → f(z) + ε → observed signal y]

$$z^* = \arg\max_z P(z \mid y)$$

Example: super-resolution

$$z^* = \arg\max_z P(y \mid z)\, P(z)$$

super-resolution, de-noising, de-blocking, inpainting

$$z^* = \arg\min_z L(y \mid z) + L(z)$$

Statistical methods

[Diagram: parameters z → f(z) + ε → observed signal y]

$$z^* = \arg\min_z L(y \mid z) + L(z)$$

$$= \arg\min_z \underbrace{\| y - f(z) \|^2}_{\text{data evidence}} + \underbrace{L(z)}_{\text{a-priori knowledge}}$$

Statistical methods

There are approximately 10^240 possible 10×10 gray-level images. Even human beings have not seen them all yet. There must be a strong statistical bias.

Takeo Kanade

Approximately 8×10^11 blocks per day per person.
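A quick sanity check of the magnitude quoted above, assuming 10×10 pixels with 256 gray levels per pixel (an assumption; the slide only says "gray-level"): 256^100 = 2^800 ≈ 10^240.

```python
import math

# Number of distinct 10x10 images with 256 gray levels per pixel.
n_images = 256 ** 100
magnitude = math.log10(n_images)   # order of magnitude, about 240.8
```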


Generic priors

“Smooth images are good images.”

$$L(z) = \sum_{x} \rho(\nabla z(x))$$

Gaussian MRF: $\rho(d) = d^2$

Huber MRF: $\rho(d) = \begin{cases} d^2 & |d| \le T \\ 2T|d| - T^2 & |d| > T \end{cases}$
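The two potentials can be written directly as a smoothness prior over finite differences; a minimal sketch, assuming the standard 4-connected pixel neighborhood:

```python
import numpy as np

def huber(d, T=1.0):
    """Huber MRF potential: quadratic near zero, linear in the tails,
    so strong edges are penalized less than under the Gaussian MRF."""
    d = np.abs(np.asarray(d, dtype=np.float64))
    return np.where(d <= T, d**2, 2.0 * T * d - T**2)

def smoothness_prior(z, T=1.0):
    """L(z): sum of Huber penalties over horizontal/vertical differences."""
    return huber(np.diff(z, axis=0), T).sum() + huber(np.diff(z, axis=1), T).sum()

flat = smoothness_prior(np.zeros((8, 8)))       # perfectly smooth image
noisy = smoothness_prior(np.random.default_rng(0).normal(size=(8, 8)))
```

Note the two branches meet at |d| = T (both give T^2), so the potential is continuous, which matters for optimization.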


Example-based priors

“Existing images are good images.”

Six 200×200 images → 2,000,000 pairs, from which L(z) is learned.


Example-based priors

[Figure: high-resolution / low-resolution training pairs]

Model-based priors

“Face images are good images when working on face images …”

Parametric model: $z = WX + \mu$, with prior $L(X)$

$$z^* = \arg\min_z L(y \mid z) + L(z)$$

$$X^* = \arg\min_X L(y \mid WX + \mu) + L(X)$$

$$z^* = WX^* + \mu$$
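With Gaussian noise and a Gaussian prior on X (quadratic losses, an assumption the slides leave open), the model-based estimate has a closed form; a minimal sketch:

```python
import numpy as np

def fit_model(y, W, mu, lam=1e-3):
    """MAP fit of a linear parametric model z = W x + mu to observation y,
    with a Gaussian prior on x (so L(x) = lam * ||x||^2).
    Closed form: x* = (W^T W + lam I)^-1 W^T (y - mu)."""
    k = W.shape[1]
    x = np.linalg.solve(W.T @ W + lam * np.eye(k), W.T @ (y - mu))
    return x, W @ x + mu

rng = np.random.default_rng(1)
W = rng.normal(size=(50, 3))        # toy basis (e.g. a PCA face basis)
mu = rng.normal(size=50)            # mean vector
x_true = np.array([1.0, -2.0, 0.5])
y = W @ x_true + mu                 # noise-free observation
x_hat, z_hat = fit_model(y, W, mu)
```

On noise-free data the recovered coefficients match the true ones up to the tiny bias the regularizer introduces.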

PCA

• Principal Components Analysis (PCA):

approximating a high dimensional data set approximating a high-dimensional data set with a lower-dimensional subspace

**

**

** **

** **

** ****

** **

**

** First principal componentFirst principal component Second principal component

Second principal component

Original axes Original axes

**

** ** **

**

******** **

**

****

** **

Data points Data points

PCA on faces: “eigenfaces”

Average face

First principal component

Other components

For all except the average: “gray” = 0, “white” > 0, “black” < 0
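Eigenfaces are just PCA on vectorized face images; a toy sketch via SVD (the data here is synthetic, so this only illustrates the mechanics, not real eigenfaces, which require aligned face images):

```python
import numpy as np

def eigenfaces(faces, k):
    """PCA on vectorized face images via SVD: returns the average face
    and the first k principal components ("eigenfaces").
    faces: (n_faces, n_pixels) data matrix."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of Vt are unit-norm principal directions, ordered by variance.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]

rng = np.random.default_rng(0)
# Toy data: all variation lies along a single direction in pixel space.
direction = np.ones(16) / 4.0                   # unit-norm direction
faces = 0.5 + rng.normal(size=(20, 1)) * direction
mean, comps = eigenfaces(faces, 1)
```

The first component recovers the direction of variation up to sign, which is why eigenface visualizations need a "gray = 0" convention for negative values.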



Super-resolution

(a) Input low-resolution 24×32. (b) Our results. (c) Cubic B-spline. (d) Freeman et al. (e) Baker et al. (f) Original high-resolution 96×128.

Face models from single images

Morphable model of 3D faces

• Start with a catalogue of 200 aligned 3D Cyberware scans

• Build a model of average shape and texture, and principal variations using PCA


Morphable model

shape exemplars

texture exemplars

Morphable model of 3D faces

• Adding some variations

Reconstruction from single image

Rendering must be similar to the input if we guess right.

Reconstruction from single image

prior

Shape and texture priors are learnt from the database; ρ is the set of parameters for shading, including camera pose, lighting and so on.


Modifying a single image

Animating from a single image

Video

Exchanging faces in images


Exchange faces in images


Morphable model for the human body

Image-based faces (lip sync.)

Video rewrite (analysis)

Video rewrite (synthesis)


Results

• Video database

– 2 minutes of JFK

• Only half usable

• Head rotation

training video: “Read my lips.”  “I never met Forrest Gump.”

Morphable speech model

Preprocessing: prototypes (PCA + k-means clustering)

We find I_i and C_i for each prototype image.


Morphable model

analysis: I → α, β

synthesis: α, β → I

Synthesis Results


Results

Relighting faces

Light is additive

lamp #1 lamp #2

Light stage 1.0


Light stage 1.0

64×32 lighting directions

Input images

Reflectance function

occlusion flare

Relighting
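Because light is additive, relighting reduces to a weighted sum of the one-light-at-a-time basis images; a sketch, where tiny 2×2 "images" stand in for the reflectance functions the light stage captures over its 64×32 directions:

```python
import numpy as np

def relight(basis_images, light_weights):
    """Image-based relighting: an image under a novel environment is a
    weighted sum of basis images, one per light-stage direction.
    basis_images:  (n_lights, H, W) reflectance-function samples
    light_weights: (n_lights,) intensity of each direction in the
                   target environment (e.g. sampled from a light probe)"""
    # Contract the lights axis: sum_i w_i * basis_i.
    return np.tensordot(light_weights, basis_images, axes=1)

basis = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 2.0)])
img = relight(basis, np.array([0.25, 0.5]))   # 0.25*basis0 + 0.5*basis1
```

With color images the same contraction runs per channel, and the weights come from integrating the target environment map over each light direction's solid angle.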


Results

Changing viewpoints

Results

Video


3D face applications: Spiderman 2

real synthetic


Spiderman 2

video

Light stage 3


Light stage 6

Application: The Matrix Reloaded

References

• Paul Debevec, Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-based Graphics with Global Illumination and High Dynamic Range Photography, SIGGRAPH 1998.

• F. Pighin, J. Hecker, D. Lischinski, D. H. Salesin, and R. Szeliski, Synthesizing Realistic Facial Expressions from Photographs, SIGGRAPH 1998, pp. 75-84.

• Li Zhang, Noah Snavely, Brian Curless, Steven M. Seitz, Spacetime Faces: High Resolution Capture for Modeling and Animation, SIGGRAPH 2004.

• V. Blanz and T. Vetter, A Morphable Model for the Synthesis of 3D Faces, SIGGRAPH 1999, pp. 187-194.

• Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, Mark Sagar, Acquiring the Reflectance Field of a Human Face, SIGGRAPH 2000.

• Christoph Bregler, Michele Covell, Malcolm Slaney, Video Rewrite: Driving Visual Speech with Audio, SIGGRAPH 1997.

• Tony Ezzat, Gadi Geiger, Tomaso Poggio, Trainable Videorealistic Speech Animation, SIGGRAPH 2002.
