Image-based lighting

Academic year: 2022
(1)

Faces and Image-Based Lighting

Digital Visual Effects, Yung-Yu Chuang

with slides by Richard Szeliski, Steve Seitz, Alex Efros, Li-Yi Wei and Paul Debevec

(2)

Outline

• Image-based lighting

• 3D acquisition for faces

• Statistical methods (with application to face super-resolution)

• 3D Face models from single images

• Image-based faces

• Relighting for faces

(3)

Image-based lighting

(4)

Rendering

• Rendering is a function of geometry, reflectance, lighting, and viewing.

• To synthesize CGI into a real scene, we have to match the above four factors.

• Viewing can be obtained from calibration or structure from motion.

• Geometry can be captured using 3D photography or made by hand.

• How to capture lighting and reflectance?

(5)

Reflectance

• The Bidirectional Reflectance Distribution Function (BRDF)

– Given an incoming ray and an outgoing ray, what proportion of the incoming light is reflected along the outgoing ray?

The answer is given by the BRDF.
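As an illustration (not from the slides), a toy BRDF can be evaluated directly from the two directions and the surface normal; here a Lambertian diffuse term plus an assumed Blinn-Phong specular lobe, with made-up coefficients `kd`, `ks`, `shininess`:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def brdf(w_i, w_o, n, kd=0.8, ks=0.2, shininess=32.0):
    """Toy BRDF: Lambertian diffuse + Blinn-Phong specular lobe.

    w_i, w_o: unit vectors toward the light and the viewer.
    n: unit surface normal.  Returns the reflectance ratio f(w_i, w_o).
    """
    diffuse = kd / np.pi                      # energy-conserving Lambertian term
    h = normalize(w_i + w_o)                  # half-vector between light and view
    specular = ks * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular

n = np.array([0.0, 0.0, 1.0])
w_i = normalize(np.array([0.0, 1.0, 1.0]))   # light direction
w_o = normalize(np.array([0.0, -1.0, 1.0]))  # view direction
print(brdf(w_i, w_o, n))
```

For this symmetric light/view pair the half-vector coincides with the normal, so the specular lobe is at its peak.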

(6)

Rendering equation

The outgoing radiance is the emitted radiance plus the incoming radiance weighted by the BRDF and a cosine foreshortening term:

L_o(p, \omega_o) = L_e(p, \omega_o) + \int_{S^2} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i) \cos\theta_i \, d\omega_i

Both the incoming radiance L_i(p, \omega_i) and the outgoing radiance L_o(p, \omega_o) are 5D light fields.
(7)

Complex illumination

L_o(p, \omega_o) = L_e(p, \omega_o) + \int_{S^2} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i) \cos\theta_i \, d\omega_i

For a non-emitting surface lit by distant illumination L_d, the reflected radiance is

B(p, \omega_o) = \int_{S^2} f(p, \omega_o, \omega_i)\, L_d(p, \omega_i) \cos\theta_i \, d\omega_i

where f is the reflectance and L_d the lighting.

(8)

Point lights

Classically, rendering is performed assuming point light sources.

directional source

(9)

Natural illumination

People perceive materials more easily under natural illumination than under simplified illumination.

Images courtesy Ron Dror and Ted Adelson

(10)

Natural illumination

Rendering with natural illumination is more expensive than rendering with simplified illumination.

directional source natural illumination

(11)

Environment maps

Miller and Hoffman, 1984

(12)

Acquiring the Light Probe
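A light probe is commonly a photograph of a mirrored ball. A minimal sketch of the standard mapping (an illustration, assuming an orthographic camera looking down −z): each ball pixel's normal reflects the view ray into the world direction whose radiance that pixel records:

```python
import numpy as np

def probe_direction(x, y):
    """Map normalized mirror-ball image coordinates (x, y in [-1, 1],
    with x*x + y*y <= 1) to the world direction seen in that pixel.

    Assumes an orthographic camera looking down -z at a perfect mirror
    sphere; the view ray is reflected about the sphere normal.
    """
    z = np.sqrt(max(0.0, 1.0 - x * x - y * y))
    n = np.array([x, y, z])                  # sphere normal at this pixel
    v = np.array([0.0, 0.0, -1.0])           # incoming view direction
    r = v - 2.0 * np.dot(v, n) * n           # mirror reflection of the view ray
    return r

print(probe_direction(0.0, 0.0))             # center pixel reflects straight back
```

The center of the ball reflects the direction behind the camera, while the silhouette grazes toward the camera itself, which is why a single ball image covers (nearly) the full sphere of directions.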

(13)

HDRI Sky Probe

(14)

Clipped Sky + Sun Source

(15)

Lit by sun only

(16)

Lit by sky only

(17)

Lit by sun and sky

(18)

Illuminating a Small Scene

(19)
(20)

Real Scene Example

• Goal: place synthetic objects on the table

(21)

Light Probe / Calibration Grid

(22)

Modeling the Scene

light-based model

real scene

(23)

The Light-Based Room Model

(24)

Rendering into the Scene

• Background plate

(25)

Rendering into the scene

• Objects and local scene matched to the scene

(26)

Differential rendering

• Local scene without objects, illuminated by the model

(27)

Differential rendering

Error = (local scene rendered without objects) − (background plate)

(28)

Differential rendering

Final = (local scene rendered with objects) − Error

(29)

Differential Rendering

• Final result
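The differential rendering step above can be sketched in a few lines (illustrative array names; images assumed to be float arrays in [0, 1]): the error of the modelled local scene is subtracted out, so only the *change* caused by the synthetic objects reaches the photograph:

```python
import numpy as np

def differential_render(background, local_with_obj, local_no_obj):
    """Differential rendering: add to the photographed background the change
    the synthetic objects cause in the rendered local scene, so modelling
    errors in the local scene largely cancel out.
    """
    error = local_no_obj - background           # rendering error of the local scene
    return np.clip(local_with_obj - error, 0.0, 1.0)

# Toy 1-pixel example: the local-scene model renders slightly too bright.
bg = np.array([0.50])
no_obj = np.array([0.55])       # local scene rendered without objects
with_obj = np.array([0.30])     # local scene rendered with objects (shadowed)
print(differential_render(bg, with_obj, no_obj))   # [0.25]
```

The 0.05 modelling error is removed, leaving the shadow the object actually casts.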

(30)

Environment map from single image?

(31)

Eye as light probe! (Nayar et al.)

(32)

Results

(33)

Application in “Superman returns”

(34)

Capturing reflectance

(35)

Application in “The Matrix Reloaded”

(36)

3D acquisition for faces

3D acquisition for faces

(37)

Cyberware scanners

face & head scanner / whole body scanner

(38)

Making facial expressions from photos

• Similar to Façade, use a generic face model and view-dependent texture mapping

• Procedure

1. Take multiple photographs of a person
2. Establish corresponding feature points
3. Recover 3D points and camera parameters
4. Deform the generic face model to fit the points
5. Extract textures from the photos

(39)

Reconstruct a 3D model

input photographs

generic 3D face model → pose estimation → features → deformed model

(40)

Mesh deformation

– Compute displacements of feature points
– Apply scattered data interpolation

generic model displacement deformed model
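The two steps above can be sketched as follows (an illustration, using Gaussian radial basis functions as one common choice of scattered-data interpolant; the kernel width `sigma` is an assumed parameter): known feature displacements determine RBF weights, which then give a displacement for every mesh vertex:

```python
import numpy as np

def rbf_displacement(feature_pts, feature_disp, query_pts, sigma=0.5):
    """Scattered-data interpolation of feature-point displacements with
    Gaussian radial basis functions: solve for weights so the field
    reproduces the known displacements exactly, then evaluate anywhere.
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    K = kernel(feature_pts, feature_pts)
    weights = np.linalg.solve(K, feature_disp)     # (n_features, 3) RBF weights
    return kernel(query_pts, feature_pts) @ weights

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
disp = np.array([[0.0, 0.1, 0.0], [0.0, -0.1, 0.0]])
# The interpolated field reproduces the feature displacements at the features.
print(rbf_displacement(pts, disp, pts))
```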

(41)

Texture extraction

• The color at each point is a weighted combination of the colors in the photos

• Texture can be:

– view-independent
– view-dependent

• Considerations for weighting:

– occlusion
– smoothness
– positional certainty
– view similarity

(42)

Texture extraction

(43)

Texture extraction

(44)

Texture extraction

view-independent view-dependent

(45)

Model reconstruction

Use images to adapt a generic face model.

(46)

Creating new expressions

• In addition to global blending, we can use:

– Regional blending
– Painterly interface

(47)

Creating new expressions

New expressions are created with 3D morphing:


Applying a global blend

(48)

Creating new expressions


Applying a region-based blend

(49)

Creating new expressions


Using a painterly interface

(50)

Drunken smile

(51)

Animating between expressions

Morphing over time creates animation:

“neutral” “joy”

(52)

Video

(53)

Spacetime faces

(54)

Spacetime faces

black & white cameras / color cameras

video projectors

(55)

time

(56)

time

Face surface

(57)

time

stereo

(58)

time

stereo active stereo

(59)

time

spacetime stereo

stereo active stereo

(60)

Spacetime Stereo

time

surface motion

time=1

(61)

Spacetime Stereo

time

surface motion

time=2

(62)

Spacetime Stereo

time

surface motion

time=3

(63)

Spacetime Stereo

time

surface motion

time=4

(64)

Spacetime Stereo

time

surface motion

time=5

(65)

Spacetime Stereo

time

surface motion

Better:

• spatial resolution
• temporal stability

(66)

Spacetime stereo matching

(67)

Video

(68)

Fitting

(69)

FaceIK

(70)

Animation

(71)

3D face applications: The one

(72)

3D face applications: Gladiator

extra 3M

(73)

Statistical methods

(74)

Statistical methods

parameters z → f(z) + n → observed signal y

z^* = \arg\max_z P(z \mid y) = \arg\max_z P(y \mid z)\, P(z) = \arg\min_z L(y \mid z) + L(z)

Examples: super-resolution, de-noising, de-blocking, inpainting

(75)

Statistical methods

parameters z → f(z) + n → observed signal y

z^* = \arg\min_z L(y \mid z) + L(z)

L(y \mid z) = \| y - f(z) \|^2 (data evidence);  L(z) (a-priori knowledge)
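When both losses are quadratic (f(z) = z, Gaussian MRF prior on differences), the minimizer has a closed form. A minimal 1-D denoising sketch (illustrative; `lam` is an assumed regularization weight):

```python
import numpy as np

def map_denoise(y, lam=5.0):
    """MAP estimate with quadratic losses:
    z* = argmin ||y - z||^2 + lam * ||D z||^2   (Gaussian MRF prior),
    solved in closed form since both terms are quadratic.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)              # finite-difference operator
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

y = np.array([0.0, 0.0, 1.0, 0.0, 0.0])          # a noisy spike
z = map_denoise(y)
print(z)                                          # spike spread toward neighbours
```

The smoothness prior pulls the spike down and distributes its mass to neighbouring samples; the total intensity is preserved because the difference operator annihilates constants.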

(76)

Statistical methods

There are approximately 10^240 possible 10×10 gray-level images. Even human beings have not seen them all yet. There must be a strong statistical bias.

— Takeo Kanade

Approximately 8×10^11 blocks per day per person.

(77)

Generic priors

“Smooth images are good images.”

L(z) = \sum_x \rho(\nabla z(x))

Gaussian MRF: \rho(d) = d^2

Huber MRF: \rho(d) = d^2 for |d| \le T; \rho(d) = 2T|d| - T^2 for |d| > T
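The Huber penalty is easy to evaluate directly (a small illustrative sketch; `T` is the threshold from the definition above):

```python
import numpy as np

def huber(d, T=1.0):
    """Huber penalty: quadratic near zero, linear in the tails, so large
    image edges are penalised less harshly than under a Gaussian MRF."""
    d = np.abs(d)
    return np.where(d <= T, d ** 2, 2 * T * d - T ** 2)

print(huber(np.array([0.5, 1.0, 3.0])))   # [0.25 1.   5.  ]
```

At |d| = T the two branches agree (both value and slope), which keeps the penalty smooth enough for gradient-based optimization.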

(78)

Generic priors

(79)

Example-based priors

“Existing images are good images.”

six 200×200 images → 2,000,000 pairs

(80)

Example-based priors

L(z)

(81)

Example-based priors

high-resolution

low-resolution

(82)

Model-based priors

“Face images are good images when working on face images …”

Parametric model: z = WX + \mu, with prior L(X)

z^* = \arg\min_z L(y \mid z) + L(z)

X^* = \arg\min_X L(y \mid WX + \mu) + L(X), \quad z^* = W X^* + \mu

(83)

PCA

• Principal Component Analysis (PCA): approximating a high-dimensional data set with a lower-dimensional subspace

[figure: data points, original axes, first and second principal components]

(84)

PCA on faces: “eigenfaces”

Average face, first principal component, and other components.

For all except the average face: “gray” = 0, “white” > 0, “black” < 0.
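Computing eigenfaces amounts to PCA on the stack of face images. A minimal sketch via the SVD (illustrative data shapes; real face images would be flattened to pixel vectors first):

```python
import numpy as np

def eigenfaces(images, k):
    """PCA on a stack of face images (n_faces, n_pixels): subtract the
    average face, then take the top-k right singular vectors as the
    principal components ("eigenfaces")."""
    mean = images.mean(axis=0)
    centered = images - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]             # average face, k principal components

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))   # 20 tiny "faces" of 64 pixels each
mean, comps = eigenfaces(faces, k=5)
print(comps.shape)                  # (5, 64)
```

Each face is then approximated as the average plus a weighted sum of the components, which is exactly the z = WX + μ model used above.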

(85)

Model-based priors

“Face images are good images when working on face images …”

Parametric model: z = WX + \mu, with prior L(X)

z^* = \arg\min_z L(y \mid z) + L(z)

X^* = \arg\min_X L(y \mid WX + \mu) + L(X), \quad z^* = W X^* + \mu

(86)

Super-resolution

(a) (b) (c) (d) (e) (f)

(a) Input low 24×32 (b) Our results (c) Cubic B-Spline (d) Freeman et al. (e) Baker et al. (f) Original high 96×128
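With the linear face model z = WX + μ and quadratic losses, the MAP fit behind such results reduces to ridge regression. A minimal sketch (illustrative sizes; `W` stands in for a learned basis such as eigenfaces, `lam` is an assumed prior weight):

```python
import numpy as np

def fit_model(y, W, mu, lam=0.1):
    """Fit coefficients X* = argmin ||y - (W X + mu)||^2 + lam ||X||^2;
    with quadratic losses the minimiser is the ridge-regression solution
    (W^T W + lam I) X = W^T (y - mu)."""
    k = W.shape[1]
    return np.linalg.solve(W.T @ W + lam * np.eye(k), W.T @ (y - mu))

rng = np.random.default_rng(1)
W = rng.normal(size=(100, 5))       # basis (e.g. eigenfaces), 5 components
mu = rng.normal(size=100)           # mean face
x_true = np.array([1.0, -0.5, 0.2, 0.0, 0.3])
y = W @ x_true + mu                 # a noiseless observation
print(fit_model(y, W, mu, lam=1e-6))  # recovers x_true up to tiny shrinkage
```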

(87)

Face models from single images

(88)

Morphable model of 3D faces

• Start with a catalogue of 200 aligned 3D Cyberware scans

• Build a model of the average shape and texture, and the principal variations, using PCA

(89)

Morphable model

shape exemplars / texture exemplars

(90)

Morphable model of 3D faces

• Adding some variations

(91)

Reconstruction from single image

The rendering must be similar to the input if we guess right.

(92)

Reconstruction from single image

Shape and texture priors are learned from the database; ρ is the set of parameters for shading, including camera pose, lighting, and so on.

(93)

Modifying a single image

(94)

Animating from a single image

(95)

Video

(96)

Exchanging faces in images

(97)

Exchange faces in images

(98)

Exchange faces in images

(99)

Exchange faces in images

(100)

Exchange faces in images

(101)

Morphable model for human body

(102)

Image-based faces

(lip sync.)

(103)

Video rewrite (analysis)

(104)

Video rewrite (synthesis)

(105)

Results

• Video database

– 2 minutes of JFK

• Only half usable

• Head rotation

training video

Read my lips.

I never met Forrest Gump.

(106)

Morphable speech model

(107)

Preprocessing

(108)

Prototypes (PCA+k-mean clustering)

We find I_i and C_i for each prototype image.

(109)

Morphable model

analysis: I → (α, β)

synthesis: (α, β) → I

(110)

Morphable model

analysis

synthesis

(111)

Synthesis

(112)

Results

(113)

Results

(114)

Relighting faces

Relighting faces

(115)

Light is additive

lamp #1 lamp #2

(116)

Light stage 1.0

(117)

Light stage 1.0

64×32 lighting directions

(118)

Input images

(119)

Reflectance function

occlusion flare

(120)

Relighting
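Because light is additive, relighting is a weighted sum of the basis images captured one lighting direction at a time. A minimal sketch (illustrative array sizes matching the 64×32 = 2048 light-stage directions; the weights for a novel environment are made up):

```python
import numpy as np

def relight(basis_images, light_weights):
    """Relight by superposition: the image under any new illumination is
    out = sum_i w_i * I_i over the one-light-at-a-time basis images."""
    return np.tensordot(light_weights, basis_images, axes=1)

rng = np.random.default_rng(0)
basis = rng.uniform(size=(2048, 8, 8))   # 2048 lighting directions, tiny 8x8 images
weights = np.zeros(2048)
weights[0] = 0.7                          # a novel environment made of two lights
weights[100] = 0.3
out = relight(basis, weights)
print(out.shape)                          # (8, 8)
```

In practice the weights come from resampling an environment map onto the light-stage directions, so the subject appears lit by the captured environment.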

(121)

Results

(122)

Changing viewpoints

(123)

Results

(124)

Video

(125)

3D face applications: Spiderman 2

(126)

Spiderman 2

real synthetic

(127)

Spiderman 2

video

(128)

Light stage 3

(129)

Light stage 6

(130)

Application: The Matrix Reloaded

(131)

Application: The Matrix Reloaded

(132)

References

• Paul Debevec. Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-based Graphics with Global Illumination and High Dynamic Range Photography. SIGGRAPH 1998.

• F. Pighin, J. Hecker, D. Lischinski, D. H. Salesin, and R. Szeliski. Synthesizing Realistic Facial Expressions from Photographs. SIGGRAPH 1998, pp. 75-84.

• Li Zhang, Noah Snavely, Brian Curless, and Steven M. Seitz. Spacetime Faces: High Resolution Capture for Modeling and Animation. SIGGRAPH 2004.

• V. Blanz and T. Vetter. A Morphable Model for the Synthesis of 3D Faces. SIGGRAPH 1999, pp. 187-194.

• Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, and Mark Sagar. Acquiring the Reflectance Field of a Human Face. SIGGRAPH 2000.

• Christoph Bregler, Malcolm Slaney, and Michele Covell. Video Rewrite: Driving Visual Speech with Audio. SIGGRAPH 1997.

• Tony Ezzat, Gadi Geiger, and Tomaso Poggio. Trainable Videorealistic Speech Animation. SIGGRAPH 2002.
