
(1)

Image-Based Lighting & Textures

Digital Visual Effects, Spring 2006 Yung-Yu Chuang

2006/6/7

with slides by Alexei Efros, Li-Yi Wei and Paul Debevec

(2)

Announcements

• Winners for project #2

• Voting for project #3

• Final project:

– Checkpoint this Sunday: send me your team, topic and a brief progress update

– Demo on 6/30 (Friday) 10:00am in this room

– Report due on 7/3 (Monday) 11:59pm

(3)

Outline

• Image-based lighting

• Texture synthesis

• Acceleration by multi-resolution and TSVQ

• Patch-based texture synthesis

• Image analogies

(4)

Image-based lighting

(5)

Rendering

• Rendering is a function of geometry, reflectance, lighting and viewing.

• To composite CGI into a real scene, we have to match all four of the above factors.

• Viewing can be obtained from calibration or structure from motion.

• Geometry can be captured using 3D photography or made by hand.

• How to capture lighting and reflectance?

(6)

Reflectance

• The Bidirectional Reflectance Distribution Function (BRDF)

Given an incoming ray ω_i and an outgoing ray ω_o, what proportion of the incoming light is reflected along the outgoing ray?

Answer given by the BRDF:

ρ(ω_i, ω_o) = dL_o(ω_o) / (L_i(ω_i) cos θ_i dω_i)   (θ_i measured from the surface normal)

(7)

Capturing reflectance

(8)

Application in “The Matrix Reloaded”

(9)

Rendering equation

L_o(p, ω_o) = L_e(p, ω_o) + ∫_{S²} ρ(p, ω_i, ω_o) L_i(p, ω_i) cos θ_i dω_i

L_o(p, ω_o) is a 5D light field.

(10)

Point lights

Classically, rendering is performed assuming point light sources

directional source

(11)

Environment maps

Miller and Hoffman, 1984

(12)

Complex illumination

L_o(p, ω_o) = L_e(p, ω_o) + ∫_{S²} f(p, ω_i, ω_o) L_i(p, ω_i) cos θ_i dω_i

B(p, ω_o) = ∫_{S²} f(p, ω_i, ω_o) L_d(p, ω_i) cos θ_i dω_i

(13)

Basis functions

• Basis functions are pieces of signal that can be used to produce approximations to a function

c_i = ∫ f(x) B_i(x) dx   (coefficients c_1, c_2, c_3 obtained by multiplying the signal against each basis function)

(14)

Basis functions

• We can then use these coefficients to reconstruct an approximation to the original signal

c_1 × B_1 + c_2 × B_2 + c_3 × B_3

(15)

Basis functions

• We can then use these coefficients to reconstruct an approximation to the original signal

f~(x) = Σ_{i=1}^{N} c_i B_i(x)

(16)

Orthogonal basis functions

• Orthogonal Basis Functions

– These are families of functions with special properties

– Intuitively, it's like the functions don't overlap each other's footprints

• A bit like the way a Fourier transform breaks a function into component sine waves

∫ B_i(x) B_j(x) dx = 1 if i = j, 0 if i ≠ j

(17)

Basis functions

• Transform data to a space in which we can capture the essence of the data better

• Here, we use spherical harmonics, similar to the Fourier transform in the spherical domain

(18)

Real spherical harmonics

• A system of signed, orthogonal functions over the sphere

• Represented in spherical coordinates by the function y_l^m(θ, φ), where l is the band and m is the index within the band:

y_l^m(θ, φ) = √2 K_l^m cos(mφ) P_l^m(cos θ)      if m > 0

y_l^m(θ, φ) = √2 K_l^m sin(−mφ) P_l^{−m}(cos θ)   if m < 0

y_l^0(θ, φ) = K_l^0 P_l^0(cos θ)                  if m = 0

(19)

Real spherical harmonics

(20)

Reading SH diagrams

(lobe diagram: the + lobe represents function values along "this direction", not the opposite direction)

(21)

Reading SH diagrams


(22)

The SH functions

(plots of the first nine SH basis functions: y_0^0; y_1^{−1}, y_1^0, y_1^1; y_2^{−2}, y_2^{−1}, y_2^0, y_2^1, y_2^2)

(23)

The SH functions

(24)

Spherical harmonics

(25)

Spherical harmonics

Y_lm(θ, φ), up to constant factors:

l = 0: 1

l = 1 (m = −1, 0, 1): y, z, x

l = 2 (m = −2 … 2): xy, yz, 3z² − 1, zx, x² − y²

(26)

SH projection

• First we define a strict order for SH functions

• Project a spherical function into a vector of SH coefficients

c_i = ∫_S f(s) y_i(s) ds,   where i = l(l + 1) + m + 1

(27)

SH reconstruction

• To reconstruct the approximation to a function

• We truncate the infinite series of SH functions to give a low frequency approximation

f~(s) = Σ_{i=1}^{N²} c_i y_i(s)
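The projection/reconstruction pipeline above can be sketched numerically. A minimal Python/NumPy sketch, using Legendre polynomials on [−1, 1] as a stand-in orthonormal basis (the 1-D analogue of spherical harmonics on the sphere); the signal |x|, the band count, and the grid resolution are arbitrary choices for illustration:

```python
import numpy as np

# Projection and reconstruction with an orthonormal basis, sketched in 1-D
# with Legendre polynomials standing in for the y_i on the sphere.
x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
f = np.abs(x)                                  # the signal to approximate

# 6 orthonormal basis functions B_i (Legendre P_i scaled to unit norm)
B = np.stack([np.polynomial.legendre.Legendre.basis(i)(x)
              * np.sqrt((2 * i + 1) / 2) for i in range(6)])

c = B @ f * dx                                 # c_i = integral of f(x) B_i(x) dx
f_approx = c @ B                               # f~(x) = sum_i c_i B_i(x)
rms = float(np.sqrt(np.mean((f - f_approx) ** 2)))
```

Truncating the series keeps only the low-frequency part of |x|: `rms` stays small with just 6 terms, and the odd-index coefficients vanish by symmetry.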

(28)

Examples of reconstruction

(29)

An example

• Take a function comprised of two area light sources

– SH project them into 4 bands = 16 coefficients

(table: the 16 resulting SH coefficients, shown as a 4×4 grid)

(30)

Low frequency light source

• We reconstruct the signal

– Using only these coefficients to find a low frequency approximation to the original light source

(31)

SH lighting for diffuse objects

An Efficient Representation for Irradiance Environment Maps, Ravi Ramamoorthi and Pat Hanrahan, SIGGRAPH 2001

• Assumptions

– Diffuse surfaces

– Distant illumination

– No shadowing, interreflection

Irradiance is a function of the surface normal:

B(p, ω_o) = ∫_{S²} f(p, ω_i, ω_o) L_d(p, ω_i) cos θ_i dω_i

B(p, n) = ρ(p) E(n)

(32)

Diffuse reflection

B = ρ × E

radiosity (image intensity) = reflectance (albedo/texture) × irradiance (incoming light)

(example: Quake light maps)

(33)

Irradiance environment maps

Illumination environment map → irradiance environment map:

E(n) = ∫_Ω L(ω) (n · ω) dω
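The irradiance integral E(n) = ∫ L(ω)(n · ω) dω can be estimated by brute-force Monte Carlo (this is my own illustration of the integral, not Ramamoorthi and Hanrahan's method). The sketch samples uniform directions on the sphere and keeps the hemisphere around n; for a constant environment L = 1 the exact value is π:

```python
import numpy as np

# Monte Carlo estimate of E(n): directions are sampled uniformly on the
# sphere (pdf 1/(4*pi)); samples below the horizon contribute zero.
def irradiance(L, n, n_samples=200_000, rng=None):
    rng = rng or np.random.default_rng(0)
    v = rng.normal(size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform unit directions
    cos = v @ n
    return 4.0 * np.pi * np.mean(np.where(cos > 0, L(v) * cos, 0.0))

n = np.array([0.0, 0.0, 1.0])
E = irradiance(lambda v: np.ones(len(v)), n)        # constant environment L = 1
```

For L ≡ 1 the integral is ∫ cos θ dω over the hemisphere, so the estimate should come out close to π.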

(34)

Spherical harmonic expansion

Expand lighting (L) and irradiance (E) in basis functions:

L(θ, φ) = Σ_{l=0}^{∞} Σ_{m=−l}^{+l} L_lm Y_lm(θ, φ)

E(θ, φ) = Σ_{l=0}^{∞} Σ_{m=−l}^{+l} E_lm Y_lm(θ, φ)

(figure: an example environment expanded as .67 × basis image + .36 × basis image + …)

(35)

Analytic irradiance formula

Lambertian surface acts like a low-pass filter:

E_lm = A_l L_lm

A_0 = π,  A_1 = 2π/3,  A_2 = π/4,  A_l = 0 for odd l > 1

A_l = 2π · (−1)^{l/2−1} / ((l + 2)(l − 1)) · l! / (2^l ((l/2)!)²)   for even l

(figure: the cosine-term coefficients A_l plotted for l = 0, 1, 2)
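The A_l formula above is easy to evaluate directly. A small sketch: it reproduces the tabulated values A_0 = π, A_1 = 2π/3, A_2 = π/4 and shows how quickly the coefficients fall off, which is why so few terms suffice:

```python
import math

# The Lambertian filter coefficients A_l from the analytic formula.
def A(l):
    if l == 0:
        return math.pi
    if l == 1:
        return 2.0 * math.pi / 3.0
    if l % 2 == 1:
        return 0.0                 # odd bands above l = 1 vanish
    return (2.0 * math.pi * (-1) ** (l // 2 - 1) / ((l + 2) * (l - 1))
            * math.factorial(l) / (2 ** l * math.factorial(l // 2) ** 2))

values = [A(l) for l in range(7)]  # pi, 2pi/3, pi/4, 0, -pi/24, 0, pi/64
```

Beyond l = 2 the magnitudes are already below π/24, so truncating at 9 coefficients loses little energy.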

(36)

9 parameter approximation

Exact image Order 0

1 term

RMS error = 25 %


(37)

9 Parameter Approximation

Exact image Order 1

4 terms

RMS Error = 8%


(38)

9 Parameter Approximation

Exact image Order 2

9 terms

RMS Error = 1%

For any illumination, average error < 3% [Basri Jacobs 01]


(39)

Comparison

Incident illumination: 300×300

Irradiance map texture: 256×256

– Hemispherical integration: 2 hrs (time ∝ 300 × 300 × 256 × 256)

– Spherical harmonic coefficients: 1 sec (time ∝ 9 × 256 × 256)

(40)

Complex geometry

Assume no shadowing: Simply use surface normal


(41)

Natural illumination

For diffuse objects, rendering with natural illumination can be done quickly

directional source natural illumination

(42)

HDRI Sky Probe

(43)

Clipped Sky + Sun Source

(44)

Lit by sun only

(45)

Lit by sky only

(46)

Lit by sun and sky

(47)

Acquiring the Light Probe

(48)

Illuminating a Small Scene

(49)
(50)

Real Scene Example

• Goal: place synthetic objects on table

(51)

Light Probe / Calibration Grid

(52)

real scene real scene

Modeling the Scene

light-based model light-based model

(53)

The Light-Based Room Model

(54)

Rendering into the Scene

• Background Plate

(55)

Rendering into the scene

• Objects and Local Scene matched to Scene

(56)

Differential rendering

• Local scene w/o objects, illuminated by model

(57)

Differential rendering

final = background plate + (local scene with objects − local scene without objects)

(58)

Differential Rendering

• Final Result

(59)

Environment map from single image?

(60)

Eye as light probe! (Nayar et al)

(61)

Cornea is an ellipsoid

(62)

Results

(63)

Application in “The Matrix Reloaded”

(64)

Texture synthesis

(65)

Texture synthesis

• Given a finite sample of some texture, the goal is to synthesize other samples from that same texture.

– The sample needs to be "large enough"

input image → synthesis → generated image

(66)

The challenge

• How to capture the essence of texture?

• Need to model the whole spectrum: from repeated to stochastic texture

(67)

Markov property

• P(f_i | f_{i−1}, f_{i−2}, f_{i−3}, …, f_0) = P(f_i | f_{i−1}, f_{i−2}, …, f_{i−n})

• P(f_i | f_{S−{i}}) = P(f_i | f_{N_i})   (S: the whole image; N_i: the neighborhood of pixel i)

(68)

Motivation from language

• [Shannon’48] proposed a way to generate English-looking text using N-grams:

– Assume a generalized Markov model

– Use a large text to compute probability distributions of each letter given the N−1 previous letters (precompute, or sample randomly)

– Starting from a seed repeatedly sample this Markov chain to generate new letters

– One can use whole words instead of letters too.
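The N-gram procedure above can be sketched in a few lines. A toy Python version with N = 3 (character trigrams); the corpus and seed are made-up examples, and restarting from the seed on a dead end is one simple policy among many:

```python
import random
from collections import defaultdict

# Toy Shannon-style N-gram generator: sample each character given the
# previous N-1 characters, with probabilities taken from a corpus.
def build_model(text, n=3):
    model = defaultdict(list)
    for i in range(len(text) - n + 1):
        model[text[i:i + n - 1]].append(text[i + n - 1])
    return model

def generate(model, seed, length, rng=None):
    rng = rng or random.Random(0)
    out = list(seed)
    k = len(seed)                       # context length = N - 1
    for _ in range(length):
        choices = model.get("".join(out[-k:]))
        if not choices:                 # dead end: fall back to the seed context
            choices = model[seed[-k:]]
        out.append(rng.choice(choices))
    return "".join(out)

corpus = "the cat sat on the mat and the cat ate the rat "
text = generate(build_model(corpus), "th", 50)
```

Sampling from the stored successor lists automatically reproduces the corpus's conditional letter frequencies, which is all the Markov model requires.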

(69)

Mark V. Shaney (Bell Labs)

• Results (using alt.singles corpus):

"One morning I shot an elephant in my arms and kissed him.”

"I spent an interesting evening recently with a grain of salt"

• Notice how well local structure is preserved!

– Now let’s try this in 2D...

(70)

– Assuming Markov property, what is conditional probability distribution of p, given the neighbourhood window?

– Instead of constructing a model, let’s directly search the input image for all such neighbourhoods to produce a

histogram for p

– To synthesize p, just pick one match at random

Ideally

(figure: with an infinite sample image, pixel p of the generated image is sampled directly from matching neighborhoods)

(71)

In reality

– However, since our sample image is finite, an exact neighbourhood match might not be present

– So we find the best match using SSD error (weighted by a Gaussian to emphasize local structure), and take all samples within some distance from that match

Using Gaussian-weighted SSD is very important.
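The per-pixel search can be sketched as follows. This is a heavily simplified, hypothetical Python/NumPy version, not the paper's implementation: it grows the image in raster order with a causal neighborhood instead of onion-peel ordering, seeds the borders with random sample pixels, and picks randomly among matches within 10% of the best Gaussian-weighted SSD:

```python
import numpy as np

# Simplified Efros & Leung-style synthesis: raster-order growth, causal
# (already-synthesized) neighborhood, Gaussian-weighted SSD matching,
# random pick among near-best matches. Grayscale only.
def gaussian(half, sigma):
    ax = np.arange(-half, half + 1)
    return np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))

def synthesize(sample, out_h, out_w, half=1, rng=None):
    rng = rng or np.random.default_rng(0)
    sh, sw = sample.shape
    H, W = out_h + 2 * half, out_w + 2 * half
    # padded output, seeded with random sample pixels
    out = sample[rng.integers(sh, size=(H, W)), rng.integers(sw, size=(H, W))]
    w = gaussian(half, sigma=max(half, 1.0))
    w[half, half:] = 0.0          # causal mask: ignore the current pixel ...
    w[half + 1:, :] = 0.0         # ... and everything not yet synthesized
    # every candidate neighborhood in the sample
    ys, xs = np.mgrid[half:sh - half, half:sw - half]
    ys, xs = ys.ravel(), xs.ravel()
    cands = np.stack([sample[y - half:y + half + 1, x - half:x + half + 1]
                      for y, x in zip(ys, xs)])
    for y in range(half, out_h + half):
        for x in range(half, out_w + half):
            nb = out[y - half:y + half + 1, x - half:x + half + 1]
            ssd = (((cands - nb) ** 2) * w).sum(axis=(1, 2))
            ok = np.flatnonzero(ssd <= ssd.min() * 1.1 + 1e-9)
            k = rng.choice(ok)
            out[y, x] = sample[ys[k], xs[k]]
    return out[half:out_h + half, half:out_w + half]

sample = np.tile(np.array([0.0, 1.0]), (8, 4))   # 8x8 vertical stripes
result = synthesize(sample, 8, 8, half=1)
```

Because every output pixel is copied from some sample pixel, the result can only contain values that occur in the input texture.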

(72)

Neighborhood size matters

(73)

More results

Increasing window size

(74)

More results

french canvas, rafia weave

(75)

More results

wood, brick wall

(76)

Failure cases

Growing garbage Verbatim copying

(77)

Inpainting

• Growing is in “onion peeling” order

– within each “layer”, pixels with most neighbors are synthesized first

(78)

Inpainting

(79)

Inpainting

(80)

Results

(81)

Summary of the basic algorithm

• Exhaustively search neighborhoods

(82)

Neighborhood

• Neighborhood size determines the quality & cost

3×3: 423 s,  5×5: 528 s,  7×7: 739 s

9×9: 1020 s,  11×11: 1445 s,  41×41: 24350 s

(83)

Summary

• Advantages:

– conceptually simple

– models a wide range of real-world textures

– naturally does hole-filling

• Disadvantages:

– it’s slow

– it’s a heuristic

(84)

Acceleration by Wei & Levoy

• Multi-resolution

• Tree-structure

(85)

Multi-resolution pyramid

High resolution Low resolution

(86)

Multi-resolution algorithm

(87)

Benefits

• Better image quality & faster computation

(comparison: 1 level with 5×5 vs. 3 levels with 5×5 vs. 1 level with 11×11 neighborhoods)

(88)

Results

Random Oriented

Regular Semi-regular

(89)

Failures

• Non-planar structures

• Global information

(90)

Acceleration

• Computation bottleneck: neighborhood search

(91)

Nearest point search

• Treat neighborhoods as high dimensional points

(figure: a neighborhood of 12 pixels flattened into a 12-dimensional point/vector)
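Treating neighborhoods as high-dimensional vectors, the tree-structured VQ idea can be sketched as a binary tree built by repeated 2-means splits: a query descends the tree toward the closer centroid and scans only one leaf, giving roughly O(log N) approximate lookups. This is a simplified illustration, not Wei & Levoy's exact implementation:

```python
import numpy as np

# Tree-structured vector quantization sketch: vectors are split
# recursively with 2-means; a query walks the tree and linearly scans
# only the leaf it lands in.
class Node:
    def __init__(self, idx):
        self.idx = idx                    # vector indices in this subtree
        self.left = self.right = None
        self.c0 = self.c1 = None          # child-cluster centroids

def build(vectors, idx, leaf_size=4, iters=8, rng=None):
    rng = rng or np.random.default_rng(0)
    node = Node(idx)
    if len(idx) <= leaf_size:
        return node
    pts = vectors[idx]
    c = pts[rng.choice(len(pts), 2, replace=False)]   # 2-means init
    for _ in range(iters):
        side = ((pts - c[0]) ** 2).sum(1) > ((pts - c[1]) ** 2).sum(1)
        if side.all() or not side.any():
            return node                   # degenerate split: keep as a leaf
        c = np.stack([pts[~side].mean(0), pts[side].mean(0)])
    node.c0, node.c1 = c[0], c[1]
    node.left = build(vectors, idx[~side], leaf_size, iters, rng)
    node.right = build(vectors, idx[side], leaf_size, iters, rng)
    return node

def query(node, vectors, q):
    while node.left is not None:          # descend toward the closer centroid
        d0 = ((q - node.c0) ** 2).sum()
        d1 = ((q - node.c1) ** 2).sum()
        node = node.left if d0 <= d1 else node.right
    d = ((vectors[node.idx] - q) ** 2).sum(1)
    return int(node.idx[int(np.argmin(d))])

rng = np.random.default_rng(1)
vecs = np.concatenate([rng.normal(0.0, 0.1, (20, 5)),
                       rng.normal(100.0, 0.1, (20, 5))])
root = build(vecs, np.arange(40))
```

The search is approximate (the greedy descent can miss the true nearest neighbor near split boundaries), which is exactly the trade-off the two-orders-of-magnitude speedup buys.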

(92)

Tree-Structured Vector Quantization

(93)

Timing

• Time complexity : O(log N) instead of O(N)

– 2 orders of magnitude speedup for non-trivial images

Efros 99: 1941 secs,  full searching: 503 secs,  TSVQ: 12 secs

(94)

Results

Input Exhaustive: 360 s TSVQ: 7.5 s

(95)

Patch-based methods

• Observation: neighboring pixels are highly correlated

• Idea: unit of synthesis = block

– Exactly the same as before, but now we want P(B | N(B))

– Much faster: synthesize all pixels in a block at once

(figure: synthesizing a block B by non-parametric sampling from the input image)

(96)

Algorithm

– Pick the size of the block and the size of the overlap

– Synthesize blocks in raster order

– Search the input texture for a block that satisfies the overlap constraints (above and left)

– Paste the new block into the resulting texture

• blending

• use dynamic programming to compute the minimal error boundary cut
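The dynamic-programming step for the minimal error boundary cut can be sketched directly: given the squared-difference surface e of the vertical overlap region, accumulate costs row by row (each cell may continue from the three cells above it) and backtrack the cheapest seam:

```python
import numpy as np

# Minimal-error boundary cut through a vertical overlap region:
# cost[y, x] is the cheapest seam reaching (y, x), where the seam may
# move at most one column per row.
def min_error_boundary(e):
    h, w = e.shape
    cost = e.astype(float).copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    path = [int(np.argmin(cost[-1]))]          # cheapest bottom cell
    for y in range(h - 2, -1, -1):             # backtrack upward
        x = path[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        path.append(lo + int(np.argmin(cost[y, lo:hi])))
    return path[::-1]                          # path[y] = cut column in row y

e = np.ones((4, 3))
e[:, 1] = 0.0                                  # a free seam down column 1
path = min_error_boundary(e)
```

Pixels left of the cut are kept from the old block and pixels right of it from the new block, so the seam follows wherever the two blocks already agree.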

(97)

(figure: placing blocks B1, B2 from the input texture, three ways)

– Random placement of blocks

– Neighboring blocks constrained by overlap

– Minimal error boundary cut

(98)

Minimal error boundary

overlapping blocks → vertical boundary

overlap error: e = (B1^ov − B2^ov)²   (the boundary cut follows the minimum of e)

(99)

Results

(100)

Results

(101)

Failure cases

(102)

GraphCut textures

(103)

GraphCut textures

(104)

Photomontage

(105)

Photomontage

(106)

Image Analogies

(107)

Coherence search

(108)

Image Analogies Implementation

(109)

Image Analogies Implementation

(110)

Image Analogies Implementation

(111)

Balance between approximate and coherence searches

(112)

Learn to blur

(113)

Super-resolution

(114)
(115)

Colorization

(116)

Artistic filters

(117)

A : A′ :: B : B′

(118)
(119)

A : A′ :: B : B′

(120)
(121)

Texture by numbers

(122)

Texture by numbers

(123)

References

• Paul Debevec, Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-based Graphics with Global Illumination and High Dynamic Range Photography, SIGGRAPH 1998.

• Alexei A. Efros, Thomas K. Leung, Texture Synthesis by Non-parametric Sampling, ICCV 1999.

• Li-Yi Wei, Marc Levoy, Fast Texture Synthesis Using Tree-Structured Vector Quantization, SIGGRAPH 2000.

• Aaron Hertzmann, Charles E. Jacobs, Nuria Oliver, Brian Curless, David H. Salesin, Image Analogies, SIGGRAPH 2001.

• Alexei A. Efros, William T. Freeman, Image Quilting for Texture Synthesis and Transfer, SIGGRAPH 2001.
