3D photography
Digital Visual Effects Yung-Yu Chuang
with slides by Szymon Rusinkiewicz, Richard Szeliski, Steve Seitz and Brian Curless
3D photography
• Acquisition of geometry and material
Range acquisition
Range acquisition taxonomy
range acquisition
• contact
– mechanical (CMM, jointed arm)
– inertial (gyroscope, accelerometer)
– ultrasonic trackers
– magnetic trackers
• transmissive
– industrial CT
– ultrasound
– MRI
• reflective
– non-optical (radar, sonar)
– optical
Range acquisition taxonomy
optical methods
• passive
– shape from X: stereo, motion, shading, texture, focus, defocus
• active
– active variants of passive methods: stereo with projected texture, active depth from defocus, photometric stereo
– time of flight
– triangulation
Outline
• Passive approaches
– Stereo
– Multiview approach
• Active approaches
– Triangulation
– Shadow scanning
• Active variants of passive approaches
– Photometric stereo
– Example-based photometric stereo
Passive approaches
Public Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923
Stereo
Stereo
• One distinguishable point being observed
– The preimage can be found at the intersection of the rays from the focal points to the image points
Stereo
• Many points being observed
– Need some method to establish correspondences
Components of stereo vision systems
• Camera calibration
• Image rectification: simplifies the search for correspondences
• Correspondence: which item in the left image corresponds to which item in the right image
• Reconstruction: recovers 3-D information from the 2-D correspondences
Epipolar geometry
• Epipolar constraint: corresponding points must lie on conjugate epipolar lines
– Search for correspondences becomes a 1-D problem
x′ᵀ F x = 0
Image rectification
• Warp images such that conjugate epipolar lines become collinear and parallel to the u axis
Disparity
• With rectified images, disparity is just the (horizontal) displacement of corresponding features in the two images
– Disparity = 0 for distant points
– Larger disparity for closer points
– Depth of point proportional to 1/disparity
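For a rectified pair with focal length f (in pixels) and baseline B, the "depth proportional to 1/disparity" relation becomes Z = f·B/d. A minimal sketch, where f and B are assumed example calibration values, not data from the slides:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# f (focal length in pixels) and B (baseline in meters) are assumed
# example values for illustration.

def depth_from_disparity(d, f=700.0, B=0.12):
    """Return depth in meters for a disparity d in pixels (d must be > 0)."""
    if d <= 0:
        raise ValueError("zero disparity corresponds to a point at infinity")
    return f * B / d

# Larger disparity means a closer point:
# depth_from_disparity(70) is about 1.2 m, depth_from_disparity(7) about 12 m.
```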
Reconstruction
• Geometric
– Construct the line segment perpendicular to R and R' that intersects both rays and take its mid-point
Basic stereo algorithm
For each epipolar line
For each pixel in the left image
• compare with every pixel on same epipolar line in right image
• pick pixel with minimum match cost
Improvement: match windows
Basic stereo algorithm
• For each pixel
– For each disparity
• For each pixel in window
– Compute difference
– Find disparity with minimum SSD
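The nested loops above can be sketched directly. A brute-force version, assuming rectified grayscale images as NumPy arrays (array names and the window half-size are illustrative):

```python
import numpy as np

# Basic window-based stereo: for each pixel, try every disparity,
# compare a small window by SSD, keep the disparity with minimum cost.
# Deliberately brute force; border pixels are left at disparity 0.

def ssd_stereo(left, right, max_disp, half=1):
    """Return an integer disparity map (left image is the reference)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            win_l = left[y-half:y+half+1, x-half:x+half+1].astype(float)
            best, best_d = None, 0
            for d in range(max_disp + 1):
                win_r = right[y-half:y+half+1, x-d-half:x-d+half+1]
                cost = np.sum((win_l - win_r) ** 2)   # sum of squared differences
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```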
Reverse order of loops
• For each disparity
– For each pixel
• For each pixel in window
– Compute difference
• Find disparity with minimum SSD at each pixel
Incremental computation
• Given SSD of a window, at some disparity
Incremental computation
• Want: SSD at next location
Incremental computation
• Subtract contributions from leftmost column, add contributions from rightmost column
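The column update above can be sketched along one scanline at a fixed disparity: compute the first window sum fully, then slide by subtracting the squared differences leaving the window and adding the ones entering it, turning an O(w) window sum per pixel into O(1). Function and parameter names are illustrative:

```python
import numpy as np

# Incremental window SSD along one scanline at a fixed disparity d.
# left_row[x] is compared against right_row[x - d].

def scanline_ssd(left_row, right_row, d, win):
    """Window SSD at each valid position along the scanline."""
    n = len(left_row)
    sq = (left_row[d:].astype(float) - right_row[:n - d]) ** 2
    ssd = [sq[:win].sum()]                       # full sum only once
    for x in range(1, len(sq) - win + 1):
        # slide: drop the column leaving the window, add the one entering
        ssd.append(ssd[-1] - sq[x - 1] + sq[x + win - 1])
    return np.array(ssd)
```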
Selecting window size
• Small window: more detail, but more noise
• Large window: more robustness, less detail
• Example:
Selecting window size
3 pixel window 20 pixel window
Non-square windows
• Compromise: have a large window, but higher weight near the center
• Example: Gaussian
• Example: Shifted windows
Ordering constraint
• Order of matching features usually the same in both images
• But not always: occlusion
Dynamic programming
• Treat feature correspondence as graph problem
Left image features
Right image features
Cost of edges = similarity of regions between image features
Dynamic programming
• Find min-cost path through graph
Left image features
Right image features
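In the spirit of the min-cost path above, scanline correspondence can be written as a small dynamic program, in the style of sequence alignment: matching two features costs their squared difference, and skipping a feature (occlusion) costs a fixed penalty. A toy sketch; `occ` is an assumed tuning constant, and only the total cost is returned (backtracking through `C` would recover the matching):

```python
import numpy as np

# Dynamic-programming correspondence of two scanlines of feature values.
# C[i, j] = min cost of aligning the first i left and first j right features.

def dp_scanline(left, right, occ=0.5):
    nl, nr = len(left), len(right)
    C = np.full((nl + 1, nr + 1), np.inf)
    C[0, :] = occ * np.arange(nr + 1)       # leading right features occluded
    C[:, 0] = occ * np.arange(nl + 1)       # leading left features occluded
    for i in range(1, nl + 1):
        for j in range(1, nr + 1):
            match = (left[i - 1] - right[j - 1]) ** 2
            C[i, j] = min(C[i - 1, j - 1] + match,   # match features i and j
                          C[i - 1, j] + occ,         # left feature occluded
                          C[i, j - 1] + occ)         # right feature occluded
    return C[nl, nr]
```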
Energy minimization
• Another approach to improve quality of correspondences
• Assumption: disparities vary (mostly) smoothly
• Minimize energy function: E_data + λ E_smoothness
• E_data: how well does disparity match data
• E_smoothness: how well does disparity match that of neighbors – regularization
Energy minimization
• If data and energy terms are nice (continuous, smooth, etc.) can try to minimize via gradient descent, etc.
• In practice, disparities only piecewise smooth
• Design smoothness function that doesn’t penalize large jumps too much
– Example: V(a,b)=min(|a-b|, K)
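Evaluating such an energy for a candidate disparity map is straightforward. A sketch using a squared-difference data term and the truncated smoothness V(a,b) = min(|a−b|, K) over 4-neighbors; λ and K are assumed tuning constants:

```python
import numpy as np

# E = E_data + lambda * E_smoothness for a candidate disparity map.
# Data term: squared intensity difference between left pixel and its
# matched right pixel. Smoothness: truncated |a - b| over neighbors,
# so large depth discontinuities are not over-penalized.

def stereo_energy(left, right, disp, lam=1.0, K=3):
    h, w = disp.shape
    data = 0.0
    for y in range(h):
        for x in range(w):
            xr = x - disp[y, x]              # matched column in right image
            if 0 <= xr < w:
                data += (float(left[y, x]) - float(right[y, xr])) ** 2
    # truncated linear smoothness over horizontal and vertical neighbors
    dh = np.minimum(np.abs(np.diff(disp, axis=1)), K).sum()
    dv = np.minimum(np.abs(np.diff(disp, axis=0)), K).sum()
    return data + lam * (dh + dv)
```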
Stereo as energy minimization
• Matching Cost Formulated as Energy
– “data” term penalizing bad matches
– “neighborhood term” encouraging spatial smoothness
E(d) = Σ_(x,y) D(x, y, d(x,y)) + Σ_neighbors (x1,y1),(x2,y2) V(d_(x1,y1), d_(x2,y2))
– D(x, y, d) = |I(x, y) − J(x + d, y)| (or something similar)
– V(d1, d2): cost of adjacent pixels with labels d1 and d2
Energy minimization
• Hard to find global minima of non-smooth functions
– Many local minima
– Provably NP-hard
• Practical algorithms look for approximate minima (e.g., simulated annealing)
Stereo results
ground truth scene
– Data from University of Tsukuba
http://cat.middlebury.edu/stereo/
Results with window correlation
normalized correlation (best window size)
ground truth
Results with graph cuts
ground truth graph cuts
(Potts model E, expansion move algorithm)
Stereo evaluation
Stereo—best algorithms
Volumetric multiview approaches
• Goal: find a model consistent with images
• “Model-centric” (vs. image-centric)
• Typically use discretized volume (voxel grid)
• For each voxel, compute occupied / free (for some algorithms, also color, etc.)
Photo consistency
• Result: not necessarily the correct scene
• Many scenes produce the same images
All scenes
Photo-consistent scenes True scene Reconstructed
scene
Silhouette carving
• Find silhouettes in all images
• Exact version:
– Back-project all silhouettes, find intersection
Binary Images
Silhouette carving
• Limit of silhouette carving is visual hull or line hull
• Complement of lines that don’t intersect object
• In general not the same as object
– Can’t recover “pits” in object
• Not the same as convex hull
Silhouette carving
• Discrete version:
– Loop over all voxels in some volume
– If projection into images lies inside all silhouettes, mark as occupied
– Else mark as free
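The discrete loop above can be sketched with cameras represented as 3×4 projection matrices and silhouettes as boolean masks (both assumed inputs for illustration):

```python
import numpy as np

# Discrete silhouette carving: a voxel stays occupied only if its
# projection lies inside the silhouette in *every* view.

def carve(voxels, cameras, silhouettes):
    """voxels: (N, 3) centers; cameras: list of 3x4 matrices;
    silhouettes: list of boolean masks. Returns per-voxel occupancy."""
    occupied = np.ones(len(voxels), dtype=bool)
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])   # homogeneous coords
    for P, sil in zip(cameras, silhouettes):
        p = homo @ P.T                                      # project into image
        u = (p[:, 0] / p[:, 2]).round().astype(int)
        v = (p[:, 1] / p[:, 2]).round().astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ok = np.zeros(len(voxels), dtype=bool)
        ok[inside] = sil[v[inside], u[inside]]
        occupied &= ok          # fails in any view -> carved away
    return occupied
```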
Silhouette carving
Voxel coloring
• Seitz and Dyer, 1997
• In addition to free / occupied, store color at each voxel
• Explicitly accounts for occlusion
Voxel coloring
• Basic idea: sweep through a voxel grid
– Project each voxel into each image in which it is visible
– If colors in images agree, mark voxel with color
– Else, mark voxel as empty
• Agreement of colors based on comparing standard deviation of colors to threshold
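The color-agreement test is a one-liner: compare the per-channel standard deviation of the colors a voxel projects to across its visible views against a threshold. The threshold value is an assumed constant:

```python
import numpy as np

# Photo-consistency test from voxel coloring: a voxel is kept if the
# standard deviation of its observed colors is below a threshold.
# thresh is an assumed tuning constant (intensity units 0-255).

def photo_consistent(colors, thresh=10.0):
    """colors: (k, 3) RGB samples of one voxel from k views."""
    colors = np.asarray(colors, dtype=float)
    return bool(np.all(colors.std(axis=0) < thresh))
```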
Voxel coloring and occlusion
Voxel coloring and occlusion
• Problem: which voxels are visible?
• Solution: constrain camera views
– When a voxel is considered, necessary occlusion information must be available
– Sweep occluders before occludees
– Constrain camera positions to allow this sweep
Voxel coloring sweep order
Scene Traversal
Voxel coloring camera positions
Inward-looking: cameras above scene
Outward-looking: cameras inside scene
Seitz
Image acquisition
• Calibrated turntable
• 360° rotation (21 images)
Selected Dinosaur Images
Selected Flower Images
Voxel coloring results
Dinosaur Reconstruction
72 K voxels colored 7.6 M voxels tested
7 min. to compute on a 250MHz SGI
Flower Reconstruction
70 K voxels colored 7.6 M voxels tested
7 min. to compute on a 250MHz SGI
Space carving
Image 1 Image N
…...
Initialize to a volume V containing the true scene
Repeat until convergence:
– Choose a voxel on the current surface
– Project it to the visible input images
– Carve it away if not photo-consistent
Multi-pass plane sweep
• Faster alternative:
– Sweep plane in each of 6 principal directions
– Consider cameras on only one side of plane
– Repeat until convergence
Multi-pass plane sweep
True Scene Reconstruction
Multi-pass plane sweep
Input image (1 of 45) and reconstruction
Space carving results: African violet
Input image (1 of 100)
Reconstruction
Space carving results: hand
Active approaches
Time of flight
• Basic idea: send out pulse of light (usually laser), time how long it takes to return
r = c · t / 2
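Since the pulse travels to the surface and back, the one-way range is half the round-trip distance. A trivial sketch:

```python
# Time-of-flight range: r = c * t / 2, where t is the measured
# round-trip time of the pulse.

C = 299_792_458.0   # speed of light, m/s

def tof_range(t):
    """Range in meters for a round-trip time t in seconds."""
    return 0.5 * C * t

# A 100 m range corresponds to a round trip of roughly 667 nanoseconds,
# which is why ToF scanners need very precise timing electronics.
```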
Laser scanning (triangulation)
• Optical triangulation
– Project a single stripe of laser light
– Scan it across the surface of the object
– This is a very precise version of structured light scanning
• Other patterns are possible
Digital Michelangelo Project
http://graphics.stanford.edu/projects/mich/
Cyberware
face and hand full body
XYZRGB
Shadow scanning
Desk Lamp
Camera
Stick or pencil
Desk
http://www.vision.caltech.edu/bouguetj/ICCV98/
Basic idea
• Calibration issues:
– where’s the camera wrt. ground plane?
– where’s the shadow plane?
• depends on light source position, shadow edge
Two Plane Version
• Advantages
– don’t need to pre-calibrate the light source
– shadow plane determined from two shadow edges
Estimating shadow lines
Shadow scanning in action
accuracy: 0.1mm over 10cm ~ 0.1% error
Results
Textured objects
accuracy: 1mm over 50cm ~ 0.5% error
Scanning with the sun
accuracy: 1cm over 2m
~ 0.5% error
Scanning with the sun
Active variants of
passive approaches
The BRDF
• The Bidirectional Reflection Distribution Function
– Given an incoming ray and an outgoing ray, what proportion of the incoming light is reflected along the outgoing ray?
I = ρ(l, v) (l · n), where n is the surface normal
Diffuse reflection (Lambertian)
ρ(l, v) = kd (the albedo), so I = kd (l · n)
Assuming that light strength is 1.
Photometric stereo
(Setup: surface normal N, viewing direction V, light directions L1, L2, L3)
• Can write this as a matrix equation:
Solving the equations
More than three lights
• Get better results by using more lights
• Least squares solution:
• Solve for N, kd as before
Trick for handling shadows
• Weight each equation by the pixel brightness:
• Gives weighted least-squares matrix equation:
• Solve for N, kd as before
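Both the plain and the shadow-weighted solves fit in a few lines. With intensities I_i under known light directions L_i and a Lambertian surface of unit light strength, I = L·g where g = kd·N, so least squares recovers g; its norm is the albedo and its direction the normal. A sketch for one pixel (function and variable names are illustrative):

```python
import numpy as np

# Per-pixel photometric stereo: solve I = L @ g for g = kd * N by
# (optionally intensity-weighted) least squares, then split g into
# albedo kd = |g| and unit normal N = g / kd. Weighting each equation
# by its pixel brightness down-weights shadowed measurements.

def solve_pixel(L, I, weighted=True):
    """L: (k, 3) light directions; I: (k,) intensities for one pixel."""
    L = np.asarray(L, dtype=float)
    I = np.asarray(I, dtype=float)
    w = I if weighted else np.ones_like(I)
    g, *_ = np.linalg.lstsq(w[:, None] * L, w * I, rcond=None)
    kd = np.linalg.norm(g)
    return g / kd, kd          # unit normal, albedo
```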
Photometric Stereo Setup
Procedure
• Calibrate camera
• Calibrate light directions/intensities
• Photograph objects (HDR recommended)
• Estimate normals
• Estimate depth
Estimating light directions
• Trick: place a chrome sphere in the scene
– the location of the highlight tells you where the light source is
• Use a ruler
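The chrome-sphere trick can be sketched as follows: the sphere normal at the highlight pixel follows from its offset from the sphere center, and since the sphere is a mirror, the light direction is the reflection of the viewing direction about that normal. This sketch assumes an orthographic camera with view direction V = [0, 0, 1]; the function and its parameters are illustrative:

```python
import numpy as np

# Light direction from a chrome-sphere highlight.
# (cx, cy): sphere center in the image, r: sphere radius in pixels,
# (hx, hy): highlight pixel. Orthographic camera assumed.

def light_from_highlight(cx, cy, r, hx, hy):
    nx, ny = (hx - cx) / r, (hy - cy) / r
    nz = np.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    N = np.array([nx, ny, nz])               # sphere normal at the highlight
    V = np.array([0.0, 0.0, 1.0])            # viewing direction
    return 2.0 * np.dot(N, V) * N - V        # mirror reflection of V about N
```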
Photographing objects
Normalize light intensities
Estimate normals
Depth from normals
Results
Limitations
• Big problems
– doesn’t work for shiny things, semi-translucent things
– shadows, inter-reflections
• Smaller problems
– calibration requirements
• measure light source directions, intensities
• camera response function
Example-based photometric stereo
• Estimate 3D shape by varying illumination, fixed camera
• Operating conditions
– any opaque material
– distant camera, lighting
– reference object available
– no shadows, interreflections, transparency
same surface normal
“Orientation consistency”
Virtual views
Velvet
Virtual Views
Brushed Fur
Virtual Views
Active stereo with structured light
• Project “structured” light patterns onto the object
– simplifies the correspondence problem
(Setups: two cameras with a projector; a single camera with a projector)
Li Zhang’s one-shot stereo
Spacetime Stereo
http://grail.cs.washington.edu/projects/stfaces/
3D Model Acquisition Pipeline
• 3D Scanner
• View Planning
• Alignment
• Merging
Volumetric reconstruction
Signed distance function
Results
The Digital Michelangelo Project
• Goal: scan 10 sculptures by Michelangelo
• High-resolution (“quarter-millimeter”) geometry
• Stanford University, led by Marc Levoy
Systems, projects and applications
Scanning the David
height of gantry: 7.5 meters
weight of gantry: 800 kilograms
Range processing pipeline
• steps
1. manual initial alignment
2. ICP to one existing scan
3. automatic ICP of all overlapping pairs
4. global relaxation to spread out error
5. merging using volumetric method
Statistics about the scan
• 480 individually aimed scans
• 2 billion polygons
• 7,000 color images
• 32 gigabytes
• 30 nights of scanning
• 22 people
Comparison: photograph vs. 1.0 mm computer model
Results
The Great Buddha Project
• Great Buddha of Kamakura
• Original made of wood, completed 1243
• Covered in bronze and gold leaf, 1267
• Approx. 15 m tall
• Goal: preservation of cultural heritage
• Institute of Industrial Science, University of Tokyo, led by Katsushi Ikeuchi
Scanner
• Cyrax range scanner by Cyra Technologies
• Laser pulse time-of-flight
• Accuracy: 4 mm
• Range: 100 m
Processing
• 20 range images (a few million points)
• Simultaneous all-to-all ICP
• Variant of volumetric merging (parallelized)
Results
Applications in VFX
• 3D scanning
• Hybrid camera for IMAX
• View interpolation
3D scanning
XYZRGB Inc.
IMAX 3D
• 6K resolution, 42 linear bits per pixel
• For CG, it typically takes 6 hours for a frame
• A 45-minute IMAX 3D CG film requires a 100-CPU rendering farm full-time for about a year just for rendering
• For live-action, camera is bulky (like a refrigerator)
Hybrid stereo camera
Live-action sequence
Hybrid input
left
right
Combine multiple high-res frames to enhance the low-res view
Results
View interpolation
Bullet time video
View interpolation
High-quality video view interpolation