
Reflectance and shading

In document Computer Vision: (pages 84-90)


2.2 Photometric image formation

2.2.2 Reflectance and shading

When light hits an object's surface, it is scattered and reflected (Figure 2.15a). Many different models have been developed to describe this interaction. In this section, we first describe the most general form, the bidirectional reflectance distribution function, and then look at some more specialized models, including the diffuse, specular, and Phong shading models. We also discuss how these models can be used to compute the global illumination corresponding to a scene.

The Bidirectional Reflectance Distribution Function (BRDF)

The most general model of light scattering is the bidirectional reflectance distribution function (BRDF).5 Relative to some local coordinate frame on the surface, the BRDF is a four-dimensional function that describes how much of each wavelength arriving at an incident direction $\hat{v}_i$ is emitted in a reflected direction $\hat{v}_r$ (Figure 2.15b). The function can be written in terms of the angles of the incident and reflected directions relative to the surface frame as

$$f_r(\theta_i, \phi_i, \theta_r, \phi_r; \lambda). \quad (2.81)$$

The BRDF is reciprocal, i.e., because of the physics of light transport, you can interchange the roles of $\hat{v}_i$ and $\hat{v}_r$ and still get the same answer (this is sometimes called Helmholtz reciprocity).

5 Actually, even more general models of light transport exist, including some that model spatial variation along the surface, sub-surface scattering, and atmospheric effects; see Section 12.7.1 (Dorsey, Rushmeier, and Sillion 2007; Weyrich, Lawrence, Lensch et al. 2008).

Most surfaces are isotropic, i.e., there are no preferred directions on the surface as far as light transport is concerned. (The exceptions are anisotropic surfaces such as brushed (scratched) aluminum, where the reflectance depends on the light orientation relative to the direction of the scratches.) For an isotropic material, we can simplify the BRDF to

$$f_r(\theta_i, \theta_r, |\phi_r - \phi_i|; \lambda) \quad \text{or} \quad f_r(\hat{v}_i, \hat{v}_r, \hat{n}; \lambda), \quad (2.82)$$

since the quantities $\theta_i$, $\theta_r$, and $\phi_r - \phi_i$ can be computed from the directions $\hat{v}_i$, $\hat{v}_r$, and $\hat{n}$.

To calculate the amount of light exiting a surface point $p$ in a direction $\hat{v}_r$ under a given lighting condition, we integrate the product of the incoming light $L_i(\hat{v}_i; \lambda)$ with the BRDF (some authors call this step a convolution). Taking into account the foreshortening factor $\cos^+\theta_i$, we obtain

$$L_r(\hat{v}_r; \lambda) = \int L_i(\hat{v}_i; \lambda)\, f_r(\hat{v}_i, \hat{v}_r, \hat{n}; \lambda) \cos^+\theta_i \, d\hat{v}_i, \quad (2.83)$$

where

$$\cos^+\theta_i = \max(0, \cos\theta_i). \quad (2.84)$$
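Under the simplest possible assumptions (constant incident radiance over the hemisphere and a constant, energy-conserving Lambertian BRDF $f_r = 1/\pi$), the integral in Equation (2.83) evaluates to $\pi L_i f_r$, which makes it easy to sanity-check numerically. A Monte Carlo sketch (NumPy assumed; the sample count and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Uniform samples on the upper hemisphere (surface normal n = +z):
# cos(theta_i) is simply the z component; the sampling pdf is 1/(2*pi).
# The azimuth is irrelevant here because L_i and f_r are constant.
z = rng.uniform(0.0, 1.0, N)          # cos(theta_i) of each sample direction

L_i = 1.0                             # constant incident radiance
f_r = 1.0 / np.pi                     # energy-conserving Lambertian BRDF

# Monte Carlo estimate of Equation (2.83):
#   L_r ~ (1/N) * sum( L_i * f_r * cos+(theta_i) / pdf ),  pdf = 1/(2*pi)
L_r = np.mean(L_i * f_r * z) * 2.0 * np.pi

print(L_r)                            # should be close to pi * L_i * f_r = 1.0
```

The same estimator works for any BRDF and incident-light distribution; only the per-sample integrand changes.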

If the light sources are discrete (a finite number of point light sources), we can replace the integral with a summation,

$$L_r(\hat{v}_r; \lambda) = \sum_i L_i(\lambda)\, f_r(\hat{v}_i, \hat{v}_r, \hat{n}; \lambda) \cos^+\theta_i. \quad (2.85)$$

BRDFs for a given surface can be obtained through physical modeling (Torrance and Sparrow 1967; Cook and Torrance 1982; Glassner 1995), heuristic modeling (Phong 1975), or through empirical observation (Ward 1992; Westin, Arvo, and Torrance 1992; Dana, van Ginneken, Nayar et al. 1999; Dorsey, Rushmeier, and Sillion 2007; Weyrich, Lawrence, Lensch et al. 2008).6 Typical BRDFs can often be split into their diffuse and specular components, as described below.

Diffuse reflection

The diffuse component (also known as Lambertian or matte reflection) scatters light uniformly in all directions and is the phenomenon we most normally associate with shading, e.g., the smooth (non-shiny) variation of intensity with surface normal that is seen when observing a statue (Figure 2.16). Diffuse reflection also often imparts a strong body color to the light since it is caused by selective absorption and re-emission of light inside the object's material (Shafer 1985; Glassner 1995).

6 See http://www1.cs.columbia.edu/CAVE/software/curet/ for a database of some empirically sampled BRDFs.

Figure 2.16 This close-up of a statue shows both diffuse (smooth shading) and specular (shiny highlight) reflection, as well as darkening in the grooves and creases due to reduced light visibility and interreflections. (Photo courtesy of the Caltech Vision Lab, http://www.vision.caltech.edu/archive.html.)

While light is scattered uniformly in all directions, i.e., the BRDF is constant,

$$f_d(\hat{v}_i, \hat{v}_r, \hat{n}; \lambda) = f_d(\lambda), \quad (2.86)$$

the amount of light depends on the angle between the incident light direction and the surface normal, $\theta_i$. This is because the surface area exposed to a given amount of light becomes larger at oblique angles, becoming completely self-shadowed as the outgoing surface normal points away from the light (Figure 2.17a). (Think about how you orient yourself towards the sun or fireplace to get maximum warmth and how a flashlight projected obliquely against a wall is less bright than one pointing directly at it.) The shading equation for diffuse reflection can thus be written as

$$L_d(\hat{v}_r; \lambda) = \sum_i L_i(\lambda)\, f_d(\lambda) \cos^+\theta_i = \sum_i L_i(\lambda)\, f_d(\lambda)\, [\hat{v}_i \cdot \hat{n}]^+, \quad (2.87)$$

where

$$[\hat{v}_i \cdot \hat{n}]^+ = \max(0, \hat{v}_i \cdot \hat{n}). \quad (2.88)$$
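The diffuse shading equation (2.87) with the clamped dot product (2.88) vectorizes naturally. A minimal sketch (NumPy assumed; the light directions and intensities are made-up illustrative values):

```python
import numpy as np

def diffuse_shade(n, lights, intensities, f_d):
    """Eq. (2.87): L_d = sum_i L_i * f_d * [v_i . n]+, with Eq. (2.88) clamping."""
    n = n / np.linalg.norm(n)
    v = lights / np.linalg.norm(lights, axis=1, keepdims=True)  # unit v_i
    cos_plus = np.maximum(0.0, v @ n)     # [v_i . n]+ zeroes back-facing lights
    return np.sum(intensities * f_d * cos_plus)

n = np.array([0.0, 0.0, 1.0])             # surface normal
lights = np.array([[0.0, 0.0, 1.0],       # overhead light
                   [1.0, 0.0, 1.0],       # oblique light, 45 degrees off normal
                   [0.0, 0.0, -1.0]])     # light behind the surface: contributes 0
intensities = np.array([1.0, 1.0, 1.0])
L_d = diffuse_shade(n, lights, intensities, f_d=1.0)
print(L_d)   # 1 + cos(45 deg) + 0, about 1.707
```

Note how the clamp in Equation (2.88) is what makes the back-facing third light drop out.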

Specular reflection

The second major component of a typical BRDF is specular (gloss or highlight) reflection, which depends strongly on the direction of the outgoing light. Consider light reflecting off a mirrored surface (Figure 2.17b). Incident light rays are reflected in a direction that is rotated by 180° around the surface normal $\hat{n}$. Using the same notation as in Equations (2.29–2.30),



Figure 2.17 (a) The diminution of returned light caused by foreshortening depends on $\hat{v}_i \cdot \hat{n}$, the cosine of the angle between the incident light direction $\hat{v}_i$ and the surface normal $\hat{n}$. (b) Mirror (specular) reflection: The incident light ray direction $\hat{v}_i$ is reflected onto the specular direction $\hat{s}_i$ around the surface normal $\hat{n}$.

we can compute the specular reflection direction $\hat{s}_i$ as

$$\hat{s}_i = v_{\parallel} - v_{\perp} = (2\hat{n}\hat{n}^T - I)v_i. \quad (2.89)$$
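Equation (2.89) is easy to verify in code: the matrix $2\hat{n}\hat{n}^T - I$ preserves the component of $v_i$ along the normal and negates the tangential component. A small NumPy check (the vectors are arbitrary illustrative choices):

```python
import numpy as np

def specular_direction(v_i, n):
    """Eq. (2.89): s_i = (2 n n^T - I) v_i, where n is the unit surface normal."""
    n = n / np.linalg.norm(n)
    return (2.0 * np.outer(n, n) - np.eye(3)) @ v_i

n   = np.array([0.0, 0.0, 1.0])
v_i = np.array([1.0, 0.0, 1.0])     # incident direction, pointing toward the light
s_i = specular_direction(v_i, n)
print(s_i)                          # [-1.  0.  1.]: tangential part flipped
```

The reflection preserves both the length of the vector and its projection onto $\hat{n}$, i.e., the angle of reflection equals the angle of incidence.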

The amount of light reflected in a given direction $\hat{v}_r$ thus depends on the angle $\theta_s = \cos^{-1}(\hat{v}_r \cdot \hat{s}_i)$ between the view direction $\hat{v}_r$ and the specular direction $\hat{s}_i$. For example, the Phong (1975) model uses a power of the cosine of the angle,

$$f_s(\theta_s; \lambda) = k_s(\lambda) \cos^{k_e}\theta_s, \quad (2.90)$$

while the Torrance and Sparrow (1967) micro-facet model uses a Gaussian,

$$f_s(\theta_s; \lambda) = k_s(\lambda) \exp(-c_s^2 \theta_s^2). \quad (2.91)$$

Larger exponents $k_e$ (or inverse Gaussian widths $c_s$) correspond to more specular surfaces with distinct highlights, while smaller exponents better model materials with softer gloss.
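The two specular lobes (2.90) and (2.91) can be compared directly. A sketch (NumPy; the values of $k_s$, $k_e$, and $c_s$ are arbitrary illustrative choices):

```python
import numpy as np

def phong_lobe(theta_s, k_s=1.0, k_e=100):
    """Eq. (2.90): k_s * cos(theta_s)^k_e, clamped to the front hemisphere."""
    return k_s * np.maximum(0.0, np.cos(theta_s)) ** k_e

def torrance_sparrow_lobe(theta_s, k_s=1.0, c_s=10.0):
    """Eq. (2.91): k_s * exp(-c_s^2 * theta_s^2)."""
    return k_s * np.exp(-(c_s ** 2) * theta_s ** 2)

theta = np.deg2rad(10.0)
# Both lobes peak at theta_s = 0 with value k_s...
print(phong_lobe(0.0), torrance_sparrow_lobe(0.0))
# ...and, off-peak, a larger exponent k_e gives a tighter highlight:
print(phong_lobe(theta, k_e=10) > phong_lobe(theta, k_e=100))   # True
```

Plotting either lobe against $\theta_s$ reproduces the qualitative shapes shown in Figure 2.18.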

Phong shading

Phong (1975) combined the diffuse and specular components of reflection with another term, which he called the ambient illumination. This term accounts for the fact that objects are generally illuminated not only by point light sources but also by a general diffuse illumination corresponding to inter-reflection (e.g., the walls in a room) or distant sources, such as the


Figure 2.18 Cross-section through a Phong shading model BRDF for a fixed incident illumination direction: (a) component values as a function of angle away from surface normal; (b) polar plot. The value of the Phong exponent $k_e$ is indicated by the "Exp" labels and the light source is at an angle of 30° away from the normal.

blue sky. In the Phong model, the ambient term does not depend on surface orientation, but depends on the color of both the ambient illumination $L_a(\lambda)$ and the object $k_a(\lambda)$,

$$f_a(\lambda) = k_a(\lambda) L_a(\lambda). \quad (2.92)$$

Putting all of these terms together, we arrive at the Phong shading model,

$$L_r(\hat{v}_r; \lambda) = k_a(\lambda) L_a(\lambda) + k_d(\lambda) \sum_i L_i(\lambda)\, [\hat{v}_i \cdot \hat{n}]^+ + k_s(\lambda) \sum_i L_i(\lambda)\, (\hat{v}_r \cdot \hat{s}_i)^{k_e}. \quad (2.93)$$
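Equation (2.93) can be evaluated per color channel once the specular direction of Equation (2.89) is known. A sketch (NumPy; the single light and RGB coefficients are made-up values, and the specular dot product is clamped for robustness, whereas the equation above writes it unclamped):

```python
import numpy as np

def phong_shade(n, v_r, lights, L_i, k_a, L_a, k_d, k_s, k_e):
    """Eq. (2.93): ambient + diffuse + specular terms, per color channel."""
    n = n / np.linalg.norm(n)
    L = k_a * L_a                                        # ambient term, Eq. (2.92)
    for v_i, Li in zip(lights, L_i):
        v_i = v_i / np.linalg.norm(v_i)
        s_i = (2.0 * np.outer(n, n) - np.eye(3)) @ v_i   # Eq. (2.89)
        L = L + k_d * Li * max(0.0, v_i @ n)             # diffuse, [v_i . n]+
        L = L + k_s * Li * max(0.0, v_r @ s_i) ** k_e    # specular lobe
    return L

n   = np.array([0.0, 0.0, 1.0])
v_r = np.array([0.0, 0.0, 1.0])           # viewing straight down the normal
lights = [np.array([0.0, 0.0, 1.0])]      # light along the normal, so s_i = v_r
L_i = [np.array([1.0, 1.0, 1.0])]         # one white light
color = phong_shade(n, v_r, lights, L_i,
                    k_a=np.array([0.1, 0.1, 0.2]), L_a=1.0,
                    k_d=np.array([0.5, 0.1, 0.1]),  # reddish body color
                    k_s=0.3, k_e=100)
print(color)    # ambient + diffuse + specular = [0.9, 0.5, 0.6]
```

With the light along the normal, every clamped dot product is 1, so the result is just the sum of the three coefficient vectors, which makes the sketch easy to check by hand.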

Figure 2.18 shows a typical set of Phong shading model components as a function of the angle away from the surface normal (in a plane containing both the lighting direction and the viewer).

Typically, the ambient and diffuse reflection color distributions $k_a(\lambda)$ and $k_d(\lambda)$ are the same, since they are both due to sub-surface scattering (body reflection) inside the surface material (Shafer 1985). The specular reflection distribution $k_s(\lambda)$ is often uniform (white), since it is caused by interface reflections that do not change the light color. (The exceptions to this are metallic materials, such as copper, as opposed to the more common dielectric materials, such as plastics.)

The ambient illumination $L_a(\lambda)$ often has a different color cast from the direct light sources $L_i(\lambda)$, e.g., it may be blue for a sunny outdoor scene or yellow for an interior lit with candles or incandescent lights. (The presence of ambient sky illumination in shadowed areas is what often causes shadows to appear bluer than the corresponding lit portions of a scene.) Note also that the diffuse component of the Phong model (or of any shading model) depends on the angle of the incoming light source $\hat{v}_i$, while the specular component depends on the relative angle between the viewer $\hat{v}_r$ and the specular reflection direction $\hat{s}_i$ (which itself depends on the incoming light direction $\hat{v}_i$ and the surface normal $\hat{n}$).

The Phong shading model has been superseded in terms of physical accuracy by a number of more recently developed models in computer graphics, including the model developed by Cook and Torrance (1982) based on the original micro-facet model of Torrance and Sparrow (1967). Until recently, most computer graphics hardware implemented the Phong model, but the advent of programmable pixel shaders makes the use of more complex models feasible.

Di-chromatic reflection model

The Torrance and Sparrow (1967) model of reflection also forms the basis of Shafer's (1985) di-chromatic reflection model, which states that the apparent color of a uniform material lit from a single source depends on the sum of two terms,

$$L_r(\hat{v}_r; \lambda) = L_i(\hat{v}_r, \hat{v}_i, \hat{n}; \lambda) + L_b(\hat{v}_r, \hat{v}_i, \hat{n}; \lambda) \quad (2.94)$$

$$= c_i(\lambda)\, m_i(\hat{v}_r, \hat{v}_i, \hat{n}) + c_b(\lambda)\, m_b(\hat{v}_r, \hat{v}_i, \hat{n}), \quad (2.95)$$

i.e., the radiance of the light reflected at the interface, $L_i$, and the radiance reflected at the surface body, $L_b$. Each of these, in turn, is a simple product between a relative power spectrum $c(\lambda)$, which depends only on wavelength, and a magnitude $m(\hat{v}_r, \hat{v}_i, \hat{n})$, which depends only on geometry. (This model can easily be derived from a generalized version of Phong's model by assuming a single light source and no ambient illumination, and re-arranging terms.) The di-chromatic model has been successfully used in computer vision to segment specular colored objects with large variations in shading (Klinker 1993) and more recently has inspired local two-color models for applications such as Bayer pattern demosaicing (Bennett, Uyttendaele, Zitnick et al. 2006).
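One practical consequence of Equation (2.95) is that all observed colors of a uniform material lie in the 2-D subspace spanned by the interface spectrum $c_i(\lambda)$ and the body spectrum $c_b(\lambda)$, which is what segmentation methods in the spirit of Klinker (1993) exploit. A numeric sketch (NumPy; the two RGB spectra and the geometry-dependent magnitudes are made-up values):

```python
import numpy as np

rng = np.random.default_rng(1)

c_i = np.array([1.0, 1.0, 1.0])      # interface (specular) color: roughly white
c_b = np.array([0.8, 0.3, 0.2])      # body (diffuse) color of the material

# Geometry-dependent magnitudes m_i, m_b for many surface points, Eq. (2.95):
m_i = rng.uniform(0.0, 1.0, (50, 1))
m_b = rng.uniform(0.0, 1.0, (50, 1))
colors = m_i * c_i + m_b * c_b       # observed RGB radiance at each point

# Despite strong shading and highlight variation, all samples lie in the
# plane spanned by c_i and c_b, so the color matrix has rank 2:
print(np.linalg.matrix_rank(colors))
```

A segmentation algorithm can therefore fit a two-color plane to local color distributions and flag pixels that leave it as belonging to a different material.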

Global illumination (ray tracing and radiosity)

The simple shading model presented thus far assumes that light rays leave the light sources, bounce off surfaces visible to the camera, thereby changing in intensity or color, and arrive at the camera. In reality, light sources can be shadowed by occluders and rays can bounce multiple times around a scene while making their trip from a light source to the camera.

Two methods have traditionally been used to model such effects. If the scene is mostly specular (the classic example being scenes made of glass objects and mirrored or highly polished balls), the preferred approach is ray tracing or path tracing (Glassner 1995; Akenine-Möller and Haines 2002; Shirley 2005), which follows individual rays from the camera across multiple bounces towards the light sources (or vice versa). If the scene is composed mostly of uniform-albedo, simple-geometry illuminators and surfaces, radiosity (global illumination) techniques are preferred (Cohen and Wallace 1993; Sillion and Puech 1994; Glassner 1995).

Combinations of the two techniques have also been developed (Wallace, Cohen, and Greenberg 1987), as well as more general light transport techniques for simulating effects such as the caustics cast by rippling water.

The basic ray tracing algorithm associates a light ray with each pixel in the camera image and finds its intersection with the nearest surface. A primary contribution can then be computed using the simple shading equations presented previously (e.g., Equation (2.93)) for all light sources that are visible for that surface element. (An alternative technique for computing which surfaces are illuminated by a light source is to compute a shadow map, or shadow buffer, i.e., a rendering of the scene from the light source's perspective, and then compare the depth of pixels being rendered with the map (Williams 1983; Akenine-Möller and Haines 2002).) Additional secondary rays can then be cast along the specular direction towards other objects in the scene, keeping track of any attenuation or color change that the specular reflection induces.

Radiosity works by associating lightness values with rectangular surface areas in the scene (including area light sources). The amount of light interchanged between any two (mutually visible) areas in the scene can be captured as a form factor, which depends on their relative orientation and surface reflectance properties, as well as the $1/r^2$ fall-off as light is distributed over a larger effective sphere the further away it is (Cohen and Wallace 1993; Sillion and Puech 1994; Glassner 1995). A large linear system can then be set up to solve for the final lightness of each area patch, using the light sources as the forcing function (right-hand side).

Once the system has been solved, the scene can be rendered from any desired point of view.
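The balance described above leads to a linear system of the form $B = E + \rho \odot (F B)$, where $B$ holds the patch radiosities, $E$ the emissions (light sources), $\rho$ the patch reflectances, and $F$ the form factors. A toy sketch (NumPy; the tiny form-factor matrix is made-up and only illustrates the solve, not the geometric computation of form factors):

```python
import numpy as np

# Three mutually visible patches; F[j, k] is the fraction of light leaving
# patch k that arrives at patch j (each row sums to at most 1).
F = np.array([[0.0, 0.2, 0.1],
              [0.2, 0.0, 0.3],
              [0.1, 0.3, 0.0]])
rho = np.array([0.5, 0.7, 0.3])      # patch reflectances (albedos)
E   = np.array([1.0, 0.0, 0.0])      # patch 0 is the (area) light source

# Rearranging B = E + diag(rho) F B gives (I - diag(rho) F) B = E.
A = np.eye(3) - np.diag(rho) @ F
B = np.linalg.solve(A, E)
print(B)    # every patch ends up at least as bright as its own emission
```

In practice, systems with thousands of patches are solved iteratively (e.g., progressive radiosity) rather than with a dense direct solve; the direct solve is only for illustration.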

Under certain circumstances, it is possible to recover the global illumination in a scene from photographs using computer vision techniques (Yu, Debevec, Malik et al. 1999).

The basic radiosity algorithm does not take into account certain near field effects, such as the darkening inside corners and scratches, or the limited ambient illumination caused by partial shadowing from other surfaces. Such effects have been exploited in a number of computer vision algorithms (Nayar, Ikeuchi, and Kanade 1991;Langer and Zucker 1994).

While all of these global illumination effects can have a strong effect on the appearance of a scene, and hence its 3D interpretation, they are not covered in more detail in this book.

(But see Section 12.7.1 for a discussion of recovering BRDFs from real scenes and objects.)

