
Chapter 1 Introduction

1.2 System Overview

An overview of our proposed rendering architecture is shown in Figure 1.2. Our method is a multi-pass rendering algorithm implemented entirely on the GPU. First, we rasterize the scene from the eye's view and the light's view to produce images, stored in buffers, that we need in subsequent passes. These image buffers are called geometry buffers (G-buffers)[18] and can be reused many times in subsequent passes. The eye's G-buffer is then used to produce the direct illumination of the scene with deferred shading. Next, we use image space blurring techniques, with the 3D geometry information in the image buffers, to produce soft shadows. The light-view surface pixels with position information can be regarded as records of the photons' first bounce positions; we can therefore continue tracing photon bounces from these pixels and record positions where photons hit object surfaces. These recorded photons form the so-called photon map. We then splat these photons along the mesh surfaces in the geometry shader to generate indirect illumination. Finally, we up-sample the low-resolution indirect illumination image using a geometry-aware filter and blend it with the soft shadow image to produce the final output.

Figure 1.2 Overview of our proposed system.


Chapter 2

Related Work

Global illumination is a widely studied and long-developed research topic. A fundamental difficulty in global illumination is the high computation cost incurred by indirect lighting. Here we briefly review related work on indirect lighting; some soft shadow research is also covered in this chapter.

2.1 Photon Tracing

The concept of tracing photons forward from the light source into the scene was introduced to the graphics community by Appel[1]. With a similar concept, Jensen[6] introduced photon mapping. Conventional photon mapping has three major steps: first, shoot photons from the light source; second, trace the photons forward from the lights and store them in a k-d tree; third, produce an image by tracing eye rays backwards and gathering nearby photons to approximate the illumination where each ray intersects the scene. To reduce the expensive cost of the final gathering step, many variations of photon mapping have been proposed. For example, Ma and McCool[10] use a hash grid rather than a k-d tree to store the photons.

2.2 Virtual Point Light

Another family of photon tracing methods uses virtual point lights (VPLs)[7]. VPLs are emitted from the light sources and bounce off surfaces according to material reflectance properties; the scene is then rendered as lit by each VPL. Note that the VPL method differs from the photon mapping approach: each VPL can influence the entire scene, whereas photons are just a set of incident illumination records. When scenes contain many thin parts or holes, illumination effects such as caustics cannot be adequately captured using a small number of VPLs.

The reflective shadow map[3] treats each pixel viewed from the light source as a VPL, then computes the contribution of each VPL using its surface normal while ignoring occlusion between objects.

Imperfect shadow maps[16] are based on the observation that shadows caused by indirect lighting are more blurred. They therefore render a low-resolution shadow map from each VPL and use a pull-push step to fill the holes of the coarse shadow maps, then use these imperfect shadow maps to solve the occlusion problem of the traditional VPL method. Micro-rendering[15] represents the scene's surface by a hierarchical point-based representation and projects these points onto each viewing position to obtain indirect lighting information.

Although these previous works can produce almost photo-realistic effects, they only support a single bounce of indirect illumination. In contrast, our method can handle multiple bounces without any pre-computation.

2.3 GPU-based Photon Mapping

With the rapidly increasing computation power of modern GPUs, recent work has focused on GPU-based solutions for global illumination. Purcell et al.[14] implemented the first GPU-based photon mapping. Wang et al.[19] also exploit GPU-based photon mapping: they approximate the entire photon tree as a compact illumination cut, then cluster visible pixels and apply final gathering only at the cluster-center pixels to reduce the cost of gathering.

Image space photon mapping (ISPM)[12] combines reflective shadow maps with a photon tracing engine based on NVIDIA® CUDA™ and other highly parallel architectures. Although Optix improves the tracing speed of ISPM, the Optix ISPM implementation still produces sharp shadow edges. In order to combine it with another soft shadow algorithm, we use our own implementation of ISPM while retaining Optix for the photon tracing procedure.

2.4 Soft Shadow and Ambient Occlusion

Conventional soft shadow and ambient occlusion[9] effects are accurately computed using distributed ray tracing[2]. However, this is expensive, requiring tens to hundreds of rays per pixel. Ambient occlusion volumes (AOV)[8, 11] are similar in concept to shadow volumes: a volume is used to compute the accessibility of each triangle in the scene. Like ISPM, they use rasterization to accelerate the computation. In our experiments, the screen area covered by the volumes to be rasterized determines the frame rate. As a result, we use an image space soft shadow method, such as percentage-closer soft shadows (PCSS)[4] or image space gathering (ISG)[17], to reduce the cost of GPU shading.

Figure 2.3 Ambient Occlusion Volumes


Chapter 3 Algorithm

In this thesis, we propose an approach that enhances the algorithm of McGuire and Luebke [12]. The enhancement simplifies the photon splatting phase and adds a soft shadow effect. The steps of our algorithm are listed below; note that all steps take place on the GPU. For each frame we perform the following (a sketch of this per-frame loop appears after the list):

1. Render G-buffers from each light's view.

2. Render the G-buffer from the eye's view.

3. Trace the shadow map from the eye-view world position map (using Optix).

4. Trace photons from the light-view world position map, and record bounced photons in photon maps (using Optix).

5. Render indirect lighting by splatting the photons stored in the photon map.

6. Render direct lighting with soft shadows using image space gathering.

7. Up-sample the indirect illumination with a geometry-aware filter and blend it with the direct lighting result.
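The per-frame pass ordering can be sketched as follows (a structural sketch only; the function names are ours, and each stub stands in for a full GPU pass):

```cpp
// Per-frame ordering of the seven passes listed above (hypothetical stubs).
void renderLightGBuffers()       { /* 1: rasterize scene from each light's view */ }
void renderEyeGBuffer()          { /* 2: rasterize scene from the eye's view */ }
void traceShadowMap()            { /* 3: Optix rays from the eye's position map */ }
void tracePhotons()              { /* 4: Optix photon tracing into the photon map */ }
void splatIndirectLighting()     { /* 5: rasterize photon quads */ }
void shadeDirectWithSoftShadow() { /* 6: deferred shading + image space gathering */ }
void upsampleAndBlend()          { /* 7: geometry-aware up-sample, blend with direct */ }

void renderFrame() {
    renderLightGBuffers();
    renderEyeGBuffer();
    traceShadowMap();
    tracePhotons();
    splatIndirectLighting();
    shadeDirectWithSoftShadow();
    upsampleAndBlend();
}
```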

The rest of this chapter is organized as follows. Section 3.1 presents the initial data we need and the data structure of a photon. Section 3.2 describes photon tracing and Section 3.3 photon splatting. Section 3.4 describes how we generate soft shadows with deferred shading and image space gathering, and Section 3.5 describes the up-sampling and geometry-aware filtering method.


3.1 Initial Data

We first render the scene from the eye's view to produce the eye's G-buffer, as shown in Figure 3.1. The eye's G-buffer contains the 3D information of each visible pixel, including world-space position, world-space normal, depth, and flux. The light's G-buffers, as shown in Figure 3.2, are image data rendered from the spot light source along its emission direction. We then trace rays from the eye's world position map to produce the shadow map, as shown in Figure 3.3.

Note that all our tracing steps take place on the GPU using NVIDIA Optix[13]. We pass G-buffer data to Optix using OpenGL pixel buffer objects (PBOs).

Figure 3.1 Eye's G-buffer: (a) world-space positions, (b) normal vectors, (c) original eye view.

Figure 3.2 Light's view G-buffer: (a) world positions, (b) world normals, (c) flux.
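For illustration, a minimal sketch of how such a G-buffer might be allocated with OpenGL framebuffer objects (assuming OpenGL 3.x with GLEW; the layout and names are ours, not the thesis code): three RGBA32F targets for position, normal, and flux, plus a depth attachment.

```cpp
#include <GL/glew.h>

// Allocates an FBO with three float color targets (position, normal, flux)
// and a depth texture, all readable in later passes.
GLuint createGBuffer(int width, int height, GLuint tex[3], GLuint* depthTex) {
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(3, tex);
    for (int i = 0; i < 3; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                               GL_TEXTURE_2D, tex[i], 0);
    }

    // Depth attachment, kept as a texture so later passes can sample it.
    glGenTextures(1, depthTex);
    glBindTexture(GL_TEXTURE_2D, *depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, *depthTex, 0);

    const GLenum drawBufs[3] = { GL_COLOR_ATTACHMENT0,
                                 GL_COLOR_ATTACHMENT1,
                                 GL_COLOR_ATTACHMENT2 };
    glDrawBuffers(3, drawBufs);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo;
}
```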


We trace rays from each light-view pixel and record photons at each bounce until reaching the maximum number of bounces. In order to record photons on object surfaces, we allocate a one-dimensional buffer called the photon map. The size of the photon map is the resolution of the light's G-buffer multiplied by the maximum number of bounces. In each photon record we store the photon position, the normal of the hit surface, the photon power, and the path density. The path density is used to scale the splatting radius, so we can preserve more detail in caustic regions.
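One plausible layout of such a photon record, for concreteness (field names are our own):

```cpp
// One photon record as described above. With a light G-buffer of W x H
// pixels and at most B bounces, the photon map is a flat array of
// W * H * B such records.
struct PhotonRecord {
    float position[3];   // world-space hit position
    float normal[3];     // normal of the surface that was hit
    float power[3];      // RGB photon power
    float pathDensity;   // scales the splat radius (more detail in caustics)
};
```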

Figure 3.3 Eye's view shadow map

In addition, we use a precomputed random map (PRM), which stores random seeds in a two-dimensional buffer. We compute the PRM once, while sending the geometry data into Optix, and reuse it whenever a bounce occurs.

3.2 Photon Tracing

The light's G-buffer represents the first bounce of ray casting, so after photons are emitted from each light source, we continue tracing photons from the light's G-buffer pixels. Using the information stored in each pixel (e.g., world position, normal, flux), we compute the new ray directions after bouncing and trace rays along them. When a ray intersects the scene, we record the photon data in the photon map. For a point light source, we can use a six-view G-buffer cube map or simply discard the light's G-buffer: since Optix already has all the geometry information, we can emit photons from two hemispheres that cover the point light, trace and bounce each photon, and record only photons from the second bounce on, because the first-bounce contribution is computed by direct lighting with deferred shading. We later use these recorded photons to splat indirect illumination.

When a ray hits a diffuse surface, we compute the new bounce direction by randomly selecting a sample from the hemisphere oriented outward from the surface point; the random numbers are taken from the PRM mentioned in Section 3.1. For reflective materials, we use the incident vector and the surface normal to compute the reflection direction. For glossy objects, we take the average of the diffuse and reflection directions.
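The three bounce rules can be sketched as follows (our own minimal vector helpers and sampling scheme; on the GPU, u1 and u2 would come from the PRM):

```cpp
#include <cmath>

struct V3 { float x, y, z; };
static V3 add(V3 a, V3 b)      { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3 scale(V3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(V3 a, V3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 normalize(V3 a)      { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

// Diffuse: uniform sample on the sphere, flipped into the hemisphere around n.
V3 diffuseDir(V3 n, float u1, float u2) {
    float z = 1.0f - 2.0f * u1;
    float r = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
    float phi = 6.2831853f * u2;
    V3 d = { r * std::cos(phi), r * std::sin(phi), z };
    return (dot(d, n) < 0.0f) ? scale(d, -1.0f) : d;
}

// Mirror reflection; 'in' points toward the surface.
V3 reflectDir(V3 in, V3 n) { return add(in, scale(n, -2.0f * dot(in, n))); }

// Glossy: the average of the diffuse and reflection directions, as in the text.
V3 glossyDir(V3 in, V3 n, float u1, float u2) {
    return normalize(scale(add(diffuseDir(n, u1, u2), reflectDir(in, n)), 0.5f));
}
```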

3.3 Photon Splatting

Conventional photon mapping computes the radiance of a visible pixel by gathering nearby photons. A photon is described by its world-space position, and in order to find the photons near a given position efficiently, the photons are organized in a k-d tree called the photon map[6]. Although the k-d tree speeds up the search for nearby photons, the gathering step of traditional photon mapping is still time-consuming. Instead of gathering, we estimate radiance by scattering radiance from photons to nearby pixels, weighting their contributions with the same filter kernel as traditional photon gathering.

Image space photon mapping (ISPM)[12] uses an icosahedron to bound the region influenced by each photon; ISPM calls this icosahedron a photon volume. These photon volumes are compressed along the surface normal, which avoids bad estimators such as the back faces of a thin object.


Figure 3.4 Bad photon estimators, as discussed in ISPM[12]

Instead of using a compressed icosahedron as the photon volume as ISPM does, we splat each photon along the surface as a quad built from two triangles. This simplification has two advantages. First, fewer triangles need to be rasterized. Second, just as ISPM compresses the icosahedron onto the object surface to avoid bad estimators (as shown in Figure 3.4), the 2D quad avoids bad estimators as well.

For each pixel covered by a photon quad, we compute the photon's incremental contribution as follows:

L(s) = f(s) ∗ (n ∙ np) ∗ Φp ∗ κ(s − sp)    (1)

where s is the position of the shading pixel, n is the shading pixel's normal, sp and np are the photon position and normal, Φp is the power of the photon, f(s) is the surface material color, and κ(s − sp) is the filter kernel.
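A direct transcription of Equation (1) for a single covered pixel might look like the sketch below (names are ours; in the real shader, κ comes from a filter kernel lookup):

```cpp
struct RGB { float r, g, b; };

// Incremental photon contribution for one shaded pixel, per Equation (1).
RGB splatContribution(const float n[3],    // shading pixel normal
                      const float np[3],   // photon normal
                      RGB f,               // surface material color f(s)
                      RGB phi,             // photon power
                      float kappa) {       // filter kernel value
    float cosTerm = n[0]*np[0] + n[1]*np[1] + n[2]*np[2];   // n . np
    if (cosTerm <= 0.0f) return RGB{0.0f, 0.0f, 0.0f};      // faces away: no energy
    float w = cosTerm * kappa;
    return RGB{ f.r * phi.r * w, f.g * phi.g * w, f.b * phi.b * w };
}
```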


Figure 3.6 Photon centers rendered as points.

We send the photons recorded in the photon map into the vertex shader as points; Figure 3.6 shows the result of rendering the photons as point data. We cull back-facing photons using the dot product of the eye direction and the photon normal, since these photons do not contribute to visible pixels. We then use the geometry shader to generate four new vertices at each photon point. Figure 3.7 shows the result of rendering the photon quads.

Figure 3.7 Photon quad (with direct lighting)
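The geometry shader's job at this point can be sketched on the CPU as follows (the tangent-frame construction and vertex ordering are our own assumptions):

```cpp
#include <cmath>

struct P3 { float x, y, z; };
static P3 cross(P3 a, P3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static P3 unit(P3 a) {
    float l = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return { a.x / l, a.y / l, a.z / l };
}

// Expand one photon point into the four corners of a quad lying in the
// surface's tangent plane; 'radius' would be scaled by the path density.
void expandPhotonQuad(P3 pos, P3 n, float radius, P3 out[4]) {
    P3 up = (std::fabs(n.y) < 0.99f) ? P3{0, 1, 0} : P3{1, 0, 0};
    P3 t = unit(cross(up, n));    // tangent, perpendicular to the normal
    P3 b = cross(n, t);           // bitangent (unit, since n and t are unit)
    float s[4][2] = { {1, 1}, {1, -1}, {-1, -1}, {-1, 1} };
    for (int i = 0; i < 4; ++i) {
        out[i] = { pos.x + (s[i][0]*t.x + s[i][1]*b.x) * radius,
                   pos.y + (s[i][0]*t.y + s[i][1]*b.y) * radius,
                   pos.z + (s[i][0]*t.z + s[i][1]*b.z) * radius };
    }
}
```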

The filter kernel is a falloff function that describes the attenuation of photon energy. Since our photons are splatted as 2D planes on the surface, we can implement the filter kernel with a 2D texture, as shown in Figure 3.8 (a).

Figure 3.8 (a) Falloff texture (b) splatting with attenuated energy (with direct lighting)

We load this gray-level texture as the photon quad's alpha channel, then apply alpha blending and normal weighting, as shown in Figure 3.8 (b). We also embed the photon power into the alpha channel, so the image produced by hardware alpha blending is the indirect radiance. This lets us add it directly in the soft shadow pass instead of combining the direct and indirect lighting results separately; in fact, we merge the indirect lighting, soft shadow, and up-sampling passes into a single pass. After adjusting the splatting radius, the indirect lighting result looks like Figure 3.9.

Figure 3.9 Splatting result without blending with soft shadow
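One plausible OpenGL configuration for this hardware blending step is sketched below (additive blending with depth writes disabled; this is our reading of the text, not the exact thesis configuration):

```cpp
#include <GL/glew.h>

void beginPhotonSplatting() {
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);   // additive accumulation of splat energy
    glDepthMask(GL_FALSE);         // test against depth but do not write it
}

void endPhotonSplatting() {
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```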

3.4 Image Space Soft Shadow

Our algorithm generates soft shadows during the deferred shading step. As mentioned before, we chose an image space soft shadow algorithm instead of ambient occlusion or other soft shadow methods. Since our work combines two algorithms, the more analogous their passes are, the more information they can share, which reduces the total computation. We therefore follow the method of [17], which uses a bilateral filter to blur shadow edges using the 3D geometry information stored in the G-buffer. Image space gathering of shadows has two steps.

First, find the average distance to occluders for each pixel rendered by deferred shading.

Second, with the average distance to occluders, blur each shading pixel with its individual shadow penumbra radius.

We search a disk-shaped range of nearby pixels, as shown in Figure 3.10, with radius Rp:

Rp = L / Zeye    (2)

where L is a scaling factor representing the size of the virtual area light and Zeye is the eye-space depth value used to project the light radius into screen space.

Figure 3.10 Image space gathering

The goal of the first step is to find the weighted average distance to occluders. We filter those distances by the following equation:

Second, with the weighted average distance to occluders, d, we compute the blur region radius Rg:

Rg = L ∗ (d / (D − d)) ∗ (1 / Zeye)    (5)


where D is the distance from the shaded point to the light. Next we search with radius Rg, using the same filter function as in Equation (3). We then obtain the blurred shadow image shown in Figure 3.11 (b).

Figure 3.11 The result (a) without and (b) with soft shadow
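The two radii of Equations (2) and (5) are simple per-pixel evaluations; a sketch under the reconstruction above (variable names follow the text):

```cpp
// L = virtual area light size, Zeye = eye-space depth, D = distance from the
// shaded point to the light, d = weighted average distance to occluders.
float occluderSearchRadius(float L, float Zeye) {
    return L / Zeye;                 // Equation (2): project L into screen space
}

float penumbraBlurRadius(float L, float d, float D, float Zeye) {
    return L * d / (D - d) / Zeye;   // Equation (5)
}
```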

3.5 Image Space Radiance Interpolation

After computing the indirect illumination and the direct illumination with soft shadows, we combine them. Although we have simplified the photon volume to a photon quad and use Optix to trace photons on the GPU, the indirect lighting computation is still expensive. To improve its performance, we can optionally compute the radiance from the photons at a reduced resolution and then use geometry-aware filtering to up-sample the resulting screen-space radiance to the final image resolution.

The idea is to compute indirect lighting at only a few locations in screen space, which produces a low-resolution indirect radiance map. For the remaining pixels, we obtain their color by interpolation, comparing against the full-resolution normal map and depth map.

That is, for each pixel rendered in the final step, we search the four nearby sub-pixels, as in bilinear interpolation, and simultaneously compare these sub-pixels with the rendered pixel by their normals and depths. We can therefore interpolate only from sub-pixels that lie on the same plane as the rendered pixel, or from nearby sub-pixels within a certain spatial distance, which preserves object edges and corners.

Figure 3.12 (a) 8x8 Subsampling. (b) Bilinear weighting

In detail, we first render indirect lighting at 8x8 subsampling with nearest-neighbor up-sampling, as shown in Figure 3.12 (a). Then we apply bilinear interpolation with only screen-space distance weighting, as shown in Figure 3.12 (b). In order to preserve corners and edges, the interpolation is further weighted by the dot product of the normals and the difference of the depths, as shown in Figure 3.13. Compared to the 1x1 sampling image shown in Figure 3.14, we dramatically improve performance while losing only a little quality.


Figure 3.13 Result after normal and depth weighting

Figure 3.14 Final result (1x1 sampling)
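One way such a geometry-aware weight could be computed per sub-pixel is sketched below (the exact falloff and the depth constant are our own assumptions):

```cpp
#include <cmath>

// Damps the usual bilinear weight when a sub-pixel's normal or depth
// disagrees with the full-resolution pixel, preserving edges and corners.
float upsampleWeight(float bilinearW,       // screen-space bilinear weight
                     const float n[3],      // full-resolution pixel normal
                     const float nSub[3],   // low-resolution sub-pixel normal
                     float depth, float depthSub) {
    // Normal agreement: dot product, clamped so opposing normals get zero.
    float nDot = n[0]*nSub[0] + n[1]*nSub[1] + n[2]*nSub[2];
    if (nDot < 0.0f) nDot = 0.0f;

    // Depth agreement: decreases with the absolute depth difference.
    float dW = 1.0f / (1.0f + 100.0f * std::fabs(depth - depthSub));

    return bilinearW * nDot * dW;           // combined sub-pixel weight
}
```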


Chapter 4

Implementation Details

Since our approach is based on photon mapping, we can handle most optical phenomena, such as indirect lighting and caustics, with physical accuracy. However, photon mapping has two major costs: path tracing and the computation of photon radiance contributions. Path tracing is highly scene-dependent; its cost increases with the complexity of the scene. Fortunately, with a GPU-based ray tracing engine, we can handle this with optimized parallel processing. Moreover, by taking advantage of the GPU rasterization path, we avoid the final gathering step.

We implement our approach using OpenGL with the CgFX shader language, and implement photon tracing with Optix[13]. In our approach, all geometry data must be sent to both the Optix host and OpenGL. We use OpenGL vertex array objects (VAOs) to share geometry data between Optix and OpenGL; both can then access the geometry data directly in GPU memory without copying it to main memory. Furthermore, this sharing not only lets the geometry data be reused but also avoids copying the photon map from GPU to CPU memory: after Optix finishes photon tracing and outputs a photon map, the shared buffer can be used directly as OpenGL primitive input. Note that the memory bus between CPU main memory and GPU memory, the Peripheral Component Interconnect Express (PCI-Express) bus, has a maximum data rate of only 8 GB/s, which is very slow compared to the other data buses in a computer. Copying buffers between GPU and CPU memory frequently would therefore reduce performance acutely. By using OpenGL frame buffer objects, we can access buffer data in GPU memory immediately after the previous rendering pass.
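As an illustration of this zero-copy sharing, the sketch below wraps a GL buffer object as a (pre-7) OptiX output buffer for the photon map; OptiX writes photons into it, and OpenGL later sources the same buffer as vertex data for splatting. Error handling is omitted, the helper name is ours, and exact usage varies by OptiX version.

```cpp
#include <cstddef>
#include <GL/glew.h>
#include <optix.h>
#include <optix_gl_interop.h>

RTbuffer sharePhotonMap(RTcontext ctx, GLuint* vbo,
                        std::size_t numPhotons, std::size_t photonBytes) {
    // Allocate the storage once, on the GPU, as a GL buffer object.
    glGenBuffers(1, vbo);
    glBindBuffer(GL_ARRAY_BUFFER, *vbo);
    glBufferData(GL_ARRAY_BUFFER, numPhotons * photonBytes, nullptr,
                 GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // Wrap the same GL buffer as an OptiX output buffer for the photon tracer.
    RTbuffer photons;
    rtBufferCreateFromGLBO(ctx, RT_BUFFER_OUTPUT, *vbo, &photons);
    rtBufferSetFormat(photons, RT_FORMAT_USER);
    rtBufferSetElementSize(photons, photonBytes);
    rtBufferSetSize1D(photons, numPhotons);
    return photons;
}
```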


When sending mesh data into Optix, we must construct a geometry group in the Optix context. The geometry structure in Optix is shown in Figure 4.1. A geometry group can contain many geometry instances, and each geometry instance has primitive data with an intersection program and a material program. We bound the primitive data with a bounding program, and Optix generates a BVH to accelerate the intersection computation. The intersection program defines how we find the intersection position on the mesh surface, so we can use multiple kinds of structures to describe a model surface, such as triangles, parallelograms, or spheres. Once the intersection program reports an intersection event, we can obtain the texture coordinates, normal, and color data at the hit point. However, external material information such as shadowing or indirect light cannot be obtained by the intersection program; we obtain it in the any-hit or closest-hit program by continuing to trace rays from the hit position. In our implementation, we trace shadow rays in the any-hit program and photon path rays in the closest-hit program. Note that the difference between any-hit and closest-hit is whether the ray payload is used: the any-hit program can only report whether an object was hit, while the closest-hit program can obtain the color, normal, and distance of the intersection point by storing these data in the ray payload.

Figure 4.1 Optix data structure


Chapter 5

Results and Discussion

In this chapter, we present our results, rendered at real-time frame rates on a desktop PC with an Intel Core i7 930 CPU at 2.8 GHz and an NVIDIA GeForce GTX 480 graphics card.

All results are rendered at 1024 x 768 pixels.

Figures 5.1 through 5.7 show scenes rendered by our algorithm. Figure 5.1 displays the basic Cornell box scene illuminated by one-bounce and two-bounce indirect lighting with the soft shadow effect. Figures 5.2 and 5.3 demonstrate the Cornell box containing a complex model with diffuse material. Figures 5.4 and 5.5 show glass objects (η = 1.5, with slight reflection and Fresnel effects); eye-ray refraction is computed for the front faces using a dynamic environment map with direct light only, and these images also demonstrate refractive caustics. Figures 5.6 and 5.7 show the complex Sibenik and Sponza scenes.

Figure 5.1 Cornell box with (a) one-bounce (b) two-bounce illumination.


Figure 5.2 Happy buddha

Figure 5.3 Chinese dragon


Figure 5.4 Glass sphere

Figure 5.5 Glass bunny


Figure 5.6 Sibenik

Figure 5.7 Sponza Atrium
