

CHAPTER 2 DirectX Graphics

2.1 Direct3D Pipeline

2.1.2 Classic Transform and Lighting

As Figure 2.7 shows, a typical 3D application has several local coordinate systems (one local coordinate system for each 3D model). The origin of each of those local coordinate systems is usually at the center of that particular 3D model. In addition, a typical 3D application has one world coordinate system, whose origin is literally the center of the game universe. The first order of business is to take all the models' local coordinates and transform them into world coordinates so that they all share a single coordinate space.

Converting local vertices on a model to world vertices is the same as applying a transformation to them. In the world transformation step, the system multiplies each model’s vertex by a matrix that rotates it, scales it, and translates it to a specific position in the world. Of course, each model has a separate matrix. After we do this for all the models in the scene, there is no longer any notion of model space. The matrix we plug into the assembly line to place the model in the world is called the world transformation matrix. It’s named this because it’s the matrix that transforms local coordinates into world coordinates.
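The world transform above can be sketched with plain structs. These are illustrative stand-ins for the D3DX math types, not the real library; the helper names (`worldMatrix`, `transform`) are assumptions for this example. Direct3D uses row vectors, so a vertex is multiplied on the left of the matrix and the scale-rotate-translate order becomes S·R·T.

```cpp
#include <cassert>
#include <cmath>

// Illustrative stand-ins for the D3DX math types; not the real library.
struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

Mat4 identity() {
    Mat4 r{};
    for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
    return r;
}

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Row-vector convention, as Direct3D uses: v' = v * M, translation in the last row.
Vec3 transform(const Vec3& v, const Mat4& M) {
    return { v.x * M.m[0][0] + v.y * M.m[1][0] + v.z * M.m[2][0] + M.m[3][0],
             v.x * M.m[0][1] + v.y * M.m[1][1] + v.z * M.m[2][1] + M.m[3][1],
             v.x * M.m[0][2] + v.y * M.m[1][2] + v.z * M.m[2][2] + M.m[3][2] };
}

Mat4 scaling(float s) {
    Mat4 r = identity();
    r.m[0][0] = r.m[1][1] = r.m[2][2] = s;
    return r;
}

Mat4 translation(float x, float y, float z) {
    Mat4 r = identity();
    r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z;
    return r;
}

Mat4 rotationY(float a) {
    Mat4 r = identity();
    r.m[0][0] =  std::cos(a); r.m[0][2] = -std::sin(a);
    r.m[2][0] =  std::sin(a); r.m[2][2] =  std::cos(a);
    return r;
}

// World matrix: scale first, then rotate, then translate (row-vector order S*R*T).
Mat4 worldMatrix(float s, float angleY, float tx, float ty, float tz) {
    return multiply(multiply(scaling(s), rotationY(angleY)), translation(tx, ty, tz));
}
```

A model vertex at local (1, 0, 0), scaled by 2 and placed at world position (10, 0, 5), ends up at world (12, 0, 5).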

3D hardware renders vertices as if the viewer of the scene were hard-coded at (0, 0, 0), looking down the positive z-axis into the screen. Simply transforming all the model vertices into world vertices isn't enough; we need to transform them one more time to account for where the viewer of the world actually is. All it takes is another pass through the vertex-processing assembly line, one that shifts the camera to the world origin (0, 0, 0), right side up. This is

the second step in the pipeline. For example, suppose there is a teapot floating in space at position (11, 17, 63). Now imagine that we put a camera at (11, 17, 62), looking in the positive z direction. What we'd expect the camera to see is an up-close-and-personal image of the teapot. After all, the teapot is only one unit in front of the camera, which is looking right at it.

Unfortunately, graphics hardware can render scenes only as seen from (0, 0, 0). To deal with this, we just move everything in the world by (-11, -17, -62), which works because the camera then sits at (0, 0, 0), and that makes the graphics hardware happy. The teapot also moves by (-11, -17, -62), so it ends up at position (0, 0, 1), still one unit in front of the camera. In other words, everything is relative. It doesn't matter where exactly the camera is located; what matters is where it's located relative to all the other objects in the scene. That's how cameras work. The view transformation matrix is simply the matrix that takes each vertex and translates it so that the camera sits at (0, 0, 0), right side up (positive y-axis pointing up). Most view matrices are concatenations of four matrices: one matrix that translates the objects and three rotation matrices that rotate the world so that the camera's x, y, and z axes point correctly. That is, positive x-axis pointing right, positive y-axis pointing up, and positive z-axis pointing into the scene.
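For the special case of a camera that is already upright and looking down the positive z-axis, the view transform collapses to the translation described above; a general view matrix would add the three axis rotations. A minimal sketch (the `toViewSpace` helper is an assumption for this example, not a Direct3D function):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// For an upright camera looking down +z, the view transform reduces to
// translating every vertex by the negated camera position.
Vec3 toViewSpace(const Vec3& v, const Vec3& cameraPos) {
    return { v.x - cameraPos.x, v.y - cameraPos.y, v.z - cameraPos.z };
}
```

With the camera at (11, 17, 62), the teapot at (11, 17, 63) lands at view-space (0, 0, 1): one unit straight ahead.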

After we do the view transform, we have a set of vertices that graphics programmers consider to be in view space. They're all set up. The only problem is that they're still 3D, and the screen is 2D. The projection transformation takes all the 3D coordinates and projects them onto a 2D plane, at which point they're said to be in projection space. It converts the coordinates by using a matrix called the projection transformation matrix.

The projection transformation matrix also sets up the camera's field of view and viewing frustum.
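A left-handed perspective projection matrix can be built from the field of view and the near/far clip planes. This sketch follows the layout that `D3DXMatrixPerspectiveFovLH` documents (angles in radians); the `projectedDepth` helper is an assumption added just to show the effect of the perspective divide.

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4]; };

// Left-handed perspective projection: fovY is the vertical field of view in
// radians; zn and zf are the near and far clip plane distances.
Mat4 perspectiveFovLH(float fovY, float aspect, float zn, float zf) {
    float yScale = 1.0f / std::tan(fovY * 0.5f);  // cot(fovY / 2)
    Mat4 p{};
    p.m[0][0] = yScale / aspect;
    p.m[1][1] = yScale;
    p.m[2][2] = zf / (zf - zn);
    p.m[2][3] = 1.0f;                  // copies view-space z into w for the perspective divide
    p.m[3][2] = -zn * zf / (zf - zn);
    return p;
}

// Depth a view-space z maps to after the perspective divide (z' / w).
float projectedDepth(const Mat4& p, float z) {
    return (z * p.m[2][2] + p.m[3][2]) / (z * p.m[2][3]);
}
```

The frustum boundaries check out: a point on the near plane projects to depth 0, and a point on the far plane projects to depth 1.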

Next on the list of advanced graphics techniques is the use of lighting. Unlike real life, most games fully illuminate the scene, which does make graphics look sharp, albeit unrealistic. To get a more true-to-life scene, and to give the graphics those subtle lighting effects, we need to use Direct3D's lighting capabilities. We can use four types of light in Direct3D: ambient, point, spot, and directional. Ambient light is a constant source of light that illuminates everything in the scene with the same level of light. Because it is part of the device component, ambient light is the only lighting component handled separately from the lighting engine. The other three lights have unique properties. A point light illuminates everything around it (like a light bulb does). Spotlights point in a specific direction and emit a cone-shaped light; everything inside the cone is illuminated, whereas objects outside the cone are not illuminated.

A directional light (a simplified spotlight) merely casts light in a specific direction.

Lights are placed in a scene just as other 3D objects are, by using x, y, z coordinates. Some lights, such as spotlights, also have a direction vector that determines which way they point. Each light has an intensity level, a range, attenuation factors, and a color; that's right, even colored lights are possible with Direct3D. With the exception of the ambient light, each light uses a D3DLIGHT9 data structure to store its unique information. This structure contains all the information we need in order to describe a light. Although the lights don't necessarily use every variable in the D3DLIGHT9 structure, all the lights share a few common fields.

Point lights are the easiest lights to work with. We just set their positions, color components, and ranges. Spotlights work a little differently than the other lights do because spotlights cast light in a cone

shape away from the source. The light is brightest in the center, dimming as it reaches the outer portion of the cone. Nothing outside the cone is illuminated. We define a spotlight by its position, direction, color components, range, falloff, attenuation, and the radii of the inner and outer cones. We don't have to worry much about falloff and attenuation, but we do need to think about both cone radii; there is no fixed rule for which values to use, so we just have to play around until we find the values we like. In terms of processing, directional lights are the fastest type of light that we can use. They illuminate every polygon that faces them. To ready a directional light for use, we just set the direction and color component fields in the D3DLIGHT9 structure. Ambient lighting is the only type of light that Direct3D handles differently.
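The inner/outer cone behavior can be sketched as a scalar "spotlight factor", following the shape of the formula the fixed-function pipeline documents for D3DLIGHT9's Theta (inner cone angle), Phi (outer cone angle), and Falloff fields. All angles here are assumed to be in radians, and `spotFactor` itself is an illustrative helper, not a Direct3D function.

```cpp
#include <cassert>
#include <cmath>

// Spotlight intensity factor. alpha is the angle between the spotlight's
// direction and the vector from the light to the vertex; theta and phi are
// the full inner and outer cone angles (as in D3DLIGHT9).
float spotFactor(float alpha, float theta, float phi, float falloff) {
    float cosAlpha     = std::cos(alpha);
    float cosHalfTheta = std::cos(theta * 0.5f);  // inner cone boundary
    float cosHalfPhi   = std::cos(phi * 0.5f);    // outer cone boundary
    if (cosAlpha >= cosHalfTheta) return 1.0f;    // inside the inner cone: full intensity
    if (cosAlpha <= cosHalfPhi)   return 0.0f;    // outside the outer cone: unlit
    // Between the cones: fade from 1 to 0, shaped by the falloff exponent.
    return std::pow((cosAlpha - cosHalfPhi) / (cosHalfTheta - cosHalfPhi), falloff);
}
```

With a 30° inner cone and a 60° outer cone, a vertex straight ahead gets full intensity, a vertex far outside the cone gets none, and a vertex between the two boundaries gets a partial value.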

Direct3D applies the ambient light to all polygons without regard to their angles or to their light source, so no shading occurs. Ambient light is a constant level of light, and like the other types of light, we can color it as we like.

There doesn't seem to be a hard limit to the number of lights that we can use in a scene with Direct3D, but it's best to keep the number of lights to four or fewer.

Each light that we add to the scene increases the complexity and the time required for rendering.

Materials comprise the second half of the Direct3D lighting system.

Direct3D allows us to assign materials to vertices and, therefore, to the surfaces the vertices create. Each material contains properties that influence the way light interacts with it, as Figure 2.8 shows. A Direct3D material is basically a set of four colors: diffuse, ambient, emissive, and specular.

Diffuse color has the most effect on the vertices because it specifies how the material reflects diffuse light in a scene. This parameter tells Direct3D what colors the object reflects when hit with light; in essence, it determines the color of the object under direct light. Say that we set up a model using a single material and specify a diffuse color of RGB (255, 0, 0) for that material. This means that the object reflects only red light.

When hit with a white light, the object appears red because the only light it reflects is red light. When hit with a red light, the object also appears red. When hit with a blue or green light, however, the object appears black because there's no red within blue or green light to reflect.

The intensity of the reflected light depends on the angle between the light ray and the vertex normal. The intensity is strongest when the light rays are parallel to the vertex’s normal, in other words, when the light is shining directly onto the surface. The intensity is weakest when the light rays are parallel to the surface, because it’s impossible for them to bounce off the surface when they’re parallel to it.
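Both effects, the component-wise color filtering and the angle-dependent intensity, combine in the standard Lambert diffuse term. A minimal sketch (the `diffuse` helper is an assumption for this example; vectors are assumed normalized, with `toLight` pointing from the surface toward the light):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lambert diffuse: reflected = lightColor * materialDiffuse * max(N . L, 0).
// Intensity peaks when the light direction lines up with the normal and drops
// to zero when the rays are parallel to the surface.
Vec3 diffuse(const Vec3& lightColor, const Vec3& matDiffuse,
             const Vec3& normal, const Vec3& toLight) {
    float intensity = std::fmax(dot(normal, toLight), 0.0f);
    return { lightColor.x * matDiffuse.x * intensity,
             lightColor.y * matDiffuse.y * intensity,
             lightColor.z * matDiffuse.z * intensity };
}
```

A pure-red material under white light shining straight down its normal reflects full red; under green light it reflects nothing; and under light grazing parallel to the surface it also goes black.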

The ambient color property determines a material’s color when no direct light is hitting it. Usually, we set this equal to the diffuse color because the color of most objects is the same when hit by ambient light and when hit by diffuse light. However, we can create some weird-looking materials by specifying an ambient color that’s different from the diffuse color. Ambient color always has the same intensity because it is assumed that the ambient light is coming from all directions.

We use emissive color to create the illusion of materials that glow.

They don't actually glow (Direct3D doesn't perform lighting calculations using them as a light source), but they can be used in scenes to create the

appearance of a glowing object without requiring additional light processing. For example, we can create a material that emits a white color and use it to simulate an overhead fluorescent light in the scene. It won't actually light up anything; we have to compensate for this by, say, assigning a bright ambient color to the objects in the scene.

Last but not least, we use the specular color of a material to make an object shiny. Most of the time, we set the specular color to white to achieve realistic highlights, but we can set it to other colors to create the illusion of colored lights hitting the object. Another property, specular power, goes hand in hand with specular color. The power property of a material determines how sharp the highlights on that material appear. A power of zero tells Direct3D that an object has no highlights, and a power of 10 creates very definite highlights.
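The sharpening effect of the power exponent can be seen in the Phong-style specular term, where the cosine of the reflection/eye angle is raised to the material's power. This is a sketch of that one factor (the `specularTerm` helper is an assumption for this example, not a Direct3D function):

```cpp
#include <cassert>
#include <cmath>

// Specular highlight term: pow(max(cos(angle), 0), power), where the angle is
// between the reflection vector and the direction to the eye. Higher powers
// make the bright spot narrower and sharper.
float specularTerm(float cosAngle, float power) {
    return std::pow(std::fmax(cosAngle, 0.0f), power);
}
```

Dead-on reflection stays at full brightness regardless of power, while an off-angle reflection (cosine 0.5) falls from 0.5 at power 1 to about 0.001 at power 10, which is exactly why larger powers produce tighter highlights.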
