
Chapter 2 Background

2.2 Rendering Process

2.2.4 Rasterization

In a parallel projection, if the view plane is normal to the direction of projection then the projection is orthographic and we have:

xs = xv,   ys = yv,   zs = zv        (2.3)

Having looked at how general points within a polygon can be assigned intensities that are determined from vertex values, we now look at how we determine the actual pixels for which we require intensity values. The process is known as rasterization or scan conversion. We consider this somewhat tricky problem in two parts. First, how do we determine the pixels which the edge of a polygon straddles? Second, how do we organize this information to determine the interior points?

Rasterizing edges

There are two different ways of rasterizing an edge, depending on whether line drawing or solid area filling is being used. Line drawing is not covered here, since we are interested in solid objects. However, the main feature of line-drawing algorithms (for example, Bresenham's algorithm (Bresenham 1965)) is that they must generate a linear sequence of pixels with no gaps (Figure 2.17). For solid area filling, a less rigorous approach suffices. We can fill a polygon using horizontal line segments; these can be thought of as the intersections of the polygon with particular scan lines. Thus, for any given scan line, what is required are the left- and right-hand limits of the segment, that is, the intersections of the scan line with the left- and right-hand polygon edges. This means that we need to calculate, for each edge, its intersections with the scan lines (Figure 2.17(b)). This sequence may have gaps, when interpreted as a line, as shown by the right-hand edge in the diagram.

Figure 2-17 The concept of Bresenham's algorithm: pixel sequences required for (a) line drawing and (b) polygon filling [1]

The conventional way of calculating these pixel coordinates is by use of what is grandly referred to as a 'digital differential analyzer', or DDA for short. All this really consists of is finding how much the x coordinate increases per scan line, and then repeatedly adding this increment.

Let (xs, ys) and (xe, ye) be the start and end points of the edge (we assume that ye > ys). The simplest algorithm for rasterizing, sufficient for polygon edges, is a straightforward DDA loop, sketched below.
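A minimal C sketch of such a loop follows (the function and variable names are illustrative rather than the original listing; it assumes ye > ys, and it uses the floating-point representation and rounding that the next paragraph criticizes):

    #include <math.h>
    #include <stdio.h>

    /* Floating-point DDA: generates one x value per scan line, from ys up to
       ye - 1.  Assumes ye > ys.  A sketch, not the original listing. */
    static void rasterize_edge(double xs, int ys, double xe, int ye)
    {
        double m = (xe - xs) / (double)(ye - ys);  /* x increment per scan line */
        double x = xs;
        for (int y = ys; y < ye; y++) {
            printf("(%d, %d)\n", (int)floor(x + 0.5), y);  /* round to nearest pixel */
            x += m;                                        /* repeatedly add the increment */
        }
    }

    int main(void)
    {
        rasterize_edge(0.0, 0, 2.0, 4);  /* example edge from (0, 0) to (2, 4) */
        return 0;
    }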

The main drawback of this approach is that m and x need to be represented as floating-point values, with a floating-point addition and a real-to-integer conversion each time round the loop. A method due to Swanson and Thayer (Swanson and Thayer 1986) provides an integer-only version of this algorithm. It can be derived from the above in two logical stages. First we separate out x and m into integer and fractional parts. Then, each time round the loop, we add the two parts separately, adding a carry to the integer part should the fractional part overflow. Also, we initially set the fractional part of x to −0.5 to make rounding easy, as well as simplifying the overflow condition. This intermediate form is sketched below.
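A C sketch of this intermediate form (illustrative names rather than the author's pseudocode; it assumes ye > ys and xe >= xs so that the integer and fractional parts of the slope are both non-negative):

    #include <stdio.h>

    /* x and the slope m are each split into an integer and a fractional part.
       The fractional part of x starts at -0.5, so reaching zero or more both
       performs the rounding and signals a carry into the integer part.
       Assumes ye > ys and xe >= xs.  A sketch, not the original pseudocode. */
    static void rasterize_edge_split(int xs, int ys, int xe, int ye)
    {
        int    xi = xs;                                  /* integer part of x    */
        double xf = -0.5;                                /* fractional part of x */
        int    mi = (xe - xs) / (ye - ys);               /* integer part of m    */
        double mf = (double)(xe - xs) / (ye - ys) - mi;  /* fractional part of m */

        for (int y = ys; y < ye; y++) {
            printf("(%d, %d)\n", xi, y);
            xi += mi;                    /* add the two parts separately */
            xf += mf;
            if (xf >= 0.0) {             /* fractional part overflowed: carry */
                xi += 1;
                xf -= 1.0;
            }
        }
    }

    int main(void)
    {
        rasterize_edge_split(0, 0, 2, 4);  /* same pixels as the floating-point version */
        return 0;
    }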

Because the fractional part is now independent of the integer part, it is possible to scale it throughout by 2(ye − ys), which has the effect of converting everything to integer arithmetic, as sketched below.
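A possible integer-only form in C (again a sketch of the idea attributed to Swanson and Thayer rather than their listing; same assumptions, ye > ys and xe >= xs):

    #include <stdio.h>

    /* Integer-only version: the fractional quantities of the previous sketch
       are scaled by 2*(ye - ys).  The div and mod are the quotient and
       remainder of the same division.  Assumes ye > ys and xe >= xs. */
    static void rasterize_edge_int(int xs, int ys, int xe, int ye)
    {
        int xi = xs;
        int xf = -(ye - ys);                   /* -0.5, scaled by 2*(ye - ys) */
        int mi = (xe - xs) / (ye - ys);        /* div */
        int mf = 2 * ((xe - xs) % (ye - ys));  /* mod, scaled */

        for (int y = ys; y < ye; y++) {
            printf("(%d, %d)\n", xi, y);
            xi += mi;
            xf += mf;
            if (xf >= 0) {                     /* overflow: carry into integer part */
                xi += 1;
                xf -= 2 * (ye - ys);           /* constant; see the remark below */
            }
        }
    }

    int main(void)
    {
        rasterize_edge_int(0, 0, 2, 4);  /* produces the same pixels again */
        return 0;
    }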

Although this approach now appears to involve two divisions rather than one, they are both integer rather than floating point. Also, given suitable hardware, they can both be evaluated from the same division, since the second (mod) is simply the remainder from the first (div). Finally, it only remains to point out that the 2(ye − ys) within the loop is constant and would in practice be evaluated just once outside it.

Rasterizing polygons

Now that we know how to find pixels along the polygon edges, it is necessary to turn our attention to filling the polygons themselves. Since we are concerned with shading, 'filling a polygon' means finding the pixel coordinates of interior points and assigning to these a value calculated using one of the incremental shading schemes described in 2.2.5. We need to generate pairs of segment end points and fill in horizontally between them. This is usually achieved by constructing an 'edge list' for each polygon.

In principle this is done using an array of linked lists, with an element for each scan line. Initially all the elements are set to NIL. Then each edge of the polygon is rasterized in turn, and the x coordinate of each pixel (x, y) thus generated is inserted into the linked list corresponding to that value of y. Each of the linked lists is then sorted in order of increasing x. The result is something like that shown in Figure 2.18. Filling-in of the polygon is then achieved by, for each scan line, taking successive pairs of x values and filling in between them (because a polygon has to be closed, there will always be an even number of elements in the linked list). Note that this method is powerful enough to cope with concave polygons with holes.

Figure 2-18 An example of a linked list maintained in polygon rasterization [1]

In practice, the sorting of the linked lists is achieved by inserting values in the appropriate place initially, rather than by a big sort at the end. Also, as well as calculating and sorting the x value for each pixel on an edge, the appropriate shading values would be calculated and stored at the same time (for example, the intensity value for Gouraud shading; the x, y and z components of the interpolated normal vector for Phong shading).
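A compact C sketch of this edge-list scheme (hypothetical names; fixed-size arrays stand in for the linked lists purely to keep the example short, shading values are omitted, and x values are inserted in sorted order as they are generated, as just described):

    #include <stdio.h>

    #define MAX_SCANLINES 1024  /* assumed raster height */
    #define MAX_CROSSINGS 32    /* assumed bound on edge crossings per scan line */

    static int edge_x[MAX_SCANLINES][MAX_CROSSINGS];  /* x values per scan line */
    static int edge_n[MAX_SCANLINES];                 /* number of entries per scan line */

    /* Insert an x value into the bucket for scan line y, keeping it sorted
       in order of increasing x (insertion in the appropriate place). */
    static void insert_x(int y, int x)
    {
        int i = edge_n[y]++;
        while (i > 0 && edge_x[y][i - 1] > x) {
            edge_x[y][i] = edge_x[y][i - 1];
            i--;
        }
        edge_x[y][i] = x;
    }

    /* Fill in horizontally between successive pairs of x values on each scan
       line; a closed polygon always leaves an even number of entries.  The
       fill stops at xright - 1, anticipating rule (3) given later. */
    static void fill_spans(int ymin, int ymax)
    {
        for (int y = ymin; y < ymax; y++)
            for (int i = 0; i + 1 < edge_n[y]; i += 2)
                for (int x = edge_x[y][i]; x < edge_x[y][i + 1]; x++)
                    printf("shade pixel (%d, %d)\n", x, y);
    }

    int main(void)
    {
        /* Hypothetical example: a small rectangle whose left edge rasterizes
           to x = 1 and right edge to x = 4 on scan lines 0 to 2. */
        for (int y = 0; y < 3; y++) {
            insert_x(y, 4);
            insert_x(y, 1);
        }
        fill_spans(0, 3);
        return 0;
    }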

If the object contains only convex polygons then the linked x lists will only ever contain two x coordinates; the data structure of the edge list is simplified and there is no sort required. It is not a great restriction in practical computer graphics to constrain an object to convex polygons.
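For the convex case the per-scan-line bucket collapses to a single pair of limits, for example (a minimal sketch with hypothetical names):

    #include <limits.h>
    #include <stdio.h>

    /* For convex polygons a scan line is crossed by at most two edges, so
       each scan line needs only a left and right limit and no sorting. */
    struct span { int xleft, xright; };

    /* Record an edge crossing at x; spans start out as { INT_MAX, INT_MIN }. */
    static void add_crossing(struct span *s, int x)
    {
        if (x < s->xleft)  s->xleft  = x;
        if (x > s->xright) s->xright = x;
    }

    int main(void)
    {
        struct span s = { INT_MAX, INT_MIN };
        add_crossing(&s, 4);
        add_crossing(&s, 1);
        printf("span: xleft = %d, xright = %d\n", s.xleft, s.xright);
        return 0;
    }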

One thing that has been slightly glossed over so far is the consideration of exactly where the borders of a polygon lie. This can manifest itself in adjacent polygons either by gaps appearing between them, or by them overlapping. For example, in Figure 2.19, the width of the polygon is 3 units, so it should have an area of 9 units, whereas it has been rendered with an area of 16 units. The traditional solution to this problem, and the one usually advocated in textbooks, is to consider the sample point of the pixel to lie at its centre, that is, at (x+0.5, y+0.5). (A pixel can be considered to be a rectangle of finite area with dimensions 1.0 × 1.0, and its sample point is the point within the pixel area where the scene is sampled in order to determine the value of the pixel.) So, for example, the intersection of an edge with a scan line is calculated for y+0.5, rather than for y, as we assumed above. This is messy, and excludes the possibility of using integer-only arithmetic. A simpler solution is to assume that the sample point lies at one of the four corners of the pixel; we have chosen the top right-hand corner of the pixel. This has the consequence that the entire image is displaced half a pixel to the left and down, which in practice is insignificant. The upshot of this is that it provides the following simple rasterization rules:

(1) Horizontal edges are simply discarded.

(2) An edge which goes from scan line ymin to ymax should generate x values only for scan lines ymin through ymax − 1 (that is, the topmost scan line is omitted).

(3) Similarly, horizontal segments should be filled from xleft to xright − 1 (with no pixels generated if xleft = xright).

Figure 2-19 The problem with polygon boundaries: a 9-pixel polygon fills 16 pixels [1]

Incidentally, in rules (2) and (3), whether the first or last element is ignored is arbitrary, and the choice is based on programming convenience. The four possible permutations of these two rules define the sample point as one of the four corners of the pixel. The effect of these rules can be seen in Figure 2.20. Here we have three adjacent polygons A, B and C, with edges a, b, c and d. The rounded x values produced by these edges for the scan line shown are 2, 4, 4 and 7 respectively. Rule (3) then gives pixels 2 and 3 for polygon A, none for polygon B, and 4 to 6 for polygon C. Thus, overall, there are no gaps and no overlaps. The reason why horizontal edges are discarded is that the edges adjacent to them will already have contributed the x values that make up the segment (for example, the base of the polygon in Figure 2.18; note also that, for the sake of simplicity, the scan conversion of that polygon was not done strictly in accordance with the rasterization rules given above).


Figure 2-20 The result of the rasterization rules: three polygons intersecting a scan line [1]
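As a quick check of those numbers, the horizontal fill rule can be applied directly to the rounded x values 2, 4, 4 and 7 quoted above (a trivial sketch):

    #include <stdio.h>

    /* Fill a horizontal segment from xleft to xright - 1 (rule (3)); nothing
       is produced when xleft equals xright. */
    static void fill_segment(const char *poly, int xleft, int xright)
    {
        printf("%s:", poly);
        for (int x = xleft; x < xright; x++)
            printf(" %d", x);
        printf("\n");
    }

    int main(void)
    {
        fill_segment("A", 2, 4);  /* edges a and b: pixels 2 and 3 */
        fill_segment("B", 4, 4);  /* edges b and c: no pixels      */
        fill_segment("C", 4, 7);  /* edges c and d: pixels 4 to 6  */
        return 0;
    }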

