Chapter 7
Three-Dimensional Graphics: Geometric Modeling and Visibility
M. Firebaugh
© Wm. C. Brown Communications, Inc.
...the single application in our industry that continues to hold the most fascination for advanced computer users, not to mention the most potential benefit for mankind, is scientific visualization.
We introduced three-dimensional graphics with very concrete examples of how 3D objects may be represented, manipulated, and displayed by projecting 3D world coordinates onto a 2D viewing surface. As in all areas of computer science, 3D graphics is subject to trade-offs along the concrete/abstract spectrum. The concrete examples illustrated in the last chapter have the virtue of simplicity and clarity but suffer from limitations intrinsic to the choice of polyhedral representations. Many of these limitations are overcome by moving to more abstract representations. Several of the more abstract representations for geometric modeling are presented in this chapter.
The second main emphasis of the chapter considers implementation
issues of 3D clipping and hidden surface removal. These operations greatly
improve the visual realism of the image and may be considered refinements
inserted in the viewing pipeline.
Geometric Modeling
The polyhedral representation is capable of displaying a 3D object to any degree of accuracy. While this is true theoretically, in practice polyhedral representation may require such a vast database of polygons that it overwhelms available computing capability. This computational limitation is one factor suggesting alternative representations for modeling geometric objects.
A second factor encouraging us to seek a less restrictive representation arises from the "natural" shape of objects. Most manufactured objects and many natural objects are better modeled by combinations of curved surfaces than by combinations of polygons. Many other natural objects are better modeled by fractal curves and surfaces than by combinations of polygons. In this section we outline techniques for representing curved surfaces and manipulating the objects constructed by these curved surfaces. In a later chapter we consider fractal geometry.
All alternative techniques in geometric modeling represent a higher level of abstraction than that of the polyhedral technique. More abstraction means simply that "we are saying more with less." That is, the goal of abstraction in geometric modeling is to define the shape of objects with the smallest possible set of numbers. This is possible by having a "theory" about the objects, that is, an assumed relationship between the coordinates on the surface of the object. The information defining the shape of the object then resides in some combination of the functional form of this relationship and the parameters controlling the functions.
The first technique for more accurately modeling 3D objects
is a straightforward extension of the parametric curve approach introduced
in the 2D discussion. For an excellent survey of a variety of geometric
modeling techniques, see Mortensen.
Parametric Surfaces
In the chapter on 2D representation we saw that parametric curves provided an enormous gain in expressiveness at a very low cost in the number of required parameters. That is, with relatively few control points we could design curves of any shape we desired.
Since a surface is a 2D structure occupying a 3D space, we need to extend our parametric representation from the single parameter, t, to two parameters, s and t. These extensions are summarized by the equations:
x = f_x(s, t)
y = f_y(s, t)        [8.1]
z = f_z(s, t)
That part of the surface generated from 0 ≤ s ≤ 1 and 0 ≤ t ≤ 1 is called a patch. The vector form of this point on the parametric surface is given by
P(s,t) = i x + j y + k z [8.2]
where
(x, y, z) = coordinates given by Equation 8.1
and
(i, j, k) = unit vectors.
As an example of how 3D objects may be uniquely specified
parametrically, consider the following definition of a sphere:
x = xc + R sin(πs) cos(2πt)
y = yc + R sin(πs) sin(2πt)        [8.3]
z = zc + R cos(πs)
where
R = radius of sphere,
(xc, yc, zc)= coordinates of center of sphere,
(t, s) → (θ, φ) of spherical coordinates.
Similarly, a circular cylinder is defined parametrically
as:
x = xc + R cos(2πt)
y = yc + R sin(2πt)        [8.4]
z = zc + s
where
R = radius of circle,
(xc, yc, zc) = coordinates of center of circle generating the cylinder,
(t, s) → (θ, z) of cylindrical coordinates.
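As a concrete sketch (in Python; the function names are ours, not the text's), Equations 8.3 and 8.4 translate directly into point generators that a plotting routine could sample over the unit (s, t) square:

```python
import math

def sphere_point(s, t, center=(0.0, 0.0, 0.0), R=1.0):
    """Point on the parametric sphere of Equation 8.3, for 0 <= s, t <= 1."""
    xc, yc, zc = center
    x = xc + R * math.sin(math.pi * s) * math.cos(2 * math.pi * t)
    y = yc + R * math.sin(math.pi * s) * math.sin(2 * math.pi * t)
    z = zc + R * math.cos(math.pi * s)
    return (x, y, z)

def cylinder_point(s, t, center=(0.0, 0.0, 0.0), R=1.0):
    """Point on the parametric cylinder of Equation 8.4."""
    xc, yc, zc = center
    x = xc + R * math.cos(2 * math.pi * t)
    y = yc + R * math.sin(2 * math.pi * t)
    z = zc + s
    return (x, y, z)

# s = 0 gives the "north pole" of the unit sphere:
print(sphere_point(0.0, 0.0))   # (0.0, 0.0, 1.0)
```

Sweeping s and t over a grid of, say, 12 × 24 intervals produces the isoparametric curves of Figure 8.1.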
These objects with several isoparametric curves are shown in Figure 8.1.
Figure 8.1 Parametric 3D objects with isoparametric curves.
a) Sphere with 24 t and 12 s intervals
b) Cylinder with 24 t and 6 s intervals.
The simplicity and symmetry of the regular geometric objects
of Figure 8.1 permit a parametric representation of the complete object
in terms of simple linear and trigonometric functions. The complexity of
many objects of interest in computer graphics, however, forces us to consider
more complex parametric representations.
Bicubic Surface Patches
In our discussion of 2D curve representation we observed
that cubic polynomials provided an effective compromise offering power and
flexibility at a relatively small cost in mathematical complexity. This
approach is readily extended to 3D with similar cost/benefit advantages.
The general polynomial form for points in the surface of the bicubic patch
is given as:
x(s,t) = c11 s^3 t^3 + c12 s^3 t^2 + c13 s^3 t + c14 s^3
       + c21 s^2 t^3 + c22 s^2 t^2 + c23 s^2 t + c24 s^2
       + c31 s t^3   + c32 s t^2   + c33 s t   + c34 s
       + c41 t^3     + c42 t^2     + c43 t     + c44
y(s,t) = d11 s^{3} t^{3} + ... + d44,
and [8.5]
z(s,t) = e11 s^{3} t^{3} + ... + e44,
where
cij, dij, eij = coefficients of polynomials fixed by control points.
Just as with 2D curves, 3D patches may use a variety of control strategies, including Bézier, Hermite, and B-spline bases. Recall that Bézier curves required four control points, with the end points controlling the curve position and the two inner points controlling the tangent vector at the end points. Bézier patches extend this control strategy in a logical fashion using sixteen control points. The four corner points control the position of the patch and lie on its surface. The intermediate twelve points control the tangents to the patch along the edges and at each corner and may be used by the designer to "pull" the surface of the patch into the shape desired.
The sixteen control points form a polyhedral surface whose convex hull always completely contains the resulting smoothly curved patch. One example of a Bézier control point polyhedral surface is shown in Figure 8.2.
Figure 8.2 Control points for bicubic, Bézier patch. The four corner points anchor the patch and will lie in its surface. The inner twelve points "pull" on the surface and control its shape in exactly the same way the inner two points control 2D Bézier curves.
The Bézier form of the bicubic parametric patch
has a very concise matrix formulation specifying a vector point, P(s,t),
on the surface in terms of the sixteen control points, P00...
P33. This relationship is expressed as:
P(s,t) = S B P B^T T^T        [8.6]

where

S = [s^3  s^2  s  1]   (s parameter row vector)

B = | -1   3  -3   1 |
    |  3  -6   3   0 |        [8.7]
    | -3   3   0   0 |
    |  1   0   0   0 |   (Bézier coefficient matrix)

P = the 4 × 4 matrix of control points, P00 ... P33        [8.8]   (Control point matrix)
Figure 8.3 Bicubic Bézier patch corresponding to control points shown in Figure 8.2. Note the change in z-scale due to the curve being pulled toward, but not reaching, the top four control points at z = 1.0.
B^T = transposed B matrix (switch rows and columns)        [8.9]

T^T = [t^3  t^2  t  1]^T   (transposed t parameter vector)        [8.10]
The control points of the P matrix shown in Figure 8.2 lie on a square xy grid, four units on a side. The z values of the corners are zero, the eight middle edge points have z = 0.5, and the four center points have z = 1.0. Substituting these values into Equation 8.6 and multiplying out gives a simple set of parameters for the bicubic Bézier patch:
c34 = d43 = 3.0
c44 = d44 = 1.0
e34 = 1.5,  e24 = -1.5        [8.11]
e43 = 1.5,  e42 = -1.5

and all other c_ij, d_ij, and e_ij = 0.
Plotting the resulting parametric equation produces the smooth, parametric surface shown in Figure 8.3.
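As a numerical check (a Python sketch; the helper names are ours), Equation 8.6 can be evaluated directly for a single coordinate. Here BEZ is the standard Bézier coefficient matrix and Pz holds the z control values of Figure 8.2; the result at the patch center falls below the inner control points at z = 1.0, as noted in the caption of Figure 8.3:

```python
# Standard Bézier coefficient matrix B for S = [s^3 s^2 s 1].
BEZ = [[-1,  3, -3, 1],
       [ 3, -6,  3, 0],
       [-3,  3,  0, 0],
       [ 1,  0,  0, 0]]

def matmul(A, B):
    """Naive matrix product of nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def bicubic(s, t, P):
    """Evaluate S B P B^T T^T (Equation 8.6) for a 4x4 grid of scalar control values P."""
    S  = [[s**3, s**2, s, 1.0]]              # s parameter row vector
    T  = [[t**3], [t**2], [t], [1.0]]        # transposed t parameter vector
    Bt = [list(row) for row in zip(*BEZ)]    # transposed B
    return matmul(matmul(matmul(matmul(S, BEZ), P), Bt), T)[0][0]

# z control values of Figure 8.2: corners 0, edge midpoints 0.5, centers 1.0.
Pz = [[0.0, 0.5, 0.5, 0.0],
      [0.5, 1.0, 1.0, 0.5],
      [0.5, 1.0, 1.0, 0.5],
      [0.0, 0.5, 0.5, 0.0]]

print(bicubic(0.5, 0.5, Pz))   # 0.75
```

The center of the patch reaches only z = 0.75: the surface is pulled toward, but does not interpolate, the interior control points.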
Now, to demonstrate that surface patches do respond to changes in the control points, let's vary one of the edge points, changing its z value from 0.5 → 2.0. Figure 8.4 illustrates the effect of modifying a single point.
Figure 8.4 Effects of variation of single control point.
Figure 8.5 More complex bicubic Bézier patch. The last two rows of the P point matrix have been given negative values for nonzero elements.
a) Control points. Negative z points are solid; positive are circles.
b) Resulting bicubic patch. Note the sinusoid-like shape which results from nonzero third-order coefficients for the s parameter.
The example above illustrates how the moving of a single control point can significantly modify the shape of the parametric surface. Since there are sixteen points to manipulate, the user has a tool of considerable power for sculpting the 3D surface shape desired. By changing the sign of the control points in the last two rows of the point control matrix, for instance, we can generate the surface shown in Figure 8.5.
The parametric representation of the bicubic patch of Figure 8.5(b) can be obtained by multiplying out the matrices in Equation 8.6 and is represented as:

x(s,t) = 1.0 + 3.0 s
y(s,t) = 1.0 + 3.0 t        [8.12]
z(s,t) = 1.5 s - 4.5 s^2 + 3.0 s^3 + 1.5 t - 9 s^2 t + 6 s^3 t - 1.5 t^2 + 9 s^2 t^2 - 6 s^3 t^2
As one might expect, the more complex patch now requires cubic terms in the s parameter. Quadratic terms (order 2) are still adequate in the t parameter.
Modeling Complex Surfaces
As indicated by the figures above, bicubic Bézier patches provide an excellent interactive tool for modeling smooth curved surfaces. However, in many applications it is necessary to model more detail than can be provided by a single patch. Two options available to the user needing to model more complexity include a) using a set of connected Bézier patches or b) turning to a more complex representation. Let's consider option a) first.
Figure 8.6 Splicing two patches together with C0 continuity but not C1 continuity. That is, the surfaces are seamless but not smooth across the transition.
Two critical aspects of connecting Bézier patches,
the seamlessness and smoothness of the connection, may be specified as C0
and C1, indicating continuity of the surface itself and continuity in the
first derivatives. Continuity of the surface joins of patch A and patch
B is achieved by setting the first row of the P matrix of B equal
to the last row of the P matrix of A. In Figure 8.6 we show the results
of joining surface A, defined by PA to surface B, defined by surface
PB, where:
P_{zA} = [8.13] (Patch A control point matrix),
P_{zB}= [8.14] (Patch B control point matrix)
Clearly the patches in Figure 8.6 are continuous, but the transition from patch A to patch B is not smooth along their adjoining edge. In order to obtain a smooth transition, the slope of the patches must be continuous along the joining curve. This is achieved by adjusting the third row of PA and the second row of PB to assure that each set of three points, [P2jA, P3jA (= P0jB), P1jB], is collinear. By adjusting PB as in Equation 8.15, a smooth connection results between patch A and patch B, as illustrated in Figure 8.7.
P_{zB} = [8.15](Modified patch B matrix).
Figure 8.7 Splicing Bézier patches to achieve both C0 and C1 continuity. Note the fortuitous similarity of the surface to an automobile body design, the task for which Bézier patches were invented.
For even more complex modeling involving multiple Bézier patches, it has been shown that both C0 and C1 continuity are achieved by making the eight control points surrounding the common corner point coplanar. This constraint makes good sense intuitively and is relatively easy to implement computationally.
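The collinearity condition for a smooth seam can be sketched as a quick check (Python; the names patchA and patchB are illustrative, assumed to be 4 × 4 grids of (x, y, z) control points with row 3 of A shared as row 0 of B):

```python
def collinear(p, q, r, eps=1e-9):
    """True if three 3D points are collinear (cross product of q-p and r-p is ~0)."""
    ux, uy, uz = (q[0] - p[0], q[1] - p[1], q[2] - p[2])
    vx, vy, vz = (r[0] - p[0], r[1] - p[1], r[2] - p[2])
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    return abs(cx) < eps and abs(cy) < eps and abs(cz) < eps

def c1_seam_ok(patchA, patchB):
    """C1 check across the seam: for each column j, the points A[2][j],
    A[3][j] (= B[0][j]), and B[1][j] must be collinear."""
    return all(collinear(patchA[2][j], patchA[3][j], patchB[1][j]) for j in range(4))

# Example: patch B's first interior row continues patch A's edge tangents.
rowA2 = [(2.0, float(j), 0.0) for j in range(4)]
seam  = [(3.0, float(j), 1.0) for j in range(4)]
rowB1 = [(4.0, float(j), 2.0) for j in range(4)]
patchA = [None, None, rowA2, seam]   # only rows 2 and 3 matter for the check
patchB = [seam, rowB1, None, None]
print(c1_seam_ok(patchA, patchB))   # True
```

C0 continuity is the simpler condition that the seam row itself is shared, which the example enforces by construction.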
Figures 8.5 - 8.7 illustrate why bicubic Bézier patches have become such a popular tool for surface modeling. The obvious advantages include intuitive control-point sculpting, a compact mathematical representation, and simple conditions for joining patches smoothly.
We conclude the discussion of bicubic Bézier patches (sometimes called the tensor product approach) with the following observation on the power of abstraction. The matrix representation of parametric surfaces given by equation 8.6 is much more abstract than the simple polygonal surfaces described in the last chapter. The benefits of flexibility and expressiveness achieved by this abstraction, however, far outweigh the increased mathematical complexity.
Other Bicubic Parametric Bases
The sixteen-control-point basis proposed by automotive engineer Bézier is just one of a number of different representations of parametric surfaces. Among the most popular of these are B-splines, beta-splines, and the Hermite form of the bicubic surface.
Figure 8.8 Control vectors of bicubic Hermite parametric patch. Functions of vectors include:
Control points (solid circles)
Tangent vectors along t (Pt00, ..., Pt11)
Tangent vectors along s (Ps00, ..., Ps11)
Twist vectors (Pst00, ..., Pst11)
Hermite Form
The Hermite formulation is a generalization of Coons patches, a modeling approach developed by Steven Coons for the Ford Motor Company. The Hermite form can be expressed as a tensor product similar to the Bézier form given in Equation 8.6.
P(s,t) = S H P H^{T} T^{T} [8.16]
where
S and T = parameter vectors defined in Equation 8.6
H = |  2  -2   1   1 |
    | -3   3  -2  -1 |        [8.17]
    |  0   0   1   0 |
    |  1   0   0   0 |   (Hermite coefficient matrix)

P = | P00    P01    P00^t   P01^t  |
    | P10    P11    P10^t   P11^t  |        [8.18]
    | P00^s  P01^s  P00^st  P01^st |
    | P10^s  P11^s  P10^st  P11^st |   (Geometric matrix)
where
P00 = P(s=0, t=0)   First anchor point
P01 = P(s=0, t=1)   Second anchor point
P10 = P(s=1, t=0)   Third anchor point
P11 = P(s=1, t=1)   Fourth anchor point
P00^t = ∂P/∂t at (s=0, t=0)   Tangent vector along t at point 1
...
P11^t = ∂P/∂t at (s=1, t=1)   Tangent vector along t at point 4, etc.
The geometrical interpretation of these sixteen control
vectors is shown in Figure 8.8. The vectors of the geometry matrix, P,
can be divided into four classes of vectors with four vectors per class.
Some of the interesting characteristics of bicubic Hermite patches and the control strategies for molding them include independent control of the corner positions, of the tangent vectors along s and t, and of the twist vectors at each corner.
Superquadrics
The critical reader will appreciate the increase in representational power that bicubic parametric patches provide in comparison to more concrete representations such as the polyhedrons described previously. The gain in flexibility and expressiveness is paid for by an increase in the level of abstraction and the complexity of the mathematical machinery required to build a model from control points. The nearly universal adoption of Bézier surface representation by automobile body designers indicates that the costs in complexity are well worth the benefits bicubic patches provide.
This exercise suggests that extending the level of abstraction even further may lead to even more powerful modeling tools. Superquadrics provide an exceedingly simple yet elegant step in this direction. Superquadrics are easily understood as a minor reformulation of the parametric representation of a sphere given in Equations 8.2-8.3, with the addition of two exponential shape parameters, e1 and e2. A superquadric is a closed surface traced out by the vector, P(s,t) = [x y z], as s and t vary in the ranges -π/2 ≤ t ≤ π/2 and 0 ≤ s ≤ 2π.
P(s,t) = [ (C_t^e1 C_s^e2)   (C_t^e1 S_s^e2)   (S_t^e1) ]        [8.19]
where
C_t = cos t,  S_t = sin t   (t = angle of latitude)
C_s = cos s,  S_s = sin s   (s = angle of longitude)
e1, e2 = shape parameters
The (s,t) parameters are closely related to the conventional polar coordinates, (φ, θ), which may be obtained by:

θ = 90° - t        [8.20]
φ = s
Figure 8.9 Three families of superquadrics and associated shape parameters [e1, e2]: [.5, 1], [.25, 1], [.1, 1]; [1, .5], [1, .25], [1, .1]; [.5, .5], [.25, .25], [.1, .1].
Computationally, Equation 8.19 must be further refined
before it will generate real numbers for plotting routines. In particular,
when the user selects noninteger shape parameters e1,e2, a straightforward
calculation with Equation 8.19 will generate imaginary components for negative
values of the sine and cosine functions. To suppress this problem, we have
made the following replacements:
(cos t)^e1 → Sign(cos t) (Abs(cos t))^e1        [8.21]

and so on.
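The signed-power replacement of Equation 8.21 and the superquadric of Equation 8.19 can be sketched together as follows (Python; function names are ours):

```python
import math

def spow(base, e):
    """Signed power of Equation 8.21: Sign(base) * Abs(base)**e, real for any base."""
    if base == 0.0:
        return 0.0
    return math.copysign(abs(base) ** e, base)

def superquadric_point(s, t, e1, e2):
    """Point of Equation 8.19; t = latitude in [-pi/2, pi/2], s = longitude in [0, 2*pi]."""
    x = spow(math.cos(t), e1) * spow(math.cos(s), e2)
    y = spow(math.cos(t), e1) * spow(math.sin(s), e2)
    z = spow(math.sin(t), e1)
    return (x, y, z)

# (e1, e2) = (1, 1) reproduces the unit sphere:
print(superquadric_point(0.0, 0.0, 1, 1))   # (1.0, 0.0, 0.0)
```

Without the spow helper, a call such as (-0.5) ** 0.25 in Python returns a complex number, which is exactly the imaginary-component problem the text describes.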
Figure 8.10 Examples of superquadrics with [e1, e2] > 1: [1.5, 1], [2, 1], [5, 1]; [1, 2], [1, 5], [2, 2]; [1, 3], [2, 3], [3, 3].
For (e1,e2) = (1,1), the parametric equation for the superquadric [8.19] reduces to an equivalent of Equation 8.3 with a resulting parametric spherical object identical to that shown in Figure 8.1. However, most interesting and useful things happen as we begin to vary the shape parameters, e1,e2. If we hold e1 equal to 1 while turning e2 down from 1 to zero, the superquadric gradually transforms itself from a sphere into a closed cylindrical can. If we hold e2 equal to 1 while turning e1 down from 1 to zero, the superquadric transfigures from a sphere into a curved "pin cushion." Finally, if we simultaneously turn both e1 and e2 down towards zero, the superquadric evolves into a cube. These three families of solids are illustrated in Figure 8.9.
Although the metamorphoses illustrated for shape parameters (e1, e2) ≤ 1 are fascinating, the superquadric families associated with (e1, e2) > 1 are even more intriguing. Figure 8.10 shows examples from these families.
It is quite remarkable that a single representation with
only two free parameters is capable of such a range of solid shape descriptions.
These families of superquadrics provide an efficient basis of primitive
solid shapes for modeling more complex scenes.
Extensions of Superquadrics
The first natural extension of the superquadric representation
is to introduce three independent scale factors for the x, y, and z directions.
This leads to the parametric equation for the point vector,
P(s,t) = [ (a_x C_t^e1 C_s^e2)   (a_y C_t^e1 S_s^e2)   (a_z S_t^e1) ]        [8.22]
where
a_{x},a_{y},a_{z} = scale factors along the (x,y,z) axes.
Another function that is valuable for computing properties such as the visibility and shading of the superquadric surface at parametric point (s,t) is the vector normal to the surface at that point. This vector, N(s,t), is given in terms of the parameters of Equation 8.22 as:

N(s,t) = [ ((1/a_x) C_t^(2-e1) C_s^(2-e2))   ((1/a_y) C_t^(2-e1) S_s^(2-e2))   ((1/a_z) S_t^(2-e1)) ]        [8.23]
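The normal can be computed with the same signed-power trick used for the surface points (a Python sketch; the exponents 2-e1 and 2-e2 are the standard superquadric normal form, stated here as our reading of Equation 8.23):

```python
import math

def spow(base, e):
    """Signed power (Equation 8.21), repeated here so the sketch is self-contained."""
    if base == 0.0:
        return 0.0
    return math.copysign(abs(base) ** e, base)

def superquadric_normal(s, t, e1, e2, a=(1.0, 1.0, 1.0)):
    """Unnormalized surface normal of Equation 8.23 at parametric point (s, t)."""
    ax, ay, az = a
    nx = (1.0 / ax) * spow(math.cos(t), 2 - e1) * spow(math.cos(s), 2 - e2)
    ny = (1.0 / ay) * spow(math.cos(t), 2 - e1) * spow(math.sin(s), 2 - e2)
    nz = (1.0 / az) * spow(math.sin(t), 2 - e1)
    return (nx, ny, nz)

# For the unit sphere (e1 = e2 = 1) the normal is parallel to the surface point itself:
print(superquadric_normal(0.0, 0.0, 1, 1))   # (1.0, 0.0, 0.0)
```

A shading routine would normalize this vector before forming dot products with light and view directions.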
The second extension to superquadrics is to add the ability to mold and distort the primitive through an interactive 3D display. Barr and Pentland have each described such systems. Pentland's program, SuperSketch, can stretch, bend, taper, and twist any of the superquadric primitives or composite objects built from primitives. This is a powerful and efficient technique for modeling natural forms, and Pentland shows a reasonably accurate model of a human head composed of only thirteen primitives specified by less than one hundred bytes of information. A complete human body can be modeled by approximately three hundred bytes of information.
The third extension of superquadrics is the ability to
construct composite objects using Boolean operations such as AND and NOT
on more primitive component objects. This capability generally goes under
the title of constructive solid geometry in the CAD field and is
the topic to which we turn next.
Constructive Solid Geometry
Constructive solid geometry (CSG) was invented by Requicha,
Voelcker, and their coworkers at the University of Rochester in the late
1970s. The basic concept of CSG is to use regularized Boolean operators
such as union (OR), intersect (AND), and difference (AND NOT) acting on
primitive solids to construct more complex solids.
CSG Primitives
This "building-block geometry" approach begins with a set of primitive solid objects such as the block, sphere, cylinder, cone, torus, and wedge shown in Figure 8.11. Each of these primitives is easily described by a small set of user-defined parameters specifying the primitive's geometry, location, and orientation.
In CAD of manufactured products, great care is spent on the details of rounding sharp corners to produce objects of greater beauty and eliminate safety hazards. This rounding usually involves slicing off corners with a plane (chamfers) or smoothing corners with an arc (fillets). These processes are easily carried out with CSG techniques applied to some of the sharp-cornered primitive solids of Figure 8.11. However, this two-stage process can be simplified by substituting primitives with parametrically smoothed edges and corners, namely, superquadrics. Since superquadrics are well-defined mathematical objects, it is a straightforward task to determine their geometry (extent), location, and orientation and use them to augment the library of primitive solids shown in Figure 8.11.
Figure 8.11 Standard primitive solids used in constructive solid geometry.
CSG Boolean Operations
The three Boolean functions most useful in CSG are union,
intersect, and difference. These Boolean functions operate
on solids in much the same fashion that the Boolean operators OR (+), AND
(*), and AND NOT operate on ordinary Boolean variables. Let's examine each
of these operations to understand how they can be used to build complex
models from primitive solids.
Union
The union of object A and object B is the object obtained by the spatial OR of the two input objects. That is, we can define X to be the union of A and B with the regularized Boolean operator, ∪, as:

X = A ∪ B        [8.24]
The "spatial OR" interpretation of union follows from the functional definition: if a point in space is contained in either object A OR object B, then it is contained in object X. Since X represents the "sum" of its two constituent objects, either constituent could have been picked first. This leads to the conclusion that the union operation is commutative.
A ∪ B = B ∪ A        [8.25]
The operation of union is the most easily visualized and understood of the three regularized Boolean operations on solids and is illustrated in Figure 8.12.

Figure 8.12 The union of primitive solids in constructive solid geometry.
Intersection
The intersection of object A and object B is the
object obtained by the spatial AND of the two constituent objects. That
is, we say that a point is contained in the intersection of A and B if the
point is contained in object A AND in object B. The mathematical form stating
that X is the intersection of A with B is:
X = A ∩ B        [8.26]
Again, by symmetry arguments, it is clear that the intersection operation is commutative.
A ∩ B = B ∩ A        [8.27]
The intersection operation is useful in designing simple
objects not available in the library of primitive solids. For instance,
if the library contained "box" but not "wedge," we could
easily design a wedge by the intersection of two boxes as indicated in Figure
8.13. Similarly, a spherical lens is generated by the intersection of two
spheres with centers offset by something less than a radius.
Difference
The regularized Boolean difference operation is analogous to the operation of AND NOT in Boolean set theory. We say that an object X is the difference between A and B if every point in X is contained in A AND NOT contained in B. The mathematical symbol originally used to introduce the concept of regularized Boolean difference was "-*", which we shall simplify to a standard minus sign.
(a)
(b)
Figure 8.13 Examples of intersection in constructive solid geometry.
X = A - B        [8.28]   (regularized Boolean difference)
Because subtracting object B from object A yields a different
object from that obtained by subtracting object A from object B, the difference
operation is noncommutative.
A - B ≠ B - A        [8.29]   (difference is noncommutative)
The difference operation is particularly valuable for the design of parts which involve drilling or milling away material at the manufacturing stage. Examples of the difference operation are shown in Figure 8.14.
Tree Structure in CSG
The final object constructed by CSG may be represented by the root of an ordered, binary tree. The leaves (terminal nodes) of the tree are either primitive solids or transformation operations. Nonterminal nodes represent either Boolean operations (∪, ∩, or -) applied to the two incoming branches or transformations to be applied to rigid solids. Each subtree at a node represents the solid resulting from the Boolean operations and geometric transformations occurring below it. This tree structure is illustrated in Figure 8.15. This figure defines the CSG language program required to create the two-holed plate of Figure 8.14(b).
This ordered, binary tree structure may be considered the language for representing constructive solid geometry. It is concise, welldefined, and capable of expressing any object which can be constructed by CSG. One interesting property of the language, however, is that it is not unique. That is, there are often many ways of constructing the same final object. For instance, a hole may be drilled in a plate by the difference operation between the plate and a cylinder the diameter of the hole. The cylinder, however, can be of any length longer than the thickness of the plate and must completely penetrate the plate.
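A minimal point-membership sketch of this idea (Python; the predicate representation is our illustrative choice, not a full CSG system) treats leaves as solids and interior nodes as the regularized Boolean operators:

```python
# Leaves: primitive solids represented as point-membership predicates.
def sphere(center, r):
    cx, cy, cz = center
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r

def box(lo, hi):
    return lambda p: all(lo[i] <= p[i] <= hi[i] for i in range(3))

def cylinder_z(center_xy, r):
    """Infinite cylinder along z, convenient for drilling holes."""
    cx, cy = center_xy
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 <= r*r

# Interior nodes: the regularized Boolean operators.
def union(a, b):      return lambda p: a(p) or b(p)
def intersect(a, b):  return lambda p: a(p) and b(p)
def difference(a, b): return lambda p: a(p) and not b(p)

# Drill a hole through a plate: plate minus a cylinder that fully penetrates it.
plate = box((0, 0, 0), (4, 4, 1))
holed = difference(plate, cylinder_z((2, 2), 0.5))
print(holed((1, 1, 0.5)), holed((2, 2, 0.5)))   # True False
```

Note that, exactly as the text observes, any cylinder radius-and-position that fully penetrates the plate produces the same holed object, so the CSG "program" for an object is not unique.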
Extensions of CSG
As indicated earlier, the synthesis of superquadrics with CSG techniques provides a particularly powerful tool for modeling solid objects, especially those with rounded and lifelike forms. Two other useful techniques for generating more general primitive solids for input into CSG trees are lathing and extrusion. Lathing is the CAD equivalent of turning a part on a lathe: any object whose outline is constant as it spins about one axis is allowed. Extrusion is the CAD equivalent of a cookie cutter: any object formed by the linear translation of a closed 2D figure is allowed.
Early mainframe systems GMSolid (by General Motors Corporation) and ROMULUS (by Evans and Sutherland) were among the first CAD programs to make full use of CSG and included lathing and extrusion. Modern personal workstation CAD programs include CSG as well.
Figure 8.14 The regularized difference operation in constructive solid geometry.
Clipping in 3D
So far in this chapter we have considered representation issues and have introduced several advanced techniques for building models in the world coordinate system. These techniques extend the tools for representation well beyond the simple, but functional, polyhedral representation presented in the last chapter.
The next step is to return to implementation issues and extend the tools available for viewing the models we have learned to create. The two geometric concepts we consider next are 3D clipping and the problem of removing those surfaces which are invisible because they face away from the camera or are hidden by intervening objects.
Defining a Viewing Volume
The basic function of 3D clipping within the viewing pipeline is to specify a window in 3D space which is to be mapped onto a 2D viewport. However, a "window in 3D space" is most directly interpreted as a 3D viewing volume. That portion of the 3D world scene within the viewing volume gets projected onto a viewport on the screen. That portion lying outside of the viewing volume is clipped away and does not appear. Computationally it is advantageous to clip in 3D rather than after the whole scene has been projected onto 2D because of the reduction in the amount of projection required.
Clipping in 3D is quite analogous to clipping in 2D, with two extensions: the clipping object becomes a 3D viewing volume rather than a 2D window, and lines must be clipped against planes rather than edges.
Figure 8.15 Ordered, binary tree structure defining the CSG language. Terminal nodes are either primitive objects (O1 or O2) or transformations. Intermediate nodes are regularized Boolean operators or transformations. The root node is the final product of the constructive solid geometry process.
These 3D extensions are illustrated in Figure 8.16.
As can be seen in Figure 8.16, 3D clipping involves discarding objects and portions of objects lying outside a box for the parallel projection case and those lying outside a truncated pyramid for the perspective projection case. As one might expect, clipping against the regular parallelepiped (box) is the more easily solved problem. Let us consider both cases in more detail.
3D Clipping in Orthographic Projection
The first question to resolve in developing a clipping algorithm is: What shall we clip? For the sake of simplicity we shall restrict our discussion to the clipping of lines and polygons. One might suspect that the process of clipping a 3D line with a box resembles clipping a 2D line with a rectangle so closely that the Cohen-Sutherland approach would apply. This, fortunately, is the case. The second happy surprise is that the Sutherland-Hodgman polygon clipping algorithm may also be generalized to 3D.
(a) Viewing volume in orthographic projection 
(b) Viewing volume in perspective projection 
Figure 8.16 Viewing volumes for 3D clipping.
To simplify the mathematics of the Cohen-Sutherland algorithm tests, the first step is to transform the clipping volume from the arbitrary parallelepiped defined as [XR, XL, YT, YB, ZB, ZF] into the unit cube at the origin defined as [1, 0, 1, 0, 1, 0]. This transformation and the resulting normalized view volume are shown in Figure 8.17.
The transformation applied to the view volume and all associated objects in the scene can be expressed as the 4 × 4 matrix, M.
M = [8.30]   (Normalized view volume matrix)

where

S_x = 1/(X_R - X_L)   (x axis scale factor)
S_y = 1/(Y_T - Y_B)   (y axis scale factor)
S_z = 1/(Z_B - Z_F)   (z axis scale factor)
T_x = -S_x X_L   (x translation to move window to origin)
T_y = -S_y Y_B   (y translation to move window to origin)
T_z = -S_z Z_F   (z translation to move window to origin)
The result of this transformation is to take an arbitrary window view volume and transform it into a normalized cubic view volume which can be considered a 3D viewport.
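The scale-and-translate matrix M can be sketched as follows (Python; the original matrix layout is not reproduced in this text, so the row-vector convention p' = [x y z 1] M is our assumption):

```python
def view_volume_matrix(XL, XR, YB, YT, ZF, ZB):
    """Matrix of Equation 8.30: maps the clipping box [XL,XR] x [YB,YT] x [ZF,ZB]
    to the unit cube at the origin (row-vector convention assumed)."""
    Sx, Sy, Sz = 1.0/(XR - XL), 1.0/(YT - YB), 1.0/(ZB - ZF)
    Tx, Ty, Tz = -Sx*XL, -Sy*YB, -Sz*ZF
    return [[Sx, 0.0, 0.0, 0.0],
            [0.0, Sy, 0.0, 0.0],
            [0.0, 0.0, Sz, 0.0],
            [Tx,  Ty,  Tz, 1.0]]

def transform(p, M):
    """Apply M to the homogeneous point [x y z 1], returning (x', y', z')."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(v[k] * M[k][j] for k in range(4)) for j in range(3))

M = view_volume_matrix(2, 6, -1, 3, 0, 10)
print(transform((2, -1, 0), M), transform((6, 3, 10), M))   # opposite corners -> (0,0,0), (1,1,1)
```

Every object in the scene is pushed through the same matrix, so clipping tests thereafter compare coordinates against the constants 0 and 1 only.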
The next step is to extend the four-bit classification code, TBRL, for each end point to a six-bit code to account for front plane clipping (hither) and back plane clipping (yon). This code can be written:

TBRLHY        [8.31]

where the H and Y bits flag points lying in front of the hither plane and behind the yon plane, respectively.
Figure 8.17 Transformation of view volume into normalized view volume.
The two end points of each line are then encoded as
C1 = T1 B1 R1 L1 H1 Y1        [8.32]

and

C2 = T2 B2 R2 L2 H2 Y2

and the line classified as trivially in or trivially out by the tests:

Trivially in:  (C1 = 000000) AND (C2 = 000000)        [8.33]
Trivially out: (C1 AND C2) ≠ 000000
The trivially out AND test produces a six-bit answer in which a "1" at any bit location indicates that the corresponding binary digits of both C1 AND C2 were "1."
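The six-bit code and the two trivial tests can be sketched directly (Python; the bit assignments and plane positions, hither at z = 0 and yon at z = 1 in the normalized cube, are our illustrative choices):

```python
def outcode(p):
    """Six-bit TBRLHY code for a point against the normalized unit cube.
    Bit order, most significant first: Top, Bottom, Right, Left, Hither, Yon."""
    x, y, z = p
    code = 0
    if y > 1: code |= 0b100000   # T: above the top plane
    if y < 0: code |= 0b010000   # B: below the bottom plane
    if x > 1: code |= 0b001000   # R: right of the right plane
    if x < 0: code |= 0b000100   # L: left of the left plane
    if z < 0: code |= 0b000010   # H: in front of the hither plane
    if z > 1: code |= 0b000001   # Y: behind the yon plane
    return code

def trivially_in(p1, p2):
    return outcode(p1) == 0 and outcode(p2) == 0

def trivially_out(p1, p2):
    return (outcode(p1) & outcode(p2)) != 0   # both ends outside the same plane

print(trivially_in((0.2, 0.2, 0.5), (0.9, 0.8, 0.1)))    # True
print(trivially_out((2.0, 0.5, 0.5), (1.5, 0.5, 0.5)))   # True
```

Lines failing both tests are neither wholly visible nor wholly invisible and must be intersected with the offending planes, as described next.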
Next, we enter the Cohen-Sutherland iterative or recursive loop that classifies all lines as trivially in, trivially out, or neither. Trivially in lines are plotted; trivially out lines are discarded; and lines classified as neither are intersected with the plane(s) which prevented their classification in one of the trivial categories. A convenient way to calculate the intersection of lines with the normalized clipping planes is to express each line in its two-point parametric form and substitute the equation of the plane as the value of the variable. For a line between points P1 = (x1, y1, z1) and P2 = (x2, y2, z2) this becomes:
x = x1 + t (x2 - x1)
y = y1 + t (y2 - y1)        [8.34]
z = z1 + t (z2 - z1)

with 0 ≤ t ≤ 1 (parameter range)
Assume, for example, that the line crossed the back plane (z = 1). To locate the 3D point of intersection between the line and the clipping plane, we substitute the value z = 1 from the plane equation into Equation 8.34 for the line and compute the corresponding value of the parameter, ti:

1 = z1 + ti (z2 - z1)        [8.35]

ti = (1 - z1)/(z2 - z1)        [8.36]
This value of ti is then used in Equation 8.34 to compute a new end point, Pi = (xi, yi, 1). Assuming for the moment that the point P2 lay behind the z = 1 clipping plane, the line segment (P2Pi) is discarded, and the surviving line segment, (PiP1), reenters the clipping stream. The process is continued for all lines and all clipped segments until they classify as trivially in or trivially out.
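This back-plane case can be sketched as follows (Python; the function name is ours, and the other five planes are handled identically with the appropriate coordinate and plane constant):

```python
def clip_to_back_plane(P1, P2):
    """Clip segment P1-P2 against the yon plane z = 1 (Equations 8.34-8.36),
    keeping the portion with z <= 1. Returns the surviving segment or None."""
    (x1, y1, z1), (x2, y2, z2) = P1, P2
    if z1 <= 1 and z2 <= 1:
        return P1, P2                    # entirely in front of the back plane
    if z1 > 1 and z2 > 1:
        return None                      # entirely behind: trivially discarded
    ti = (1 - z1) / (z2 - z1)            # Equation 8.36
    Pi = (x1 + ti * (x2 - x1),           # Equation 8.34 evaluated at ti
          y1 + ti * (y2 - y1),
          1.0)
    return (P1, Pi) if z1 <= 1 else (Pi, P2)

print(clip_to_back_plane((0, 0, 0), (2, 2, 2)))   # ((0, 0, 0), (1.0, 1.0, 1.0))
```

The surviving segment then re-enters the clipping loop until it classifies as trivially in or trivially out.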
Note that the value for ti computed in Equation 8.36 is itself a valuable test for intersection of the line segment (P2-P1). If ti lies in the range 0 ≤ ti ≤ 1, the line segment intersects the back plane; if not, it doesn't.
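The clipping step of Equations 8.34–8.36 can be sketched as follows; the function name and the None convention for non-intersecting segments are illustrative choices.

```python
# Intersect a segment with the back plane z = 1 using the two-point
# parametric form of Equation 8.34.

def clip_to_back_plane(p1, p2):
    """Return the intersection point of segment p1-p2 with z = 1, or None."""
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    if z1 == z2:                      # line parallel to the plane
        return None
    ti = (1 - z1) / (z2 - z1)         # Equation 8.36
    if not (0 <= ti <= 1):            # segment does not reach the plane
        return None
    xi = x1 + ti * (x2 - x1)          # Equation 8.34
    yi = y1 + ti * (y2 - y1)
    return (xi, yi, 1.0)
```

The same substitution works for the other five planes, with the plane equation replacing z = 1.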
3D Clipping in Perspective Projection
As we expected, the symmetry of orthographic projections
allowed an easy extension of clipping algorithms from 2D to 3D. At first
glance, the nonparallelism of the clipping planes in perspective projections
seems to add serious complications. However, there are two alternative approaches,
each involving a simple transformation, for converting the truncated pyramid
viewing volume into a shape that simplifies computation.
Canonical view volume
The result of the canonical view volume transformation is to convert the arbitrary truncated pyramid of Figure 8.16 (b) into the equivalent of a normalized view volume which we call the canonical view volume. The back plane of the canonical view volume has z = 1, the front plane has z = Zh, and all sides of the truncated pyramid make angles of 45° with respect to the base. The top plane of the pyramid has the equation y = z, the bottom plane has y = -z, the right-hand plane has x = z, and the left-hand plane has x = -z. This geometry is illustrated in Figure 8.18.
Figure 8.18 Transformation of perspective view volume into canonical view volume.
Assuming a center of projection (COP) at the origin, the
canonical view volume transformation can be performed by the matrix of equation
8.28 with the following modifications:
Tx = Ty = Tz = 0 [8.37]
Sz = 1/ZB (z-axis scale factor, where ZB is the back-plane depth) [8.38]
XR, XL, YT, YB = coordinates of view volume window at back plane.
This transformation generates a new front plane, Zh = Sz ZF, and scales the viewing volume to a 45° pyramid.
The TBRLHY Cohen-Sutherland code is then generated by the tests:
T = 1 if point is above top plane (y > z); 0 otherwise,
B = 1 if point is below bottom plane (y < -z); 0 otherwise,
R = 1 if point is to the right of plane (x > z); 0 otherwise,
L = 1 if point is to the left of plane (x < -z); 0 otherwise,
H = 1 if point is in front of hither plane (front) (z < Zh); 0 otherwise,
Y = 1 if point is behind the yon plane (back) (z > 1); 0 otherwise.
Note the similarity with the parallel projection code of Equation 8.31.
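The six tests above translate directly into a bit-code routine; a sketch, with Zh passed as a parameter and the bit layout an assumed convention:

```python
# TBRLHY outcode against the canonical (45-degree) view volume, whose
# side planes are y = +/-z and x = +/-z.

def canonical_outcode(x, y, z, zh):
    """Six-bit code for a point; zh is the transformed front-plane depth."""
    code = 0
    if y > z:   code |= 0b100000  # T: above top plane (y = z)
    if y < -z:  code |= 0b010000  # B: below bottom plane (y = -z)
    if x > z:   code |= 0b001000  # R: right of plane x = z
    if x < -z:  code |= 0b000100  # L: left of plane x = -z
    if z < zh:  code |= 0b000010  # H: in front of hither plane
    if z > 1:   code |= 0b000001  # Y: behind yon plane
    return code
```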
Again, the test for intersection of the line with various clipping planes is obtained by substituting the equation of the plane into the parametric form of the line, Equation 8.34. If, for instance, a line intersects the right-hand side (x = z plane), we can solve for the value of the parameter, ti, at the point of intersection:
x = z [8.39]
x1 + ti (x2 - x1) = z1 + ti (z2 - z1)
ti = (z1 - x1)/((x2 - x1) - (z2 - z1)) [8.40]
With this value of ti we can compute the intersection point, Pi = (xi, yi, zi), using Equations 8.34 and proceed with the line clipping algorithm.
Perspective → Parallel View Volume
The second approach is a bit less intuitive, but conceptually more elegant. The idea is to transform the canonical view volume of the approach just discussed into a parallel-sided view volume to which all of the arguments of parallel projection 3D clipping apply. Recall from our discussion of perspective projection that the projection operator acted like a depth-dependent scale factor. If we apply a similar depth-dependent scale factor to the truncated pyramid clipping volume, we can pull it into the shape of a parallelepiped similar to the normalized view volume of Figure 8.17.
The M matrix that performs this operation is given as:

    | 1   0   0    0  |
M = | 0   1   0    0  |   [8.41]
    | 0   0   Sz   Tz |
    | 0   0   1    0  |

(Perspective → parallel matrix)
where, for z ≠ 0, the homogeneous division by w = z supplies the depth-dependent scale factors
Sx = 1/z
Sy = 1/z
and the depth terms are
Sz = 1/(1 - Zh)
Tz = Zh/(Zh - 1)
This transformation maps the canonical view volume into the "binormalized" viewing volume, shown in Figure 8.19, whose clipping planes are given below.

Figure 8.19 Transformation of canonical view volume into binormalized viewing volume. This parallel viewing volume simplifies both clipping and perspective projection.
top → y = 1
bottom → y = -1
right → x = 1
left → x = -1
front → z = 0
back → z = 1
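As a check on the perspective → parallel transformation, the sketch below assumes M acts on homogeneous points as (x, y, z, 1) → (x, y, Sz·z + Tz, z), with the division by w = z supplying the 1/z scale factors. Points on the canonical planes should then land on the binormalized planes listed above.

```python
# Apply the assumed perspective -> parallel mapping and divide by w = z.

def perspective_to_parallel(x, y, z, zh):
    """Map a canonical-view-volume point into the binormalized volume."""
    sz = 1.0 / (1.0 - zh)        # Sz of Equation 8.41
    tz = zh / (zh - 1.0)         # Tz of Equation 8.41
    w = z                        # homogeneous coordinate (z != 0)
    return (x / w, y / w, (sz * z + tz) / w)

# A point on the right plane (x = z) of the front face (z = Zh) should map
# to x' = 1, z' = 0; a point on the back plane (z = 1) should map to z' = 1.
```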
From this point on, the previous discussion of the normalized parallel view volume applies, and one can easily implement the extended Cohen-Sutherland algorithm. There are some interesting "side effects" of the perspective → parallel transformation. One of the most appealing is that this transformation has automatically accomplished the perspective transformation. After clipping, the only remaining task is to map the already projected objects onto the desired viewport.
This concludes our discussion of 3D clipping. The message
that emerges from this discussion is the importance of the correct representation
for the solution of difficult problems. By selecting the appropriate representation
for the clipping volume, much of the complexity introduced by the third
dimension is eliminated, and the clipping techniques introduced in the 2D
environment apply with only minor modifications.
Hidden Surface Algorithms
The final topic of this chapter is hidden surface removal. The route to the summit of a realistic graphical display of world scenes involves establishing a base camp and then building higher camps according to a systematic plan. The base camp of visual realism corresponds to the selection of the fundamental polyhedral representation for solids. Higher camps, each closer to the goal, include perspective projection and 3D clipping. However, the end product of these three processes is still only a wire frame model that may be sufficient for certain engineering applications but is far removed from a realistic rendering of the objects in the scene.
The next step towards our goal of visual realism is to recognize that the back side of a solid object is invisible and that objects situated behind nearer objects are either totally or partially obscured by these closer objects. Techniques for implementing these two properties of images of real scenes are classified as hidden surface removal algorithms. To emphasize that the objective is to identify the visible portion of the scene being projected, some authors refer to these techniques as visible surface algorithms. The invention and refinement of hidden surface removal algorithms was the dominant topic in the early years of computer graphics research and continues as an area of active investigation. Sutherland, Sproull, and Schumacker summarize the results of this early research in their comparative study of ten hidden surface algorithms.
Several considerations are important in the design of hidden
surface algorithms.
Sorting Considerations
Many hidden surface algorithms rely on sorting objects
according to their x and y extents (view plane coordinates) and most involve
sorting on z depth. Therefore, the efficiency of sorting algorithms plays
a central role in the efficiency of hidden surface removal algorithms. However,
the efficiency of sorting depends on whether the items to be sorted are
randomly distributed or nearly in order. This dependency of efficiency on
the number of items, N, for several standard sorting techniques is shown
in Table 8.1.
Table 8.1 Sorting efficiency for random and nearly ordered data

Sort Type      Random distribution   Nearly in order
Bubble sort    N^2                   N
Shell sort     N^(3/2)               N
Quick sort     N log N               N^2
Tree sort      N log N               N
Radix sort     N                     N
Since complex scenes involve sorting tens or hundreds of
thousands of polygons, conventional wisdom would indicate using the more
efficient but complex sorts like quick sort, tree sort, or radix sort. However,
the coherence of objects leads to databases of nearly sorted polygons or
with large blocks of nearly sorted polygons. In these cases, the simpler
sorts like selection, insertion, and even bubble sort may be just as efficient.
In such cases, the simplicity and clarity of the simpler sorts make them
the preferred choice. For additional information on sorting considerations,
see Sedgewick or Knuth.
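A small experiment illustrates the point behind Table 8.1: an early-exit bubble sort performs about N comparisons on nearly ordered data but about N^2/2 on random data. The function and variable names here are illustrative.

```python
# Count comparisons made by an early-exit bubble sort on various inputs.
import random

def bubble_sort_comparisons(data):
    """Bubble sort a copy of data; return the number of comparisons made."""
    a = list(data)
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # early exit: a full pass with no swaps
            break
    return comparisons

nearly = list(range(100))
nearly[10], nearly[11] = nearly[11], nearly[10]   # one out-of-order pair
shuffled = random.sample(range(100), 100)
# bubble_sort_comparisons(nearly) is about 2N, while
# bubble_sort_comparisons(shuffled) is typically close to N^2/2.
```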
Coherence Considerations
Coherence describes the extent to which the scene or the image of the scene is locally constant. As one moves from one polygon on a given object to another, it is very likely that color and shading will remain largely unchanged. Similarly, because of the physical coherence of solid objects, the z-depth of the surface cannot change abruptly as one moves from one polygon into an adjacent polygon. Such coherence properties are frequently exploited in hidden surface algorithms.
Several kinds of coherence have been used in hidden surface algorithms.
These can be classified as:
Machine Dependence of Algorithms
Sutherland et al. developed a taxonomy for classifying hidden surface algorithms along a spectrum ranging from pure object-space analysis to pure image-space analysis.
Object-space analysis <--------------------> Image-space analysis
(no machine dependence)                      (uses pixels, scan lines)
Object-space algorithms work with the geometry of objects to compute exactly the portions of each object and face that are visible. The accuracy of the results depends only on the precision of real numbers on the computing system used, not on any characteristics of the display system (e.g., pixel, scan line, or screen resolution). Object-space algorithms have the advantage of being "exact," although the computational load increases as the complexity of the scene increases.
Image-space algorithms rely on features such as the pixel or scan-line structure of the display device to limit the computation required for distinguishing visible from invisible surfaces. As a result, the computational load depends on the resolution of the screen and less strongly on the number of polygons. By limiting the computational accuracy to pixel or scan-line integers, image-space algorithms produce "good enough" resolution rather than exact results.
Hidden Surface Algorithms
In the following discussion we will make several simplifying
assumptions, all of which are based on sound physical arguments and none
of which reduce the generality of the algorithms in any significant way.
These assumptions include:
Let us consider in some detail the following major hidden surface algorithms that represent the most commonly used approaches for eliminating those surfaces which are invisible:
Back-Face Removal
The basic concept in back-face removal involves plotting only surfaces "facing the camera," since the back sides of objects are invisible. Note that this technique will automatically remove approximately fifty percent of the polygons in a scene viewed in parallel projection and somewhat more than fifty percent of polygons in perspective projections. The closer the objects are to the COP in perspective projection, the higher the percentage of polygons the back-face algorithm removes. A little reflection on the appearance of a cube in various parallel and perspective projections will convince the reader of the truth of these statements.
The back-face removal algorithm, however, applies only to objects considered individually; it does not take into consideration the "interaction" between objects. That is, many polygons surviving the back-face removal algorithm (i.e., "front faces") will still be obscured by front faces even closer to the viewer. So, while back-face removal is necessary for eliminating fifty percent or more of the invisible surfaces in a scene, it is not sufficient for eliminating all hidden surfaces.
Note that the back-face removal algorithm is built on the spatial coherence of objects. That is, the component surfaces of an object "hang together," defining a closed volume occupied by the object. Any projector ray from the COP through the viewing screen to the object pierces the object at two points, a visible front surface and an invisible back surface. The back-face removal algorithm is a technique for identifying which is which.
In Figure 8.20 we show the geometric interpretation of this algorithm. Each polygon is assumed to have vertices numbered in a clockwise fashion (as seen from outside the polyhedron). The normal, N, is easily generated as the cross product between any two successive edge vectors. Using a left-handed coordinate system convention, N represents the vector perpendicular to the face and points outward from the polyhedron. Edge vectors are defined as Vi+1 - Vi. Therefore,
N = (V2 - V1) × (V3 - V2) [8.42]
[left-handed system]
Figure 8.20 Geometry for back face test. The normal, N, to the (V1,V2,V3,V4) polygon is computed by taking the cross product of any two successive edge vectors. When N is dotted with P, a projector from any vertex of the polygon, the sign of the dot product indicates the visibility of the polygon.
The sign of the dot product of N and P (a
projector from any polygon vertex) indicates the visibility of the polygon.
N · P ≥ 0 → Visible [8.43]
N · P < 0 → Invisible
This algorithm can be summarized in the following steps:
Back-Face Removal Algorithm
Repeat for all polygons in the scene:
1. Number all polygon vertices, V1, V2, ... Vn, in clockwise fashion.
2. Compute the normal vector, N = (V2 - V1) × (V3 - V2).
3. Using a projector, P, from any polygon vertex, compute the dot product, Vis = N · P.
4. Test and plot if visible:
   IF Vis ≥ 0 THEN PLOT ELSE SKIP
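The steps above can be sketched in Python. The COP at the origin, the clockwise vertex winding as seen from outside, and taking P as the projector from a vertex toward the COP are assumptions chosen here to match the sign convention of Equation 8.43.

```python
# Back-face test of Equations 8.42-8.43 with small helper vector routines.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def is_visible(v1, v2, v3, cop=(0.0, 0.0, 0.0)):
    """True if the face containing v1, v2, v3 faces the COP."""
    n = cross(sub(v2, v1), sub(v3, v2))   # Equation 8.42
    p = sub(cop, v1)                      # projector from vertex toward COP
    return dot(n, p) >= 0                 # Equation 8.43
```

For a cube in front of the viewer, the face nearer the COP reports visible and the face on the far side reports invisible.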
To summarize the back-face removal algorithm, visible faces are those pointing towards the viewer at the COP. Since the normal to the face indicates which direction the polygon is facing, we can see a face when there is some component of the normal, N, along the direction of the projector ray, P. The projection of N along P is measured by the dot product, N · P. An identical argument says that invisible faces are those pointing away from the viewer. The dot product is negative for the back-face case because N and P lie in opposite hemispheres. The indeterminate case in which N · P = 0 is interpreted as visible since the edge of the face can be seen although the projected area is zero. The results of applying this algorithm to two primitive objects are shown in Figure 8.21.
Figure 8.21 Application of the back-face removal algorithm to two primitive objects, (a) and (b). Objects (c) and (d), in which only front faces are shown, help remove the "Necker illusion" apparent in the top two wire frame representations.
In conclusion, the back-face removal algorithm uses object coherence that applies to all solid objects modeled as convex polyhedra. It provides a simple, straightforward method of eliminating all self-hidden faces and reduces the size of the polygon database requiring additional hidden surface processing by fifty percent or more. Therefore, it should always be the first hidden surface algorithm to be applied.
Z-Buffer Algorithm
We turn now to the problem of how to recognize and correctly eliminate the invisible front faces, surviving the back-face removal algorithm, which are obscured by some object closer to the camera. The first method described is a pure image-space algorithm known as the z-buffer or depth-buffer algorithm.
The concept of the z-buffer algorithm is to create two parallel arrays, I(x,y) and Z(x,y), in which (x,y) are pixel coordinates, I(x,y) is the intensity (or color) at that screen location, and Z(x,y) is the dynamic z-buffer in which the smallest z coordinate of any polygon examined to date is stored. I(x,y) corresponds to the intensity (or color) of that closest polygon.
The Z(x,y) array is initialized to 1.0, corresponding to the back plane of the normalized coordinate system, and the corresponding I(x,y) array is initialized to some background color. The first polygon is then projected onto the viewing screen and the Z(x,y) and I(x,y) arrays loaded with the depth corresponding to each (x,y) pixel and the color of the polygon. Then the next polygon is projected and the z2 depth computed for each (x,y) pixel in its projection. As each z2 is computed, it is compared to the existing Z(x,y) buffer, and, if z2 < Z(x,y), the element Z(x,y) is replaced by z2 and the corresponding I(x,y) is replaced by I2, the color of the second polygon. This process is repeated for all polygons. The resulting I(x,y) is the correct image, accurate to the nearest pixel, with hidden surfaces removed.
Another way to understand the depth-buffer algorithm is to consider it as a pixel-wise bubble sort of the z-depth of all polygons pierced by the projector ray to the pixel. As each smaller value of z is found for pixel (x,y), it is used to overwrite Z(x,y) and to update I(x,y) to the intensity or color of the corresponding polygon. This interpretation is illustrated in Figure 8.22.
Figure 8.22 Z-buffer algorithm for hidden surface removal. Two arrays, Z(x,y) and I(x,y), store the depth and color of the polygon closest to the xy viewing screen. These dynamic arrays are updated by scanning all polygons of the scene. The final image is contained in I(x,y).
The algorithm can be summarized as:
Z-Buffer Algorithm
1. For each pixel, (x,y), of the viewport, establish a Z-buffer array and an intensity/color array and initialize.
1.1 For all x and y
1.1.1 Z(x,y) = 1.0 [Back plane of clipping volume]
1.1.2 I(x,y) = Bkg [Background color, e.g., black]
2. For all polygons of the scene projected onto the viewport
2.1 For each (x,y) of projected polygon i, compute zi(x,y)
2.2 IF zi(x,y) < Z(x,y) THEN
Z(x,y) = zi(x,y)
I(x,y) = Ii [Ii = intensity of polygon i]
3. Plot I(x,y) [Image with hidden surfaces removed]
The calculation of depth z as a function of x and y is performed by using the equation of the plane in which the polygon lies, Ax + By + Cz + D = 0, and solving for z:
z = -(Ax + By + D)/C [8.44]
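A minimal sketch of the z-buffer loop follows, using axis-aligned rectangles of constant depth so the scan conversion stays trivial; a real implementation would rasterize arbitrary polygons and evaluate z from the plane equation. The data layout and names are illustrative.

```python
# Minimal z-buffer: polygons are (x0, y0, x1, y1, depth, color) rectangles
# in pixel coordinates, with half-open extents.

WIDTH, HEIGHT = 8, 8
BKG = "bkg"

def render(polygons):
    z = [[1.0] * WIDTH for _ in range(HEIGHT)]      # back plane, step 1.1.1
    image = [[BKG] * WIDTH for _ in range(HEIGHT)]  # background, step 1.1.2
    for (x0, y0, x1, y1, depth, color) in polygons:
        for yy in range(y0, y1):
            for xx in range(x0, x1):
                if depth < z[yy][xx]:               # closer than current
                    z[yy][xx] = depth               # step 2.2
                    image[yy][xx] = color
    return image

# Two overlapping rectangles: the nearer (depth 0.3) wins where they overlap.
img = render([(0, 0, 6, 6, 0.7, "red"), (3, 3, 8, 8, 0.3, "blue")])
```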
Note that the z-buffer algorithm is strictly an image-space algorithm, using the computed z-depth as the key for a bubble sort. It has the advantage of computing everything necessary for displaying correct visible surfaces, but only to the pixel resolution of the display screen. Since the screen has a finite pixel resolution, more complexity in the scene (i.e., more polygons) can occur only through a decrease in the average polygon size. Thus, the efficiency of the z-buffer algorithm is relatively insensitive to image complexity.
The algorithm does require, however, two arrays of numbers, at least one of which must be real. If this requirement taxes the computing resources available, the algorithm may be decomposed into individual scan-line arrays and performed one scan line at a time.
Scan-Line Algorithm
The scan-line algorithm is another image-space algorithm. It processes the image one scan line at a time rather than one pixel at a time. By using area coherence of the polygon, the processing efficiency is improved over the pixel-oriented method.
Using an active edge table, the scan-line algorithm keeps track of where the projection beam is at any given time during the scan-line sweep. When it enters the projection of a polygon, an IN flag goes on, and the beam switches from the background color to the color of the polygon. After the beam leaves the polygon's edge, the color switches back to background color. To this point, no depth information need be calculated at all. However, when the scan-line beam finds itself in two or more polygons, it becomes necessary to perform a z-depth sort and select the color of the nearest polygon as the painting color. This concept is shown in Figure 8.23.
Accurate bookkeeping is very important for the scan-line algorithm. We assume the scene is defined by at least a polygon table containing the (A, B, C, D) coefficients of the plane of each polygon, intensity/color information, and pointers to an edge table specifying the bounding lines of the polygon. The edge table contains the coordinates of the two end points, pointers to the polygon table to indicate which polygons the edge bounds, and the inverse slope of the xy projection of the line for use with scan-line algorithms. In addition to these two standard data structures, the scan-line algorithm requires an active edge list that keeps track of which edges a given scan line intersects during its sweep. The active edge list should be sorted in order of increasing x at the point of intersection with the scan line. The active edge list is dynamic, growing and shrinking as the scan line progresses down the screen.
Figure 8.23 Scan-line hidden surface algorithm. Scan line SL1 must deal only with the left-hand object. SL2 must plot both objects, but there is no depth conflict. SL3 must resolve the relative z-depth of both objects in the region between edges E5 and E3. The right-hand object appears closer.
In Figure 8.23, the active edge list for scan line SL1 contains edges E1 and E2. From the left edge of the viewport to edge E1, the beam paints the background color. At edge E1, the IN flag goes up for the left-hand polygon, and the beam switches to its color until it crosses edge E2, at which point the IN flag goes down and the color returns to background.
For scan line SL2, the active edge list contains E1, E3, E5, and E6. The IN flag goes up and down twice in sequence during this scan. Each time it goes up, pointers identify the appropriate polygon and look up the color to use in painting the polygon.
For scan line SL3, the active edge list contains the same edges as for SL2, but the order is altered, namely E1, E5, E3, E6. Now the question of relative z-depth first appears. The IN flag goes up once when we cross E1 and again when we cross E5, indicating that the projector is piercing two polygons. Now the coefficients of each plane and the (x,y) of the E5 edge are used to compute the depth of both planes. In the example shown the z-depth of the right-hand plane was smaller, indicating it is closer to the screen. Therefore the painting color switches to the right-hand polygon color, which it keeps until edge E6.
Note that the technique is readily extended to three or more overlapping polygons and that the relative depths of overlapping polygons must be calculated only when the IN flag goes up for a new polygon. Since this occurrence is far less frequent than the number of pixels per scan line, the scan-line algorithm is more computationally efficient than the z-buffer algorithm.
The scan-line hidden surface removal algorithm can be summarized as:
Scan-Line Algorithm
1. Establish the necessary data structures.
1.1 Polygon table with coefficients, color, and edge pointers.
1.2 Edge table with line end points, inverse slope, and polygon pointers.
1.3 Active edge list, sorted in order of increasing x.
1.4 An IN flag for each polygon. Value = on or off.
2. Repeat for all scan lines:
2.1 Update active edge list by sorting edge table against scan line y value.
2.2 Scan across, using BKG color, until an IN flag goes on.
2.3 When 1 polygon flag is on for surface S1, enter intensity (color) I1 into refresh buffer.
2.4 When 2 or more surface flags are on, do depth sort and use intensity In for surface n with minimum zdepth.
2.5 Use coherence of planes to repeat for next scan line.
The scan-line algorithm for hidden surface removal is well designed to take advantage of the area coherence of polygons. As long as the active edge list remains constant from one scan to the next, the relative structure and orientation of the polygons painted during that scan does not change. This means that we can "remember" the relative position of overlapping polygons and need not recompute the z-depth when two or more IN flags go on. By taking advantage of this coherence we save a great deal of computation.
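One scan line of the algorithm can be sketched as follows, with each polygon reduced to a constant-depth span for brevity; the names and the span representation are illustrative simplifications of the active edge list machinery described above.

```python
# Paint a single scan line: split it at span boundaries and, on each
# interval, use the color of the nearest covering polygon (the z-depth
# sort performed when two or more IN flags are on).

def scan_line(spans, width, bkg="bkg"):
    """spans: list of (x_enter, x_exit, depth, color) for one scan line."""
    # Boundary x values, sorted left to right (the "active edge" positions).
    xs = sorted({0, width} | {x for s in spans for x in (s[0], s[1])})
    out = []
    for x0, x1 in zip(xs, xs[1:]):
        covering = [s for s in spans if s[0] <= x0 and x1 <= s[1]]
        if covering:
            _, _, _, color = min(covering, key=lambda s: s[2])  # nearest
        else:
            color = bkg
        out.append((x0, x1, color))
    return out

# Analogue of scan line SL3 in Figure 8.23: two overlapping spans, with the
# right-hand one closer (smaller depth).
line = scan_line([(2, 6, 0.5, "left"), (4, 9, 0.3, "right")], 10)
```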
Painter's Algorithm
The two previous algorithms have used image-space processing, which relies on the finite pixel and scan-line display resolution. The painter's algorithm is based purely on object-space sorting. This method, sometimes referred to as the depth-sorting algorithm, is conceptually quite intuitive but a bit difficult to implement in practice.
The concept is to map the objects of our scene from the world model to the screen somewhat like an artist creating an oil painting. First she paints the entire canvas with a background color. Next, she adds the more distant objects such as mountains, fields, and trees. Finally, she creates the foreground with "near" objects to complete the painting. Our approach will be identical. First we sort the polygons according to their z-depth and then paint them to the screen, starting with the far faces and finishing with the near faces.
A first approximation to the painter's algorithm is the
following:
Simple Painter's Algorithm
1. Sort all polygons by zdepth, largest z first.
2. Scanconvert (paint) polygons in this order.
Figure 8.24 Tests applied by the painter's algorithm in cases (a)-(d).
Problems immediately become apparent with this algorithm. First, we must define what is meant by the "z-depth" of a polygon which has a number of vertices. A reasonable convention is to sort the z coordinates of the vertices of the polygon and pick the maximum z value as the key for polygon sorting. With this convention, the simple painter's algorithm would be adequate for solving the hidden surface problem for faces F and F' well separated in z as shown in Figure 8.24 (a). However, most scenes involve objects that overlap in their z dimensions. These may or may not be rendered correctly by the simple painter's algorithm. To handle such overlaps, we have to refine the algorithm with several additional tests.
The refined painter's algorithm may then be expressed as a type of bubble sort of the z-depth list of faces produced by the simple algorithm. The deepest face, F, is compared to the next deepest face, F', to see if there is any z overlap. If there is none, F is painted to the screen. If there is overlap, F and F' are subject to a series of tests to see if their order should be interchanged. If F passes any of these tests (described below), it qualifies as behind F' and is painted to the screen. If it fails all the additional tests, F and F' are interchanged and the sorting resumes with the new F. The overlap tests must be carried out between F and all overlapping faces. Once F passes any of these tests with all overlapping faces, it is guaranteed to be behind all overlapping surfaces and is plotted.
The additional tests are spelled out in the following algorithm
summary, and three of the tests are illustrated in Figure 8.24 (b), (c),
and (d).
Refined Painter's Algorithm
1. Sort polygon faces by largest z-value of each, largest first.
2. Repeat for all surfaces:
2.1 Test surface of greatest depth, F, with all others.
2.1.1 If no overlap in z, PLOT (paint).
2.1.2 If overlap with surface F', then make 5 tests.
Plot F if it passes any of the tests.
1) X-extents of F, F' do not overlap.
2) Y-extents of F, F' do not overlap.
3) If F is wholly on that side of the plane of F' facing away from the viewpoint, then F will not obscure any part of F' and F' will overwrite F. (i.e., F is "outside" of F')
4) If F' is wholly on that side of the plane of F facing the viewpoint, again, F will plot first and be correctly overwritten when F' plots. (i.e., F' is "inside" of F)
N.B.: Tests (3) and (4) are not the same.
5) The projections of polygons F and F' do not overlap. [Must do an edge compare test.]
2.1.3 If F fails all five tests, assume F obscures F'.
Therefore, switch F and F'.
Then F' is plotted first and F later, properly overwriting F'.
2.1.3.1 If a switch does occur, F' must then be checked against all "lower" polygons.
3. To handle cyclic and intersecting planes (which could cause infinite loops):
3.1 Flag polygons that are switched to last place.
3.2 They may not be moved again.
3.3 Clip intersecting or cyclic planes apart.
3.3.1 Treat one plane as clip plane.
3.3.2 Snip second plane into two new planes, A,B.
3.3.3 Add A & B to polygon list.
3.3.4 Continue.
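The core of the method, the depth sort of the simple painter's algorithm, can be sketched as follows. Painting is modeled as overwriting entries of a pixel dictionary; the face representation and names are illustrative.

```python
# Simple painter's algorithm: sort faces on their maximum vertex z (the
# sorting key convention discussed above) and paint back to front.

def paint(faces):
    """faces: list of (vertices, color, covered_pixels) tuples."""
    frame = {}
    # Deepest face first: sort on the largest z among each face's vertices.
    for verts, color, pixels in sorted(
            faces, key=lambda f: max(v[2] for v in f[0]), reverse=True):
        for px in pixels:
            frame[px] = color      # nearer faces overwrite farther ones
    return frame

# A near face (z = 0.2) painted after a far face (z = 0.9) overwrites it
# wherever they cover the same pixel.
frame = paint([
    ([(0, 0, 0.2)], "near", [(1, 0)]),
    ([(0, 0, 0.9), (1, 0, 0.9)], "far", [(0, 0), (1, 0)]),
])
```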
Warnock's Area Subdivision Algorithm
John Warnock, inventor of PostScript, proposed an elegant divide-and-conquer hidden surface algorithm. The algorithm relies on the area coherence of polygons to resolve the visibility of many polygons in image space. Depth sorting is simplified and performed only in those cases involving image-space overlap.
Warnock's algorithm classifies polygons with respect to the current viewing window into trivial or nontrivial cases. Trivial cases are easily handled. For nontrivial cases, the current viewing window is recursively divided into four equal subwindows, each of which is then used for reclassifying the remaining polygons. This recursive procedure is continued until all polygons are trivially classified or until the current window reaches the pixel resolution of the screen. At that point the algorithm reverts to a simple z-depth sort of the intersected polygons, and the pixel color becomes that of the polygon closest to the viewing screen.
Classification Scheme
All polygons are readily classified with respect to the current window into the four categories illustrated in Figure 8.25.
The classification scheme is used to identify certain trivial
cases that are easily handled. These "easy choice tests" and the
resulting actions include:
1. For polygons outside (disjoint) from the window,
set the color/intensity, I(w), of the window equal to the background
color, BKG.
2. There is only one inside or intersecting
polygon. Fill the window area with BKG, then scanconvert (paint) the polygon
with I(poly), the polygon color.
3. There is only one surrounding polygon. Fill
the window with I(poly).
4. If more than one polygon intersects, is inside, or surrounds, and at least one is a surrounding polygon:
4.1 Is one surrounding polygon, Ps, in front of all others? If so, paint the window with I(Ps). The test is: calculate the z-depths of each polygon plane at the corners of the current window. If all four z-depths of the Ps plane are smaller than any z-depth of the other polygons in the window, then Ps is in front.
Figure 8.25 Classification scheme for Warnock hidden surface algorithm. The square is the current window used to classify polygons into the categories shown.
If the easy choice tests do not classify the polygon configuration into one of these four trivial action cases, the algorithm recurs by dividing the current window into four equal subwindows. Note that test 4.1 is a very easy and clean test for identifying a surrounding up-front polygon. That is, since the polygon's plane, not just the polygon itself, must be cleanly separated in z, some Ps polygons fail the test because they overlap in z with other polygon planes. In this instance, cases will frequently be missed in which the back polygons themselves should be obscured by Ps but fail the plane separation test. However, rather than revert to the complex geometrical tests of the painter's algorithm, Warnock's algorithm simply makes the easy choices and invokes recursion for nontrivial cases. Figure 8.26 shows the application of the algorithm with four levels of recursion applied to the complete image and eight levels along a selected boundary.
A noteworthy feature of Warnock's algorithm concerns how the divide-and-conquer area subdivision preserves area coherence. That is, all polygons classified as surrounding and outside retain this classification with respect to all subwindows generated by recursion. This aspect of the algorithm is the basis for its efficiency. The algorithm may be classified as a radix-four quick sort. Windows of 1024 × 1024 pixels may be resolved to the single pixel level with only ten recursive calls of the algorithm.
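The classification and subdivision steps can be sketched for axis-aligned bounding rectangles; this is a simplification, since real polygons require polygon-against-rectangle tests, and the names are illustrative.

```python
# Warnock-style window classification and the recursion's subdivision step,
# using (x0, y0, x1, y1) rectangles for both polygons and windows.

def classify_polygon(poly, win):
    """Return 'disjoint', 'surrounding', 'inside', or 'intersecting'."""
    px0, py0, px1, py1 = poly
    wx0, wy0, wx1, wy1 = win
    if px1 <= wx0 or px0 >= wx1 or py1 <= wy0 or py0 >= wy1:
        return "disjoint"         # no overlap with the window
    if px0 <= wx0 and py0 <= wy0 and px1 >= wx1 and py1 >= wy1:
        return "surrounding"      # window wholly inside the polygon
    if px0 >= wx0 and py0 >= wy0 and px1 <= wx1 and py1 <= wy1:
        return "inside"           # polygon wholly inside the window
    return "intersecting"

def subdivide(win):
    """Split the current window into four equal subwindows."""
    x0, y0, x1, y1 = win
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    return [(x0, y0, xm, ym), (xm, y0, x1, ym),
            (x0, ym, xm, y1), (xm, ym, x1, y1)]
```

Note that a polygon classified as disjoint or surrounding keeps that classification for every subwindow of the current window, which is the area coherence the recursion exploits.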
While the original Warnock algorithm had the advantages
of elegance and simplicity, the performance of the area subdivision technique
can be improved with alternative subdivision strategies. Some of these include:
Figure 8.26 Warnock's area subdivision algorithm. (a) The whole image at four levels of recursion. (b) A selected portion with one eightlevel refinement shown.
Conclusions
This chapter explores the expressive power achieved by moving from simple polyhedral representations to more abstract mathematical formulations. Bézier's parametric cubic surfaces were shown to be a particularly flexible and powerful tool for modeling smooth curved surfaces. The recently introduced superquadric representation provides an even more elegant tool for modeling primitive solids in an infinite variety of shapes. Once such primitive solids are available, the techniques of constructive solid geometry (CSG) offer methods of constructing complex objects by the Boolean operations of union, intersection, and difference applied to the geometric primitives.
After introducing these new representations, we returned to the task of implementation. The next two implementation issues (following projection, introduced in the last chapter) are those of 3D clipping and hidden surface removal. The Cohen-Sutherland clipping algorithm is shown to be easily extended from 2D to 3D using the techniques of homogeneous coordinate transformations, normalized and canonical view volumes, and the perspective → parallel volume transformation. Finally, techniques for hidden surface removal were introduced, starting with the algorithm for back-face removal and proceeding through four standard hidden surface algorithms. Implementation of an effective visible surface algorithm is a large and essential step towards the goal of visual realism of synthetic images.