Chapter 9

Three-Dimensional Graphics -

Visual Realism and Color

M. Firebaugh

© Wm. C. Brown Communications, Inc.

If the goal in shading a computer-synthesized image is to simulate a real physical object, then the shading model should in some way imitate real physical shading situations.

Bui-Tuong Phong




 

The goal of visual realism in computer graphics is attained most directly by a systematic refinement of our techniques for representing scenes in computer memory and displaying them on some output device. So far, we have studied the polyhedral representation of objects and various alternatives based on parametric representations. Our first step toward visually realistic rendering was achieved through the application of visible surface techniques to the polyhedral representation. Visible surface techniques correctly take into account the effects of an object's geometry and the geometrical relationships of multiple objects on the visibility of all objects in a scene. They say nothing about the shading or color with which an object is rendered. These subjects are central to realistic rendering and are the topics of this chapter.

The correct rendering of a scene requires not only an accurate representation of the objects composing the scene but also an accurate representation of the visual environment in which the objects exist. The visual environment includes such factors as the distribution and spectral intensities of all light sources, the optical properties of all objects in the environment (opacity, color, reflectance, and surface texture), and the optical properties of the medium carrying light from the sources to the objects and eventually to the observer. The extent to which rendering programs take all these effects into account determines the quality of visual realism achieved.

As in several previous case studies, models for shading and color range from concrete, overly simplistic techniques to physically correct but mathematically complex approaches. We introduce a simple shading model in this chapter and consider the more complex but physically correct models in the next. The simple model is incomplete because it lumps many secondary effects into simple categories such as ambient light intensity. The simple model is a combination of physics and heuristic approximations of physical effects. It represents a compromise between visual realism and computational cost.

As a final note, the compromise between visual realism and computational costs involves interactions between representation schemes, hidden surface algorithms, and shading models. Polyhedral representations simplify the hidden surface removal task but complicate the task of computing the shading of curved surfaces. Parametric representations simplify the task of computing surface shading but complicate the task of hidden surface removal. The interesting result demonstrated in this chapter is that, through the use of clever shading algorithms, the relatively primitive polyhedral representation can achieve a visual realism approaching that of the parametric representation.

 

A Simple Reflection Model

In 1975, Bui-Tuong Phong proposed a simple reflection model to improve upon the shading models in use at that time. In addition to the simple shading model, Phong proposed a new algorithm to average the shading on polygonal representations of smooth surfaces that recovered the smooth appearance of the original surface. The Phong algorithm provided a significant improvement over Gouraud shading, an averaging algorithm which greatly enhanced the smooth appearance of polygonal surfaces. The success of the Phong shading model rests on what we might call the principle of model authenticity. This principle can be summarized as:

In order to successfully simulate some process of nature, any model must recognize and successfully simulate the natural laws by which the process occurs.

This principle is suggested in Phong's quotation introducing the chapter and serves as the basis for David Marr's successful theory of human vision. It is the guiding principle behind the approaches to visual realism of this chapter and the next two.

 

Sources and Surfaces

Before we investigate the Gouraud and Phong models in detail, it is helpful to define some of the characteristics of sources and surfaces.

Sources

The sources of illumination for a scene are broadly classified as either point sources or extended sources. In fact, there are no ideal point sources in nature -- the concept logically involves an infinite energy density. In practice, we consider stars and objects such as fireflies and auto headlights viewed from a distance to be good approximations of point sources. A good heuristic test for distinguishing between point and extended sources is to answer the question: Can I distinguish the size of the source by looking at it? If the answer is "no," then it is a point source; if "yes," then it is an extended source.

Note that when this definition is applied rigidly to the sun, it classifies the sun as an extended source. This is the correct classification, as you can verify by noting the fuzzy shadows cast by sharply defined objects in sunlight. Since the disk of the sun subtends an angle of approximately 0.5 degree in diameter, rays from different parts of the sun can diverge by up to one half degree. However, in practice, this divergence of the sun's rays from parallelism is not detectable in most renderings of computer graphics scenes, so the sun is usually classified by graphics texts as a point source.

Extended sources are much more common in practice than point sources. Examples include fluorescent lights and windows. Extended sources all exhibit a common property which identifies them as extended (or distributed): rays striking the observer from different parts of the source are measurably divergent. Rays from a point source which strike a point on the image are, by definition, all parallel. The equivalence of the "size" and "parallelism" tests for point/extended sources is illustrated in Figure 9.1.

 


(a)

(b)

(c)

Figure 9.1

Range of sources from point to extended.


 

Point sources can be located anywhere, of course, even in the midst of a scene being rendered. In circumstances in which the distance from the source to the scene is comparable with the dimensions of the scene itself, rays from the source will have considerable divergence. In fact, rays from a source near a polygon will diverge over the surface of the polygon. That is, the angle of incident rays and their distance from source to surface will vary significantly as one scans across a single polygon. This greatly complicates the computation of the amount of light striking the polygon at a given point.

Since much of human history was spent outdoors with the sun and moon as the primary sources of illumination, we have a deeply rooted instinct for interpreting lighting from an approximate point source approximately at infinity as the most natural. This fact, combined with the computational simplification achieved, leads us to the first heuristic of our simple model.



Heuristic 1: All point sources are located at infinity.


This compromise achieves considerable simplification at a relatively small cost in loss of visual realism.

 

Ambient Lighting

The observant reader may already feel uneasy over the difficulties involved in computing the illumination of a given polygon surface from extended sources. Problems such as these provided the motivation for inventing calculus. To calculate the intensity of light striking any point on the polygon from a distributed source involves integrating the intensity distribution over the surface of the source and correctly taking into account the dependence of the intensity upon distance from source to polygon surface. This is a difficult and computationally intensive task.

Upon further consideration the curious reader may be overwhelmed to discover that such source-to-object integrations are only the tip of the iceberg. The core of this discovery is the recognition that each surface illuminated by all point and extended sources becomes, itself, a source of light for illumination of all other line-of-sight surfaces of the scene. Each of these surfaces, in turn, re-reflects light to other surfaces, including the original one, thus achieving an "infinite regression" of reflections and illumination. The partial history of the scattering of one ray is shown in Figure 9.2. Both ray tracing and radiosity algorithms are attempts to recognize and solve this real, physical, and enormously complex problem.

 

Figure 9.2

Successive scattering of a single ray from a point source. As light from the point source illuminates the top surface of the cube, it becomes a secondary source illuminating all other line-of-sight surfaces. At each scattering a portion of the ray's energy is absorbed, leaving a reduced-intensity reflected ray.


 

In the simple illumination model, this complexity is "swept under the rug" by defining all such secondary reflection effects as ambient illumination. This heuristic states that all secondary scattering and background illumination effects can be represented by a uniform, isotropic ambient lighting (i.e., the same in all directions).


Heuristic 2: All illumination other than that from point sources comes from isotropic ambient lighting.



The ambient lighting heuristic simplifies the task of accounting for diffuse lighting and secondary scattering effects at a relatively low cost in loss of visual realism. It can be thought of as a gross averaging or integration of these effects for a scene with a random distribution and orientation of objects. As we shall see shortly, it reduces computation of illumination due to scattered light to a trivial additive factor.

 

Surfaces

Light striking a surface can undergo a variety of processes depending on the optical properties of the material. In the most general case these processes include:

  • reflection, either specular (mirror-like) or diffuse,
  • transmission through the object, either directly, by refraction, or by internal scattering,
  • absorption, in which the light energy is converted to heat.

Transparent objects can be further classified as translucent or clear. For translucent objects, the light is scattered internally and gives the object a cloudy or "milky" appearance. Clear objects transmit the ray in a straight line through the object with the angular relationship at each surface being determined by Snell's law. This variety of processes is illustrated graphically in Figure 9.3.

 

Figure 9.3

Processes which may occur when a light ray strikes a surface.

 

Figure 9.3 illustrates a number of important relationships. Three of these are based on the conservation of energy:

Ii = IR + It 					[9.1]

(law of energy conservation),

IR = Is + Id

(reflected light may be specular or diffuse),

It = Isc + It′ + Ic

(transmitted light may be scattered, refracted, or absorbed)

where

Ii = Integrated intensity (power density) of the incident beam,

IR = Integrated intensity of all reflected rays, both specular and diffuse,

It = Integrated intensity of all rays transmitted through the surface,

Is = Integrated intensity of specularly reflected rays (mirror reflection),

Id = Integrated intensity of diffusely reflected rays,

Isc = Integrated intensity of all primary scattered rays,

It′ = Integrated intensity of all refracted rays,

Ic = Integrated power density of all absorbed light energy (converted to heat), both internally and on the surface.

 

The three conservation-of-energy relations given in [9.1] state that energy never simply disappears. The light energy striking the surface of an object shows up either as transformed light energy (reflected, transmitted, or scattered) or as heat energy (absorbed).

Snell's law of refraction gives us an additional equation governing the relationship of the angles that the incident and refracted rays make with the normal to the surface, N. For a ray incident in medium 1 and refracted in medium 2, Snell's law may be stated:

 

n1 sin θ1 = n2 sin θ2 			[9.2]

(Snell's law of refraction),

where

n1 = Index of refraction of medium 1 (~1.0 for air),

n2 = Index of refraction of medium 2 (~1.5 for most glass),

θ1 = Angle of incident ray with respect to the surface normal,

θ2 = Angle of refracted ray with respect to the surface normal.
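
As a quick numerical check of [9.2], the following short Python sketch computes the refracted angle, with the index values quoted above as defaults; the function name and conventions are ours, not part of any standard library.

import math

def refraction_angle(theta1_deg, n1=1.0, n2=1.5):
    # Snell's law [9.2]: n1 sin(theta1) = n2 sin(theta2).
    # Defaults follow the values quoted above (air -> glass).
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection: no refracted ray exists
    return math.degrees(math.asin(s))

print(refraction_angle(30.0))  # ~19.47 degrees: the ray bends toward the normal

For a ray entering glass from air at 30°, the refracted angle is about 19.5°, illustrating the bending of the ray toward the normal in the denser medium.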

Finally, the law of specular reflection for mirror-like surfaces yields the equation:

θi = θr 				[9.3]

(perfect specular reflection).

 

Any physically correct illumination model must take all of these effects into account. In actual practice, computation of all these effects in accordance with the physical laws [9.1]-[9.3] is an extremely time-consuming task. Various illumination models attempt to simplify these calculations with heuristic approximations.

The simple illumination model makes several key assumptions which limit the range of applicability of the model but also greatly simplify the computations. First, it assumes that all objects are opaque. That is, it ignores transparent objects and assumes that It = Ic and that all absorption takes place on the surface. Therefore, we can state heuristic 3 as:



Heuristic 3: All objects of the scene are opaque.


 

You might run your own in situ experiment to determine how much generality is lost by this assumption. Look about you, and note how many objects can be classified as transparent and how many opaque. Only objects constructed from glass, clear plastics, or liquids can qualify as transparent. Again, since the trigonometric functions of Snell's law are computationally expensive, this heuristic costs us little and buys us much.

A final heuristic in the simple illumination model deals with the unrealistic mathematical abstractions of "point source" and "perfect specular reflection" given in Equation 9.3. The heuristic, first proposed by Warnock, replaces the equality of [9.3] with the approximation θi ≈ θr. The two problems which Warnock's heuristic effectively solves are:

  • A point source reflected by a perfect mirror would produce a highlight visible only along the single direction θ = θr, so highlights would almost never appear in a rendered scene.
  • No real surface is a perfect mirror; real glossy surfaces spread the reflected beam over a range of angles about θr.

 

The theoretical mathematical description of the angular dependence of specular reflection of a point source by a perfect mirror is given by:

Ith(θ) = ks Ii δ(θ - θr) 	 [9.4]

 

where

ks = the fraction of the incident beam which is reflected,

δ(θ - θr) = 1 when θ = θr; 0 otherwise.

Warnock's heuristic broadens the distribution function by using a function of cos(θ - θr) raised to the power n. That is,

Iw(θ) = ks Ii cos^n(θ - θr)	 [9.5]

where

n = 1 → 100 depending on how shiny the surface is

(n = 1 for a matte surface;

n = 100 for a highly reflective surface).
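
To make the effect of the exponent concrete, the following short Python sketch is one possible transcription of [9.5]; the names are ours.

import math

def specular_intensity(ks, Ii, dtheta_deg, n):
    # Warnock's broadened specular term, Iw = ks * Ii * cos^n(dtheta)  [9.5]
    # dtheta_deg is the deviation (in degrees) from the ideal mirror angle.
    c = math.cos(math.radians(dtheta_deg))
    return ks * Ii * max(0.0, c) ** n  # clamp so the term vanishes beyond 90 degrees

# The highlight narrows as n grows: 10 degrees off the mirror angle, a matte
# surface (n = 1) still reflects ~98% of the peak, n = 10 reflects ~86%, and
# a glossy surface (n = 100) reflects only ~22%.
for n in (1, 10, 100):
    print(n, round(specular_intensity(1.0, 1.0, 10.0, n), 3))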

 

We summarize this heuristic as:



Heuristic 4: The specular reflection intensity is proportional to cos(θ - θr) raised to some power n which depends on the glossiness of the surface.


 

One interesting property of specular reflection is that it is independent of the color of the surface. The color of the specularly reflected beam, Is, is the same as that of the incident beam. If this seems difficult to believe, you can verify it with a simple experiment. Simply hold up a book with a glossy cover at the correct orientation so that you can observe the reflection of some quasi-point source with θi = θr. Using different colored objects as the specular reflecting surface, you will always get the same color for the highlight -- namely, the color of the source light.

 

Color

One of the most important characteristics of a surface is its color. The difference in color exhibited by different objects illuminated by the same light source is due to a difference in the absorption coefficient, ka, for different wavelengths, λ, of the incident light. Since in our simple illumination model all light is either absorbed or reflected, the reflection coefficient, kr, is also λ-dependent because of energy conservation:

ka(λ) + kr(λ) = 1 			 [9.6]

where

ka(λ) = fraction of incident light of wavelength λ which is absorbed,

kr(λ) = fraction of incident light of wavelength λ which is reflected.

So, for instance, when sunlight (approximately white light) strikes a green object, the non-green portions of the spectrum are absorbed and the green wavelengths are reflected.

The approximation used in the simple illumination model is to consider the color of an object as fully described by the reflectivity coefficient, kr(λ). Further, it is assumed that the color of specularly reflected light is the color of the light source rather than that of the surface. Finally, it is assumed that the spectral distribution of ambient light is that of white light. These approximations can be summarized by the following heuristic:



Heuristic 5: The color response of a surface element is given by

  • The reflection coefficient kr(λ) applied to incident point source and ambient light,
  • Specularly reflected light which retains the color of the point source,
  • Ambient light which is white.


 

Note that this heuristic describes fairly accurately the behavior of specularly reflected light but grossly oversimplifies several other important aspects of real optics. First, by using only kr(λ) to describe the reflection behavior of a surface, it ignores the difference in color of point sources in computing the diffuse reflection. Second, by assuming that the ambient background light is white, it ignores the interaction of proximate colored surfaces on each other -- the "bleeding" of color from one surface onto nearby surfaces. These faults of the simple illumination model are corrected by the more realistic radiosity model.

 

The Simple Illumination Model

Now that we have defined some of the terms used in describing illumination and some useful heuristics to reduce the complexity of modeling the behavior of real light precisely, we can begin to build up the simple shading model. Let's define a term, SHADE, which will represent the color with which we shade a given polygon of our polyhedral scene and incrementally build the model by adding successive terms corresponding to each physical process.

Combining the ambient lighting heuristic with the simple color model heuristic, we can describe the ambient light contribution to the shading term as:

SHADE = kr(λ) Ia 				[9.7]

(ambient term),

where

kr(λ) = color-dependent reflection coefficient of the surface,

Ia = intensity of ambient light striking the surface.

The second term in the simple illumination model is the diffuse reflection of light from point sources. In Figure 9.4 we show the important parameters for this process. Simple geometric considerations lead to Lambert's law of cosines, which states that the intensity of light reflected from a perfect diffusing surface is proportional to the cosine of the angle, θ, between the light source direction, L, and the normal to the surface, N. Since cos θ = L·N for unit vectors L and N, this can be expressed mathematically as the second term in the shading equation:

SHADE = kr(λ) Ia + kr(λ) Ii L·N 	[9.8]

(adding diffuse term)

where

Ii = intensity of incident light source i measured at the surface.

 

 

Figure 9.4

Diffuse reflection from a point source. Lambert's law says that the intensity, Id, of diffusely reflected light is proportional to cos θ = L·N, the cosine of the angle between the surface normal and the vector, L, pointing towards the light source. Note that the intensity Id is isotropic -- the same in all directions.


 

Note that both the ambient term and the diffuse reflection term are independent of the viewing angle. At first this may seem a bit surprising since one might expect intuitively that an illuminated surface viewed along the direction anti-parallel to the normal would appear brighter than one viewed along a glancing angle. However, since truly diffuse surfaces emit an equal intensity in all directions, this is not the case.

An interesting geometric cancellation takes place when a surface is viewed at a glancing angle. Although each unit of surface area appears reduced in size (and hence emits less light toward the viewer), a given solid angle of viewing area includes more surface area units as the glancing angle φ increases towards 90° from the normal. The shrinkage in apparent surface area is proportional to cos φ, while the number of units of surface area included in the same viewing solid angle grows as 1/cos φ, so the two effects precisely cancel. Thus, the apparent intensity of a diffusely reflecting surface appears constant from any viewing angle.

The dependence of light intensity on distance, r, from a point source is proportional to r^-2. Since we are assuming that r = infinity for all point sources, the intensity of illumination from all point sources would be zero for a physically correct dependence on distance. However, since objects in a real scene become dimmer with increasing distance, we can provide some depth cueing by introducing another heuristic. This heuristic states that the intensity of an object illuminated by a point source falls off inversely with the distance from the observer to the object. Thus, the distance correction to the simple shading model may be added by rewriting the shading equation as:

SHADE = kr(λ) Ia + kr(λ) Ii L·N/(d + K) 		[9.9]

(adding distance effect)

where

d = D + z = distance of polygon from observer,

D = distance of observer from view plane,

K = arbitrary constant to be adjusted to optimize realism.

Note that the distance term in Equation 9.9 is not intended to accurately simulate the inverse square law of intensity vs. distance from a point source. Rather, it provides a simple heuristic for depth cueing based on our experience of closer objects appearing brighter.

The final term in the simple illumination model involves a mathematical representation for specular reflection. As heuristic 4 indicated, specular reflection can be effectively modeled by using a term in cos^n(δθ), where δθ = θ - θr is a measure of the deviation from ideal specular reflection and n is some number in the range 1 ≤ n ≤ 100, depending on the glossiness of the surface. A dull, matte surface would have n values near one, while a highly polished, glossy surface would have a large value of n. Figure 9.5 illustrates the angles involved in this model and an alternative form of expressing the relationship in terms of the half-angle vector, H.

 

(a)

(b)

Figure 9.5

Angular relationships for specular reflection. The error angle δθ is the deviation of the viewing direction angle, θr, from the ideal specular angle, θ. The half-angle direction, H, splits the angle between the source direction, L, and the viewing direction, Ir.


 

A careful comparison of Figures 9.5(a) and 9.5(b) allows us to derive a relationship between the error angle, δθ, and the angle φ between the half-angle vector, H, and the normal, N.

 

We can use the trigonometric and geometric identities

δθ = 2φ (since H bisects the angle between L and Ir),

cos δθ = cos 2φ = 2 cos^2 φ - 1 ≈ cos^2 φ = (H·N)^2 near δθ = 0.

The important point is that cos^n δθ ∝ (H·N)^2n, and the vector formulation is easily computed in terms of the light direction, L, and the viewing direction, Ir. With this formulation of the specular reflection angular dependence, we can add the fourth term to the shading equation as:

SHADE = kr(λ) Ia + Ii [kr(λ) L·N + ks (H·N)^2n]/(d + K) 		[9.15]

The reflection coefficient, ks, is a measure of the fraction of incident light which is specularly reflected. In general, it is a function of the incident angle, θ, increasing as θ increases and reaching approximately 1 as θ → 90°. Since there is no simple heuristic for describing this behavior, we shall assume ks is a constant for each surface. The effect of varying the glossiness coefficient, n, is illustrated in Figure 9.6.

 

Figure 9.6

Dependence of the specular peaking term, cos^n δθ, on the coefficient n. Note how the distribution approaches ideal reflection (a spike at δθ = 0) as n → ∞.


 

The final refinement of the simple illumination model is to take into account multiple point sources. Since the light from each source adds independently to the brightness of a given polygon's illumination, the appropriate method for modeling their influence is to sum the diffuse and specular reflection terms over all incident point sources. This may be expressed as:

SHADE = kr(λ) Ia + Σi Ii [kr(λ) Li·N + ks (Hi·N)^2n]/(d + K) 		[9.16]

where the sum runs over the point sources i, and Li and Hi are the light direction and half-angle vectors for source i.
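
Gathering heuristics 1 through 5 together, the complete model is easily transcribed into code. The short Python sketch below is one possible reading of [9.16]; it treats kr(λ) as an (r,g,b) triple sampled at the three primaries, assumes all direction vectors are unit vectors, and uses illustrative names of our own choosing rather than any standard library.

def normalize(v):
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(kr, ks, n, N, V, d, K, Ia, sources):
    # kr      : (r,g,b) reflection coefficients of the surface (heuristic 5)
    # ks, n   : specular coefficient and glossiness exponent (heuristic 4)
    # N, V    : unit surface normal and unit vector toward the viewer
    # d, K    : depth-cueing distance and arbitrary constant of [9.9]
    # Ia      : white ambient intensity (heuristic 2)
    # sources : list of (L, Ii) pairs -- unit vector toward each point source
    #           at infinity (heuristic 1) and its intensity
    out = [k * Ia for k in kr]                       # ambient term [9.7]
    for L, Ii in sources:
        H = normalize([l + v for l, v in zip(L, V)]) # half-angle vector
        diff = max(0.0, dot(L, N))                   # Lambert term [9.8]
        spec = max(0.0, dot(H, N)) ** (2 * n)        # cos^n(dq) ~ (H.N)^2n
        for c in range(3):
            # diffuse light takes the surface color; specular light
            # keeps the color of the source (heuristic 5)
            out[c] += Ii * (kr[c] * diff + ks * spec) / (d + K)
    return tuple(out)

Note that the specular term is added equally to all three color channels, so the highlight keeps the color of the (white) source, as heuristic 5 requires.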

The addition of specular reflection introduces a dependence upon viewing angle into the shading equation. This is illustrated in Figure 9.7, in which the vector, V, points towards the viewer. For this particular case the total diffuse reflection intensity, Id, is approximately equal to the specular intensity, Is. For the viewing angle shown, the specular intensity is at its maximum value since the angle of incidence is equal to the angle of reflection. As the viewing angle changes, the specular reflection intensity will fall off, but the diffuse intensity will remain constant, as indicated by the spherical distribution of Id.

 

Figure 9.7

Angular distribution of diffuse and specular reflection. A viewer along vector V would observe a maximum reflected highlight of the source. For the case shown, Is ≈ Id.




Limitations of the Simple Shading Model

The strength of Phong's simple shading model is that it incorporates enough physics to simulate the major lighting effects in a polygon scene at a relatively modest computational cost. However, the heuristics -- sometimes called "hacks" -- used in the simple model to approximate the physics of illumination are both overly simplistic and incomplete. Some lighting effects not accounted for in the simple shading model include:

  • shadows cast by one object (or part of an object) on another,
  • transparent and translucent objects and the refraction they produce (heuristic 3),
  • extended light sources and the soft shadows they create (heuristic 1),
  • object-to-object reflections, such as the "bleeding" of color between nearby surfaces, which are lumped into the uniform ambient term (heuristics 2 and 5).

Much of the research in computer graphics has been directed toward refining the simple illumination model in order to overcome these limitations. The physical laws of diffuse and specular reflection first captured in the simple illumination model provide the basic structure for more sophisticated illumination models such as ray tracing.

 

Interpolative Shading Techniques

The simple shading model of Equation 9.16 is designed to give each polygon in a scene the appropriate, constant shade according to the simplifying assumptions outlined above. As we have noted in earlier chapters, there are advantages to using a simple polyhedral representation, including simplicity of the database and relative ease of solving the hidden surface problem. The price we pay is that every scene is composed only of polygons, each of which has a constant shading value. The result is that all objects have a "faceted" appearance, including smoothly curved surfaces. Facets broadcast the polyhedral nature of the scene and destroy any semblance of visual realism.

The goal of interpolative shading techniques is to recover the appearance of the curved surfaces which the polyhedral representation is designed to approximate. This objective is illustrated in Figure 9.8 as arc 2 transforming the polygon representation back to the original curved surface. The two leading interpolative techniques are Gouraud interpolation and Phong interpolation.



Figure 9.8

Closing the representation loop. For purposes of simplicity in data storage and hidden surface removal, arc 1 was used to transform the original object on the left into the polyhedron on the right. Interpolative shading techniques, shown as arc 2, attempt to recover the original shape from the polygon representation.


 

 

Assumptions of Interpolative Techniques

As the basis of their interpolation algorithms, both Gouraud and Phong made several simplifying assumptions. These include:

  • Each polygon mesh approximates an underlying smooth surface, and it is this surface, not the polygons themselves, whose shading should be reproduced,
  • The surface normal at a vertex may be approximated by averaging the normals of the polygons sharing that vertex (Figure 9.9),
  • The interpolated quantity varies linearly along polygon edges and along each scan line (bilinear interpolation).

The basic distinction between Gouraud and Phong interpolation is the choice of quantity to be interpolated. Gouraud interpolates the shading; Phong interpolates the normal vector. Let's consider each of these in more detail.

 

Figure 9.9

Computing the normal to the surface at vertex V. Vertex V is the vertex common to polygons P1, P2, P3, and P4 whose normals are N1, N2, N3, and N4, respectively. The surface normal, N, may be computed by averaging the surrounding polygon normals.


 

 

Gouraud Interpolation Shading

The interpolated quantity for Gouraud shading is the vertex shading value. First, the surface normals at each vertex bounding the polygon are computed by averaging the normals of each polygon surrounding the vertex as shown in Figure 9.9. Once the surface normal for vertex i, called Ni, is known, the simple shading model given in [9.16] (or any other shading model) can be used to compute the intensity of illumination at each of the bounding vertices. These intensities, in turn, can be used to compute the intensity at any point along the polygon boundary lines. Finally, the edge intensities are used to compute the intensity at any point along the internal scan line across the polygon. Figure 9.10 demonstrates the relevant variables for this bilinear interpolation.

 

Figure 9.10

Intensity interpolation of Gouraud shading. The surface normals, Na…Nd, are used to compute the intensities, Ia…Id, at the bounding vertices. These, in turn, are interpolated to compute the intensities, I1 and I2, at the ends of the scan line. These end point intensities are interpolated to compute the intensity Ip at an arbitrary internal point.


 

We summarize these steps in the Gouraud shading algorithm listed below.

The Gouraud interpolation algorithm greatly improves the appearance of smooth objects that have been modeled as polyhedra. However, several anomalies of Gouraud shading cause it to fail the test of visual realism. These flaws include:

  • Mach bands, the perceptual intensity bands that appear where the interpolated shading changes slope at polygon boundaries,
  • anomalous specular highlights, which may be missed, distorted, or smeared because the shading model is applied only at the vertices,
  • geometric anomalies such as that of Figure 9.11, in which the normal averaging algorithm assigns identical vertex normals to a corrugated surface, producing completely uniform shading.

 

Gouraud Shading Algorithm


1. For each vertex bounding polygons in the area to be smoothed, compute the surface normals by averaging the polygon normal vectors for those polygons surrounding the vertex. Use only those polygons which are part of the smoothed surface. For instance, in Figures 9.9 and 9.10, we would compute:

N = (N1 + N2 + N3 + N4)/4 				[9.17]

(renormalized to unit length).

2. Repeat for each polygon of the surface:

2.1 Repeat for each vertex of the polygon:

Use the normal values in a shading model to compute the shading value at each vertex. For example, the shading at vertex Vb can be computed using the normal Nb in [9.15] for single source illumination as:

Ib = kr(λ) Ia + Ii [kr(λ) L·Nb + ks (H·Nb)^2n]/(d + K)





2.2 Repeat for each scan line of the polygon:

2.2.1 Interpolate the appropriate vertex values to compute the intensity values at each end of the scan line. To compute the intensity I1 at a position (x1,y1) along the edge connecting Va and Vb, use:

I1 = Ia (y1 - yb)/(ya - yb) + Ib (ya - y1)/(ya - yb)



2.2.2 Interpolate the scan line end-value intensities to compute the intensity value at each intermediate pixel along the scan line. To compute Ip at point (xp,yp) along a scan line with end point intensities I1 and I2, the interpolation equation is:

Ip = I1 (x2 - xp)/(x2 - x1) + I2 (xp - x1)/(x2 - x1)
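
In code, the whole algorithm reduces to one vertex-normal average and repeated linear interpolation. The Python fragment below is a minimal sketch of steps 1, 2.2.1, and 2.2.2, with coordinate names following Figure 9.10; the function names are ours.

def normalize(v):
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

def vertex_normal(polygon_normals):
    # Step 1 / [9.17]: average the normals of the polygons sharing the vertex.
    k = len(polygon_normals)
    return normalize([sum(p[i] for p in polygon_normals) / k for i in range(3)])

def lerp(a, b, t):
    # The single operation behind steps 2.2.1 and 2.2.2.
    return a + t * (b - a)

# One Gouraud scan line (coordinates as in Figure 9.10):
#   I1 = lerp(Ia, Ib, (y1 - ya) / (yb - ya))   # along the edge Va-Vb
#   I2 = ...                                   # likewise along the opposite edge
#   Ip = lerp(I1, I2, (xp - x1) / (x2 - x1))   # across the scan line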




 

Figure 9.11

Geometric anomaly stemming from polygon normal averaging algorithm. The resulting surface normals, Ns, will produce a completely uniform Gouraud shading of this surface.


 

 

Phong Interpolation Shading

Two of the most serious problems of Gouraud shading, Mach bands and highlight anomalies, are resolved by Phong interpolation. In contrast to Gouraud shading which interpolates intensities, Phong shading involves a bilinear interpolation of the surface normals. By using a vector, Np, Phong shading provides three times as much information at each pixel as does Gouraud shading. This allows the full shading model to be applied at each pixel and results in a much more realistic treatment of geometry-dependent features such as highlights.

The important interpolation vectors used in Phong shading are shown in Figure 9.12.

 

Figure 9.12

Phong interpolation shading vector diagram. Note that vector N1 interpolates Na and Nb; vector N2 interpolates Na and Nd; and, finally, vector Np interpolates N1 and N2 and is used to compute the final shading value Ip.


 

In Figure 9.13 we illustrate graphically the progress to this point in the quest for realistic rendering of a polyhedra-based world scene.

 

(a)

(b)

(c)

(d)

Figure 9.13

Steps toward visual realism. (a) Simple wire frame model; (b) application of hidden surface removal; (c) application of simple shading model with specular reflection; (d) result of Phong interpolative shading.


 

Both the success of Phong shading in solving the specular reflection problem and one of the remaining unsolved problems of this algorithm are illustrated in Figure 9.14. As we indicated in the discussion of Gouraud shading, the correct and consistent treatment of highlights (specular reflection) is a difficult problem. In Figure 9.14(a) the light source is positioned so that, for light striking the brightly lit polygon, the incident ray lies along the light vector and the reflected ray lies along the vector pointing to the camera. The Phong-shaded Figure 9.14(b) correctly renders this highlight. In Figure 9.14(c), the light source has been moved so that the deviation from the maximum in specular reflection is about the same for the four upper central polygons. The result for the simple polygonal model (and for a Gouraud-shaded rendering) is that the specular highlight is almost completely lost. However, Phong shading correctly renders the shifted highlight in Figure 9.14(d). As the light source moves between the two positions, the Phong-shaded highlight moves from its location in image (b) to its location in image (d) in a smooth and consistent fashion.

 



(a)



(b)


(c)


(d)

Figure 9.14

Highlight behavior under simple polygonal shading and Phong shading. The polyhedral model of a sphere shows a "flashing" effect as various facets pick up specular reflections from a moving light source (a,c). With Phong shading, the highlight moves smoothly to reflect the moving source (b,d).





Phong Shading Algorithm

1. For each vertex bounding polygons in the area to be smoothed, compute the surface normals by averaging the polygon normal vectors for those polygons surrounding the vertex. Use only those polygons which are part of the smoothed surface. Use [9.17].

2. Repeat for each polygon of the surface:

2.1 Repeat for each scan line of the polygon:

2.1.1 Interpolate the appropriate surface normal vectors to compute intermediate normal vectors at each end of the scan line. To compute the normal N1 at a position (x1,y1) along the edge connecting Va and Vb, use:

N1 = Na (y1 - yb)/(ya - yb) + Nb (ya - y1)/(ya - yb)



2.1.2 Interpolate the scan line end-value normals to compute the normal vector, Np, at each intermediate pixel. To compute Np at point (xp,yp) along a scan line with end point normals N1 and N2, use:

Np = N1 (x2 - xp)/(x2 - x1) + N2 (xp - x1)/(x2 - x1)



2.1.3 Use the normal value, Np, with the simple shading model to compute the final shading value for the pixel at (xp,yp). For example, the shading Ip can be computed using the normal Np in [9.15] for single source illumination as:

Ip = kr(λ) Ia + Ii [kr(λ) L·Np + ks (H·Np)^2n]/(d + K)
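
A Python sketch of the inner loop (names ours) highlights the one substantive difference from Gouraud shading: the interpolated quantity is a vector, which must be renormalized before the shading model is applied.

def normalize(v):
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

def lerp_vec(A, B, t):
    # componentwise linear interpolation of two vectors
    return tuple(a + t * (b - a) for a, b in zip(A, B))

def phong_pixel(N1, N2, x1, x2, xp, shade_fn):
    # Steps 2.1.2 and 2.1.3 for one pixel: interpolate the end-of-scan-line
    # normals to the pixel, renormalize (interpolated unit vectors are
    # shorter than unit length), and evaluate the full shading model there.
    Np = normalize(lerp_vec(N1, N2, (xp - x1) / (x2 - x1)))
    return shade_fn(Np)  # e.g. the shade() sketch above with Np substituted

Note that shade_fn must be evaluated at every pixel of the polygon, a cost we return to below.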






The remaining problem with both Gouraud and Phong shading is very apparent in Figure 9.14 and detectable in Figure 9.13. That problem is the "straight-edge" profile of any object constructed from polygons. Phong shading can correctly recover the smooth internal surface from its polygonal representation, but since the basic structure remains unmodified, the polygon profile remains as an artifact of the representation.

A final observation on the relative computational efficiency of Gouraud and Phong shading is in order. Note that, as bilinear interpolations, both require 2N + N×M interpolation calculations per polygon, where N is the number of scan lines and M the average number of pixels per scan line. However, the Gouraud algorithm is much more efficient than the Phong algorithm for two reasons. Gouraud interpolation involves only scalars (intensities), whereas Phong interpolation operates on vectors (surface normals). This gives Gouraud an automatic factor of three advantage over Phong. In addition, the fairly complex calculation of the simple shading model must be performed only at the vertices of a Gouraud-shaded polygon (typically three or four times per polygon). In Phong interpolation, the shading function must be applied at each of the N×M pixels of the polygon. This gives a considerable efficiency advantage to Gouraud shading over Phong shading.

 

Color Models

Color provides one of the most powerful computer graphics tools for visualization. Color availability, resolution, and ease of implementation are among the best measures of the sophistication of a graphics system. Color has long been recognized as essential for building and manipulating complex CAD graphics. It is now widely used for a variety of tasks, from encoding data in medical imaging to highlighting -- in red -- the negative bottom line on spreadsheets.

The apparent advantages of the four, eight, and sixteen colors offered by early two-, three-, and four-bit color systems led to the demand for increased color resolution. The desktop publishing industry pioneered the introduction of true color capability in its quest for photographic-quality electronic images from scanners and video sources. Near-photographic quality is possible with the 256 colors available on 8-bit color systems, and the industry is rapidly moving towards 24- to 32-bit true color capability. Many excellent monitors are available for the display of true color images, and color hard copy is now available on ink jet, thermal transfer (wax), and laser printers.

To better master the techniques involved in the effective use of color, it helps to understand the models which explain the production and reflection of various colors. The integration of color image generation hardware and software, color image processing programs, and color image output devices is a nontrivial task requiring familiarity with color models, color device protocols, and color image standard formats.


Properties of Light

The two disciplines most directly concerned with color perception are physics and physiology. In physics, colored light is described by a relatively few simple concepts. The physiology of color is much more complex and less well understood.

Physics of Color

The visible spectrum consists of electromagnetic waves with wavelengths in air between 400 and 700 nanometers (nm). One nanometer is equal to 10^-9 meters. The color of a given photon of light is completely defined by its frequency, f. Associated with this frequency is a wavelength, λ, and the relationship between f and λ is governed by the speed of light, c, according to the equation:

c = fλ (speed of light)				[9.26]

where

c = 2.99792458 × 10^8 meters/second for transmission in vacuum,

= 2.25 × 10^8 meters/second for transmission in water,

= 1.97 × 10^8 meters/second for transmission in crown glass.

Note that the speed of light is strongly dependent upon the medium through which it is transmitted. Since the frequency of light is unchanged as it moves from one medium into another, the wavelength must change. In air, the speed of light is very close to that in vacuum. As light enters water, however, its speed (and wavelength) drop sharply to about 3/4 that in air. Figure 9.15 illustrates the relationship between frequency, wavelength, and color.
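
A two-line computation, sketched below in Python, reproduces the numbers used in this discussion; the constant and function names are ours.

C_VACUUM = 2.99792458e8   # meters/second

def frequency(wavelength_nm, c=C_VACUUM):
    # f = c / lambda  [9.26], with the wavelength given in nanometers
    return c / (wavelength_nm * 1e-9)

f = frequency(550.0)          # ~5.45e14 Hz for 550 nm green light in air
lam_water = 2.25e8 / f * 1e9  # ~413 nm: same frequency, ~3/4 the wavelength
print(f, lam_water)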

 

 

Figure 9.15

Relationship between frequency, color, and wavelength in air for the visible spectrum. The principal hues labeled as color patches represent samples of the underlying continuous spectrum.


 

Some sources, such as monochromatic lasers, emit light of a single frequency. However, most sources, such as the sun and incandescent lights, emit light of many frequencies. Light from such sources is defined by its spectrum, I(λ). The I(λ) spectrum is a measure of the energy of a given wavelength which passes through a unit area in a certain time. This is sometimes called a power spectrum, and its units are watts/m^2. The power spectrum may be used to measure the emission intensity of a source, the transmission intensity of light flowing through space, and the illumination intensity of light striking a surface.

Isaac Newton discovered that a beam of white light could be refracted into a rainbow spectrum of colors by a glass prism. Newton interpreted this effect as the resolution of white light into its component colors. He confirmed this interpretation by using a second prism to recombine the dispersed spectrum into a beam of white light. Figure 9.16 illustrates dispersion of a white beam into a color spectrum and the associated power spectrum, I(l).

 


(a)



(b)

Figure 9.16

The spectral composition of white light. (a) White light is dispersed by a glass prism into its component color spectrum. (b) The intensity distribution of a typical beam of white light.


 

Another important physical concept states that colored light behaves in either an additive or subtractive mode, depending on the physical process under consideration. This concept applies directly to computer graphics color output devices.

Most beams of colored light are not, in fact, monochromatic but rather have an intensity spectrum with a peak at the wavelength corresponding to the observed color. This point is emphasized in Figure 9.17, in which a beam of green light is resolved by the same prism used in Figure 9.16 into an intensity spectrum with a peak between 500 and 550 nanometers.

 

(a)

 

(b)

Figure 9.17

Resolution of green light into its color spectrum.




Physiology of Color

The physiology of color perception is a far more complex and less understood area than the physics of color. Human response to color varies greatly, from the total inability of color-blind individuals to distinguish different hues of the same intensity to the capability of the trained eye to distinguish an estimated 350,000 shades of color. Most people can detect a change in wavelength of about 2 nm over a considerable portion of the spectrum. Not surprisingly, the spectral sensitivity of the human eye peaks at about 555 nm and closely matches the intensity distribution of sunlight.

Two of the most successful computer graphics color models are based on the tristimulus (or trivariance) principle which states that any color may be approximated by the appropriate mixture of three primary colors. This concept has a distinguished historical tradition. Painters have been aware for centuries that most any desired color can be obtained by the appropriate mixture of three primary pigments. Nearly two hundred years ago Thomas Young proposed that the retina contained red, green, and blue light sensitive "particles" that responded independently to these three colors. Modern physiology interprets these particles as red, green, and blue sensitive cones, and the spectral response of each has been measured. The tristimulus principle is the basis for the RGB (red, green, blue) and CMY (cyan, magenta, yellow) color models.

Although tristimulus models have been highly successful in providing the theoretical basis for such technologies as RGB monitors and CMY printers, they remain incomplete as representations of all aspects of color perception. Several effects unaccounted for by the tristimulus theory include:

  • the context dependency of color perception, in which identical colors are perceived differently depending on the surrounding colors (Figure 9.18),
  • the results of Edwin Land's two-color projection experiments, described below.

In additional experiments, a colorful scene was photographed on black-and-white transparency film through red and green filters, and the two resulting slides were re-projected in register, one through the red filter and one with white light. A full color image was produced! When the black-and-white slides were removed, the screen was filled with pink light from the two projectors. That is, using only red light, white light, and shading, it is possible to generate full color images. Land concluded that the eye is able to see color independently of wavelength, a result totally at odds with conventional tristimulus theory.

 

Figure 9.18

Context dependency of color perception. The color perceived for the identical inner squares depends on the color environment in which they appear.


 

Trivariate Color Models

Conventional color models based on the tristimulus theory all contain three variables and so are called trivariate models. Let us now consider three of the most useful models, the conversion relationships between them, and the most widely accepted color standard.



Figure 9.19

Projection of three primary colors of the RGB model. In this additive process, an equal mixture of red, green, and blue light produces white light, while mixtures of two of the three primaries produce cyan, magenta, and yellow (C,M,Y).



RGB Model

The RGB model is based on the assumption that any desired shade of color can be obtained by mixing the correct amounts of red, green, and blue light. As Land has shown, the exact hues chosen are not important as long as they include a long wavelength hue (red), a medium wavelength hue (green), and a short wavelength hue (blue). If, for instance, circular red, green, and blue beams are projected onto a white screen in a darkened room, we get the color pattern shown in Figure 9.19.

The additive nature of the RGB model is very apparent in Figure 9.19. Adding red, green, and blue light produces white light, while adding red and blue light produces magenta light, and so on. This linear superposition is expressed mathematically as:

C = rR + gG + bB 				 [9.27]

where

C = color of the resulting light,

(r,g,b) = color coordinates in the range 0 → 1,

(R,G,B) = red, green, blue primary colors.

 

Figure 9.20

RGB color cube. Note how the primary colors define unit vectors along the axes. The three corners opposite R, G, and B are cyan (C), magenta (M), and yellow (Y), the basis of the CMY model. The line connecting black (0,0,0) and white (1,1,1) is the gray scale line.




It is very helpful to visualize the range of colors, or gamut, specified by Equation 9.27 as the 3D RGB color cube shown in Figure 9.20.

Figure 9.20 illustrates the coordinates and colors of the corners of the RGB color cube. Most colors, however, are represented by a 3D color vector which terminates at some arbitrary point in the interior of the cube. To understand the additional shadings possible with the color cube representation, consider the shadings possible on the surface of the cube. In Figure 9.21 a transformed view of the color cube is presented in which subcubes interpolate the color between the corners of each face.

 

Figure 9.21

Transformed RGB color cube with interpolated hues.


 

The RGB color model is particularly important because it is the basis for control of most color monitors. For this reason it is also the preferred color model for graphics languages and image processing programs. A typical interactive RGB color picker for selecting the three color coordinates is shown in Figure 9.23.

 

Figure 9.22

Filtering of white light by cyan, magenta, and yellow filters. In this subtractive process, the magenta filter subtracts green light from the white beam, leaving only green's complement, magenta. Subtracting all three colors leaves no light at all -- black.


 

 

CMY Model

The cyan, magenta, yellow (CMY) color model is a subtractive model based on the color absorption properties of paints and inks. As such it has become the standard for many graphics output devices like ink jet and thermal transfer printers. The principle of the CMY model is illustrated in Figure 9.22 in which white light beamed toward the viewer is intercepted by partially overlapping cyan, magenta, and yellow filters. The cyan filter removes red light from the beam and passes only cyan, the complementary color to red.

 

 

Figure 9.23

Interactive color picker supporting both the RGB color model and the HSV (hue, saturation, value) color model. The user can select any hue from the color wheel by either pointing and clicking or by numerical control of the RGB arrows. The brightness is controlled by the slide control along the right-hand side.


 

In the printing trade this model is frequently called the CMYK model in which the K stands for black. The reason for black is that, although theoretically the correct mixture of cyan, magenta, and yellow ink should absorb all the primary colors and print as black, the best that can be achieved in practice is a muddy brown. Therefore, printers like the Hewlett-Packard PaintJet have a separate cartridge for black ink in addition to the cyan, magenta, and yellow ink cartridge(s).

The CMY model can also be represented as a color cube as shown in Figure 9.24.

 

Figure 9.24

The CMY color cube. Each corner is labeled with its (c,m,y) coordinates. Note that the RGB color cube is transformed into a CMY color cube by interchanging colors across the major diagonals.


 

One can understand the subtractive nature of the CMY model in the following sense. When white light falls on a white page, virtually all the light is reflected and so the page appears white. If white light strikes a region of the page which has been printed with cyan ink, however, the ink absorbs the red portion of the spectrum and only the green and blue portions are reflected. This mixture of reflected light appears as the cyan hue.

In terms of the CMY color cube coordinates, one can think of the origin, (0,0,0), as three color filters with a tint so faint that they appear as clear glass. In terms of absorbing inks, the origin corresponds to pastel shades of cyan, magenta, and yellow so faint as to appear white. Moving along the M axis from (0,0,0) towards (0,1,0) corresponds to turning the density of the tinted filter up towards the maximum possible. In terms of inks, this motion up the M axis corresponds to moving from a pale pastel towards a pure magenta. If one uses all three filters in sequence (or a mixture of C, M, and Y inks), eventually all light is absorbed as one reaches the pure colors of filters or inks. This is the point (1,1,1), black.

The RGB and CMY color cubes are useful in expressing the transformations between the two color models. Suppose, for instance, that we know a certain ink may be specified by the CMY coordinates, (C,M,Y), and we would like to know what mixture of light, specified as (R,G,B) in the RGB cube, is reflected. Looking at Figure 9.24 we note the following 3D vector relationships:

(R,G,B) = Black - (C,M,Y) = (1,1,1) - (C,M,Y),

since black has coordinates (1,1,1) on the CMY color cube.



Expressing each set of coordinates as a column vector, we can write:

[R]   [1]   [C]
[G] = [1] - [M]
[B]   [1]   [Y]



The inverse transformation can be thought of as solving the following problem: Given light of a certain color, (R,G,B), reflected from a page illuminated with white light, what mixture of ink, (C,M,Y), is required? Using Figure 9.20, we can write a set of equations resembling those above with White substituted for Black. Since, on the RGB color cube, white has coordinates (1,1,1), the transformation equation becomes:

[C]   [1]   [R]
[M] = [1] - [G]
[Y]   [1]   [B]
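
In code, both transformations are simple complement operations. A minimal Python sketch (function names ours):

def cmy_to_rgb(c, m, y):
    # light reflected when white light strikes (C,M,Y) ink
    return (1.0 - c, 1.0 - m, 1.0 - y)

def rgb_to_cmy(r, g, b):
    # ink required to reflect a desired (R,G,B) under white light
    return (1.0 - r, 1.0 - g, 1.0 - b)

print(cmy_to_rgb(1.0, 0.0, 0.0))  # pure cyan ink reflects (0.0, 1.0, 1.0): green + blue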



The CMYK colors are the Process Colors of offset printing. Several image processing, drawing, and desktop publishing programs now have the capability of color separating colored images. The process of color separation involves producing four black-and-white images (or negative images) corresponding to the four colors: cyan, magenta, yellow, and black. These separations are then used photographically to produce the plates for each of the four inks of the offset press. To produce the final color image, each sheet is printed separately with each of the four color plates. Since alignment is critical, accurate crosshairs are printed on each of the four color negatives to assist the printers in achieving good color registration. In Figure 9.25 we show the results of color separating Figure 9.22.
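
The text above does not specify how the black plate is computed. One common convention, sketched below in Python as an illustration rather than as the method of any particular separation program, is to pull the gray component common to the three inks out of (C,M,Y) and print it as black:

def separate(r, g, b):
    # Convert (R,G,B) to four process-color plates (C,M,Y,K).
    # The gray component shared by all three inks becomes the black plate.
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)
    if k == 1.0:
        return (0.0, 0.0, 0.0, 1.0)   # pure black: K plate only
    s = 1.0 - k
    return ((c - k) / s, (m - k) / s, (y - k) / s, k)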


HSV Model

The three variables of hue, saturation, and brightness (value) were chosen as the basis for the intuitive color model proposed by Alvy Ray Smith. By allowing specification of concepts such as hue, tints, shades, and tones, the HSV model is easier to use from the standpoint of artists and designers than the more theoretical RGB and CMY models. For instance, it is difficult to visualize what combination of red, green, and blue pigments is necessary to produce pink or brown paint.

Hue is defined as that quality which distinguishes one color family from another, for instance, red from blue. It is best associated with the physical property of wavelength, and the range of possible hues is obtained by marching around the perimeter of the color wheel shown in Figure 9.23. Saturation is defined as the purity of the color, that is, the ratio of light of the dominant hue to all light present in the color. On the color circle of Figure 9.23, the saturation is 1.0 along the perimeter and 0 at the center. Brightness (value) is hard to define, but "you'll know it when you see it." It is the opposite of darkness and corresponds to what you see when you turn up the intensity on your CRT or the voltage on the dimmer control of a floor lamp. Technically, it is proportional to how many photons of the given hue and saturation are emitted by a source or reflected from a colored object per second. One can think of it as the average of the power spectrum, <I(λ)>.

Just as the RGB and CMY models have a 3D geometric representation, so does the HSV model. The representation is called the HSV single hex-cone model and is shown in Figure 9.26. The cylindrical coordinate system uses saturation as the radial axis, value as the axial axis, and hue as an angle measured from red at φ = 0°. The primary colors of the RGB and CMY systems appear alternately about the perimeter at multiples of 60°.

The top plane (value = 1) of the HSV hex-cone closely resembles the color circle of Figure 9.23 and can be generated by projecting the colors seen when looking along the diagonal ray connecting white, (1,1,1), to black, (0,0,0), of the RGB color cube. One of the primary advantages of the HSV representation is the direct geometric interpretation it provides of artists' concepts of tints, tones, and shades. By slicing the hex-cone with any half-plane containing the value axis, we get a triangle, shown in Figure 9.27, which directly displays all three terms.

 

(a) Cyan separation

(b) Magenta separation

(c) Yellow separation

(d) Black separation

Figure 9.25

Color separations of Figure 9.22. These are positives; the program has an option for printing negatives as well. (Separations by Canvas 3.0.)


 

The conversion between the HSV and RGB representations can be accomplished by a piecewise linear mapping from one color space to the other. The algorithm proposed by Smith for converting a set of hex-cone coordinates, (H,S,V), to a set of color cube coordinates, (R,G,B), is listed below.

Figure 9.27 can be interpreted in terms of a painter mixing pure pigments with white and black pigments to obtain her final color. First she selects the hue, for example, red. By mixing red and white pigments, the various tints are obtained. Adding more white pigment moves the color from pure red, (H,S,V) = (0°,1,1), through various pinks, to white, (0°,0,1). Mixing pure red with black pigment forms the various shades, moving the color from pure red, (0°,1,1), down the boundary of the hex-cone to pure black, (undefined,0,0). Note that along the V axis, hue is undefined. Finally, mixing pure red with varying amounts of both white and black pigments forms the possible tones within the interior of the hex-cone.

 

 

Figure 9.26

HSV color hex-cone. The color space of the hue, saturation, brightness (value) system is a hexagonal-sided cone using cylindrical coordinates in which hue is measured by the angle, φ. Value ranges from 0 (black) to 1 (white), and saturation ranges from 0 on the axis to 1 along the perimeter.


 

 

Figure 9.27

Half-plane slice through the HSV hex-cone showing the locations of tints, tones, shades and grays.


 




Algorithm for Converting (H,S,V) to (R,G,B)

 

Given: Hue on the range 0 ≤ H ≤ 360°,

Saturation on the range 0 ≤ S ≤ 1,

Value on the range 0 ≤ V ≤ 1

if S = 0 then {Achromatic case -- gray scale only}
    if H = Undefined then
        R = V
        G = V
        B = V
    else
        if H has a value, raise error flag
    end if
else {Chromatic case -- hue has a value}
    if H = 360 then
        H = 0
    else
        H = H/60 	{Reduce hue range to 0 ≤ H < 6.0}
    end if
    I = trunc(H) 	{Largest integer below H; I points to one of the hex-cone primaries}
    F = H - I 	{Fraction of distance between hex-cone primaries}
    M = V*(1 - S) 	{First linear interpolant}
    N = V*(1 - S*F) 	{Second linear interpolant}
    K = V*(1 - S*(1 - F)) 	{Third linear interpolant}
    case I of 	{Assign set of RGB values to interpolants}
        0: (R,G,B) = (V,K,M) 	{Red → yellow range}
        1: (R,G,B) = (N,V,M) 	{Yellow → green range}
        2: (R,G,B) = (M,V,K) 	{Green → cyan range}
        3: (R,G,B) = (M,N,V) 	{Cyan → blue range}
        4: (R,G,B) = (K,M,V) 	{Blue → magenta range}
        5: (R,G,B) = (V,M,N) 	{Magenta → red range}
    end case
end if
end.
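
For experimentation, here is a direct Python transcription of the algorithm above; the function name is ours. (Python's standard colorsys.hsv_to_rgb routine performs the same conversion with hue scaled to the range 0 → 1.)

def hsv_to_rgb(h, s, v):
    # h in degrees on [0, 360] or None when undefined; s, v on [0, 1]
    if s == 0.0:                               # achromatic case
        if h is not None:
            raise ValueError("hue must be undefined when S = 0")
        return (v, v, v)
    if h == 360.0:
        h = 0.0
    h /= 60.0                                  # reduce hue to 0 <= h < 6
    i = int(h)                                 # hex-cone sector
    f = h - i                                  # fraction between primaries
    m = v * (1.0 - s)
    n = v * (1.0 - s * f)
    k = v * (1.0 - s * (1.0 - f))
    return [(v, k, m), (n, v, m), (m, v, k),
            (m, n, v), (k, m, v), (v, m, n)][i]

print(hsv_to_rgb(120.0, 1.0, 1.0))   # pure green: (0.0, 1.0, 0.0)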


 

 

CIE Chromaticity Diagram

Early experiments with the tristimulus model of color revealed a puzzling problem. An observer was asked to match a test light projected on a screen by independently varying the intensity controls on projectors of monochromatic red, green, and blue lights to generate an adjacent response light. Experimental results indicated that many of the sample test lights were impossible to match by any combination of intensities of the RGB response lights.

The reason for this inability lies in the spectral response of the retinal cones. Thus, for instance, a relatively pure cyan test light could nearly be matched by a roughly equal addition of blue and green response lights. However, these intensities of green and blue excite a red response within the eye which prevents a good color match. The only way in which a color match could be achieved was by redirecting the red response projector to add some red light to the original test light. By gradually turning up the intensity until the red component of the (test + red) light matched the red excitation produced by the (blue + green) response light, a perfect color match could be achieved. This can be expressed mathematically by rewriting Equation 9.27 as:

C + rR = gG + bB 				[9.32]

which may be rewritten as

C = gG + bB - rR 				[9.33]

where

(r,g,b) are color coordinates on the range 0 to 1.

Note the presence of a negative quantity of red in Equation 9.33. This negative term can be seen in the negative portion of the red color-matching function in Figure 9.28. This figure shows the set of color matching functions, r, g, and b, capable of matching all wavelengths of the visible spectrum using red light of wavelength 700 nm, green light of wavelength 546 nm, and blue light of wavelength 436 nm.

Figure 9.28

Color matching functions. With these proportions of monochromatic red, green, and blue light, the color of the wavelength shown will be matched.



A final problem with tristimulus models like the RGB color cube is the difficulty of using them for routine color analysis and manipulation. To use a 3D model effectively, one must visualize and move a vector in 3D color space, a fairly difficult task.

In 1931, the Commission Internationale de l'Éclairage (CIE) established an international color standard in terms of a 2D chromaticity diagram and the associated standard observer functions shown in Figure 9.29.


Figure 9.29

CIE standard observer functions. These three functions give the relative amounts of the three CIE primaries, X, Y, and Z, required to specify the entire spectrum of visible light. This basis uses:

λx = 700 nm; λy = 546.1 nm; λz = 435.8 nm.



To work around the problem of requiring negative quantities of a primary color, the CIE standard specified three hypothetical colors, X, Y, and Z, to serve as the basis for an additive, tristimulus model. The great advantage of these three hypothetical primaries is that all visible hues may be generated by the addition of positive amounts of each, eliminating negative coordinates in color space.

Comparing Figures 9.28 and 9.29, one notes that the color matching functions of Figure 9.28 closely resemble the standard observer functions of Figure 9.29, apart from some shifting and scaling along the intensity axis. One can think of the new XYZ colors as RGB colors that have been shifted in perception space in such a way that positive amounts of the hypothetical primaries stimulate the perception of all visible hues.
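To make this "shifted RGB" interpretation concrete, the sketch below applies one commonly tabulated linear transformation from the CIE 1931 RGB primaries to XYZ. The coefficients are quoted only for illustration; this chapter does not derive them, so treat the exact values as an assumption.

def rgb_to_xyz(r, g, b):
    """Map CIE RGB tristimulus values to XYZ using commonly tabulated
    1931 coefficients (illustrative values, not derived here). Every
    coefficient is non-negative, so non-negative RGB inputs always
    yield non-negative XYZ coordinates."""
    x = (0.49000 * r + 0.31000 * g + 0.20000 * b) / 0.17697
    y = (0.17697 * r + 0.81240 * g + 0.01063 * b) / 0.17697
    z = (0.00000 * r + 0.01000 * g + 0.99000 * b) / 0.17697
    return x, y, z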

The transformation from 3D color space coordinates (X,Y,Z) to 2D space defined by the coordinates, (x,y), is accomplished as follows. We can define the fractions of each of the three primaries as

x = X/(X+Y+Z) , y = Y/(X+Y+Z) , z = Z/(X+Y+Z) [9.34]

Note also that


x + y + z = 1 				 [9.35]
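Equations 9.34 and 9.35 translate directly into code. In the minimal sketch below (the function name xyz_to_xy is our own), only x and y are returned, since z follows from Equation 9.35:

def xyz_to_xy(X, Y, Z):
    """Project a tristimulus color (X, Y, Z) onto fractional chromaticity
    coordinates (Equation 9.34). Only x and y are returned because
    z = 1 - x - y (Equation 9.35)."""
    total = X + Y + Z
    if total == 0:
        raise ValueError("X + Y + Z must be nonzero")
    return X / total, Y / total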


Figure 9.30

CIE Chromaticity Diagram. The points along the perimeter counterclockwise from λ = 700 nm to λ = 400 nm correspond to the saturated hues of the visible spectrum. The straight line from violet (400 nm) to red (700 nm) is called the purple line; its colors cannot be produced by light of a single wavelength. As one moves from the perimeter toward the white point, tints are generated.



From Equation 9.35 it is clear that, upon selection of any two fractional coordinates, the third is determined. The CIE committee selected x and y as the basis of the 2D chromaticity diagram shown in Figure 9.30. The chromaticity diagram with points labeled (x,y) consists of a projection onto the xy plane of the points generated by intersecting the (X,Y,Z) vector with the x + y + z = 1 plane. The variable lost in this projection is the brightness, or luminance, of the original (X,Y,Z) color. This luminance may be reincorporated by use of the Y variable. A chromaticity diagram coordinate can then be considered as the triplet, (x,y,Y), projected onto the xy plane. The transformation between the two systems is given as:

X = x(Y/y) , Y = Y , Z = (1 - x - y)(Y/y) 	[9.36] 
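A companion sketch of the inverse mapping of Equation 9.36, again with our own naming; note that the transformation is undefined for y = 0:

def xyY_to_XYZ(x, y, Y):
    """Recover the tristimulus color (X, Y, Z) from a chromaticity
    coordinate (x, y) and its luminance Y (Equation 9.36)."""
    if y == 0:
        raise ValueError("transformation undefined for y = 0")
    return x * (Y / y), Y, (1.0 - x - y) * (Y / y)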

The CIE chromaticity diagram provides a color standard of great utility and explanatory power. Useful applications include locating the complementary color of any hue, measuring the saturation of a given color, comparing the color gamuts of monitors, inks, and film, and plotting the color trajectories of variable light sources.

Figure 9.31 illustrates several of these concepts.


Figure 9.31

Use of CIE chromaticity diagram for locating complementary colors and computing saturation.




The complementary color of any hue, C1, is readily obtained by drawing a straight line from that color through the white point, C. The intersection of this line with the pure-hue perimeter defines the complementary color, C2. The correct mixture of C1 and C2 light will produce white light. The CIE diagram also makes it clear that some colors, such as green, have no complementary color: there is no single, pure wavelength which, when added to the original color, will produce white light. For such hues the extended line intersects the purple line, whose colors are mixtures of red and violet light rather than hues of a single wavelength.

The saturation at any given point on the CIE diagram is defined as the fraction of the distance from the white point, C, to the fully saturated color on the perimeter at which the point lies. So, for instance, the pinkish magenta point shown in Figure 9.31 has a saturation given as:

S = d1/(d1+d2)					[9.37]
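In code, Equation 9.37 reduces to a ratio of distances in the xy plane. The sketch below assumes the caller supplies three collinear chromaticity points: the color of interest, the white point C, and the saturated hue where their common line meets the perimeter (the function name is ours):

import math

def cie_saturation(color, white, perimeter):
    """Saturation per Equation 9.37, where color, white, and perimeter
    are (x, y) chromaticity pairs lying on one straight line."""
    d1 = math.dist(white, color)       # white point to the color
    d2 = math.dist(color, perimeter)   # color on to the perimeter hue
    return d1 / (d1 + d2)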

Figure 9.32 shows how the color gamuts of output devices and the color trajectories of variable light sources are represented on a chromaticity diagram.

Figures 9.31 and 9.32 graphically illustrate some of the most useful applications of the CIE chromaticity diagram. Comparing the color gamut of an RGB monitor with that of typical printing inks, one finds that the ink gamut is smaller and almost entirely contained within the monitor gamut. The conclusion is that RGB monitors can display colors which are unprintable with the inks in question. The gamut of color film, on the other hand, encompasses that of both RGB monitors and printing inks. Hence, one can photograph colors that cannot be displayed on a color monitor.
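For an additive device such as an RGB monitor, the gamut on the chromaticity diagram is simply the triangle spanned by the (x, y) coordinates of its three primaries. The hypothetical sketch below tests displayability by checking whether a color's chromaticity lies inside that triangle; ink gamuts are not triangular, so this test applies only to additive devices.

def in_gamut(p, red, green, blue):
    """True if chromaticity point p = (x, y) lies inside the triangle
    whose vertices are the (x, y) coordinates of an additive device's
    primaries. Uses the signs of cross products around the triangle."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(red, green, p)
    d2 = cross(green, blue, p)
    d3 = cross(blue, red, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)   # all one sign (or zero) => inside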


Figure 9.32

Plot of the RGB color monitor gamut and the trajectory of a heated blackbody. Note how color monitors are capable of displaying only a portion of the full range of visible colors. Heating an object moves its color from "red hot" through "white hot" and beyond.



Conclusions


We began the chapter in the context of model authenticity, the principle that states that realistic rendering requires a realistic simulation of the physics of light. Heuristics were introduced for approximating physical processes which are too computationally expensive for exact solution. Terms for ambient lighting and specular reflections were shown to achieve considerable realism at little computational cost. Phong's simple shading algorithm was assembled term by term to provide an effective illumination model and to serve as the basis for more advanced rendering techniques.

The faceted appearance of polyhedral surfaces remained the major drawback of this convenient representation. However, Gouraud demonstrated that the shape of curved surfaces could be recovered from their polyhedral representation through interpolative shading. Phong extended this work by interpolating normal vectors, a process that achieves excellent realism at the cost of an increased computational load. Neither Gouraud nor Phong interpolation can hide the polygonal profile associated with the polyhedral representation of curved surfaces, however.

Finally, we examined systems for specifying and manipulating color, progressing from the RGB color cube and the HSV hex-cone to the CIE chromaticity diagram, a device-independent standard that supports the analysis of complementary colors, saturation, and the color gamuts of output devices.