## Chapter 6
## Two-Dimensional Graphics - Implementation
M. Firebaugh
© Wm. C. Brown Communications, Inc.

**We are somewhat chagrined that the obvious extension of work on line clipping with which we have been involved... kept us so long from seeing the simplicity of the present approach.**

*Ivan Sutherland and Gary Hodgman*

Representation and implementation are very tightly related and interdependent issues in computer science. The proper representation can make the implementation a quick, painless, and satisfying task. A poorly chosen representation, on the other hand, can reduce implementation to an awkward, inefficient, or nightmarish task. The emphasis of the last chapter was on representation issues, with particular stress on the role of the proper mathematical representation in performing important graphical operations easily and efficiently.

In this chapter we turn to some of the techniques and concepts
involved in implementation of 2D graphics routines. These are issues that
have dominated the discussion in previous texts. However, as the sophistication
of graphics hardware and software systems increases, more and more of these
techniques move from the "to be proven" category into the "given"
category. When the only plotting tool available was the control of an individual
pixel, the issue was: Given a pixel control routine, how does one most effectively
draw lines? When systems achieved line drawing routines, the issue became:
Given the routine, *DrawLine*, how does one most effectively draw polygons?
Now that virtually all systems support polygon manipulation routines, the
issue becomes: Given polygon manipulation routines, how does one most effectively
construct and render 3D solids? High-level graphics languages like PHIGS
now supply standard 3D object manipulation and rendering routines.

So a good argument can be made for deleting several of the topics in this chapter altogether as a rehash of already-solved problems. However, there is value in reviewing the solutions to some of the most important historical problems in computer graphics. Insight on efficiency considerations and algorithm development is gained by study of elegant techniques used in rendering graphical objects. Our compromise on this issue is to present representative algorithms for solving each of the major graphics problems rather than an exhaustive survey of solutions.

As a final observation in this introduction, it is helpful
to consider the close parallel of the representation/implementation issue
to the abstract/concrete continuum. Representation issues are generally
simplified by defining a new abstract data type for generalizing some object
by naming it and giving it an internal structure in which the nitty-gritty
details are hidden. This is an effective strategy for conceptualizing and
simplifying algorithms. However, the details cannot be "swept under
the rug" forever. Eventually the complex data types must be unpacked
and data displayed on a real screen. These concrete tasks are at the heart
of implementation issues and are the focus of study in this chapter.

**Raster Graphics Implementation**

In the previous chapter, we developed some mathematical
tools for representing points and lines. Real graphics devices, however,
do not display mathematical points and lines, but rather some coarser approximations
that are limited on raster displays by pixel density and size. The question
then becomes: Given a mathematical point at (x,y), or a mathematical line
from (x1,y1) to (x2,y2), what is the optimal raster graphics representation?

Raster Graphics Representations

Before we examine algorithms for the efficient mapping of mathematical primitives to real raster graphics displays, two observations are in order. First, very significant design issues are involved in the solution to these problems. Second, system designers for the languages and application programs have, for the most part, solved these problems efficiently, so the reader need not worry about them in practice. However, it is instructive to study the problems and the design considerations used in their solution.

**Figure 6.1**

The line density problem. Note how the pixel density, ΔN/ΔL, is 10 along Δx and Δy but only about 7 along the 45° line, Δd.

What are the criteria that must be considered when designing
an algorithm for drawing graphics primitives on a raster screen? Certainly
all line drawing should attempt to optimize the following:

1. *Uniform, user-selected density* - This is a nontrivial problem, as a quick examination of Figure 6.1 will indicate. Here the simplest algorithm yields a 45° line with a pixel density only 70.7 percent that of a horizontal or vertical line.

2. *Straight lines should appear straight* - The quickest test for distinguishing a raster graphics display image from a vector display image is to note the presence of the *jaggies* on raster display devices. Figure 6.2 illustrates why the jaggies occur. Jaggies are an intrinsic side effect of the discrete pixel structure of raster screens, but clever programming can minimize the effect.

3. *Line end-points should be constrained* - Multiline objects created by user pointer input quickly become very messy due to end-point overshoots and gaps if constraints are not imposed. Straightforward techniques of *grids* and *snaps* have helped solve this problem. However, in some cases over-constrained systems can even worsen the problem. See Figure 6.3 for an example.

4. *Line algorithms should be fast* - The main bottleneck on CAD workstations remains the time required to redraw complex diagrams composed of tens or hundreds of thousands of vectors. Without graphics accelerator cards this task may take on the order of minutes on personal workstations. Any tricks to speed the line drawing operations translate directly into productivity increases.

**Figure 6.2**

The jaggies problem. Line (a) is well represented by the
array of pixels. Line (b) is OK, but suffers the obvious density problem
and some question as to "where the line really starts." Line (c)
has a severe case of the jaggies, a problem inherent in any raster system
and particularly distracting for lines nearly horizontal or vertical.

**Figure 6.3**

The "fighting constraints" problem. Each line of the star was generated by duplicating the horizontal line and rotating it by n × 72°, where n = 1, ..., 4. Then, with "Snap to grid" in force, an attempt was made to drag them into the correct orientation. This was the best we could do. What went wrong?

*Straight Line Algorithms*

Consider the following concrete problem: Your new, raster-screen computer has only the most primitive graphics function, point(x,y), and your job is to write a line function, line(x1,y1,x2,y2). Assume, further, that you want to follow the design criteria given above as far as possible, and that the pixels on your machine have a value of either 1 (ON) or 0 (OFF), i.e., no gray scale is supported. How will you proceed?

Your first instinct might be to use the two points to compute
the parameters of the slope-intercept form of a line, *y = m x + b,*
and then simply to step along the *x* value of pixels, computing *y*
at each point. By rounding the *y* value you would get the (*x,y*)
pixel pair closest to the line and send it to point(x,y).
What is wrong with this approach?

While this method makes good sense conceptually, there are two serious problems, one mathematical and one computational. The mathematical problem is seen most clearly in the special case of a vertical line, for which the slope is infinite and *x* cannot be stepped at all. Nearly vertical lines, while plottable, badly violate the *uniform density* criterion. The obvious solution to the mathematical problem is to determine in which angular region the line lies and switch to an *x = f(y)* representation for the 90° sector centered on the *y* axis.

The second problem concerns computational efficiency. Note that each slope-intercept calculation involves both a multiplication and an addition. Additions and condition checking are relatively efficient on most computers, but multiplications are not. Multiplications, particularly on computers not equipped with special co-processors, consume many clock cycles compared to the one or two cycles of an addition. Therefore, to satisfy the *fast* criterion, we should look for a different algorithm.

*Simple DDA Algorithm*

The simple Digital Differential Analyzer (DDA) algorithm follows our instinctive approach started above, but cleverly avoids the two problems indicated. The algorithm can be summarized as follows:

1. Compute the range, the larger of |Δx| and |Δy|.

2. Compute the increments dx = Δx/range and dy = Δy/range, so that the larger increment is exactly one pixel.

3. Plot the pixel nearest the current position, (round(x), round(y)).

4. Increment the position: x = x + dx, y = y + dy.

5. Repeat steps (3) and (4) until range points have been plotted.

Note that steps (1) and (2) are set-up steps performed only once, while steps (3)-(5) form the plotting loop. Below we show a Pascal program, *SimpDDA*, and the output it generates.

*Pascal Program* SimpDDA

```pascal
program SimpDDA;
{Program to implement simple Digital Differential Analyzer}
{for drawing a straight line from (x1,y1) to (x2,y2).}
var
  i, irange, xp, yp: integer;
  x1, y1, x2, y2: real;
  dx, dy, x, y, range: real;
begin
  {Query the user for two points}
  writeln('Simple DDA Program');
  writeln('Input point 1 (x1,y1):');
  readln(x1, y1);
  writeln('Input point 2 (x2,y2):');
  readln(x2, y2);
  range := abs(x2 - x1);
  if abs(y2 - y1) > range then
    range := abs(y2 - y1);
  dx := (x2 - x1) / range;
  dy := (y2 - y1) / range;
  x := x1;
  y := y1;
  showDrawing;
  irange := round(range);
  for i := 1 to irange do
    begin
      xp := round(x);
      yp := round(y);
      moveto(xp, yp);
      lineto(xp, yp);
      x := x + dx;
      y := y + dy
    end;
end.
```

**Figure 6.4**

Output of SimpDDA program. User control dialog takes place in the Text window and the graphical output in the Drawing window. Note the jaggies.

This program does, indeed, solve the problem. The *point(x,y)*
function is implemented through the *moveto* + *lineto* calls.
The only other differences from the algorithmic version are the *showDrawing*
call to open the Drawing window and the statements converting real to integer
numbers.

A variation of the simple DDA, called the *symmetric DDA*, scales the range by powers of 2 to calculate the values of *dx* and *dy*. Since division by a power of 2 can be done by a right-shift operation (very efficient in machine language), the only two divide operations in this algorithm can be replaced by shift operations, further improving its efficiency.
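A sketch of this idea in C, under the assumption of 16.16 fixed-point arithmetic (the function and helper names are illustrative, not from the text; `set_pixel` is a hypothetical recorder standing in for a point-plotting primitive):

```c
/* Hypothetical stand-in for a point-plotting primitive. */
static int px[512], py[512], np = 0;
static void set_pixel(int x, int y) { px[np] = x; py[np] = y; np++; }

/* Symmetric DDA sketch in 16.16 fixed point.  The range is rounded up
   to a power of two, 2^shift, so both divides become right shifts. */
void sym_dda_line(int x1, int y1, int x2, int y2)
{
    int range = (x2 > x1) ? x2 - x1 : x1 - x2;
    int yr    = (y2 > y1) ? y2 - y1 : y1 - y2;
    if (yr > range) range = yr;
    int shift = 0;
    while ((1 << shift) < range) shift++;    /* 2^shift >= range */

    /* dx = (x2-x1)/2^shift in 16.16 fixed point: the left shift
       converts to fixed point, the right shift is the divide.
       (Right-shifting negatives is arithmetic on typical targets.) */
    long dx = ((long)(x2 - x1) << 16) >> shift;
    long dy = ((long)(y2 - y1) << 16) >> shift;
    long x = ((long)x1 << 16) + (1L << 15);  /* +0.5 so truncation rounds */
    long y = ((long)y1 << 16) + (1L << 15);
    for (int i = 0; i <= (1 << shift); i++) { /* pixels repeat if range < 2^shift */
        set_pixel((int)(x >> 16), (int)(y >> 16));
        x += dx;
        y += dy;
    }
}
```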

*Bresenham's Algorithm*

While registers in the DDA kept track of the true values of (x,y) as the line was being plotted, Bresenham's algorithm uses only the *error* between the true values and the pixel locations. Basically, the algorithm determines along which axis the point plotter will move most rapidly and then plots a point at each pixel along this direction. Along the slower-moving axis it keeps track, via an error signal, of the difference between the true line position and the current pixel position. When this error signal becomes positive, the current pixel position is incremented by one, and the error signal is reduced by one. For a line with 0 ≤ Δy ≤ Δx, the algorithm can be summarized as:

1. Set (x, y) to the starting pixel, (x1, y1).

2. Initialize the error signal, err = Δy/Δx − 1/2.

3. Plot the point (x, y) and update the error, err = err + Δy/Δx.

4. If err > 0, increment y by one and reduce the error by one, err = err − 1.

5. Increment x by one and repeat from step 3 until x passes x2.

As presented, one might expect this algorithm to suffer a performance penalty from the division in step 2 and (even worse) step 3. However, Bresenham noted that the test for incrementing the y pixel location involves only the *sign* of the error signal, not its magnitude. We can therefore multiply *err* by the positive constant 2Δx to obtain the equivalent error signal, *errp*, defined as:

errp = 2Δx · err.

The old statements in the algorithm are then replaced by the new, division-free statements:

errp = 2Δy − Δx (initialization, replacing err = Δy/Δx − 1/2)

errp = errp + 2Δy (error update, replacing err = err + Δy/Δx)

errp = errp − 2Δx (error reduction, replacing err = err − 1)

Next, we list a Pascal program that implements Bresenham's algorithm.

*Pascal Program* Bresenham

```pascal
program Bresenham;
{Program to draw a straight line from (x1,y1) to (x2,y2)}
{using Bresenham's Algorithm.}
var
  i, irange, xp, yp, dxs, dys: integer;
  x1, y1, x2, y2: real;
  dx, dy, range: real;
  errp: real;
  axis: char;

procedure point (x, y: integer);
{Procedure to plot point at (x,y)}
begin
  moveto(x, y);
  lineto(x, y);
end;

begin
  {Query the user for two points}
  writeln('Bresenham''s Straight-Line Algorithm');
  writeln('Input point 1 (x1,y1):');
  readln(x1, y1);
  writeln('Input point 2 (x2,y2):');
  readln(x2, y2);
  range := abs(x2 - x1);
  axis := 'x';
  {Test for axis of more rapid motion}
  if abs(y2 - y1) > range then
    begin
      range := abs(y2 - y1);
      axis := 'y';
    end;
  irange := round(range);
  dx := (x2 - x1);
  dy := (y2 - y1);
  {Initialize scaled error signal for the fast axis}
  if axis = 'x' then
    errp := 2 * abs(dy) - abs(dx)
  else
    errp := 2 * abs(dx) - abs(dy);
  dxs := 1;
  {Test for direction of x motion}
  if dx < 0 then
    dxs := -1;
  dys := 1;
  {Test for direction of y motion}
  if dy < 0 then
    dys := -1;
  xp := round(x1);
  yp := round(y1);
  showDrawing;
  case axis of
    'x': {This part steps along x axis}
      begin
        for i := 1 to irange do
          begin
            point(xp, yp);
            if errp > 0 then
              begin
                yp := yp + dys;
                errp := errp - 2 * dx * dxs
              end;
            xp := xp + dxs;
            errp := errp + 2 * dy * dys;
          end;
      end;
    'y': {This part steps along y axis}
      begin
        for i := 1 to irange do
          begin
            point(xp, yp);
            if errp > 0 then
              begin
                xp := xp + dxs;
                errp := errp - 2 * dy * dys
              end;
            yp := yp + dys;
            errp := errp + 2 * dx * dxs;
          end;
      end;
  end;
end.
```

The output of program Bresenham is shown in Figure 6.5.

**Figure 6.5**

Output of Bresenham program. Note the similarity to the output of SimpDDA.

Several comments are in order on program *Bresenham*. Two extensions to the algorithm listed were necessary to generalize the program. The first involves testing for the axis of more rapid motion and using the result of this test in a case statement to step along either *x* pixels or *y* pixels. The second recognizes that x2 may be less than x1. To handle the direction of the resulting motion, we test the sign of Δx and set a sign flag, *dxs*, accordingly. These two extensions generalize the program so that it correctly handles lines of any orientation pointing in any direction.
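For comparison, the standard all-octant formulation of Bresenham's algorithm is often written with a single loop and a `steep` flag instead of a two-branch case statement. A sketch in C, using only integer arithmetic (`set_pixel` is a hypothetical stand-in that records the plotted pixels):

```c
/* Hypothetical stand-in for a point-plotting primitive. */
static int px[512], py[512], np = 0;
static void set_pixel(int x, int y) { px[np] = x; py[np] = y; np++; }

/* Integer-only Bresenham for any octant: take a unit step along the
   fast axis every iteration; step the slow axis whenever the scaled
   error signal turns positive. */
void bresenham(int x1, int y1, int x2, int y2)
{
    int dx = (x2 > x1) ? x2 - x1 : x1 - x2;
    int dy = (y2 > y1) ? y2 - y1 : y1 - y2;
    int sx = (x1 < x2) ? 1 : -1;
    int sy = (y1 < y2) ? 1 : -1;
    int steep = dy > dx;                         /* is y the fast axis? */
    if (steep) { int t = dx; dx = dy; dy = t; }  /* dx = fast, dy = slow */
    int errp = 2 * dy - dx;                      /* scaled error signal  */
    for (int i = 0; i <= dx; i++) {
        set_pixel(x1, y1);
        if (errp > 0) {                          /* slow axis advances */
            if (steep) x1 += sx; else y1 += sy;
            errp -= 2 * dx;
        }
        if (steep) y1 += sy; else x1 += sx;      /* fast axis always advances */
        errp += 2 * dy;
    }
}
```

Calling `bresenham(0, 0, 5, 2)` plots the six pixels (0,0), (1,0), (2,1), (3,1), (4,2), (5,2).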

Note, however, that both the simple DDA and Bresenham's
algorithms suffer from severe cases of the jaggies. This problem is inherent
in drawing pixel-wide lines on B/W raster systems. A more scientific name
for the jaggies problem is the *aliasing* problem. That is, the true
mathematical line is represented by short horizontal and vertical "alias"
segments. Raster displays with gray-scale or pixel shape capabilities can
minimize the alias problem by various *antialiasing* techniques.

*Antialiasing Algorithms*

We shall use two approaches to minimize the aliasing problem.
The first involves the *shape* of pixels and the second, their *intensity*.
The principle on which the pixel shape antialiasing algorithm is based is
illustrated in Figure 6.6.

Though this is a relatively crude antialiasing technique, it has the advantage of working on bi-level monitors (of course, at the expense of line width resolution). The program, Shape_AntiAlias, implements this algorithm.

*Pascal Program* Shape_AntiAlias

```pascal
program Shape_AntiAlias;
{Program to implement pixel shape antialiasing}
{in a DDA line generator.}
var
  dx, dy, steps, k: integer;
  x1, y1, x2, y2, xp, yp: integer;
  sx, sy, x, y, m: real;
begin
  writeln('Antialias DDA program');
  writeln('Input (x1,y1): ');
  readln(x1, y1);
  writeln('Input (x2,y2): ');
  readln(x2, y2);
  showDrawing;
  {Calculate appropriate differential.}
  dx := (x2 - x1);
  dy := (y2 - y1);
  m := abs(dy) / (abs(dx) + 1);
  {Set antialiasing pen size, depending on slope.}
  if m < 0.5 then
    pensize(1, 2)
  else if m < 2.0 then
    pensize(2, 2)
  else
    pensize(2, 1);
  if abs(dx) > abs(dy) then
    steps := abs(dx)
  else
    steps := abs(dy);
  if steps = 0 then
    steps := 1;   {Guard against coincident end points.}
  sx := (dx / steps);
  sy := (dy / steps);
  {Initialize and draw first point.}
  x := x1;
  y := y1;
  drawline(x1, y1, x1, y1);
  {Loop through remaining steps.}
  for k := 1 to steps do
    begin
      x := x + sx;
      y := y + sy;
      xp := round(x);
      yp := round(y);
      drawline(xp, yp, xp, yp);
    end;
end.
```

In Figure 6.7 we show two lines, each with and without shape antialiasing. The results produced by this simple algorithm are surprisingly good.

The second approach to antialiasing is through *shading*.
The shade antialias algorithm assumes that a mathematical line is represented
by a long, thin rectangle of the same length as the line and *n* pixels
wide. Any pixels totally inside this box are given a maximum shading of
S = 1. A pixel lying partially within the box is given a shading proportional
to the area inside the box. For instance, if only one-quarter of the pixel
was within the box, the shading would be set to S = 0.25. Figure 6.8 illustrates
this shading algorithm.

**Figure 6.6**

Pixel shape antialiasing technique. By using "super pixels" of shape 2 × 1, 2 × 2, or 1 × 2 regular pixels, depending on the line direction, the resulting line is heavier but appears smoother. Panel (a) shows the angular regions for each shape. Panel (b) shows two lines and their superpixel representation.

In Figure 6.9 the results of applying this algorithm to two lines are demonstrated, along with a 400% magnification of one section of the figure.

As can be seen from Figure 6.9, shade antialiasing is effective in reducing aliasing effects and improving the appearance of lines. The price paid for this enhanced appearance is a broadening and blurring of the line. Line broadening is a problem only for lines about one pixel wide and is barely noticeable for lines three or more pixels wide. Note also that this technique is not available on bi-level (B/W) screens. It can be emulated on a B/W screen by using superpixels in which the number of real pixels turned on is proportional to the shading.


**Figure 6.7**

Two lines with shape antialiasing off (left) and on (right). All four figures use "superpixels," 2 × 2 regular pixels for (a) and (c), and shaped as in Figure 6.6a in (b) and (d).


**Figure 6.8**

Shade antialiasing algorithm. A rectangle of finite width is drawn circumscribing the mathematical line. Each pixel even partially contained within the rectangle is shaded in proportion to that fraction of its area within the rectangle.


**Figure 6.9**

Application of shade antialiasing. (a) Two lines, each with shade antialiasing on and off. (b) Sub-window of (a) magnified 400% to show detail.

Windows and Viewports - 2D

All of the terms and definitions of windows, viewports,
and clipping defined in Chapter 4 are easily generalized to 2D. A *window*
is that portion of object space selected for visualization. Rectangular
windows are specified by the *world* *coordinates*, WxL, WxR,
WyT, and WyB. That portion of the display device on which the window is
to appear is called the *viewport*. A rectangular viewport is defined
by the parameters, VxL, VxR, VyT, and VyB, generally specified in *normalized*
*device* *coordinates*, NDC, ranging from zero to one. The process
of mapping world coordinates from the scene onto the viewport is called
the *viewing transformation*. Matrix composition may be used to derive
a simple and elegant representation for this transformation.

The Viewing Transformation

The viewing transformation is illustrated in Figure 6.10 in which a window on a world coordinate scene is mapped onto a viewport on a display screen.

The equations of the viewing transformation are relatively
easy to derive with the help of homogeneous coordinates and composed matrices.
The three-step algorithm may be written as:

1. Transform the lower left-hand corner of the window to the origin.

2. Scale the window to match the viewport.

3. Transform the lower left-hand corner of the viewport to its desired location.

Written in terms of concatenated matrices, this algorithm becomes:
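The composed matrix can be reconstructed from the three steps: translate the window corner to the origin, scale, then translate to the viewport corner. In the window/viewport notation above (reading right to left), a reconstruction consistent with those steps is:

```latex
M_v \;=\; T(V_{xL},\,V_{yB})\; S(s_x,\,s_y)\; T(-W_{xL},\,-W_{yB}),
\qquad
s_x = \frac{V_{xR}-V_{xL}}{W_{xR}-W_{xL}}, \quad
s_y = \frac{V_{yT}-V_{yB}}{W_{yT}-W_{yB}}
```

Applying Mv to a homogeneous world point [x, y, 1]ᵀ yields its viewport coordinates.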

**Figure 6.10**

The Viewing Transformation. The transformation, *Mv*,
maps a window from the world coordinate scene (top view) onto a viewport
on the screen (bottom view).
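In code, the composed viewing transformation collapses to one scale and one offset per axis. A minimal sketch (parameter names follow the text's window/viewport notation; this is not the book's listing):

```c
/* Map the world point (wx, wy) from the window [wxl,wxr] x [wyb,wyt]
   to the viewport [vxl,vxr] x [vyb,vyt].  This is the composed
   viewing transformation reduced to scale-plus-offset per axis. */
void viewing_transform(double wx, double wy,
                       double wxl, double wxr, double wyb, double wyt,
                       double vxl, double vxr, double vyb, double vyt,
                       double *vx, double *vy)
{
    double sx = (vxr - vxl) / (wxr - wxl);   /* x scale factor */
    double sy = (vyt - vyb) / (wyt - wyb);   /* y scale factor */
    *vx = vxl + (wx - wxl) * sx;
    *vy = vyb + (wy - wyb) * sy;
}
```

For example, mapping the window [0,10] × [0,10] onto the NDC viewport [0,1] × [0,0.5] sends the point (5, 10) to (0.5, 0.5).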

Clipping in 2D

An important element in the viewing pipeline is the process of clipping, that is, discarding portions of the scene outside of the window. This process is most efficient when performed at the earliest possible point in the viewing pipeline, since then objects and portions of objects not appearing in the final viewport need not undergo the transformation calculations.

There are two primary approaches to clipping in 2D. First
is the very general and elegant line clipping algorithm of Cohen and Sutherland.
Second is the more object-oriented polygon clipping algorithm of Sutherland
and Hodgman. To quickly appreciate the difference between these two algorithms,
consider Figure 6.11.

Line Clipping

Cohen and Sutherland developed a simple, recursive procedure
for reducing all lines to the two trivial cases: *in* or *out*.
The algorithm for this procedure may be written:

1. Classify_lines

2. Procedure classify_lines

       while lines remain
           if trivially in, plot line and delete from list
           if trivially out, discard line
           if nontrivial,
               split at window edge
               discard outer segment
               send inner segment to classify_lines

**Figure 6.11**

Distinction between line clipping and polygon clipping.
*Line clipping* results shown at top could equally well represent clipping
the right or left figure. *Polygon clipping* shown at bottom uniquely
distinguishes between the two possibilities by *edge coding*, thus
removing the line clipping ambiguity.

Figure 6.12 illustrates how lines are classified by this algorithm.

**Figure 6.12**

Classification scheme for Cohen-Sutherland line clipping
algorithm.

Trivially in: Segment ab

Trivially out: Segments cd, ij

Non-trivial: Segments ef, gh, kl

Segments totally above, below, to the right or left of the window are trivially out.

The second major contribution by Cohen and Sutherland was
the efficient test that they developed for determining the trivially in
and trivially out cases. The end points of each line are classified by the
four-bit code:

TBRL

where

T = 1 if point is above top of window, 0 otherwise

B = 1 if point is below bottom of window, 0 otherwise

R = 1 if point is right of window, 0 otherwise

L = 1 if point is left of window, 0 otherwise.

The nine regions delimited by the extended window boundaries are then identified by the binary code shown in Figure 6.13.

A line with end-points P1 and P2 can be coded by C1 and
C2, respectively, where:

C1 = T1B1R1L1

C2 = T2B2R2L2.

The tests for a line being trivially in or trivially out may then be stated:

Trivially in: (C1 = 0000) AND (C2 = 0000)

Trivially out: (C1 AND C2) <> 0000.
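These bit tests translate directly into machine operations. A sketch in C with the TBRL bit assignments made explicit (the names and window-bound parameters are illustrative; the comparisons follow screen coordinates, where the top edge has the smaller y value, as in the program below):

```c
/* Cohen-Sutherland outcodes: T, B, R, L bits as defined in the text. */
enum { T_BIT = 8, B_BIT = 4, R_BIT = 2, L_BIT = 1 };

int outcode(double x, double y,
            double left, double top, double right, double bot)
{
    int c = 0;
    if (y < top)   c |= T_BIT;   /* above top (screen y grows downward) */
    if (y > bot)   c |= B_BIT;   /* below bottom */
    if (x > right) c |= R_BIT;   /* right of window */
    if (x < left)  c |= L_BIT;   /* left of window */
    return c;
}

/* Trivially in:  both codes are zero.
   Trivially out: the bitwise AND of the codes is nonzero. */
int trivially_in(int c1, int c2)  { return (c1 | c2) == 0; }
int trivially_out(int c1, int c2) { return (c1 & c2) != 0; }
```

A segment with both end points above and left of the window, for instance, has both codes equal to T|L, so their AND is nonzero and the segment is trivially rejected.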

The following program, Line_Clip, implements this recursive algorithm.

**Figure 6.13**

Cohen-Sutherland classification regions and binary codes for line end points.

*Pascal Program* Line_Clip

```pascal
program Line_Clip;
{Program to use the Cohen-Sutherland algorithm to}
{clip an arbitrary set of lines.}
{Lines are represented as x1,y1,x2,y2 in a text file.}
const {This sets size of window.}
  top = 125;
  left = 200;
  bot = 225;
  right = 350;
type
  PointArr = array[1..2] of integer;
  CodeFour = (T, B, R, L);
  CodeType = set of CodeFour;
  PointType = record
    Point: PointArr;
    Code: CodeType;
  end;
  LineType = array[1..2] of PointType;
  ClassType = (Tin, Tout, neither);
var
  line: LineType;
  window: rect;
  x1, y1, x2, y2: integer;
  DataFile: text;

{****************** draw_window *********************}
procedure draw_window;
begin
  pensize(2, 2);
  setRect(window, left, top, right + 3, bot + 3);
  frameRect(window);
end;

{********************* Code **************************}
procedure Code (var P: PointType);
{This procedure tests point P to see if it is in the window.}
{If in, the TBRL code is returned empty.}
begin
  P.Code := [];
  if P.Point[1] < left then
    P.Code := P.Code + [L]
  else if P.Point[1] > right then
    P.Code := P.Code + [R];
  if P.Point[2] < top then
    P.Code := P.Code + [T]
  else if P.Point[2] > bot then
    P.Code := P.Code + [B];
end;

{****************** classify line ***********************}
function Classify (Line: LineType): ClassType;
{This function classifies all lines into 3 groups:}
{  Tin     = Trivially in}
{  Tout    = Trivially out}
{  neither = Neither trivially in nor out.}
begin
  if (Line[1].Code = []) and (Line[2].Code = []) then
    Classify := Tin        {Line is in if both code sets are empty.}
  else if (Line[1].Code * Line[2].Code <> []) then
    Classify := Tout       {Line is trivially out if the intersection}
  else                     {of the code sets is NOT empty.}
    Classify := neither;
end;

{****************** clip_line *************************}
procedure clip_line (Line: LineType);
{This routine recurs until the line is reduced to}
{trivially in or trivially out.}
{It plots those "in" and ignores those "out".}
var
  P1, P2: PointType;
  class: ClassType;
  codeOut: CodeType;
  xp, yp: real;
  done: boolean;
  xt, yt: integer;
begin
  repeat {until done --> totally trivial}
    done := false;
    P1 := Line[1];
    P2 := Line[2];
    Code(P1);
    Code(P2);
    Line[1] := P1;
    Line[2] := P2;
    x1 := P1.Point[1];
    y1 := P1.Point[2];
    x2 := P2.Point[1];
    y2 := P2.Point[2];
    class := Classify(Line);
    case class of
      Tout: {Trivially out case}
        done := true;
      Tin: {Trivially in case}
        begin
          PenSize(3, 3);    {Draw clipped lines bold.}
          drawLine(x1, y1, x2, y2);
          done := true;
          PenSize(1, 1);
        end;
      neither: {Neither case - requires segmentation}
        begin
          if P1.Code = [] then
            codeOut := P2.Code
          else
            codeOut := P1.Code;
          if L in codeOut then
            begin {Calculate intersection with left edge.}
              yp := y1 + (y2 - y1) * (left - x1) / (x2 - x1);
              xp := left;
            end
          else if R in codeOut then
            begin {Calculate intersection with right edge.}
              yp := y1 + (y2 - y1) * (right - x1) / (x2 - x1);
              xp := right;
            end
          else if T in codeOut then
            begin {Calculate intersection with top edge.}
              xp := x1 + (x2 - x1) * (top - y1) / (y2 - y1);
              yp := top;
            end
          else if B in codeOut then
            begin {Calculate intersection with bottom edge.}
              xp := x1 + (x2 - x1) * (bot - y1) / (y2 - y1);
              yp := bot;
            end;
          xt := trunc(xp);
          yt := trunc(yp);
          if codeOut = P2.Code then
            begin {P2 is out: reduce line to the "in" point}
              Line[1].Point[1] := x1; {plus the intersection point.}
              Line[1].Point[2] := y1;
              Line[2].Point[1] := xt;
              Line[2].Point[2] := yt;
              clip_line(Line); {Recur here if P2 out.}
              done := true;    {Segment fully handled by recursion.}
            end
          else
            begin
              Line[1].Point[1] := xt;
              Line[1].Point[2] := yt;
              Line[2].Point[1] := x2;
              Line[2].Point[2] := y2;
              clip_line(Line); {Or recur here if P1 out.}
              done := true;    {Segment fully handled by recursion.}
            end;
        end;
    end;
  until done;
end;

{******************* Main Program **********************}
begin {Start by having user select a file of lines.}
  ReSet(DataFile, OldFileName('Input Line File?'));
  draw_window;
  repeat {for all lines in file}
    ReadLn(DataFile, x1, y1, x2, y2);
    pensize(1, 1); {Draw the full line as a thin, 1-pixel line.}
    DrawLine(x1, y1, x2, y2);
    {Load line record to send to clipper.}
    line[1].Point[1] := x1;
    line[1].Point[2] := y1;
    line[2].Point[1] := x2;
    line[2].Point[2] := y2;
    clip_line(line);
  until eof(DataFile);
end.
```

The results of running this program are shown in Figure 6.14.

Program Line_Clip demonstrates several features of interest:

1. It is completely general. It can clip any number of lines and does so without the use of arrays for internal storage. The lines can occur in any order and need not be connected in polygon form. The set of lines to be clipped is selected by the user at run time.

2. It uses recursion. This is a natural control structure for this application, since the problem requires reducing a complex problem to a simpler one with each recursive call until a trivial case is reached.

3. It uses a simple, convenient representation of lines as a file of (x1,y1,x2,y2) end-point coordinates.

4. It uses abstract data types, LineType and PointType, to simplify passing these graphical objects as arguments to the worker routines.

**Figure 6.14**

Results of Cohen-Sutherland line-clipping algorithm. Original set of lines shown as one-pixel figure, clipping-window as a two-pixel wide figure and clipped lines as three-pixel wide lines.

Polygon Clipping

As Figures 6.11 and 6.14 indicate, the coherence implicit in graphical objects such as polygons is lost in simple line clipping. To resolve the ambiguities inherent in line-clipping closed objects, it is useful to provide visual cues for the viewer. Figure 6.11 suggests two such cuing schemes. First is the *edge coding* indicated in the bottom figure. Second, and even more effective, is *shading coding*, in which the shade or color of the object is preserved through the clipping process. Shading coding is possible only on displays supporting shading/color, whereas edge coding works on any display supporting graphics.

For graphical objects represented as polygons, edge coding
is accomplished by *polygon clipping*. The best known polygon clipping
algorithm is that developed by Sutherland and Hodgman. This elegant algorithm
has the virtues of generality, extensibility, and conceptual simplicity.
It is not restricted to rectangular windows, but rather is easily applicable
to irregular convex windows. Not restricted to 2D windows, it is readily
extended to 3D clipping volumes.

In the introductory quote the algorithm's authors themselves express chagrin that it took so long for them to discover the polygon clipping algorithm.

The basic idea in the Sutherland-Hodgman algorithm is that an *n*-sided polygon is represented by a set of *n* input vertices which are clipped successively against each extended edge of the clipping window. For each edge, two tests are applied to each vertex. First, does the line segment defined by the vertex and its predecessor intersect the edge? If so, add the intersection point to the output vertex list defining the polygon. Second, is the vertex itself outside the edge under test? If so, ignore it; if not, add it to the output vertex list. The output vertex list resulting from testing against edge *i* becomes the input vertex list for testing against edge *i+1*. This re-entrant algorithm is iterated until the dynamic vertex list has been tested against all edges. The resulting output vertex list corresponds to the clipped polygon, with edge coding an automatic side effect.

The Sutherland-Hodgman algorithm may be formally specified
as follows:

1. Repeat for all (extended) edges of the window

       1.1 Repeat for all vertices on the current vertex list
           1.1.1 If the line between vertex i and vertex i−1 intersects the edge,
                 add the intersection point to the output list
           1.1.2 If vertex i is "inside" the edge,
                 add vertex i to the output list
       1.2 Set current vertex list = output list

The only additional complications in the algorithm arise from handling the end points of the polygon. In order for the first vertex, i = 1, to have a predecessor, a new vertex, i = 0, is defined and given the coordinates of the last vertex, i = n. Since we also want to check the crossing of the line from vertex n back to the starting vertex, i = 1, we extend the polygon "forward" to vertex n+1 and assign it the coordinates of vertex 1. Thus, by always maintaining the dynamic polygon as an n-vertex figure but appending vertices 0 and n+1 before re-entering the next edge clip, we automatically handle the problematic end points of the polygon without having to resort to some awkward flag system.
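A single pass of the algorithm, clipping the current vertex list against one edge, can be sketched in C. Here modular wrap-around of the vertex index replaces the appended vertices 0 and n+1, and the clip is against the right edge, keeping x ≤ right; the function name is illustrative:

```c
/* Clip a polygon of n vertices against the edge x = right, keeping
   the half-plane x <= right.  One Sutherland-Hodgman pass: for each
   vertex, emit the edge intersection if the segment from the previous
   vertex crosses the boundary, then emit the vertex itself if it is
   inside.  Returns the output vertex count. */
int clip_right(const double x[], const double y[], int n,
               double right, double xo[], double yo[])
{
    int j = 0;
    for (int i = 0; i < n; i++) {
        int prev = (i + n - 1) % n;          /* wrap-around predecessor */
        int in_prev = x[prev] <= right;
        int in_cur  = x[i]    <= right;
        if (in_prev != in_cur) {             /* segment crosses the edge */
            double t = (right - x[prev]) / (x[i] - x[prev]);
            xo[j] = right;
            yo[j] = y[prev] + t * (y[i] - y[prev]);
            j++;
        }
        if (in_cur) {                        /* vertex itself is inside */
            xo[j] = x[i];
            yo[j] = y[i];
            j++;
        }
    }
    return j;
}
```

A full Sutherland-Hodgman clipper calls such a routine once per window edge, feeding each output list back in as the next input list. Clipping the square (0,0), (4,0), (4,4), (0,4) at right = 2 yields (0,0), (2,0), (2,4), (0,4).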

*Pascal Program *Poly_Clip

programPoly_Clip;

{Program to perform polygon clipping }

{using the Sutherland-Hodgman algorithm. }

const

top = 100;

left = 150;

bot = 200;

right = 300;

var

window, viewPort: rect;

x, y: array[0..100] of real;

xo, yo: array[0..100] of real;

nout, npts, n, j: integer;

DataFile: text;

{************** polygon read routine ****************}

procedureload_object;

begin{Assume n vertices for and n- sided polygon.}

ReSet(DataFile, OldFileName('Input File?'));

n := 0;

repeat {for all vertices in file}

n := n + 1;

ReadLn(DataFile, x[n], y[n]);

xo[n] := x[n];

yo[n] := y[n];

until eof(DataFile);

npts := n;

x[0] := x[n]; {Extend backward by one vertex.}

y[0] := y[n];

x[n + 1] := x[1]; {Extend forward by one vertex.}

y[n + 1] := y[1];

end;

{*****************Plot output polygon *******************}

procedure plot_out;

var

i: integer;

begin

eraseRect(0, 0, 300, 500);

pensize(3, 3);

xo[n + 1] := xo[1];

yo[n + 1] := yo[1];

moveto(round(xo[1]), round(yo[1]));

for i := 2 to (n + 1) do

lineto(round(xo[i]), round(yo[i]));

end;

{******************* draw_window *********************}

procedure draw_window;

begin

pensize(1, 1);

{Use off-sets to accommodate pixel-width of object.}

setRect(window, left - 1, top - 1, right + 4, bot + 4);

frameRect(window);

end;

{*********Procedure to load output into input**********}

procedure reload;

{Re-entrant vertex array loader}

var

i: integer;

begin

for i := 1 to n do

begin{Set new vertex array to old output array}

x[i] := xo[i];

y[i] := yo[i];

end;

x[n + 1] := x[1]; {Extend polygon 1 vertex forward}

y[n + 1] := y[1];

x[0] := x[n]; {Extend polygon 1 vertex backward}

y[0] := y[n];

end;

{******************** edge_clip ***********************}

procedure edge_clip;

{This procedure clips polygon sequentially }

{on each output window edge and plots the results.}

var

i: integer;

begin

n := npts;

{Clip against right side, and plot clipped polygon.}

j := 0; {Dynamic "in" vertex counter}

for i := 1 to (n + 1) do

begin

if ((right - x[i]) * (right - x[i - 1]) < 0) then {Intersect test}

begin

j := j + 1;

yo[j] := y[i-1]+(y[i]-y[i-1])*(right- x[i -1]) / (x[i] - x[i -1]);

xo[j] := right;

end;

if x[i] < right then

{Test if point itself is in}

begin {If so, add to output list}

j := j + 1;

xo[j] := x[i];

yo[j] := y[i];

end;

end;

n := j;

plot_out;

draw_window;

reload;

{Clip against top, and plot clipped polygon.}

j := 0;

for i := 1 to (n + 1) do

begin

if (top - y[i]) * (top - y[i - 1]) < 0 then {Intersect test}

begin

j := j + 1;

xo[j] := x[i-1]+(x[i]-x[i-1])*(top- y[i -1]) / (y[i] - y[i - 1]);

yo[j] := top;

end;

if y[i] > top then {Test if point itself is in}

begin {If so, add to output list}

j := j + 1;

xo[j] := x[i];

yo[j] := y[i];

end;

end;

n := j;

plot_out;

draw_window;

reload;

{Clip against left edge, and plot clipped polygon.}

j := 0;

for i := 1 to (n + 1) do

begin

if (left - x[i]) * (left - x[i - 1]) < 0 then {Intersect test}

begin

j := j + 1;

yo[j] := y[i-1] + (y[i]-y[i-1])*(left- x[i-1])/(x[i] - x[i - 1]);

xo[j] := left;

end;

if x[i] > left then {Test if point itself is in}

begin {If so, add to output list}

j := j + 1;

xo[j] := x[i];

yo[j] := y[i];

end;

end;

n := j;

plot_out;

draw_window;

reload;

{Clip against bottom, and plot final clipped polygon.}

j := 0;

for i := 1 to (n + 1) do

begin

if (bot - y[i]) * (bot - y[i - 1]) < 0 then {Intersect test}

begin

j := j + 1;

xo[j] := x[i-1]+(x[i]-x[i-1])*(bot- y[i -1])/(y[i] - y[i - 1]);

yo[j] := bot;

end;

if y[i] < bot then {Test if point itself is in}

begin {If so, add to output list}

j := j + 1;

xo[j] := x[i];

yo[j] := y[i];

end;

end;

n := j;

plot_out;

draw_window;

reload;

end;

{******************* Main Program *********************}

begin

load_object;

{Open drawing window}

setRect(viewPort, 0, 0, 400, 300); setDrawingRect(viewPort);

showDrawing;

plot_out;

draw_window;

edge_clip;

end.

Output from Poly_Clip is shown in Figure 6.15.

A number of features distinguish the Sutherland-Hodgman
polygon clipping algorithm.

The algorithm achieves its primary objective: maintenance of polygon coherence by edge coding. This edge coding even extends to tying together two or more discontinuous sub-polygons of the same parent within the clipping window as a degenerate polygon, as shown in Figure 6.16.

The algorithm uses a dynamic, vertex-based representation of the polygon as the basic data structure. Thus the ten-vertex object in Figure 6.15 expands to a twelve-vertex object after clipping on the right edge and a fifteen-vertex object when clipping is finished. Had the window been interior to the object, the ten-vertex polygon would have reduced to a four-vertex polygon the size of the window.

The algorithm is re-entrant. That is, the output of one clipping process becomes the input for the next. This feature makes it relatively straightforward to recode the algorithm in recursive form. Once a vertex has made the output list of one edge clip, it immediately becomes valid input for the next edge clip. The appropriate recursive formulation would significantly reduce the need for intermediate storage of the vertex points.
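The re-entrant character can be made explicit by a pipeline of clip stages, each forwarding every output vertex straight to the next stage, so that no intermediate vertex list need be stored. The class names and the (inside, intersect) edge interface below are assumptions for illustration, not the book's code:

```python
class ClipStage:
    """One window edge; forwards each output vertex straight to `sink`."""
    def __init__(self, inside, intersect, sink):
        self.inside, self.intersect, self.sink = inside, intersect, sink
        self.first = None          # first vertex, kept to close the polygon
        self.prev = None

    def put(self, v):
        if self.first is None:
            self.first = v
        elif self.inside(v) != self.inside(self.prev):
            self.sink.put(self.intersect(self.prev, v))   # edge crossing
        if self.inside(v):
            self.sink.put(v)       # vertex is "in": emit immediately
        self.prev = v

    def close(self):
        # Treat the closing segment prev -> first like any other segment.
        if self.first is not None and \
           self.inside(self.first) != self.inside(self.prev):
            self.sink.put(self.intersect(self.prev, self.first))
        self.sink.close()

class Collector:
    """Terminal sink gathering the final clipped vertex list."""
    def __init__(self):
        self.out = []
    def put(self, v):
        self.out.append(v)
    def close(self):
        pass
```

Chaining four such stages (right, top, left, bottom) into a final Collector reproduces the edge_clip sequence of the Pascal program without its xo/yo intermediate arrays.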

(a) |
(b) |

(c) |
(d) |

**Figure 6.15**

Poly_Clip output. (a) Original polygon with superimposed clipping window; (b) output after clipping on right edge; (c) output after clipping on top edge; (d) final output after clipping on all four edges.

**Figure 6.16**

Polygon coherence from Sutherland-Hodgman polygon clipping algorithm. The degenerate edge segment connecting the small triangles indicates that they are part of a common figure, the star from Figure 6.11.

Text Clipping

Text-clipping algorithms are classic examples of the compromises
the graphics system designer must make between performance and efficiency.
Ranked in order of decreasing performance and increasing efficiency, three
text-clipping schemes may be listed as:

Pixel level - This level produces the most attractive and intuitive text-clipping effect. The algorithm is simply: display all pixels of all characters out to, but not including, the edge of the window. Since this displays the maximum alphanumeric information, it is the highest-performance algorithm. However, since it requires testing at the pixel level, it is the least efficient.

Character level - This level produces the next best-looking clipped output but is simpler to implement. The algorithm may be stated: draw a box totally enclosing each character and disregard any character whose bounding box lies partially or totally outside of the window.

String level - This level provides the poorest performance but is the simplest to implement. The algorithm is identical to the character-level algorithm with *string* substituted for *character*. If even the last period of the string lies outside of the window, none of the string is displayed.
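The character-level scheme reduces to a bounding-box test per character. A rough sketch, assuming a fixed-pitch font of known cell size and a top-left screen origin (the function name and cell dimensions are invented for illustration):

```python
def clip_string_chars(text, x, y, window, char_w=8, char_h=12):
    """Return the (char, x, y) triples of `text` that survive clipping.

    `window` is (left, top, right, bottom) in screen coordinates with
    y increasing downward; the string starts at (x, y); char_w/char_h
    are the assumed character-cell dimensions.
    """
    left, top, right, bottom = window
    survivors = []
    for i, ch in enumerate(text):
        cx = x + i * char_w                  # bounding box of character i
        if (cx >= left and cx + char_w <= right and
                y >= top and y + char_h <= bottom):
            survivors.append((ch, cx, y))    # box wholly inside: keep it
    return survivors
```

String-level clipping would instead test the single box enclosing the whole string and keep or discard `text` in one step.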

**Figure 6.17**

Levels of text clipping. Pixel level shows all of the text within the window, down to the pixel level. Character level shows only those characters totally within the window. String level shows only strings that are entirely within the window.

These three text-clipping algorithms are illustrated in Figure 6.17.

Two important observations are relevant to the implementation
of text clipping. First, most powerful graphics applications programs that
readers are likely to encounter provide automatic clipping of both text
and graphical objects for all windows and viewports with which they work.
Second, the trend in computer graphics is strongly toward outline fonts
as the standard display and printer text format. Outline fonts consist of
mathematical representations of the character set which, in turn, facilitate
pixel-level clipping appropriate to the resolution of the output device.
Thus, the implementation of text clipping is a task already solved on all
but the most primitive systems.

Bit-mapped vs. Object-Oriented Graphics

Successful commercial graphics applications programs are implemented by combining efficient representation schemes with easy-to-use graphics manipulation tools. Most such implementations may be classified as painting programs or drawing programs using bit-mapped or object-oriented representations, respectively. The distinction between these two graphics modes involves fundamental issues of both representation and implementation and illustrates the tight relationship between these two aspects of graphics. Each mode has its own particular strengths and drawbacks, and the choice of mode depends strongly on the particular application.

To summarize the distinction before examining each mode
in more detail, the following classification scheme is helpful.

Bit-mapped mode - Generally associated with *painting* programs, bit-mapped graphics represents graphical images and patterns by assigning a block of memory for the direct storage of the intensity patterns that are to be displayed on the screen. On bi-level devices, a 1 bit corresponds to a pixel being turned on and a 0 bit to it being off (or vice versa). Implementation then corresponds to mapping this block of memory directly to the screen. Note that all "objectness" of a line, polygon, or brush stroke is lost the moment it is written to the screen (and hence, the bit-mapped memory).

Object-oriented mode - Generally associated with *drawing* and *CAD* programs, object-oriented graphics involves storing the commands that are used to draw the objects on the screen (or some other output device). This generally involves a mathematical representation in terms of lines, rectangles, ovals, Bézier curves, and so on. Implementation of the drawing functions involves rendering through pixels on the display, but the object's mathematical identity is preserved. Note that the *objectness* of object-oriented graphics is maintained throughout the graphics session and preserved in the file storage process.
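The contrast can be made concrete with a toy example, expressed in Python with invented data structures: the same horizontal line stored once as raw pixels and once as a retained drawing command.

```python
# Bit-mapped: a block of memory, one entry per pixel.
bitmap = [[0] * 8 for _ in range(4)]
for x in range(2, 6):
    bitmap[1][x] = 1          # "objectness" is lost: only pixels remain

# Object-oriented: the command itself is stored, to be re-rendered on demand.
display_list = [("line", (2, 1), (5, 1))]

def render(display_list, width, height):
    """Rasterize a display list into a fresh pixel frame."""
    frame = [[0] * width for _ in range(height)]
    for cmd, (x0, y0), (x1, y1) in display_list:
        if cmd == "line" and y0 == y1:       # horizontal-line case only
            for x in range(x0, x1 + 1):
                frame[y0][x] = 1
    return frame
```

Rendering the display list reproduces the bitmap pixel for pixel, but in the object-oriented form the line can still be selected, moved, or rescaled simply by editing its stored command.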

Next, let's examine the features and tools of each graphics mode in more detail and consider the applications best suited for each. Painting and drawing programs both efficiently implement most of the 2D operations presented so far, and their differences illustrate the critical role of representation in the functionality of graphical systems.

Bit-Mapped Graphics - The *Paint*
Mode

The basic concept of raster graphics on which the bit-mapped
representation is built was clearly stated by the philosopher L. Wittgenstein
in 1922.

Let us imagine a white surface with irregular black spots on it. We then say that whatever kind of picture these make, I can always approximate as closely as I wish to the description of it by covering the surface with a sufficiently fine square mesh, and then saying of every square whether it is black or white. In this way I shall have imposed a unified form on the description of the surface. The form is optional, since I could have achieved the same result using a net with a triangular or hexagonal mesh.

The parallel development of vast, low-cost memory for storing
images and high-resolution monitors onto which the images in memory are
easily mapped has been the key to the widespread implementation of bit-mapped
graphics systems. The principles and advantages of bit-mapped computer graphics
had been recognized for several years before Bill Atkinson of Apple Computer
Company introduced his revolutionary *MacPaint* program on the newly
released Macintosh computer in 1984. This happy marriage of bit-mapped graphics
concepts with a toolbox of extremely fast functions for manipulating bit-mapped
regions of the screen defines the *paint mode* of raster graphics that
has become the standard of the industry.

Bit-mapped graphics remains an active area of research
and development. A recent attempt to put bit-mapped graphics on a more formal
mathematical foundation has been published by Fiume.

*Paint Tools*

A typical palette of tools available on most painting programs is shown in Figure 6.18.

**Figure 6.18**

Typical tools available on painting programs (*from Canvas
2.0*).

The functions provided by these icons can be summarized
as:

Lasso - Any (irregular) region of the image may be selected by dragging the cursor around its perimeter in a closed loop. This selected region may then be duplicated, dragged to a new location, saved to the clipboard, or operated on by any other painting functions available on the system.

Marquee - Any rectangular region of the image may be selected by dragging a bounding box around it. This selected region may then be duplicated and dragged to a new location, saved to the clipboard, or operated on by any other painting functions available on the system.

Spray can - When the mouse button is pressed, the spray can sprays pixels on the screen in a pattern centered on the cursor. Both the shape of the pattern outline and the pattern itself may be selected from menus by the user. Just as with a real can of spray paint, the longer the button is pressed, the denser the pattern becomes.

Paint brush - When the mouse button is pressed, the paint brush applies paint in a pen pattern and pen shape, both of which are menu selectable. This provides a flexible "etch-a-sketch" painting tool.

Paint Bucket - By dragging the icon so that the tip of the flowing paint is within a white region surrounded by any outline of pixels and by pressing the mouse button, the user can fill (or "flood") the enclosed region with a menu-selectable fill pattern.
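The operation behind the paint bucket is the classic flood fill. A minimal sketch on a bi-level bitmap, iterative and 4-connected (the function name is an assumption for illustration; a recursive version risks stack overflow on large regions):

```python
def flood_fill(bitmap, x, y, fill=1):
    """Fill the white (0) region containing (x, y) with `fill`."""
    if bitmap[y][x] != 0:
        return                    # seed lies on the boundary pattern
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if (0 <= cy < len(bitmap) and 0 <= cx < len(bitmap[0])
                and bitmap[cy][cx] == 0):
            bitmap[cy][cx] = fill
            # Visit the four orthogonal neighbors.
            stack.extend([(cx + 1, cy), (cx - 1, cy),
                          (cx, cy + 1), (cx, cy - 1)])
```

The fill halts at any non-white pixel, which is why the region must be completely surrounded by an outline of pixels; a single-pixel gap lets the "paint" leak out.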

Pencil - The pencil tool draws a thin (one-pixel) line whenever the mouse button is pushed. It generally operates in an XOR mode, toggling pixel states. In an all-white region it draws a line of black pixels; in regions of black pixels, it draws a white line. With a magnified image, the pencil provides an excellent pixel-level editing tool.

Eraser - The eraser functions precisely as a blackboard or pencil eraser does. Any part of the bit-mapped image under the eraser is erased to white when the mouse button is pressed.

Bitmap Editor - After this icon is selected, the cursor may be used to select an invisible rectangular region on the screen in which bit-maps may be created. It then automatically selects the Paint Brush tool so painting can begin. Holding the cursor down on this icon brings up a menu allowing the user to select pixel densities from 72 dpi to 300 dpi.

Hand - This intuitive icon allows the user to push the bit-mapped image around on the screen. When the mouse button is pressed, the hand "grabs" the image, which then tracks the hand as it is moved about the screen. It serves as a shortcut to the more formal screen-edge elevators for accessing off-screen segments of the image.

Magnifying Glass - After this icon is selected, the cursor turns into a magnifying glass with a symbol in the center. The default value of the symbol is "+", which causes the image to be magnified by a factor of two upon each click of the mouse button. Holding down the shift key changes the symbol to a "-", corresponding to demagnification by a factor of 0.5 upon each click of the mouse.

The icon functions give the user the ability to select
and edit arbitrary segments of the image ranging from the whole image down
to the individual pixels composing it. Included at this level is the translation
operation. By simply dragging a selected segment of the image, it may be
translated to any new location in the active window. Higher-level transformations
such as reflections, shearing, arbitrary rotations, and arbitrary scaling
are performed through menu options. We look at some of these next.

*Paint Menu Operations*

Most painting programs support the following transformations
for whole images or any selected segment of an image:

Rotate (specified in degrees or by dragging a handle)

Flip horizontal (reflection through the y-axis)

Flip vertical (reflection through the x-axis)

Flip both axes (reflection through the origin)

Skew (shearing along either the x- or y-axis)

Distort (the product of x-shear and y-shear)

Scale (either by Sx, Sy, or both)

Perspective (x-dependent y-scaling, or vice versa, to create a 3D illusion)

Invert (change all black pixels to white and vice versa)

Trace edges (draws a line at the interface between black and white regions)
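Several of these transformations are one-liners on a bit-mapped region represented as a list of pixel rows. A sketch in Python (function names invented for illustration):

```python
def flip_horizontal(region):
    """Reflect through the y-axis: reverse each pixel row."""
    return [row[::-1] for row in region]

def flip_vertical(region):
    """Reflect through the x-axis: reverse the order of the rows."""
    return region[::-1]

def invert(region):
    """Toggle every pixel of a bi-level region: black <-> white."""
    return [[1 - p for p in row] for row in region]
```

Flip both axes is simply `flip_horizontal(flip_vertical(region))`, i.e., reflection through the origin of the selected region.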

These menu operations, in conjunction with the image-manipulating icons, give the artist/designer a broad and powerful array of tools for artistic creation. A skilled practitioner with a capable painting program can rapidly create designs and works of art comparable to those created with more traditional media. Note, however, that painting programs are designed for applications requiring free-form expression rather than applications requiring precise numerical accuracy. Such applications require drawing programs.

Next let's study a PICT image and how painting functions
can modify it.

*Paint Example*

Consider the bit-mapped image shown in Figure 6.19 produced by scanning a photograph of Yosemite National Park. The original photo was marred by an unsightly contrail from a passing airplane. Using the photoediting tools available on most paint programs it is a simple task to remove this flaw.

**Figure 6.19**

Bit-mapped image of scene from Yosemite National Park. Note contrail from passing aircraft.

The first step is to use the marquee tool to select an area about one-half inch on a side, next to the contrail, but not containing it. This region is duplicated and then dragged on top of the corresponding section of the contrail, thereby hiding it by a section of clear sky. This process is repeated about four times down the length of the contrail, resulting in the restored image shown in Figure 6.20.

The gradient in sky intensity is very apparent as one moves
up the *y* axis. A less apparent gradient is present in the *x*
direction. Using nearby portions of the clear sky to paint over the contrail
avoids the artifacts introduced by trying to paint over the contrail with
the paintbrush tool.

As an extension of this example, suppose you wished to emphasize the point that glaciers often carved symmetric, U-shaped valleys through the mountains. The geometric transformation tools available on paint programs provide the perfect solution for this task.

Using the restored image of Figure 6.20 as the basis, the first step is to use the marquee tool to select the left-hand half of the image. This is duplicated and the result is flipped horizontally. This mirrored half image is then dragged to the right side of the image, painting over the existing image. The resulting image of a symmetric valley is shown in Figure 6.21.

Although this example is simplistic, it does illustrate
how painting tools may be used to manipulate 2D images. In fact, the ease
with which photographs can be altered with photoediting tools has destroyed
the concept of *photographic evidence*. Electronic photoediting raises
serious issues of journalistic ethics. For instance, should editors authorize
the removal of facial blemishes and wrinkles from their reporters'
photographs of public officials? If photoediting can be used to enhance
a person's appearance, might it not also be used to distort photographs?

*Limitations of Bit-maps*

The primary limitation of bit-mapped graphics is a result of "pixel democracy": all pixels are created equal once they are set in a bit-mapped image. Once a brush stroke or pencil mark has been entered on the image, it becomes a permanent part of the image, overwriting what was there before. The only exception to this rule is the extremely valuable Undo option most painting programs provide for reversing the latest painting operation.

**Figure 6.20**

Yosemite scene with contrail eliminated by photoediting Figure 6.19.

**Figure 6.21**

Results of editing the Figure 6.20 bit-map with the *flip horizontal* tool. This illustrates a symmetric, U-shaped valley.

The price one pays for this pixel democracy is that changes
in portions of an image must be done by processes which frequently are slow,
painstaking, and tedious. The most common image-editing techniques include
*painting over* (a brush with the correct pattern and color is used
to paint over the flaw), *patching over* (a near-by region of the correct
texture and shading is selected, duplicated, and dragged to cover the flaw),
and *pixel editing* (the screen is magnified to the level where each
pixel can be individually modified).

The second limitation of bit-mapped images is that they generally do not support images requiring precision or numerical dimensioning. Therefore, painting programs are not suitable for drafting or CAD applications.

The final limitation we shall discuss here is that most
bit-mapped graphics are device dependent. So, for instance, when a 72-dpi
bit-mapped image from a display screen is printed on a 300-dpi laser printer
or a 1,200- to 2,400-dpi typesetter, the resolution of the image remains
a coarse 72 dpi. As high-quality printers become more readily available,
this problem is becoming increasingly bothersome. To solve it, some painting
programs now support a user-selected pixel resolution of 300 dpi
or more.

Object-Oriented Graphics - The *Draw*
Mode

Object-oriented graphics is the mode most appropriate for
applications requiring accurate detail, numerical dimensioning, and precision.
Such applications include electrical engineering circuit board design, mechanical
engineering part design, civil engineering construction project design,
and architectural design. We designate such object-oriented graphics as
working in the *draw mode*. The following features distinguish the
draw mode from the paint mode.

Objects designed in the draw mode retain their identity. This is the single most important distinguishing feature of object-oriented graphics. Such objects can be selected at any stage in the design, can be transformed by all standard operations independent of all other objects, and can be replicated or deleted at will.

Objects designed in the draw mode may have a hierarchical structure. Primitive objects may be grouped to form more complex objects which in turn may be grouped to form more complex objects, and so on. Draw programs may provide libraries of basic design elements from which the base of the hierarchy may be efficiently constructed.

Objects designed with draw programs have a mathematical representation (as opposed to a bit-mapped representation). This is particularly advantageous at the rendering stage. Since the drawing commands themselves are sent to the output device, the resulting image is rendered with the highest resolution of which the device is capable. The 300 dpi or greater resolution of many hard-copy devices produces objects of much greater resolution than is available on the screen on which they were designed.

Objects designed with draw programs can be easily edited. Lines, rectangles, circles, polygons, and spline curves can be radically altered or fine tuned by clicking on the object to select it and dragging the control points. This permits fluid design strategies and keeps the process reversible.
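The hierarchical structure and retained identity described above can be sketched with two toy classes (invented for illustration, not the API of any real drawing program): a Group is itself an object, so a transformation applied to it propagates recursively to every descendant.

```python
class Line:
    """A primitive object that keeps its mathematical identity."""
    def __init__(self, p0, p1):
        self.p0, self.p1 = p0, p1

    def translate(self, dx, dy):
        self.p0 = (self.p0[0] + dx, self.p0[1] + dy)
        self.p1 = (self.p1[0] + dx, self.p1[1] + dy)

class Group:
    """A complex object composed of primitives and/or other groups."""
    def __init__(self, children):
        self.children = children

    def translate(self, dx, dy):
        for child in self.children:      # recurse down the hierarchy
            child.translate(dx, dy)
```

Grouping is just wrapping objects in a Group; ungrouping exposes the children again. Other transformations (rotate, scale) would propagate the same way.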

Now that we have a general overview of the features characterizing
drawing programs, let's look at some of the specific functions they
provide.

*Draw Functions*

Drawing programs vary greatly in the range and sophistication
of the tools that they provide the designer. The spectrum of programs, in
order of increasing cost and power, include:

Simple drawing programs

Drafting programs

Low-level 2D CAD programs

High-level 2D CAD programs

3D CAD programs

There is no clear line distinguishing each of these classes,
and each release of a new version of a given program often moves it up one
class. However, a basic set of drawing tools that every drawing program
should contain includes:

Basic shapes icons
• line
• rectangle
• rounded-corner rectangle
• oval
• arc

Generalized shapes
• Freeform pen
• Polygon tool
• Bézier tool

Transformations
• Rotate
• Translate
• Differential scaling

Special effects
• Fill (closed objects filled with pattern)
• Invert
• Flip horizontal
• Flip vertical
• Flip both axes

Magnification or zoom

Alignment of objects

Grid tools
• Coordinate readout
• Show grid
• Snap to grid

Hierarchy tools
• Group
• Ungroup

Ordering (layering) tools
• Bring to front
• Send to back
• Shuffle up one
• Shuffle down one
• Send to layer

Constraints are particularly valuable tools for drawing
programs. The *Snap-to-grid* constraint requires that all lines start
and end on grid points. This effectively eliminates the embarrassing 1-2
pixel overshoots or gaps that are inevitable when drawing by hand in free-form
mode.

Another valuable constraint is provided by holding down
special keys when in a particular drawing mode or process. For instance,
a line drawn in constrained mode lies along multiples of 45°. Constrained
rectangles are squares; constrained ovals are circles; and constrained moves
and resizing can occur only in the *x*-axis or *y*-axis directions.
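Both constraints reduce to simple arithmetic on the rubber-band endpoint. A sketch (the function names and the default grid spacing are assumptions for illustration):

```python
import math

def snap_to_grid(x, y, spacing=8):
    """Snap a point to the nearest grid intersection."""
    return (round(x / spacing) * spacing, round(y / spacing) * spacing)

def constrain_45(x0, y0, x1, y1):
    """Snap the free endpoint so the line's angle is a multiple of 45 deg."""
    dx, dy = x1 - x0, y1 - y0
    step = math.pi / 4                              # 45 degrees in radians
    angle = round(math.atan2(dy, dx) / step) * step # nearest multiple of 45
    r = math.hypot(dx, dy)                          # preserve the line length
    return (x0 + r * math.cos(angle), y0 + r * math.sin(angle))
```

A constrained drag would apply one of these to every mouse position before the line is rubber-banded, so the user never sees an unconstrained endpoint.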

*Draw Example*

Let's examine a typical example of a useful application of a drawing program: the 2D floor plan of the ground level of a home, as shown in Figure 6.22. Assume this is one of a library of 50 basic floor plans an architect is showing potential clients.

**Figure 6.22**

Ground-level floor plan of a house

The clients indicate that the interior was just what they were looking for, but they really wanted a three-car garage. "No problem!" says the architect, and sixteen mouse clicks and keyboard strokes later shows them Figure 6.23.

What steps were required to revise this object-oriented
drawing to meet the new specifications?

1. The architect first ungrouped the original drawing, which had been stored on disk as a grouped hierarchical object labeled Condo.Des34.

2. Next she selected the driveway object, which was designed as a rectangle, and stretched it in constrained mode to a size that accommodates three cars.

3. Then she selected the south wall and southwest garage door frame as a joint object and dragged them in constrained mode to connect to the expanded driveway.

4. To complete the garage, the door-frame line was duplicated, dragged into position, and resized to form the southeast wall of the new garage.

5. Then the larger car was selected, duplicated, scaled up a bit along the x-axis, and dragged into the position shown.

6. Finally, the text mode was entered and the word "New" added to "Garage." To return to the top of the hierarchy, all objects were selected and regrouped as a single object, Condo.Des34G.

This example demonstrates the ease with which complex objects
may be built from more elementary objects and may be arranged in an efficient
hierarchical order. It also demonstrates the ease with which object-oriented
images may be edited and manipulated. Every object in Figures 6.22 and 6.23
may be selected, reoriented, resized, and relocated at the whim of the designer.

*Difficulties of Object-oriented Graphics*

Object-oriented graphics achieves much of its power, in comparison to bit-mapped graphics, by using a more abstract representation of an image. In general, abstraction is achieved by labeling an object and specifying it to be composed of more primitive objects. In object-oriented graphics, this labeling is achieved by graphical interaction in which the user selects (i.e., "identifies") a set of objects and groups them as a single complex object.

**Figure 6.23**

Floor plan revised to include three-car garage

Just as more abstract data types are frequently more difficult to understand than simple, concrete ones, the hierarchy of graphical objects can sometimes be confusing, especially for the novice. The very "objectness" of drawing objects can lead to confusion because some objects may be lying on top of other objects, totally obscuring them from view. Selecting objects totally surrounded by other objects, for example, can be tricky.

A final difficulty with drawing programs is that most do
not support any of the more elegant painting tools such as the spray can.
Such tools give the artist/designer great freedom in creating realistic
images but are intrinsically limited to operating on bit-mapped images rather
than objects. Some hybrid paint/draw programs have attempted to resolve
this problem by letting the user operate in both modes. The user can drag out a
rectangular bit-mapped object, then switch to paint mode and apply all
of the painting tools within this object. This "painted" object
then becomes an abstract object and can be grouped with other "real"
objects. However, most of the transformations available for draw objects
have no effect on the bit-map; it is "frozen" into the original
object.

Conclusions

We conclude the discussion of 2D graphics with an examination of some of the implementation issues that must be resolved in order to display graphical objects on real devices. Even such simple primitives as line drawing algorithms for raster graphics displays involve interesting compromises between speed and line quality. Aliasing effects, intrinsic to raster display devices, pose a related set of problems for which some interesting solutions are proposed. Concepts re