Section 1
Foundations of Visualization
 

The goals of this section are:

Chapter 1

Justify visualization as a new computational paradigm by examining its physiological basis, the analysis tools it provides, and applications of visualization to understanding both real and imaginary worlds.

Chapter 2

Discuss the technological basis for computer graphics and its central importance in the paradigm shift in computer operating systems.

Chapter 3

Interpret some of the elemental interactive graphics functions on which all interactive computer graphics systems are based in the language of object-oriented programming.



 
 

Chapter 1 

Visualization -

A New Paradigm

M. Firebaugh 

© Wm. C. Brown Communications, Inc. 




The psychical entities which seem to serve as elements of thought are certain signs and more or less clear images which can be voluntarily reproduced and combined … The above-mentioned elements are, in my case, of visual and some muscular type. Conventional words or other signs have to be sought for laboriously only in a secondary stage …
Albert Einstein

There are many approaches to the study of computer graphics. They range on a spectrum from highly abstract formal mathematical processes to informal impressionistic artistic techniques. The mathematical approach emphasizes the power of abstraction for achieving maximum flexibility in transforming one graphical object into another. The artistic approach stresses the generation of interesting patterns while holding the level of mathematical sophistication to a minimum. Most courses in computer graphics fall midway on this spectrum with an emphasis on sufficient mathematical formalism to enable students to generate and manipulate realistic graphical images. The goals of this text are to introduce students to the fundamental concepts of computer graphics, to demonstrate many of the powerful new graphics tools which have recently become available, and to establish visualization as the unifying principle for all of computer graphics.

The underlying theme in computer graphics is visualization. Computer graphics provides a tool without equal for helping achieve a clear mental image of the object under study. Whether we are interested in an abstract mathematical transformation, US census data, or a cleanly rendered CAD part, computer graphics allows us to see the object of our interest more clearly than we could by other techniques.

Traditionally, computer graphics authors have focused almost exclusively on computer graphics as a tool for the synthesis of graphical images. Synthetic images may be generated in two ways. First, by using graphical primitives for curves, surfaces, and elementary 3D objects, it is possible to generate almost any graphical image by the proper synthesis of these more primitive elements. Second, data from real world phenomena can be plotted as charts, scatter plots, or contour plots in either 2D or 3D to help the observer discover interesting patterns. Both of these techniques feature the use of computer synthesized output images for visualizing either synthetic models or real physical data.

However, we believe that a visualization approach to computer graphics requires an expansion of the definition to include image analysis as well as image synthesis. New tools make it possible for the computer to collect visual data directly. These, in effect, give the computer the power to "see." The extraction of meaning from such images is generally referred to as image processing or computer vision. In image processing, the image itself is the object of analysis. This synthesis/analysis distinction is determined by the direction of information flow: synthetic images are generated as output by the computer; image analysis requires transducers for collecting data and generating an image as input to the computer.

Many of the tools available on computer graphics systems provide valuable operations for both the synthesis and analysis of images. These tools perform many of the same operations on graphical data that spreadsheets and databases perform on numeric data: object selection, integration, arithmetic operations on single objects, and Boolean operations between objects. It is useful to introduce the graphical data flow paradigm for describing this integration and manipulation of graphical data in a manner parallel to that widely used for numerical and symbolic data. This book illustrates numerous examples of graphical data flow between programs on a single machine and across hardware platforms.
 

Seeing and Understanding

Computer graphics provides potent tools for seeing, and hence understanding, objects of interest. Remember as a child when the light bulb of understanding suddenly flashed on in your mind and you exclaimed "I see!"? In order to truly understand objects, whether they are abstract mathematical functions or intriguing artistic designs, it is tremendously important to see them. Computer graphics allows us to accomplish this visualization.

Visualization involves cognitive functions such as recognition, learning, and memory. It is the basis for the concept of "the mind's eye," an abstract mental representation of our physical experience. The clearer we visualize an object in our mind's eye, the more effectively we can relate to it and ultimately understand it. Our own visual system is the shortest path to the mind's eye.

Other authors have noted the effectiveness of graphics as aids in understanding and communication. Tufte presents an excellent discussion on the history of pre-computer graphics and the elements of good graphics design. Shu considers the importance of the visual paradigm in her book on visual programming. The role of visualization in scientific computing was emphasized in a special issue of Computer Graphics reporting on an NSF workshop co-chaired by McCormick, DeFanti, and Brown.
 

The Physiological Basis for Visualization

Other senses besides sight provide valuable clues for understanding the world around us. The sense of smell is a very powerful determinant in the behavior of many mammals, fish, and insects, but is less critical as a survival tool in human beings. The sense of touch is still vitally important, particularly in the cognitive development of young children. Watch a young child and note how she refines her psychomotor movements while batting an object with her hand and observing its subsequent behavior. Much of the child's sense of "objectness" is developed by the tactile sensations resulting from grasping, punching, and poking objects.

The sense of hearing is another highly developed communication channel by which we interact with the world around us. While the sense of hearing does not provide as large an information bandwidth as sight, it is a sense with a huge range of sensitivity (we can comfortably hear sound intensities ranging from 1 W/m² down to 10⁻¹² W/m²). The lower limit of hearing cuts off just above the energy of individual molecules, a fact for which we can be grateful! In fact, many individuals with losses of both sight and hearing report that they consider hearing the more important of the two senses for functioning in society.

The physiological basis of visualization is grounded on the fact that vision is the most highly developed sense in humans and many animals. Many textbook depictions of the brain, in fact, include the eyes as an integral part of this vital organ. Evolutionary development has produced an organ, the brain/eyes, in which approximately half of the neurons are dedicated to the processing of visual information. Vision is the complex process of transforming an optical image registering on the retina of the eye into meaningful information. Such information provides the basis for understanding and action.

Motivated by the goal of putting machine vision on a sounder theoretical basis, David Marr emphasized the necessity for understanding the human vision system and using the information-processing insights it offers. Some understanding of the power and complexity of the human vision system may emerge by considering the following facts:
 

The human vision system is an extremely powerful tool for information processing and analysis. It operates at a relatively low level on the hierarchy of cognitive functions (processing takes place even in the retina of the eye), and the large portion of the brain devoted to visual processing suggests the importance of vision in the evolutionary process.

Highly symbolic cognitive processes such as language and number processing arrived relatively recently on the evolutionary time scale compared to our highly developed vision. This may explain, in part, why no regions of the brain have been identified exclusively for carrying out these functions. They tend to be distributed throughout the brain as a whole. This evolutionary development is replayed in the development of the individual child. A child's vision arrives long before the ability to use language or compute.

Historically, the development of computer systems, particularly the representations used for languages and operating systems, has occurred in precisely the opposite order from the evolution of these representations in human beings. That is, the first communication with computers was through numbers, binary and decimal. Then alphabetic mnemonics and, later, natural language instructions appeared. Finally, in the 1980s, visualization arrived with its emphasis on representing objects and actions by semantic icons and the display of data in functional and geometric form. In retrospect, it is quite remarkable that it took the computer science community so long to recognize the power and efficiency of the visual paradigm in computing.
 

Effective Communication through Visualization

The reason visualization has assumed such an important role in computer graphics is the enhanced communication it provides at the human/machine interface. We illustrate the power of communication through visual images with four examples, including side-by-side comparison of visual/image/icon representations with more traditional alphanumeric forms. The examples include public signs, computer operating systems, correlations in data, and computer simulations.
 

This set of examples begins by illustrating the power and efficiency of icons for communicating meaning about ordinary experiences in everyday life. The second example, operating systems, illustrates how the power of icons increases when they become "intelligent," that is, when they represent not only objects but also the functions of which objects are capable. Next, we look at the added insight that graphics provides for studying the relationship of (x,y) data sets interpreted as y = f(x). Finally, we indicate how computer graphics and image processing tools can help interpret the output from computer simulations of complex systems.
 

Public Signs

In Table 1.1, we illustrate several iconic signs that are becoming fairly standardized in international usage, along with an approximation of their meaning in English. Note how the signs convey information in a culture- and language-free mode. Note also how the eye is instinctively drawn to the iconic symbols and only later browses back to the natural language interpretation. The iconic symbols often relate at a primitive level to one or more of our other senses such as hearing, touch, and smell which also require low-level cognitive processing. Meaningful alphanumeric symbols, on the other hand, are compound symbols (words and sentences) comprising strings of more elementary symbols (letters) that require a higher level of cognitive processing to extract their meaning.
 

Table 1.1
Comparison of Natural Language vs. Iconic Signs

English Representation (symbolic)                   Iconic Representation (visual)
Fuel Available at this Exit                         [icon]
This Parking Space Reserved for the Handicapped     [icon]
Women's Restroom, Men's Restroom                    [icon]
No Smoking                                          [icon]
 

Some educators are beginning to voice concern that the ease and simplicity of communication via computer graphics and visual icons will result in reduced reading ability on the part of students in general. When the electronic calculator came into widespread use, educators expressed concern that students would lose their ability to do arithmetic by hand (How many of us can calculate the square root of a number by longhand?). The counter argument, in both cases, is that the computer has provided a valuable extension of our cognitive processes, allowing us to perform functions more rapidly than otherwise possible.
 

Operating Systems

Computer operating systems and programming environments provide a classic example of the extremes possible on the textual/visual spectrum. At one end, we have traditional command-line languages such as JCL (or MS-DOS or UNIX, for that matter), an example of which is shown in Figure 1.1. At the other extreme, we have windows/icons/menus/pointer (WIMP) interfaces exemplified by the Macintosh operating system and Microsoft Windows on MS-DOS systems.

Consider the Job Control Language (JCL), shown in Figure 1.1, required to run a finite element analysis job on an IBM mainframe.

The primary advantage of textual operating systems and environments is the power and flexibility they provide to perform any conceivable task. The price one pays for this power is the effort to master exceedingly complex systems with arcane commands, a task that frequently requires years to develop real expertise. Then, just as you have fully mastered one operating system, Murphy's Law requires you to learn a completely new and equally arcane system as the old computer is retired. One attractive feature of UNIX is that it is relatively machine-independent, so that the learning curve need not be repeated as a new system is installed.

Alan Kay and his colleagues at the Xerox Palo Alto Research Center (PARC) conceived of the basic windows/icons/menus/pointer concepts which form the WIMP paradigm. The basic principles underlying this paradigm are that documents and applications are represented on screen as icons, that open documents appear in overlapping windows, that commands are chosen from menus rather than typed, and that a pointing device (the mouse) is used to select and manipulate objects directly.
 




//UOX608 JOB (1,X60800S,2N01),'MAGSQN94 RUN',
// MSGLEVEL=(2,0),CLASS=N,MSGCLASS=S
/*ROUTE PRINT R15
//A EXEC TSMAGSQN,RUNNAME=HUGE,REGION.MAG=2800K,RESEQ=HUGE,
// USERID=UOX608,PLTNAME=HUGE,PROS=MAGSTA94,PROG=MAGNET94
//START.STEPLIB DD DSN=AOS1.GRP23.TLOAD1,DISP=SHR
//MAG.STEPLIB DD DSN=AOS1.GRP23.TLOAD1,DISP=SHR
//MAG.FT08F001 DD UNIT=SYSDA,SPACE=(18144,300)
//MAG.FT09F001 DD UNIT=SYSDA,SPACE=(18144,300)
//MAG.FT07F001 DD UNIT=SYSDA,DCB=(RECFM=FB,LRECL=1440,BLKSIZE=1440),
// SPACE=(1440,5000)
 
 

Figure 1.1
Example of Job Control Language. This segment of JCL is listed directly from the author's finite element analysis runs on a mainframe computer. The difficulty of translating such language into English rivals that of natural language translation. The main difference is that this problem is the result of conscious design. (Reprinted from Firebaugh by permission of PWS-Kent Publishing Company.)
 


These concepts were first implemented in 1972 in the Xerox Alto, the first "personal computer" with 256 KB memory, a 600 × 800 pixel full-page, bit-mapped graphic display, a mouse, and a 2.5 MB removable hard disk pack (floppies had not yet been invented). The enormous popularity of the Alto within the Xerox Corporation led to the development of the Xerox Star, the first commercial WIMP system. The Star used a Xerox 8010 workstation to deliver a very powerful networked windows/icons/menus/pointer operating system. This system provided e-mail through Ethernet, an office automation system through OfficeTalk, and contained many object-oriented features derived from the SmallTalk language, another product of Alan Kay's PARC group. An excellent history of the development of the WIMP paradigm as it was implemented in the Xerox Star has been given by Jeff Johnson et al.

Because of inattentiveness to the personal computer revolution occurring outside of Xerox and the company's choice to keep the Star's technologies proprietary, the system failed to become an industry standard. It had, however, a tremendous influence on the development of the Apple Lisa, another benchmark computer. The Lisa was another ground-breaking technological success that resulted in a commercial failure. Two contributing reasons for its lack of success were its closed architecture and excessive price. The primary contribution of the Lisa was to give birth to the Macintosh, a miniaturized version of the Lisa with all of its functionality at one-fifth the cost.

Consider the operation of a common, garden-variety Macintosh. In Figure 1.2, we show an example of the author's Mac Plus desktop environment as this was being written. The original desktop contained the icon for a 45-MB hard drive labeled "Morris_Rodime," a floppy disk labeled "CG-Text," and the icon of a trash can. By double clicking the mouse cursor (shown as an arrowhead near MS Dictionary) on the hard drive icon, the contents are opened to reveal a hierarchical filing system containing sixteen files and folders headed by the "System Folder." Double clicking on the "Applications" folder reveals its contents, eight folders labeled "WORD4.0," "Graphics," "WINGZ," and so on. After selecting the WORD4.0 folder by pointing at it and double-clicking, the next level of the filing system opens revealing applications programs such as "Microsoft Word," documents such as "Word 4 ReadMe," and ancillary files such as "MS Dictionary."

Objects such as files and folders may be deleted by selecting them with a single click of the mouse and dragging them into the trash can. Objects may be copied onto other objects by simply selecting them (single click) and dragging them onto the desired object. When more complex operations are required, the user simply scans the contents of the pull-down menu bar at the top of the screen for the desired function.

Figure 1.2
Example of a WIMP Operating System, showing three overlapping windows (W), multiple icons of folders, applications, and documents (I), the menu (M), and the cursor arrow pointer (P) near MS Dictionary. 



 

The operation of WIMP operating systems may be summarized by three simple rules: a single click selects an object, a double click opens it, and dragging an object moves it, copies it, or (when dragged to the trash can) deletes it.
 

The beauty of a WIMP system is the intelligence built into it. That is, the knowledge of how to interact with and operate such systems is an intrinsic part of the system itself. In conventional command-line operating systems, this knowledge is archived in reference manuals that often approach a thousand pages or more. Since the intelligent machine knows the commands and functions of which it is capable and can explain them to the user, the user doesn't have to memorize or painfully reference the operating manual. As the advertisers of WIMP operating systems correctly claim, "It's better to teach machines to think like humans than to train humans to think like machines."

The desktop environment, consisting of nested folders and files, is a paradigm with which most knowledge workers are already familiar. It is remarkably simple to transfer this intuition for creating, moving, copying, and trashing paper documents to the computerized desktop. Personnel unfamiliar with computers can learn to operate and become productive with WIMP systems in a matter of hours or days compared to the weeks or months of training required for more conventional command-line systems. Experienced WIMP power-users take great pride in exploiting the full capabilities of a totally unfamiliar software system without having to open the user's manual. And they do so without ever having to memorize arcane mnemonics like "MKDIR", "COPY A:*.* C:", or "grep."

Power users of command-line systems used to huffily protest that "Real programmers don't use a mouse," and that they could easily type out their complex, arcane commands and beat any WIMP system in performance. However, the figures from careful, side-by-side studies of systems with nearly identical capabilities but differing only in user interface (command-line vs. WIMP) show the much greater efficiency of WIMP systems. One of the earliest studies indicates the relative efficiency of a typical command-line system (IBM PC) compared to a WIMP system (Macintosh). The results of this study, in which similar word processing and drawing programs were compared on the two systems, are shown in Table 1.2.
 

Table 1.2
Comparison of Command-line and WIMP Systems

Applications Program                         Keystrokes on PC     Keystrokes on Mac
                                             (Command-line)       (WIMP)
Word Processor
  (MS-Word 2.0 on PC, MS-Word 1.0 on Mac)    16 (5 functions)     13 (5 functions)
Drawing Program
  (DR Draw 1.0 on PC, MacDraw 1.7 on Mac)    42 (7 functions)     15 (7 functions)
 

The combination of an intelligent operating system with an intuitive, easy-to-visualize desktop environment has resulted in dramatic decreases in training time and expenses with corresponding increases in productivity. These practical, "bottom-line" results have led, in turn, to a complete rout of command-line operating systems (with the notable exception of UNIX systems) and the triumph of the mouse. The success of this revolution is evident in the rush to Microsoft Windows, X Windows, and other graphical user interface environments.

The success of the visual paradigm for computer operating systems has encouraged considerable effort in extending the visual paradigm to programming languages themselves. Achievements in this area have been well documented in a special issue of IEEE Computer on "Visualization in Computing." Application programs written in one of the leading graphical programming languages, Visual Basic™, are presented in the chapter on Designing Graphical User Interfaces.
 

Correlations in Data

Assume we make four sets of measurements on the dependence of variable Y on the independent parameter, X. These measurements are shown as (Xi,Yi, i = 1 to 4) in Table 1.3.

A cursory examination of these sets of numbers reveals nothing at all, or at least nothing unusual, except for the constancy of X4 for all values except the eighth one.
 

Table 1.3
Four Sets of Measurements

   X1      Y1      X2      Y2      X3      Y3      X4      Y4
10.00    8.04   10.00    9.14   10.00    7.46    8.00    6.58
 8.00    6.95    8.00    8.14    8.00    6.77    8.00    5.76
13.00    7.58   13.00    8.74   13.00   12.74    8.00    7.71
 9.00    8.81    9.00    8.77    9.00    7.11    8.00    8.84
11.00    8.33   11.00    9.26   11.00    7.81    8.00    8.47
14.00    9.96   14.00    8.10   14.00    8.84    8.00    7.04
 6.00    7.24    6.00    6.13    6.00    6.08    8.00    5.25
 4.00    4.26    4.00    3.10    4.00    5.39   19.00   12.50
12.00   10.84   12.00    9.13   12.00    8.15    8.00    5.56
 7.00    4.82    7.00    7.26    7.00    6.42    8.00    7.91
 5.00    5.68    5.00    4.74    5.00    5.73    8.00    6.89
 

To better understand how X and Y are correlated within each data set and how such correlations differ between the four sets, we can apply some standard statistical tests. First, we can calculate the mean of both X and Y, (<X>,<Y>), for each set and find that they are identical across sets. To compare the scatter of the points, we can compute the sum of the squares of (X - <X>) and find that that, too, is identical in all four cases. Assuming a simple linear relationship of the form Y = mX + b between the variables, a least squares fit of the data (i.e., a linear regression analysis) yields m = 0.5 and b = 3.0 for all four sets of data. Continuing the linear regression analysis, we obtain identical results for all relevant statistical parameters such as t, r², the standard error of estimate of the slope, the regression sum of squares, and the correlation coefficient.
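
Readers can verify these claims directly. The following minimal sketch (in Python with NumPy, neither of which is assumed by the text) reproduces the quoted statistics for the first data set of Table 1.3; running it on the other three sets yields the same values.

import numpy as np

# First data set (X1, Y1) from Table 1.3
x1 = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])

print("mean X:", x1.mean())                                        # 9.0 for all four sets
print("mean Y:", round(y1.mean(), 2))                              # ~7.50 for all four sets
print("sum of squares about <X>:", ((x1 - x1.mean())**2).sum())    # 110.0 for all four sets

# Least-squares fit of Y = m*X + b and the correlation coefficient
m, b = np.polyfit(x1, y1, 1)
r = np.corrcoef(x1, y1)[0, 1]
print(f"m = {m:.2f}, b = {b:.2f}, r^2 = {r**2:.2f}")               # m = 0.50, b = 3.00, r^2 = 0.67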

At this point we would be sorely tempted to conclude that all four sets of data somehow "represent the same thing" since they have identical linear correlations (with the possible exception of set 4 in which the numbers themselves obviously "look funny"). A graphical analysis, however, shows us just how wrong we would be!

If we view these data sets on a 2D scatter plot, we can see patterns emerge which, in fact, tell us a great deal about the functional relationship between X and Y. Consider, for instance, the relationship between X1 and Y1 shown in Figure 1.3.

From this graph it appears that Y is strongly dependent on X in approximately a linear fashion, but with significant statistical spread, possibly due to measurement errors.
 

Figure 1.3
The functional relationship between X1 and Y1.  


Figure 1.4
The functional relationship between X2 and Y2.  



 

Next, consider the dependence of Y2 on X2. In Figure 1.4 there appears to be an obvious quadratic dependence of Y on X, with virtually no statistical scatter or measurement error.

Figure 1.5
The functional relationship between X3 and Y3.  



 

Figure 1.6
The functional relationship between X4 and Y4.  



 

Figure 1.5 indicates that Y3 is clearly a precise linear function of X3 for all points except the tenth one. Because nature tends to be well-behaved, we can probably conclude that a measurement error was involved in point 10 of Figure 1.5. If this is experimental data, it would probably be wise to repeat measurement 10.

The only conclusion we can draw from Figure 1.6 is that Y is essentially independent of X as long as X remains at 8 but increases sharply as X increases to 19. This apparently strange behavior closely resembles the temperature/pressure curves of certain materials as they change state (e.g. freeze or boil).

The graphical display of these data sets reveals dependencies and behavior that are totally masked by the raw numerical data and standard statistical tests. Visualizing the data with simple 2D graphics is what exposes these important relationships.
 

Computer Simulations

Supercomputers provide "artificial laboratories" for simulating the behavior of complex physical systems. Simulations are valuable as research tools for modeling physical system properties that are too difficult, too dangerous, or too expensive to measure. In some cases, such as an exploding neutron star or molten fluid flow at the center of the earth, one can model systems on which measurements are impossible.

Computer simulations are also of tremendous value to engineers in modeling mechanical, thermal, and electromagnetic systems by the techniques of finite element analysis. Hundreds of finite element models may be designed, built, and tested for the same cost as building and testing a single laboratory model. The behavior of such computer models may be examined under conditions of stress and temperature unavailable in most laboratories.
 

Figure 1.7
Computer simulation of gravitational waves generated on the surface of a black hole. The classical radius (Schwarzschild radius) is shown as a white circle.

 

As an example of physical simulations of systems totally inaccessible to laboratory measurement, consider the simulation of a gravity wave on the surface of a black hole shown in Figure 1.7. The oscillating mass in the black hole is seen to generate gravity waves (regions of greater and lesser gravity) that propagate away from the surface of the black hole at the speed of light.

Special features of the process under study may be highlighted by processing the computed image. In Figure 1.8, we show the same image using color, shown as shading, to emphasize the crests and troughs of the gravity wave.
 

Figure 1.8
Shading enhanced version of simulated gravity waves generated by a black hole. Regions of stronger gravity (crests) are shown in red and those of weaker gravity (troughs) are shown in blue in the original color enhancement.  

 

Prior to the development of supercomputers it was necessary to build expensive wind tunnels, wave tanks, and scale models of local topography in order to predict the flow of fluids over airplane wings, ship hulls, and drainage basins. Such physical models suffered from the disadvantages of high cost, slow production, and inaccuracies in the final results due to uncertainties in the scaling laws. Supercomputers allow us to solve complex hydrodynamic equations in order to predict fluid flow in arbitrary environments.

In Figure 1.9, we show the results of computing the development of a vortex such as one might observe in the drain in a sink. Note that some of the artifacts introduced by the finite grid on which the model is computed are even more apparent in the shading enhanced version (Figure 1.12).
 

Figure 1.9
Computer simulation of vortex development in fluid flow.  



 

These four examples demonstrate the power of visualization as an effective new paradigm in computing. The effectiveness of this paradigm arises from the excellent coupling between the graphical output of the computer and the visual system of humans. The human visual system has an enormous bandwidth for information, and the computer acts as a transformer to convert indigestible numeric results into a form that optimizes communication with humans.
 

Visualization in Scientific Computing

Since the original NSF study of Visualization in Scientific Computing, there has been a tremendous upsurge of interest in visualization by scientists and engineers. As Robert Wolff stated in the issue of Computers in Physics devoted completely to scientific visualization,
 

"Visualization is not new, but its awareness by the general scientific community is."
 

Motivation for Visualization

In addition to the improved communication at the human/computer interface discussed previously, a primary motivation for visualization in scientific computing concerns the huge amount of data with which scientists and engineers must contend. The two principal sources of this data glut are improved measurement instruments and telemetry systems, and supercomputer simulations.
 

Improved transducers and telemetry systems generate data at such tremendous rates that it is impossible for individual scientists to process and make sense of it by conventional techniques. It is estimated that less than one part in a million of this data receives careful analysis by researchers using standard analysis and plotting techniques.

Supercomputers, instead of solving this problem, aggravate it by producing additional vast amounts of data through computer simulations of systems. For instance, Robert Wolff reported on a supercomputer simulation of the magnetohydrodynamic interaction of the solar wind with Venus. A single simulation on a 2D grid of 300 × 300 points required 20 hours on a Cray X-MP and generated over 30 GB of data. For purposes of our discussion on visualization we consider the simulations generated by supercomputers as equivalent to the experimental data generated by standard measurement techniques.
 

Modes of Visualization

While discussing visualization in scientific computing, it is helpful to distinguish two broad categories of computer graphics applications: the visualization of imaginary world information (simulations and mathematical models) and the visualization of real world information (measured data). Each is discussed in turn below.
 

 

Imaginary World Information

We have already described two applications classed as imaginary world applications of computer graphics. All simulation calculations fall into this category, and it is a big one indeed! Problem areas include population dynamics, turbulent flow, fission and fusion processes and reactors, racing hull design, and finite element analysis of mechanical and electromagnetic systems. In addition, the elegant functional analysis of programs such as Mathematica and the beautiful patterns generated by fractal programs may be considered as projections of an imaginary world.

The computer provides to researchers in mathematics and the theoretical side of the physical world the advantage of not being bound by the constraints of the real world. As long as a concept of some imaginary reality can be expressed in definite algorithmic language, one can explore the behavior of the theoretical system and its ramifications until the subject (or the researcher) is exhausted.
 

Real World Information

Tools are evolving for examining real world phenomena with much greater resolution and detail. Present earth satellites, for instance, are providing photographic resolution one hundred times that of a few years ago. Not only do the new satellites produce several orders of magnitude more data than did the older ones, but the older ones continue to produce data at a slower rate. The sheer volume of this data has overwhelmed space scientists who are simply "warehousing" the data in hopes of newer and more efficient analysis techniques.

Several examples illustrate the success of visualization techniques in recent years. The first is the extraordinarily effective reporting on national weather conditions provided by newspapers carrying the colored weather map service. In one 10-second glance you can get an accurate overview of the temperature and weather conditions throughout a whole continent. The author receives two major metropolitan newspapers. One carries the daily color-coded weather map and the other carries a plain black and white weather map with a greater number of weather service symbols encoded. The color-coded map is incomparably more efficient in communicating the global weather picture to all but trained meteorologists.

The second prime example of scientific visualization is the outstanding work done by NASA and the Jet Propulsion Laboratory in communicating the results of space exploration from our spacecraft. It is quite remarkable that the visual images transmitted from an obsolete spacecraft built fifteen years earlier can reach the press and general public within hours of their reception.

A final example of an area in which visualization techniques are critical for studying real world information is medical imaging. New techniques in magnetic resonance imaging (MRI) and positron emission tomography (PET) produce large quantities of data which are most effectively displayed graphically. Frequently the scanning is performed as a series of 2D slices that can be combined graphically to yield 3D images. These techniques allow the doctor to use volume visualization for "seeing into" the patient in a non-invasive manner for the early detection of medical problems.
 

Applications of Visualization: Scientific and Otherwise

While the awakening of interest in visualization by the scientific community is an important and relatively recent phenomenon, other segments of society have long recognized the importance of computer graphics and have accepted visualization as an effective mode for communication. The aspect of visualization changing most rapidly is the ease of creating striking graphics and the hardware for transmitting images to computers and representing them on a variety of hardware devices. These new technologies are examined in detail in Chapter 2.

Here we summarize very briefly some major areas of society in which computer graphics plays an important role. The list is not exhaustive, and the student is urged to watch for the use of graphics and to observe its effectiveness.
 

2D Graphics in Magazines and Newspapers

Many national news magazines, particularly those with a business orientation such as U. S. News & World Report, make extensive and effective use of graphics. The format usually consists of bar graphs or pie charts to show trends and distributions. Many daily newspapers have followed suit and present at least one graph on the front page, several in the business section, and a graphical weather map.

If used properly, these graphics can quickly convey the relative sizes or trends of the topic under discussion. There is a tendency, however, for some graphics designers to become too "cute" in the design of their graphics. The availability of huge catalogs of clip art on personal workstations facilitates the use of decoration and design in ways which often confuse the eye and hide the meaning of the numbers being plotted. The availability of drawing programs capable of image manipulation and transformation has made it even easier to distort graphical data and create false impressions (sometimes intentionally). Tufte describes these tendencies as graphics with a "richness of design and a poverty of information" and characterizes them as "chartoons" and "schlock."
 

Business and Presentation Graphics

Presentation graphics has become one of the most influential forces driving the development of new and sophisticated graphical software and hardware. The business community was quick to recognize the effectiveness of sharp, simple, colorful graphics for sales meetings, training sessions, employee meetings, annual reports, and so on. The almost universal acceptance of the graphics medium for communication in the business community is built on the principle:
 

Three observations may be relevant here. First, the efficacy of presentation graphics is beyond dispute. There is no need to sell businesses on this medium; they have already bought it. Second, a wealth of extremely capable software has been developed for the presentation graphics market. The final section of this text will describe several of these software packages, and the middle section will help you understand how they work. Finally, the main challenges in presentation graphics involve access to data (database query systems) and communication of data between systems (networking); e.g., "What were the sales of garden tractors to Kenya in 1978?"
 

Art and Design

A growing area of computer graphics application involves art and design. Specific applications include special effects for film and video, computer animation, and textile and fashion design.
 

Two examples illustrate the application of computer graphics in art and design. The pseudopod water creature in the film The Abyss was created purely by computer graphics, making it the most technically advanced motion picture filmed up to that point. The pseudopod required six months to design on some of the most powerful workstations running Pixar's RenderMan rendering system and a host of other animation, modeling, and image processing tools. In addition, the project used two Macintosh IIs running 3D animation and image processing programs for creating hard copy of the "story boards," the scripts from which the special effects people worked.

The second example is Modadrape, a fabric design program that can map various textile fabric designs onto models of the human figure. The program correctly models the clinging, wrinkling, and draping behavior of real textiles on real human models. The program contains over 2,000 images and libraries of fabrics and flat block patterns. Productivity increases over conventional techniques of clothing design are estimated to be as high as 400 percent.

Note how the examples of visualization in art and design blend smoothly from art to design. It is very difficult to classify a particular area of computer graphics as purely "art," purely "design," or purely "scientific visualization." Some of the most artistic achievements of the movie makers' art are outstanding examples of scientific visualization, and some of the graphics generated in research on scientific visualization are works with tremendous visual appeal, i.e., art. The next section moves further along the art/science continuum by examining how computer graphics is applied to engineering design.
 

Engineering Design

Two major areas in which computer graphics plays central roles in engineering design are CAD (Computer Aided Design) and FEA (Finite Element Analysis). In fact, CAD has been probably the most influential force in the development of sophisticated computer graphics techniques. It was the first area to make widespread use of graphics workstations in a production environment. Design shops found the high cost of early CAD graphics workstations more than offset by the enormous productivity increases made possible. Many of the interactive pointing, selection, and dragging techniques which we now take for granted were first used routinely on CAD systems.

CAD systems are now essential tools for the design of integrated circuits, manufactured parts, complex mechanical systems, and architectural plans. In addition to the drawing and drafting tools found on all systems, some systems provide for the inclusion of material properties, the calculation of volumes and masses, the specification of bills of materials, and the estimation of costs. Advanced systems even permit the direct connection of the CAD design phase to the CAM (Computer Aided Manufacturing) phase. Design parameters from the CAD program are used directly for the numerical control of machine tools. This eliminates the intermediate stage of hard copy blueprints and provides an additional increase in productivity.

FEA models are essentially CAD models to which physical properties have been assigned in order to predict their behavior under conditions of stress or excitation. CAD models, at least in their first stage, are purely geometric in nature, i.e., the designer uses drawing and drafting tools to build the model in 2D or 3D space. The first phase of finite element analysis is also purely geometrical: a geometric model of the object under study is created, often with a standard CAD program. Next the material properties of the model are assigned to each element. Then the boundary conditions are defined, that is, the designer specifies which parts of the model are clamped and which parts are free to move. Finally, the forces acting on various parts of the model are specified and the program is "turned loose" to calculate the resulting displacements and oscillations of all parts of the structure.
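
To make this sequence of steps concrete, the following toy sketch (Python with NumPy; all numerical values are illustrative assumptions, not taken from any real design) carries a one-dimensional bar through the same phases: geometry, material properties, boundary conditions, loads, and solution of the resulting system K u = f.

import numpy as np

# Geometry and material: a bar of length L split into n_elems equal elements
n_elems, L = 4, 1.0                        # number of elements, bar length in meters
E, A = 200e9, 1e-4                         # Young's modulus (Pa) and cross-section (m^2)
k = E * A / (L / n_elems)                  # axial stiffness of each element

# Assemble the global stiffness matrix from element matrices [[k, -k], [-k, k]]
K = np.zeros((n_elems + 1, n_elems + 1))
for e in range(n_elems):
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Loads: a 1 kN axial force applied at the free end
f = np.zeros(n_elems + 1)
f[-1] = 1000.0

# Boundary condition: node 0 is clamped, so solve only for the remaining nodes
u = np.zeros(n_elems + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
print(u)    # nodal displacements grow linearly from the clamped end to the tip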

Finite element analysis, because of its power, flexibility, and incorporation of physical law, is becoming the standard design tool in many areas of engineering. It is widely used in modeling of automotive parts, electromagnetic devices such as motors and generators, and the thermal properties of many physical systems. While FEA was born on mainframes, it is now routinely implemented on personal workstations.
 

Cartography

Cartography, the production, analysis, and display of maps, is a natural application of computer graphics. The official maps of the United States Geological Survey (USGS) have recently become available in computer format, and one company has converted every 7.5 minute quadrangle map to AutoCAD format, with the map content organized into separate thematic layers.
 

The powerful AutoCAD computer aided design program allows the user to display water features, roads, railroads, airports, pipelines, power lines, elevation contours, and names of cities, towns, and other landmarks. In the 3D mode of AutoCAD it is possible to display a terrain relief grid, providing the user with an actual 3D image of the terrain covered by the map.

The USGS maps available for a variety of workstations permit professionals in civil engineering, utility management, surveying, and real estate planning to turn the tedious task of cartographic research into a creative design experience.
 

Real-Time Control

Most routine data collection and control operations are accomplished by interfacing the experiment or controlled process directly to the computer. There are a host of laboratory control software programs that provide great flexibility in designing application-specific graphical control panels complete with meters, monitors, warning lights and other indicators of the status of the experiment or process. Graphical representations of panel meters, chart recorders, push-buttons, switches, and variable potentiometers allow the user to manipulate the experiment or process just as she/he would a real control panel lined with switches and pots.

The advantages of such simulated control systems are immediately obvious. The user has complete flexibility in design and may modify the design repeatedly until just the right combination of sensitivity, intuition, simplicity, and control is achieved. These control systems frequently go under the names "work benches," "data stations," and, when combined with data logging and report generation routines, "Laboratory Information Management Systems" (LIMS).

One of the chief lessons learned in the meltdown at the Three Mile Island Unit II nuclear reactor in Pennsylvania was the importance of a good machine/user control interface. One of the causes of that accident was the profusion of outdated, irrelevant, and confusing indicators which gave the operators a totally misleading picture (image?) of what was happening in the reactor. The Kemeny Commission, which studied the causes of the accident, recommended that on all reactor control panels, the archaic mechanical controls and indicators be replaced by modern graphical systems to help the operators quickly verify the true status of the system and take appropriate action.

The above summary of important visualization areas, although far from complete, gives some idea of the breadth and value of computer graphics. Computer graphics in scientific visualization is an area of particularly rapid growth of interest.
 

Visualization Tools for Image Processing

Anyone introduced to the concept of spreadsheet analysis of numerical data is soon struck by the power and flexibility of the spreadsheet model of programming. Simply put, any cell or block of cells of the matrix may be any function of any other cell, row, column, or block of cells. For example, the spreadsheet program Wingz™ provides a matrix of 32,000 × 32,000 cells and approximately 150 functions to transform the information contained in these cells. Most spreadsheet programs also include graphics capabilities for displaying the contents of the data matrix. Wingz, for instance, has 20 chart styles for displaying data in 2D or 3D, including sophisticated wire frame and contour plotting.
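
The essence of this model, that any cell may be an arbitrary function of other cells, can be illustrated in a few lines of code. The sketch below (Python; a deliberately toy stand-in, not the actual mechanism of Wingz or any other spreadsheet) stores formulas as callables and evaluates them on demand.

def eval_cell(sheet, name):
    # Return the value of a cell, evaluating its formula if it has one.
    value = sheet[name]
    return value(sheet) if callable(value) else value

sheet = {
    "A1": 10.0,
    "A2": 32.0,
    "B1": lambda s: eval_cell(s, "A1") + eval_cell(s, "A2"),   # B1 = A1 + A2
    "B2": lambda s: 2.0 * eval_cell(s, "B1"),                  # B2 = 2 * B1
}

print(eval_cell(sheet, "B2"))   # 84.0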
 

Figure 1.10
A 10-level contour map of the eye of the vortex. (Contour mapping done by NCSA Image.)



 

Just as spreadsheet transformations convert one set of numerical data into another set, so image processing transformations convert one image into another image. Such image processing transformations provide valuable tools for exploring the nature of the information contained in an image and for searching for the most effective form for presenting the graphical information. We have already hinted at one of the more effective transformations, that of mapping shading into color. The following techniques have also proven effective.
 

Contour Map of z = f(x,y) 

The image shown in Figure 1.9 simulates a photograph of the vortex in a real liquid. Our eyes interpret the brightness of the image as the level of the fluid.

Any 2D image may be represented as I(x,y) where I is the intensity (or "brightness") of the image at pixel (x,y). The intensity, in this case, plays the same role as the z elevation on a relief map of the geographic region in which z = f(x,y). And, just as relief maps may be compressed into contour maps of lines of constant elevation, the intensity map of Figure 1.9 may be contoured to show lines of constant density. Such a map is shown in Figure 1.10.
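
The operation can be sketched in a few lines. Since the vortex data of Figure 1.9 are not reproduced here, the example below (Python with NumPy and matplotlib, an assumed toolset) contours a synthetic vortex-like intensity surface instead.

import numpy as np
import matplotlib.pyplot as plt

# A synthetic stand-in for I(x, y): intensity dips toward the "eye" at the origin
x = np.linspace(-3.0, 3.0, 200)
y = np.linspace(-3.0, 3.0, 200)
X, Y = np.meshgrid(x, y)
I = 1.0 - np.exp(-(X**2 + Y**2))

plt.contour(X, Y, I, levels=10)          # 10 lines of constant intensity
plt.title("10-level contour map of a synthetic vortex")
plt.show()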
 

Wireframe and 3D Histograms 

The data on which the previous Figure 1.9 is based may also be represented as a wireframe model and viewed as a 3D object. Wireframe models may either be "pure," as shown in Figure 1.11, or be depicted with hidden lines removed, producing a more realistic image at the expense of removing some information. Wireframe models are particularly effective in clarifying the surface structure of functions of two variables and the geometric structure of objects. Their usefulness is enhanced by providing graphical "handles" or controls for rotating the model in 3D space. This provides the sensation of flying around the object and looking at it from many angles. Variations of wireframe models include 3D histograms and elevation plots.
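
A wireframe view of the same kind of data takes only a few more lines; in this hedged sketch (the same assumed Python toolset as above) the elev and azim parameters play the role of the graphical "handles" used to rotate the model and fly around it.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3.0, 3.0, 60)
y = np.linspace(-3.0, 3.0, 60)
X, Y = np.meshgrid(x, y)
Z = 1.0 - np.exp(-(X**2 + Y**2))         # the same synthetic vortex-like surface

ax = plt.figure().add_subplot(projection="3d")
ax.plot_wireframe(X, Y, Z, rstride=2, cstride=2)
ax.view_init(elev=35, azim=-60)          # adjust these to view the surface from other angles
plt.show()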

Figure 1.11
A 3D relief map of the eye of the vortex. (Wireframe model done by NCSA Image.)

 

Pseudo-Color Transformations 

Another particularly effective image processing technique involves pseudo-color transformations. In its most general form, a pseudo-color transformation involves mapping one set of colors (called a palette) into another set of colors. By mapping the gray scale image shown in Figure 1.9 into a particular color palette, we get the color enhanced image shown in grayscale in Figure 1.12. The word pseudo is used to remind us that the image does not represent a real color photograph but rather some arbitrary set of colors chosen to aid in interpreting the image. Pseudo-color is frequently used in infrared photography to show the color of dense foliage as red, while other image colors more closely approximate their real hues. Pseudo-color is also widely used in medical imaging to emphasize a particular organ or growth by mapping its shade into a vivid hue.
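
In code, a pseudo-color transformation is simply a lookup table applied to every pixel. The sketch below (Python with NumPy; the particular palette is an arbitrary illustration) maps dark gray levels toward blue and bright levels toward red, analogous to the trough/crest coloring described for Figure 1.8.

import numpy as np

def pseudo_color(gray):
    # Map an array of 8-bit gray levels to RGB through a 256-entry palette.
    levels = np.arange(256) / 255.0
    palette = np.zeros((256, 3), dtype=np.uint8)
    palette[:, 0] = (255 * levels).astype(np.uint8)            # red grows with intensity
    palette[:, 2] = (255 * (1.0 - levels)).astype(np.uint8)    # blue fades with intensity
    return palette[gray]                                       # apply the lookup table per pixel

gray_ramp = np.tile(np.arange(256, dtype=np.uint8), (32, 1))   # a simple test image
rgb = pseudo_color(gray_ramp)
print(rgb.shape)    # (32, 256, 3): one RGB triple per original pixel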
 
 

Figure 1.12
Color enhanced version shown in grayscale. (Pseudo-color map by NCSA Image.)

 

Geometric Transformations 

The standard geometric transformations include translation, scaling, and rotation. More complex transformations involve skewing, warping, projecting, and image mapping onto arbitrary surfaces. Consider the diagram of a "button box" containing two circular buttons in Figure 1.13. Note that the left side of the box was generated by a simple 180° rotation of the right side. To build the box, the right side was designed first, then duplicated, and the duplicate was then rotated by 180° and dragged into the correct position adjacent to the original. Thus, fifty percent of the final design was accomplished by three simple mouse actions.

Geometric transformations are very important for applications such as CAD in which the final object may be constructed by replicating more primitive objects and transforming them into the positions and shapes required.
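
The standard transformations are conveniently expressed as matrices acting on homogeneous coordinates, so that a sequence of operations collapses into a single matrix product. The sketch below (Python with NumPy; the coordinates are made-up illustrations) shows how duplicating, rotating by 180°, and dragging a point, as in the button box construction, reduces to one multiplication.

import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

p = np.array([4.0, 2.0, 1.0])              # a corner of the duplicated right half, as (x, y, 1)
M = translate(-8.0, 0.0) @ rotate(np.pi)   # rotate 180 degrees, then drag to the left
print(M @ p)                               # [-12.  -2.   1.]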

As a curious sidelight related to visual cueing, note how the right button appears to be convex while the left button appears to be concave. Why should a simple rotation also appear to cause an inversion of the buttons? (Hint: How does the shading cue correlate with the "normal direction of the sun's rays"?)
 
 

Figure 1.13
Button box in which the left side is generated by a simple 180° rotation of the right side. Which button appears to be convex? Why? (Figure generated using Canvas™.)

 

Edge Detection

Computer vision is an area of visualization in which we attempt to train the computer to visualize an image from an external transducer such as a video camera. It is particularly important for robotics, autonomous land vehicles, and any application requiring the automatic scanning of large numbers of photographs.

The essential function of this visualization process is to extract meaning from the incoming image. Computer vision requires pattern recognition, and the first step in nearly all pattern recognition tasks is edge detection. The simplest method of edge detection involves calculating the contours along which the second spatial derivative of the intensity goes to zero. Some edge detectors first smooth the image by convolving it with a smoothing function of some spatial scale and then calculate the contour lines where the second derivative of the smoothed intensity pattern crosses zero. Applying an edge detection algorithm to the button box, we get the pattern shown in Figure 1.14.
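
A minimal version of this zero-crossing detector can be written in a few lines. The sketch below (Python with NumPy and SciPy, an assumed toolset) smooths the image with a Gaussian, applies the Laplacian (the second spatial derivative), and marks pixels where the result changes sign.

import numpy as np
from scipy.ndimage import gaussian_laplace

def edges(image, sigma=2.0):
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    sign = log > 0
    # A pixel lies on an edge if its sign differs from a horizontal or vertical neighbor
    edge = np.zeros_like(sign)
    edge[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    edge[:-1, :] |= sign[:-1, :] != sign[1:, :]
    return edge

# A bright disk on a dark background yields an edge ring at the disk boundary
yy, xx = np.mgrid[0:128, 0:128]
disk = ((xx - 64)**2 + (yy - 64)**2 < 30**2).astype(float)
print(edges(disk).sum() > 0)    # True: zero crossings trace the boundary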
 

Figure 1.14
Results of edge detection algorithm applied to Figure 1.13. Note how the boundaries between different intensities produce edges while regions of constant intensity produce no signal at all. (Edge detection by PixelPaint™.)

 

Smoothing and Filtering Transformations 

Our intuition tells us that the sharper the image, the more realistic or life-like it will be. But this is not necessarily so. Our eyes are optical instruments with the pupil acting like the aperture of a camera. Because of diffraction effects, the images formed on the retina are not precisely sharp but rather diffused diffraction patterns. The eye, however, "knows" about diffraction effects and performs a Fourier analysis of the diffracted image to extract a sharper image of the object which it sends on to the brain.

To explore this effect, consider the two images shown in Figures 1.15 and 1.16. The first is a portion of the "Architect's House" image from PixelPaint™. Note the extremely sharp detail down to the pixel level.
 

Figure 1.15
"Pixel perfect" original image of architect's house (Figure by PixelPaintô.) 



 

Next, we apply a smoothing transformation which in effect averages each pixel with pixels in its immediate vicinity. The effect of this operation is to slightly blur the image, an effect that we might expect to degrade the image. Let's see what it looks like (Figure 1.16).

The interesting result is that the image appears somewhat more realistic than the original. Notice at least three effects which enhance the realism of the smoothed image. First, the patterned shadow of the original averages out to a smooth shadow in the processed image. Second, the light-colored ends of the tiles on the roof average out to a less conspicuous and more natural shade in the processed image. Finally, the detailed dark lines in the tree average to a more realistic shadow effect, and the "jaggies" in the awning support above the door are blurred into a smoother appearance.
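
The smoothing operation itself is a small convolution. The sketch below (Python with NumPy and SciPy, an assumed toolset) replaces each pixel with the average of its 3 × 3 neighborhood, the same kind of local averaging that softened the shadow, roof tiles, and jaggies above.

import numpy as np
from scipy.ndimage import uniform_filter

def smooth(image, size=3):
    # Average each pixel with its size-by-size neighborhood (a box filter).
    return uniform_filter(image.astype(float), size=size)

checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0   # a harsh, pixel-sharp test pattern
print(checker[0, :4])           # [  0. 255.   0. 255.]
print(smooth(checker)[0, :4])   # values pulled toward the mean: the pattern is softened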

The conclusion of this experiment is that, in certain circumstances, the loss (or degradation) of image information may actually enhance visual realism. This discovery was first proposed by the French Impressionist painters in the late 1800s.
 

 

Volume Visualization and Transparency Mapping

The ability to render surfaces of objects realistically is well developed and available on many drawing and CAD programs. However, the science of "seeing inside" objects for which we know the composition at each point in space is still new and undergoing rapid development. One useful approach for visualizing the interior of objects for which we know the properties at each voxel (volume element) is to gradually turn intervening materials transparent and, in effect, "dissolve" our way through the object. An example of this technique is shown in Figure 1.17, a simulated meltdown of fuel rods in a power reactor.
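
The dissolving effect comes from assigning each material an opacity and compositing samples along each viewing ray. The sketch below (Python; the materials, colors, and opacities are illustrative assumptions) shows front-to-back compositing along a single ray: setting the opacity of the intervening material to zero lets the viewer see straight through to what lies behind.

def composite(ray_voxels, opacity, color):
    # Front-to-back alpha compositing of one ray of material labels.
    accumulated, remaining = 0.0, 1.0
    for material in ray_voxels:
        a = opacity[material]
        accumulated += remaining * a * color[material]
        remaining *= (1.0 - a)
    return accumulated

ray = ["coolant"] * 5 + ["fuel_rod"] * 3                           # what one ray passes through
color = {"coolant": 0.2, "fuel_rod": 1.0}                          # gray-scale "colors"
print(composite(ray, {"coolant": 0.4, "fuel_rod": 0.9}, color))    # coolant obscures the rods
print(composite(ray, {"coolant": 0.0, "fuel_rod": 0.9}, color))    # coolant dissolved away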
 

Computer Animation 

Animation techniques were perfected in the pre-computer era by Disney Studios and culminated in the exquisite, hand-crafted Fantasia. Even the sophisticated synthesis of real images and cartoon characters in Who Framed Roger Rabbit was primarily hand-crafted. However, the potential of computer animation became apparent in movies such as The Abyss, Beauty and the Beast, Lawnmower Man, and Aladdin.

Some of the earliest and best work in scientific visualization was done by Evans and Sutherland in designing flight simulators. The Evans and Sutherland machines use powerful, dedicated graphics processing hardware to provide the high-speed processing required for real-time graphics and animation. For simulation of complex systems, e.g., turbulence in fluid flow and vibrational modes in structural mechanics, animation provides valuable insight unavailable from any other technique.
 


Figure 1.16
Smoothing function applied to Figure 1.15. Note the greater realism achieved by slight blurring and reliance on the eye to reconstruct the original image. (Image processed by PixelPaint™.)



 

Programs are now available for performing animation on personal workstations. Capabilities include tweening for automatically filling in missing animation frames between two user-supplied images. This significantly reduces the amount of tedious handcrafting required of the user. In scientific visualization, on the other hand, the emphasis is on the behavior of the system model, and the computer animation consists of "filming" the model's behavior. Physical models incorporate the physics of force, inertia, momentum, and energy. As a result, the realism of animated sequences of physical models greatly exceeds that of 2D animations not grounded in physical law. Throughout this book frequent reference is made to this principle of model authenticity for achieving visual realism.
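
In its simplest form, the tweening mentioned above is just interpolation between key frames. The sketch below (Python; the key positions are made-up illustrations) generates the in-between positions of an object moving linearly from one key frame to the next.

def tween(key_a, key_b, n_frames):
    # Return n_frames positions interpolated from key_a to key_b, inclusive.
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)           # 0.0 at the first key frame, 1.0 at the last
        frames.append(tuple(a + t * (b - a) for a, b in zip(key_a, key_b)))
    return frames

print(tween((0.0, 0.0), (10.0, 4.0), 5))
# [(0.0, 0.0), (2.5, 1.0), (5.0, 2.0), (7.5, 3.0), (10.0, 4.0)]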
 

Interactivity 

The goal of researchers in scientific visualization is to provide a system that enables users to interact directly with the simulation or analysis they are observing. At present, most simulations on supercomputer systems are run in batch mode: the user sets the experimental parameters at her terminal, sends the job off to the supercomputer, and examines the results after the run is completed. Because really interesting results and directions for further investigation are often not known at set-up time, this mode provides less than the optimal environment for creative and serendipitous discovery. What is needed is a transparent system in which the investigator can monitor the progress of a simulation experiment, visualize partial results as soon as they are computed, and interact with the experiment in progress.

By visually displaying the progress of the computation and providing user interrupt options, a system with interactivity allows the user to steer the course of the investigation. The improvement in research and design productivity provided with interactivity is comparable to that achieved when program debugging moved from batch to interactive mode. The rapid increase in network bandwidth and improved postprocessing capabilities of personal workstations form the basis for the implementation of interactivity in visualization.

The IBM supercomputer, called the POWER Visualization System, represents a direct industry response to the need for interactivity and computational steering. This system uses up to thirty-two parallel processors to provide a peak computational rate of 1,280 double-precision megaflops (million floating point operations per second). Simulation results are displayed as 3D animations on a 1,920 × 1,536 pixel screen.
 


Figure 1.17
Volume visualization of meltdown of fuel rods in a nuclear reactor. On the left, fuel rods immersed in coolant; on the right, fuel rods with coolant gone. (Figure rendered by NCSA Image.)



 

Conclusions 


In this chapter, we have attempted to show that the power of the visualization paradigm in computing has a solid physiological basis. The first two examples of visualization in communication (signs and operating systems) suggested that graphical systems are the best mode of communications in many applications. The next two examples (correlations and simulations) indicated that the only mode for understanding complex systems is frequently through visualization.

Visualization is generating great interest in the scientific community because of the tools it provides for reducing and analyzing the mountains of data produced by modern instrumentation and supercomputers. The survey of the applications of visualization indicated the wide range of human activity in which computer graphics plays an important role. Many of these activities would be practically impossible without computer graphics. Finally, several image processing tools were demonstrated for transforming images and enhancing our ability to visualize more clearly what they represent.