
Texture Mapping in Technical, Scientific, and Engineering Visualization

Michael Teschner (1) and Christian Henn (2)

(1) Chemistry and Health Industry Marketing,
SGI Basel, Switzerland

(2) Maurice E. Mueller-Institute for Microscopy,
University of Basel, Switzerland

Table of Contents

Executive Summary
Introduction
Abstract Definition of the Texture Mapping Concept
Color-Coding-Based Application Solutions
Isocontouring on Surfaces
Displaying Metrics on Arbitrary Surfaces
Information Filtering
Arbitrary Surface Clipping
Color-Coding Pseudo-Code Example
Real-Time Volume Rendering Techniques
Volume Rendering Using 2D Textures
Volume Rendering Using 3D Textures
High-Quality Surface Rendering
Real-Time Phong Shading
Phong Shading Pseudo-Code Example
Conclusions
References

Executive Summary

As of today, texture mapping is used in visual simulation and computer animation to reduce geometric complexity while enhancing realism. In this report, this common usage of the technology is extended by presenting application models of real-time texture mapping that solve a variety of visualization problems in the general technical and scientific world, opening new ways to represent and analyze large amounts of experimental or simulated data.

The topics covered in this report are:

  • Abstract definition of the texture mapping concept
  • Visualization of properties on surfaces by color coding
  • Information filtering on surfaces
  • Real-time volume rendering concepts
  • Quality-enhanced surface rendering

In the following sections, each of these aspects will be described in detail. Implementation techniques are outlined using pseudo-code that emphasizes the key aspects. A basic knowledge of GL programming is assumed. Application examples are taken from the chemical market. However, for the scope of this report no particular chemical background is required, since the data being analyzed could equally well come from any other source of technical, scientific, or engineering information processing.

Note that this report discusses the potential of released advanced graphics technology in a very detailed fashion. The presented topics are based on recent and ongoing research and therefore subject to change.

The methods described are the result of a team effort involving scientists from different research areas and institutions, called the Texture Team, consisting of the following members:

  • Prof. Juergen Brickmann, Technische Hochschule, Darmstadt, Germany
  • Dr. Peter Fluekiger, Swiss Scientific Computing Center, Manno, Switzerland
  • Christian Henn, M.E. Mueller-Institute for Microscopy, Basel, Switzerland
  • Dr. Michael Teschner, SGI Marketing, Basel, Switzerland

Further support came from SGI's Advanced Graphics Division engineering group.

Color pictures and sample code are available from sgigate.sgi.com via anonymous ftp. The files will be there starting November 1, 1993 and will be located in the directory pub/SciTex.

For more information, please contact:

Michael Teschner
SGI Marketing, Basel
Erlenstraesschen 65
CH-4125 Riehen, Switzerland
Phone: (41) 61 67 09 03
Fax: (41) 61 67 12 01
E-Mail: micha@basel.sgi.com

Introduction

Texture mapping [1, 2] has traditionally been used to add realism in computer-generated images. In recent years, this technique has been transferred from the domain of software-based rendering systems to a hardware-supported feature of advanced graphics workstations. This was largely motivated by visual simulation and computer animation applications that use texture mapping to map pictures of surface texture to polygons of 3D objects [3].

Thus, texture mapping is a very powerful approach to add a dramatic amount of realism to a computer-generated image without increasing the geometric complexity of the rendered scenario, which is essential in visual simulators that need to maintain a constant frame rate. For example, a realistic-looking house can be displayed using only a few polygons, with photographic pictures of a wall showing doors and windows mapped onto them. Similarly, the visual richness and accuracy of natural materials such as a block of wood can be improved by wrapping a wood grain pattern around a rectangular solid.

Up to now, texture mapping has not been used in technical or scientific visualization. The above-mentioned visual simulation methods, as well as noninteractive rendering applications such as computer animation, have created a severe bias toward what texture mapping is thought to be good for: wooden [4] or marble surfaces for the display of solid materials, or fuzzy, stochastic patterns mapped on quadrics to visualize clouds [5, 6].

It will be demonstrated that hardware-supported texture mapping can be applied in a much broader range of application areas. Returning to a strict and formal definition of texture mapping, in which the texture is a general repository for pixel-based information mapped onto arbitrary 3D geometry, yields a powerful and elegant framework for the display and analysis of technical and scientific information.

Abstract Definition of the Texture Mapping Concept

In the current hardware implementation of SGI [7], texture mapping is an additional capability to modify pixel information during the rendering procedure, after the shading operations have been completed. Although it modifies pixels, its application programmer's interface is vertex-based. Therefore texture mapping results in only a modest increase in program complexity. Its effect on the image generation time depends on the particular hardware being used: entry-level and interactive systems show a significant performance reduction, whereas on third-generation graphics subsystems texture mapping may be used without any performance penalty. Three basic components are needed for the texture mapping procedure: (1) the texture, which is defined in the texture space, (2) the 3D geometry, defined on a per-vertex basis, and (3) a mapping function that links the texture to the vertex description of the 3D object.

The texture space [8, 9] is a parametric coordinate space that can be one, two, or three dimensional. Analogous to the pixel (picture element) in screen space, each element in texture space is called texel (texture element). Current hardware implementations offer flexibility with respect to how the information stored with each texel is interpreted. Multichannel colors, intensity, transparency, or even lookup indices corresponding to a color lookup table are supported.

In an abstract definition of texture mapping, the texture space is far more than just a picture within a parametric coordinate system: the texture space may be seen as a special memory segment, where a variety of information can be deposited that is then linked to object representations in 3D space. Thus this information can efficiently be used to represent any parametric property that needs to be visualized.

Although the vertex-based nature of 3D geometry in general allows primitives such as points or lines to be texture-mapped as well, the real value of texture mapping emerges upon drawing filled triangles or higher order polygons.

The mapping procedure assigns a coordinate in texture space to each vertex of the 3D object. It is important to note that the dimensionality of the texture space is independent from the dimensionality of the displayed object. For example, coding a simple property into a 1D texture can be used to generate isocontour lines on arbitrary 3D surfaces.

Color-Coding-Based Application Solutions

Color-coding is a popular means of displaying scalar information on a surface [10]. For example, this can be used to display stress on mechanical parts or interaction potentials on molecular surfaces.

The problem with traditional, Gouraud shading-based implementations occurs when there is a high-contrast color-code variation on sparsely tessellated geometry: since the color-coding is done by assigning RGB color triplets to the vertices of the 3D geometry, pixel colors will be generated by linear interpolation in RGB color space.

As a consequence, all entries of the defined color ramp that lie off the straight line joining two RGB triplets in color space are never taken into account, and information is lost. In Figure 1, a symmetric gray scale covering the property range is used to define the color ramp. On the left-hand side, the interpolation in the RGB color space does not reflect the color ramp. There is a substantial loss of information during the rendering step.

With a highly tessellated surface, this problem can be reduced. An alignment of the surface vertices with the expected color-code change or multipass rendering may remove such artifacts completely. However, these methods demand large numbers of polygons or extreme algorithmic complexity and are therefore not suited to interactive applications.

Figure 1: Color-coding with RGB interpolation (left) and texture mapping (right).

This problem can be solved by storing the color ramp as a 1D texture. In contrast to the above-described procedure, the scalar property information is used as the texture coordinates for the surface vertices. The color interpolation is then performed in the texture space, i.e., the coloring is evaluated at every pixel (Figure 1 right). High-contrast variation in the color code is now possible, even on sparsely tessellated surfaces.

It is important to note that, although the texture is one-dimensional, it is possible to tackle a 3D problem. The dimensionality of texture space and object space is independent; thus they do not affect each other. This feature of the texture mapping method, as well as the difference between texture interpolation and color interpolation, is crucial for an understanding of the applications presented in this report.

Figure 2: Electrostatic potential coded on the solvent-accessible surface of ethanol.

Figure 2 shows the difference between the two procedures with a concrete example: the solvent-accessible surface of the ethanol molecule is colored by the electrostatic surface potential using traditional RGB color interpolation (left) and texture mapping (right).

The independence of texture and object coordinate space has further advantages and is well suited to accommodate immediate changes to the meaning of the color ramp. For example, by applying a simple 3D transformation such as a translation in texture space, the zero line of the color code may be shifted. Applying a scaling transformation to the texture adjusts the range of the mapping. Such modifications may be performed in real time.

With texture mapping, the resulting sharp transitions from one color-value to the next significantly improve the rendering accuracy. Additionally, these sharp transitions help with visually understanding the object's 3D shape.

Isocontouring on Surfaces

Similar to the color bands in general color-coding, discrete contour lines drawn on an object provide valuable information about the object's geometry as well as its properties and are widely used in visual analysis applications. For example, in a topographic map they might represent height above some plane that is either fixed in world coordinates or moves with the object [11]. Alternatively, the curves may indicate intrinsic surface properties, such as an interaction potential or stress distributions.

With texture mapping, discrete contouring may be achieved using the same setup as for general color-coding. Again, the texture is 1D, filled with a base color that represents the object's surface appearance. At each location of a contour threshold, a pixel is set to the color of the particular threshold. Figure 3 shows an application of this texture to display the hydrophobic potential of Gramicidine A, a channel-forming molecule, as a set of isocontour lines on the molecular surface.
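
As an illustration, the following fragment sketches how such a contour texture could be built before being handed to texdef2d as a 1 x N image (see the pseudo-code example later in this report); the resolution, base color, contour spacing, and the cpack-style RGBA packing are assumptions.

        #define CONTOUR_TEX_SIZE 256
        #define CONTOUR_SPACING   16

        /* one row of a 1 x N texture, RGBA packed into 32-bit words */
        unsigned long contourTex[CONTOUR_TEX_SIZE];

        void buildContourTexture(void)
        {
            int i;

            /* base color of the surface: opaque white */
            for (i = 0; i < CONTOUR_TEX_SIZE; i++)
                contourTex[i] = 0xffffffff;

            /* one dark texel at each contour threshold */
            for (i = 0; i < CONTOUR_TEX_SIZE; i += CONTOUR_SPACING)
                contourTex[i] = 0xff000000;     /* opaque black */
        }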

Scaling of the texture space is used to control the spacing of contour thresholds. In a similar fashion, translation of the texture space will result in a shift of all threshold values. Note that neither the underlying geometry nor the texture itself was modified during this procedure. Adjustment of the threshold spacing is performed in real time and thus is fully interactive.

Figure 3: Isocontour on a molecular surface with different scaling in texture space.

Displaying Metrics on Arbitrary Surfaces

An extension of the concept presented in the previous section can be used to display metrics on an arbitrary surface, based on a set of reference planes. Figure 4 demonstrates the application of a 2D texture to attach tick marks on the solvent-accessible surface of a zeolite.

In contrast to the property-based, per-vertex binding of texture coordinates, the texture coordinates for the metric texture are generated automatically: the distance of an object vertex to a reference plane is calculated by the hardware and translated on the fly into texture coordinates. In this particular case two orthogonal planes are fixed to the orientation of the object's geometry. This type of representation allows for exact measurement of sizes and distances on a surface.
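
The report does not list the corresponding calls, but this kind of automatic coordinate generation maps naturally onto the GL texgen interface. The fragment below is only a sketch under that assumption, and the two plane equations are placeholders for the actual reference planes.

        /* distance to an object-fixed plane, d = A*x + B*y + C*z + D,
           used directly as texture coordinate (placeholder planes)    */
        float sPlane[4] = { 1.0, 0.0, 0.0, 0.0 };
        float tPlane[4] = { 0.0, 1.0, 0.0, 0.0 };

        texgen(TX_S, TG_LINEAR, sPlane);
        texgen(TX_T, TG_LINEAR, tPlane);
        texgen(TX_S, TG_ON, 0);
        texgen(TX_T, TG_ON, 0);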

Figure 4: Display of metrics on a zeolite's molecular surface with a 2D texture.

Information Filtering

The concept of using a 1D texture for color-coding of surface properties may be extended to 2D or even 3D. Thus a maximum of three independent properties can be visualized simultaneously. However, appropriate multidimensional color lookup tables must be designed for each particular application, because a general solution is nontrivial, if possible at all. Special care must be taken not to overload the surface with too much information.

One possible, rather general solution can be obtained by combining a 1D color ramp with a 1D threshold pattern as presented in the isocontouring example, i.e., color bands are used for one property, whereas orthogonal, discrete isocontour lines code for the second property. In this way it is possible to display two properties simultaneously on the same surface, while still being capable of distinguishing them clearly.

Another approach uses one property to filter the other and display the result on the object's surface, generating additional insight in two different ways: (1) the filter allows the scientist to distinguish between important and irrelevant information, e.g., to display the hot spots on an electrostatic surface potential, or (2) the filter puts an otherwise qualitative property into a quantitative context, e.g., to use the standard deviation from a mean value to provide a hint as to how accurate a represented property actually is at a given location on the object surface.

A good example of this is the combined display of the electrostatic potential (ESP) and the molecular lipophilic potential (MLP) on the solvent-accessible surface of Gramicidine A. The electrostatic potential gives some information on how specific parts of the molecule may interact with other molecules; the molecular lipophilic potential gives a good estimate of where the molecule is in contact either with water (lipophobic regions) or with the membrane (lipophilic regions). The molecule itself is a channel-forming protein and is located in the membrane of bioorganisms, regulating the transport of water molecules and ions. Figure 5 shows the color-coding of the solvent-accessible surface of Gramicidine A against the ESP filtered with the MLP. The texture used for this example is shown in Figure 8.

Figure 5: Solvent-accessible surface of Gramicidine A, showing the ESP filtered with the MLP.

The surface is color-coded, or gray-scaled as in the printed example, only at those locations where the surface has a certain lipophobicity. The surface parts with lipophilic behavior are clamped to white. In this example the information is filtered using a delta-type function, suppressing all information not exceeding a specified threshold. In other cases, a continuous filter may be more appropriate to allow a more fine-grained quantification.
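
A sketch of how such a filter texture might be generated is shown below. The resolution, the threshold handling, and the white clamp color are assumptions, and espRampColor() is a hypothetical helper standing in for whatever color ramp codes the ESP; setting the alpha byte to zero in the filtered region turns the same texture into a clipping texture, as discussed in the following sections.

        #define FILTER_W 64   /* s: filter property (MLP), clamped   */
        #define FILTER_H 64   /* t: color-coded property (ESP), repeated */

        extern unsigned long espRampColor(int t, int height);   /* hypothetical ramp */

        unsigned long filterTex[FILTER_H][FILTER_W];

        void buildFilterTexture(int sThreshold, int clip)
        {
            int s, t;

            for (t = 0; t < FILTER_H; t++) {
                for (s = 0; s < FILTER_W; s++) {
                    if (s < sThreshold) {
                        /* filtered region: opaque white, or fully transparent
                           (alpha = 0) when the texture is used for clipping   */
                        filterTex[t][s] = clip ? 0x00ffffff : 0xffffffff;
                    } else {
                        /* unfiltered region: ESP color ramp along t */
                        filterTex[t][s] = espRampColor(t, FILTER_H);
                    }
                }
            }
        }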

Another useful application is to filter the electrostatic potential with the electric field. Taking the absolute value of the electric field, the filter easily pinpoints the areas of the highest local field gradient, which helps in identifying the binding site of an inhibitor without further interaction of the scientist. With translation in the texture space, one can interactively modify the filter threshold or change the appearance of the color ramp.

Arbitrary Surface Clipping

Color-coding in the sense of information filtering affects only the color information of the texture map. By adding transparency as an additional information channel, a lot of flexibility is gained for the comparison of multiple property channels. In a number of cases, transparency even helps in geometrically understanding a particular property. For example, the local flexibility of a molecular structure according to the crystallographically determined B-factors can be visually represented: the more rigid the structure is, the more opaque the surface will be displayed. Increasing transparency indicates higher flexibility of the domains. Such a transparency map may well be combined with any other color-coded property, as it is of interest to study the dynamic properties of a molecule in many different contexts.

An extension to the continuous variation of surface transparency, as in the example of molecular flexibility mentioned above, is the use of transparency to clip parts of the surface away completely, depending on a property coded into the texture. This can be achieved by setting the alpha values of the appropriate texels directly to zero. Applied to the information filtering example of Gramicidine A, one can clip the surface using a texture in which all alpha values in the previously white region are reset to 0, as is demonstrated in Figure 6.

Figure 6: Clipping of the solvent-accessible surface of Gramicidine A according to the MLP.

There is a distinct advantage in using alpha texture as a component for information filtering: irrelevant information can be completely eliminated, while geometric information otherwise hidden within the surface is revealed directly in the context of the surface. And again, it is worthwhile to mention that by a translation in texture space, the clipping range can be changed interactively!

Color-Coding Pseudo-Code Example

All above-described methods for property visualization on object surfaces are based upon the same texture mapping requirements. They are not very demanding in terms of features or the amount of texture memory needed.

Two options are available to treat texture coordinates that fall outside the range of the parametric unit square: either the texture can be clamped to constant behavior, or the entire texture image can be repeated periodically. In the particular examples of 2D information filtering or property clipping, the parametric s coordinate is used to modify the threshold (clamped) and the t coordinate is used to change the appearance of the color code (repeated). Figure 7 shows different effects of transforming this texture map, while the following pseudo-code example expresses the presented texture setup. GL-specific calls and constants are highlighted in boldface:

        texParams = {
            TX_MINIFILTER, TX_POINT,    /* point sampling for minification  */
            TX_MAGFILTER, TX_POINT,     /* point sampling for magnification */
            TX_WRAP_S, TX_CLAMP,        /* s clamped: filter threshold      */
            TX_WRAP_T, TX_REPEAT,       /* t repeated: color code           */
            TX_NULL
        };

        texdef2d(
            texIndex, numTexComponents,
            texWidth, texHeight, texImage,
            numTexParams, texParams
        );

        texbind(texIndex);

The texture image is an array of unsigned integers, where the packing of the data depends on the number of components being used for each texel.
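
For a four-component texture, one common layout packs one byte per component into each 32-bit word; the following sketch assumes the same byte order as cpack (red in the least significant byte) and must be adapted if fewer components are used.

        /* pack an RGBA quadruple (0..255 each) into one texel word */
        unsigned long packTexel(int r, int g, int b, int a)
        {
            return ((unsigned long)a << 24) |
                   ((unsigned long)b << 16) |
                   ((unsigned long)g <<  8) |
                    (unsigned long)r;
        }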

Figure 7: Example of a 2D texture used for information filtering, with different transformations applied: original texture (left), translation along the s coordinate to adjust the filter threshold (middle), and scaling along the t coordinate to change the meaning of the texture colors (right).

The texture environment defines how the texture modifies incoming pixel values. In this case we want to keep the information from the lighting calculation and modulate it with the color coming from the texture image:

        texEnvParams = {
            TV_MODULATE, TV_NULL
        };

        tevdef(texEnvIndex, numTexEnvParams, texEnvParams);
        tevbind(texEnvIndex);

Matrix transformations in texture space must be targeted to a matrix stack that is reserved for texture modifications:

        mmode(MTEXTURE);
        translate(texTransX, 0.0, 0.0);
        scale(1.0, texScaleY, 1.0);
        mmode(MVIEWING);

The drawing of the object surface requires the binding of a neutral material to get a basic lighting effect. For each vertex, the coordinates, the surface normal, and the texture coordinates are traversed in the form of calls to v3f, n3f, and t2f.

The afunction() call is only needed in the case of surface clipping. It will prevent the drawing of any part of the polygon that has a texel color with alpha = 0:

        pushmatrix();
        loadmatrix(modelViewMatrix);
        if (clippingEnabled) {
            /* discard pixels whose texel alpha equals 0 */
            afunction(0, AF_NOTEQUAL);
        }
        drawTexturedSurface();
        popmatrix();

Figure 8: Schematic representation of the drawTexturedSurface() routine.
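
A minimal sketch of such a traversal routine is given below, assuming the surface is kept in simple triangle arrays; numTriangles, texCoord, normal, vertex, and neutralMaterial are assumed names, and the per-vertex texture coordinates are taken to hold the property values already scaled into texture space.

        extern int   numTriangles;                      /* assumed surface description */
        extern float vertex[][3][3], normal[][3][3], texCoord[][3][2];
        extern int   neutralMaterial;                   /* material defined via lmdef  */

        void drawTexturedSurface(void)
        {
            int i, j;

            lmbind(MATERIAL, neutralMaterial);   /* neutral material for basic lighting */

            for (i = 0; i < numTriangles; i++) {
                bgnpolygon();
                for (j = 0; j < 3; j++) {
                    t2f(texCoord[i][j]);         /* property as texture coordinate */
                    n3f(normal[i][j]);           /* surface normal for lighting    */
                    v3f(vertex[i][j]);           /* vertex coordinates             */
                }
                endpolygon();
            }
        }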

Real-Time Volume Rendering Techniques

Volume rendering is a visualization technique used to display 3D data directly, without the intermediate step of deriving a geometric representation such as a solid surface or a chicken-wire mesh. The graphical primitives characteristic of this technique are called voxels, derived from volume element in analogy to the pixel. However, voxels describe more than just color; they can represent opacity or shading parameters as well.

A variety of experimental and computational methods produce such volumetric data sets: computer tomography (CT), magnetic resonance imaging (MRI), ultrasonic imaging (UI), confocal light scanning microscopy (CLSM), electron microscopy (EM), and X-ray crystallography, just to name a few. Characteristic of these data sets are a low signal-to-noise ratio and a large number of samples, which makes it difficult to use a surface-based rendering technique, both from a performance and a quality standpoint.

The data structures employed to manipulate volumetric data come in two flavors: (1) the data may be stored as a 3D grid, or (2) it may be handled as a stack of 2D images. The former data structure is often used for data that is sampled more or less equally in all three dimensions, whereas the image stack is preferred for data sets that are high resolution in two dimensions and sparse in the third.

Historically, a wide variety of algorithms has been invented to render volumetric data, ranging from ray tracing to image compositing [12]. The methods cover an even wider range of performance, where the advantage of image compositing clearly emerges: several images are created by slicing the volume perpendicular to the viewing axis and are then combined back to front, summing voxel opacities and colors at each pixel.

In the majority of cases, the volumetric information is stored using one color channel only. This allows the use of lookup tables (LUTs) for alternative color interpretation. Before a particular entry in the color channel is rendered to the frame buffer, the color value is interpreted as an index into a table that translates it into the color actually displayed. By rapidly changing the color and/or opacity transfer function, various structures in the volume are interactively revealed.
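
In plain C terms, the stored voxel value acts purely as an index; a sketch of how a transfer function might be applied when (re)building the texel array is shown below, with lut, density, and numVoxels as assumed names.

        /* 256-entry RGBA transfer function, edited interactively */
        unsigned long lut[256];

        /* reinterpret the single-channel volume through the current LUT */
        void applyTransferFunction(const unsigned char *density,
                                   unsigned long *texels, int numVoxels)
        {
            int i;

            for (i = 0; i < numVoxels; i++)
                texels[i] = lut[density[i]];
        }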

By using texture mapping to render the images in the stack, a performance level is reached that is far superior to any other technique used today and allows the real-time manipulation of volumetric data. In addition, a considerable degree of flexibility is gained in performing spatial transformations to the volume, since the transformations are applied in the texture domain and cause no performance overhead.

Volume Rendering Using 2D Textures

As a straightforward extension of the original image compositing algorithm, 2D textures can directly replace the images in the stack. A set of mostly quadrilateral polygons is rendered back to front, with each polygon binding its own texture when the depth of the polygon corresponds to the location of a sampled image. Alternatively, polygons in between may be textured in a two-pass procedure, i.e., the polygon is rendered twice, each time binding one of the two closest images as a texture and filtering it with an appropriate linear weighting factor. In this way, in-between frames may be obtained even if the graphics subsystem doesn't support texture interpolation in the third dimension.
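
Leaving the two-pass treatment of in-between polygons aside, the basic back-to-front loop might look like the following sketch; numSlices, sliceTexIndex, and drawSliceQuad() are assumed names.

        int i;

        blendfunction(BF_SA, BF_MSA);            /* blend slices by source alpha */

        for (i = numSlices - 1; i >= 0; i--) {   /* back to front                */
            texbind(sliceTexIndex[i]);           /* texture of this image        */
            drawSliceQuad(i);                    /* textured quad at slice depth */
        }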

The resulting volume looks correct as long as the polygons of the image stack are aligned parallel to the screen. However, it is important to be able to look at the volume from arbitrary directions. Because the polygon stack degenerates to a set of lines when oriented perpendicular to the screen, a correct perception of the volume is then no longer possible.

This problem can easily be solved. By preprocessing the volumetric data into three independent image stacks that are oriented perpendicular to each other, the most appropriate image stack can be selected for rendering based on the orientation of the volume object. As soon as one stack of textured polygons is rotated toward a critical viewing angle, the rendering function switches to one of the two other sets of textured polygons, depending on the current orientation of the object.
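
One way to make that decision is to transform the viewing direction into object space and pick the stack whose axis has the largest absolute component, as in this sketch (objectViewDir is an assumed, already transformed vector):

        #include <math.h>

        /* returns 0, 1, or 2 for the stacks perpendicular to x, y, and z */
        int selectImageStack(const float objectViewDir[3])
        {
            double ax = fabs(objectViewDir[0]);
            double ay = fabs(objectViewDir[1]);
            double az = fabs(objectViewDir[2]);

            if (ax >= ay && ax >= az) return 0;
            if (ay >= az)             return 1;
            return 2;
        }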

Volume Rendering Using 3D Textures

As described in the previous section, it is not only possible, but almost trivial to implement real-time volume rendering using 2D texture mapping. In addition, the graphics subsystems will operate at peak performance, because they are optimized for fast 2D texture mapping. However, there are certain limitations to the 2D texture approach: (1) the memory required by the triple image stack is a factor of 3 larger than the original data set, which can be critical for large data sets such as are common in medical imaging or microscopy, and (2) the geometry sampling of the volume must be aligned with the 2D textures with respect to depth, i.e., arbitrary surfaces constructed from a triangle mesh cannot easily be colored depending on the properties of a surrounding volume.

For this reason, advanced rendering architectures support hardware implementations of 3D textures. The correspondence between the volume to be rendered and the 3D texture is obvious. Any 3D surface can serve as a sampling device to monitor the coloring of a volumetric property, i.e., the final coloring of the geometry reflects the result of the intersection with the texture. Following this principle, 3D texture mapping is a fast, accurate, and flexible technique for looking at the volume.

The simplest application of 3D textures is a slice plane that cuts through the volume, now represented directly by the texture, in arbitrary orientations. The planar polygon used as geometry in this case reflects the contents of the volume as if it were exposed by cutting the object with a knife, as shown in Figure 9. Since the transformations of the sampling polygon and of the 3D texture are independent, the polygon may be freely oriented within the volume. The property visualized in Figure 9 is the probability distribution of water around a sugar molecule. The orientation of the volume, that is, the transformation in texture space, is the same as that of the molecular structure. Either the molecule, together with the volumetric texture, or the slicing polygon may be reoriented in real time.

An extension of the slice plane approach leads to visualization of the entire volume. A stack of slice planes, oriented parallel to the computer screen, samples the entire 3D texture. The planes are drawn back to front at sufficiently small intervals. Geometric transformations of the volume are performed by manipulating the orientation of the texture while keeping the planes in screen-parallel orientation, as can be seen in Figure 10, which shows a volume-rendered example of a medical application.
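
A sketch of such a sampling loop is shown below; the slice polygons stay screen-parallel while the volume's orientation lives entirely in the texture matrix. The unit-cube extent of the sampling planes and the name numSlices are assumptions.

        int   i, j;
        float depth, tex[3], vert[3];
        /* corners of a screen-parallel unit square, used for s/t and x/y */
        static float corner[4][2] = { {0,0}, {1,0}, {1,1}, {0,1} };

        blendfunction(BF_SA, BF_MSA);

        for (i = 0; i < numSlices; i++) {             /* back to front          */
            depth = (float)i / (float)(numSlices - 1);

            bgnpolygon();
            for (j = 0; j < 4; j++) {
                tex[0]  = corner[j][0];               /* s                      */
                tex[1]  = corner[j][1];               /* t                      */
                tex[2]  = depth;                      /* r: slice depth         */
                vert[0] = corner[j][0];
                vert[1] = corner[j][1];
                vert[2] = depth - 0.5;                /* stack along the z axis */
                t3f(tex);
                v3f(vert);
            }
            endpolygon();
        }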

This type of volume visualization is greatly enhanced by interactive updates of the color lookup table used to define the texture. In fact, a general-purpose color ramp editor may be used to vary the lookup colors or the transparency based on the scalar information at a given point in the 3D volume.

Figure 9: Slice plane through the water density surrounding a sugar molecule.

The slice plane concept can be extended to arbitrarily shaped objects. The idea is to probe a volumetric property and to display it wherever the geometric primitives of the probing object cut the volume. The probing geometry can be of any shape, e.g., a sphere, collecting information about the property at a certain distance from a specified point, or it may be extended to describe the surface of an arbitrary object.

The independence of the object's transformation from that of the 3D texture offers complete freedom in orienting the surface with respect to the volume. As a further example of a molecular modeling application, this provides an opportunity to look at a molecular surface and have the information about a surrounding volumetric property updated in real time, based on the current orientation of the surface.

Figure 10: Volume rendering of MRI data using a stack of screen-parallel sectioning planes, which is cut in half to reveal detail in the inner part of the object.

High-Quality Surface Rendering

The visualization of solid surfaces with a high degree of local curvature is a major challenge for accurate shading and is where the simple Gouraud shading [13] approach always fails. Here, the lighting calculation is performed for each vertex, depending on the orientation of the surface normal with respect to the light sources. The output of the lighting calculations is an RGB value for the surface vertex. During rasterization of the surface polygon the color value of each pixel is computed by linear interpolation between the vertex colors. Aliasing of the surface highlight is then a consequence of undersampled surface geometry, resulting in moving Gouraud banding patterns on a surface rotating in real time, which is very disturbing. Moreover, the missing accuracy in shading the curved surfaces often leads to a severe loss of information on the object's shape, which is not only critical for the evaluation and analysis of scientific data, but also for the visualization of CAD models, where the visual perception of shape governs the overall design process.

Figure 11 demonstrates this problem with a simple example: on the left, the sphere exhibits typical Gouraud artifacts; on the right, the same sphere is shown with a superimposed mesh that reveals the tessellation of the sphere surface. Looking at these images, it is obvious how the shape of the highlight was generated by linear interpolation. When the sphere is rotated, the highlight begins to oscillate, depending on how close the surface normal at the brightest vertex comes to the precise highlight position.

Figure 11: Gouraud shading artifacts on a moderately tessellated sphere.

Correct perception of the curvature and constant, nonoscillating highlights can only be achieved with computationally much more demanding rendering techniques such as Phong shading [14]. In contrast to linear interpolation of vertex colors, the Phong shading approach interpolates the normal vectors for each pixel of a given geometric primitive, computing the lighting equation in the subsequent step for each pixel. Attempts have been made to overcome some of the computationally intensive steps of the procedure [15], but their performance is insufficient to be a reasonable alternative to Gouraud shading in real-time applications.

Real-Time Phong Shading

With 2D texture mapping it is now possible to achieve both high drawing performance and highly accurate shading. The resulting picture compares exactly to a surface computed with the complete Phong model under infinitely distant light sources.

The basic idea is to use the image of a high-quality rendered sphere as the texture. The object's unit-length surface normal is interpreted as the texture coordinate. Looking at an individual triangle of the polygonal surface, the texture mapping process may be understood as if the image of the perfectly rendered sphere were wrapped piecewise onto the surface polygons. In other words, the surface normal serves as a lookup vector into the texture, which acts as a 2D lookup table storing precalculated shading information.

The advantage of such a shading procedure is clear: the interpolation is done in texture space and not in RGB; therefore, the position of the highlight will never be missed. Note that the tessellation of the texture mapped sphere is exactly the same as for the Gouraud shaded reference sphere in Figure 11.

Figure 12: Phong-shaded sphere using surface normals as a lookup for the texture coordinate.

As previously mentioned, this method of rendering solid surfaces with the highest accuracy can be applied to arbitrarily shaped objects. Figure 13 shows the 3D reconstruction of an electron microscopic experiment, visualizing a large biomolecular complex, the asymmetric unit membrane of the urinary bladder. The difference between Gouraud shading and the texture mapping implementation of Phong shading is obvious and, given the printing quality, is best seen in the closeups. Although this trick is so far applicable only to infinitely distant light sources, it is a tremendous aid for the visualization of highly complex surfaces.

Figure 13: Application of the texture mapped Phong shading to a complex surface representing a biomolecular structure. The closeups demonstrate the difference between Gouraud shading (above right) and Phong shading (below right) when implemented using texture mapping.

Phong Shading Pseudo-Code Example

The setup for the texture mapping as used for Phong shading is shown in the following code fragment:

        texParams = {
            TX_MINIFILTER, TX_POINT,
            TX_MAGFILTER, TX_BILINEAR,
            TX_NULL
        };

        texdef2d(
            texIndex, numTexComponents,
            texWidth, texHeight, texImage,
            numTexParams, texParams
        );

        texbind(texIndex);

        texEnvParams = { TV_MODULATE, TV_NULL };

        tevdef(texEnvIndex, numTexEnvParams, texEnvParams);
        tevbind(texEnvIndex);

As texture, we can use any image of a high-quality rendered sphere with either RGB or one intensity component only. The RGB version allows the simulation of light sources with different colors.

The most important change for the vertex calls in this model is that we do not pass the surface normal data with the n3f command as we normally do when using Gouraud shading. The normal is passed as texture coordinate and therefore processed with the t3f command.

Surface normals are transformed with the current model view matrix, although only the rotational components are considered. For this reason the texture must be aligned with the current orientation of the object. Also, the texture space must be scaled and shifted to cover a circle of unit radius centered at the origin of the s/t coordinate system, onto which the surface normals are mapped.
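
A minimal sketch of this texture-matrix setup is given below; identityMatrix and objectRotationMatrix are assumed names for the identity and for the object's current rotation, and the exact ordering may need to be adapted to the application's matrix conventions.

        mmode(MTEXTURE);
        loadmatrix(identityMatrix);            /* assumed 4x4 identity matrix        */
        translate(0.5, 0.5, 0.0);              /* move circle center to (0.5, 0.5)   */
        scale(0.5, 0.5, 1.0);                  /* unit radius -> [0,1] texture range */
        multmatrix(objectRotationMatrix);      /* align with the object orientation  */
        mmode(MVIEWING);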

Figure 15: Schematic representation of the drawTexPhongSurface() routine.
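
Under the same assumptions as for the color-coding example (simple triangle arrays with hypothetical names), such a routine differs from drawTexturedSurface() mainly in passing the normal through t3f instead of n3f; a minimal sketch:

        void drawTexPhongSurface(void)
        {
            int i, j;

            for (i = 0; i < numTriangles; i++) {
                bgnpolygon();
                for (j = 0; j < 3; j++) {
                    t3f(normal[i][j]);     /* unit normal as texture coordinate */
                    v3f(vertex[i][j]);     /* vertex coordinates                */
                }
                endpolygon();
            }
        }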

Conclusions

SGI has recently introduced a new generation of graphics subsystems, which support a variety of texture mapping techniques in hardware without performance penalty. The potential of using this technique in technical, scientific and engineering visualization applications has been demonstrated.

Hardware-supported texture mapping offers solutions to important visualization problems that either have not been solved yet or did not perform well enough to enter the world of interactive graphics applications. Although most of the examples presented here could be implemented using techniques other than texture mapping, the tradeoff would either be complete loss of performance or an unmaintainable level of algorithmic complexity.

Most of the examples were taken from the molecular modeling market, where one has learned over the years to handle complex 3D scenarios interactively and in an analytic manner. What has been shown here can also be applied in other areas of scientific, technical, or engineering visualization. With the examples shown in this report, it should be possible for software engineers developing application software in other markets to use the power and flexibility of texture mapping and to adapt the shown solutions to their specific case.

One important, general conclusion may be drawn from this work: one has to leave the traditional mind set about texture mapping and go back to the basics in order to identify the participating components and to understand their generic role in the procedure. Once this step is done it is very simple to use this technique in a variety of visualization problems.

All examples were implemented on an SGI® Crimson™ RealityEngine™ [7] equipped with two raster managers. The programs were written in C, either in mixed-mode GLX or pure GL.

References


  1. Blinn, J.F. and Newell, M.E. Texture and reflection in computer generated images, Communications of the ACM 1976, 19, 542-547.

  2. Blinn, J.F. Simulation of wrinkled surfaces, Computer Graphics 1978, 12, 286-292.

  3. Haeberli, P. and Segal, M. Texture mapping as a fundamental drawing primitive, Proceedings of the fourth eurographics workshop on rendering, 1993, 259-266.

  4. Peachey, D.R. Solid texturing of complex surfaces, Computer Graphics 1985, 19, 279-286.

  5. Gardner, G.Y. Simulation of natural scenes using textured quadric surfaces, Computer Graphics 1984, 18, 11-20.

  6. Gardner, G.Y. Visual simulations of clouds, Computer Graphics 1985, 19, 279-303.

  7. Akeley, K. RealityEngine Graphics, Computer Graphics 1993, 27, 109-116.

  8. Catmull, E.A. Subdivision algorithm for computer display of curved surfaces, Ph.D. thesis University of Utah, 1974.

  9. Crow, F.C. Summed-area tables for texture mapping, Computer Graphics 1984, 18, 207-212.

  10. Dill, J.C. An application of color graphics to the display of surface curvature, Computer Graphics 1981, 15, 153-161.

  11. Sabella, P. A rendering algorithm for visualizing 3d scalar fields, Computer Graphics 1988, 22, 51-58.

  12. Drebin, R. Carpenter, L. and Hanrahan, P. Volume Rendering, Computer Graphics 1988, 22, 65-74.

  13. Gouraud, H. Continuous shading of curved surfaces, IEEE Transactions on Computers 1971, 20, 623-628.

  14. Phong, B.T. Illumination for computer generated pictures, Communications of the ACM 1975, 18, 311-317.

  15. Bishop, G. and Weimer, D.M. Fast Phong shading, Computer Graphics 1986, 20, 103-106.


Copyright © 1992, 1998, Silicon Graphics, Inc.