Recent discussions on the OPENGL-GAMEDEV
mailing list have explored how to support numerous (more than 8)
light sources at once in an OpenGL-rendered scene. For example,
consider a hallway in a Quake-style dungeon where candles or torches
illuminate dynamic objects in the corridor. The candles/torches can be
modeled as positional, attenuated OpenGL light sources.
The problem is that most OpenGL implementations support only the
minimum required number of light sources, which is eight (8).
Note: OpenGL implementations are free to
support an arbitrary number of light sources, but to keep hardware
accelerated lighting tractable, OpenGL only mandates that at least 8
light sources be supported.
Even if an arbitrary number of OpenGL light sources were
available, the performance implications of a dozen or more OpenGL light
sources enabled throughout the scene are likely to be prohibitive,
particularly considering that far-away lights may add an extremely
insignificant or even no lighting contribution to many objects in the
scene.
A couple of solutions were proposed on the mailing list:
-
Instead of using OpenGL's lighting model, calculate your own lighting
effects and simply use glColor3f to assign the
application-computed lit color.
Application-computed lighting has the advantage of allowing arbitrary
lighting effects and potentially more efficient lighting calculations.
It has the disadvantage of not utilizing graphics hardware that
accelerates lighting calculations.
-
Use textured lightmaps (a la Quake) to generate convincing static
lighting patterns on walls and other surfaces. For example, the
lighting pattern of candles in a hallway could be generated with a
texture of each candle's localized light pattern.
Lightmaps have the advantage of permitting convincing lighting effects
that mimic radiosity solutions and largely avoid transform costs for
lighting (instead, lightmaps stress texture mapping and blending).
Lightmaps have the disadvantage of being fairly static (often
pre-computed) so they do not easily handle lighting situations
involving dynamic objects.
-
Use OpenGL light sources, but do per-object calculations to determine
which light sources will most affect the overall lit appearance of each
lit object. Dynamically configure and enable the appropriate light
sources before rendering each lit object. For example, in the hallway
of candles, only the candles nearest a particular object should affect
the object significantly because candles are dim. This approach is
called virtualized light sources because a potentially large
number of scene light sources are mapped dynamically to OpenGL's
limited number of (potentially hardware accelerated) OpenGL light
sources.
Virtualized light sources have the advantage of leveraging OpenGL's
hardware acceleration for per-vertex lighting calculations and of using
OpenGL's easy-to-use API. The technique can also improve overall
lighting performance by disabling light sources with insignificant
contributions to an object's coloration. Virtualized light sources
have the disadvantage of introducing overhead due to lighting state
changes, and they may not be able to completely handle (rare?) situations
where more than 8 light sources (or whatever higher limit exists) are
truly needed to correctly capture an intricate lighting effect.
The remainder of this article describes the last approach in more
detail including the presentation of screen snapshots and sample source
code to demonstrate the technique.
First, a few words about OpenGL's lighting model. Light is a
complicated phenomenon. OpenGL's lighting model is designed for
real-time interaction; OpenGL's lighting model only attempts to capture
some of the simplest lighting surface effects such as diffuse and
specular interactions. OpenGL's lighting model does not handle
complicated effects such as shadows, reflections, refraction, or
occlusion of light (relativistic and quantum light interactions are
similarly ignored). If you want to implement effects such as reflections
and shadows with OpenGL, you can with more sophisticated rendering
techniques beyond those supported by OpenGL's lighting model. What
OpenGL does model is per-vertex interactions involving only the surface
material and a set of light sources. In practice, this is enough to
achieve some pretty nice effects at interactive rates.
In practice, you can think of OpenGL's lighting model as really a bunch
of equations that compute an RGB color value at each vertex. Indeed, if
you want to really understand OpenGL's lighting model, see the
explanation of OpenGL's lighting operation in the OpenGL 1.1 specification.
The fact that OpenGL's lighting equations are explicit makes it
straightforward for applications to quickly and robustly approximate
the contributions of various light sources in the scene more or less
the same way that OpenGL does. This lets the application virtualize its
light sources. Some other 3D graphics APIs such as Direct3D do not
specify the lighting equations the API uses in enough detail to
pre-compute lighting effects reliably (in the case of
Direct3D, the API lacks both a rigorous specification and a
standard conformance suite to enforce a uniform behavior; OpenGL has both).
Before we get much further describing the approach, let's take a look
at a screen snapshot from the multilight.c example (the full
working source code is available; the program uses the OpenGL Utility
Toolkit for portability):

So what is the image showing? The sphere in the scene wanders among the
two rows of light sources (indicated by the small colored
spheres). Think of the two rows of light sources as candles in a
hallway if you want (use your imagination). Notice that most of the
light sources are numbered. The closer (less distant from the sphere)
light sources have smaller numbers (the blue "0" light source
is the closest; light sources "7" and "4" are
actually hidden behind the sphere). The distant light sources (what
would be "8" and beyond) are simply not enabled. Because of
the way these light sources attenuate over distance, the distant
light sources wouldn't change the sphere's appearance even if they were
enabled.
Indeed, look at the following snapshot of basically the same scene:

What's the difference? If you look closely, light sources "7",
"6", "5", and "4" are gray, not white.
This indicates that these light sources are disabled. Note that the
sphere's lighting looks basically the same; this is because those
four light sources really aren't affecting the coloration of the
sphere in any significant way (yet even so, when enabled, they
generally still slow down your rendering!). The point is that if we are
clever about knowing how light sources contribute to the scene, we can get
the same scene appearance with less lighting overhead.
The idea is that if light sources are localized (technically, if the
light sources are positional and attenuated), it makes sense to not
enable light sources that are too dim to contribute to the lighting of
an object. The more light sources in your scene, the truer this becomes
because all those extra light sources would probably suck performance
without significantly improving your 3D scene.
When you run the multilight example (I encourage you to
compile it and try it out), you'll notice that as the sphere wanders
among the light sources, the distances between the sphere and the light
sources change. The program automatically updates which light sources
are active. You'll see that the numbers change as the sphere's location
changes. The un-numbered and high-numbered light sources are always the
ones most distant (that is, likely to not affect the sphere's
coloration much).
Think about the light sources labelled "4" and "7"
in the snapshots above. These lights are not going to affect the sphere
from the view shown because they are "behind" the sphere. Their
diffuse contribution to the sphere's lighting is nil for all of the
sphere we can see from this view. This is true even though these light
sources are fairly close to the sphere. So our distance-based
determination of how "bright" or "dim" a light
source is may not be the best way to decide which light sources
should be enabled.
Lambert's Law (explained in Section 6.3.2 of Ed Angel's OpenGL
textbook) models the way diffuse reflections occur. Basically,
Lambert's Law explains that the diffuse light bouncing off a diffuse
surface is proportional to the cosine of the angle between the normal
of the surface and the direction of the light source. A more
sophisticated determination of which diffuse light sources affect a
diffuse object should use Lambert's Law, not simply rely on distance.
The multilight example implements such a scheme. See the
snapshot below:

In this version, the picture looks about the same, but light sources
behind the sphere are no longer in the "top 8" light sources
affecting the object. The Lambertian-based approach does a better job
determining what light sources are really going to contribute to the
lighting of the sphere based on not just their distance from the
sphere, but also the nature of diffuse reflections from the surface.
A better determination of which lights are most important to the
object's lighting is important because it means that in more
complicated situations where lots of lights matter, the determination
doesn't needlessly enable light sources just because they are close.
Fewer enabled light sources = improved performance.
Here's the same scene from a different viewpoint positioned so that the
blue light marked "0" above is actually behind the sphere now:

Notice that the blue light source that was "0" in the scene
before is now not even enabled; and the right light source now marked
"0" was not even enabled before. This makes sense because of
how diffuse reflection works. Unlike with the distance-based approach, as
the view changes, so does the determination of which light sources
contribute most to the object's lighting.
One caveat is that to quickly approximate diffuse reflection when
determining which light sources are most significant, multilight
assumes that the "normal" of the sphere directly
faces us. That's approximately true for most of the sphere that is
facing us; it is not really true for the sides of the sphere that are
still visible to us.
So how does this work in practice? For the scene in multilight
(admittedly constructed to demonstrate this point), generally only
about 4 light sources significantly contribute to the scene. By
enabling only the four most significant light sources (instead of all
eight), multilight can render frames 33% faster on a 200 MHz Indigo2 XL
(where lighting calculations are done on the main CPU, not off-loaded to
dedicated graphics hardware) compared to naively
enabling 8 light sources (presumably, if the full 12 light sources in
the scene could be enabled, it would be even slower). The point is that
virtualized light sources can permit faster rendering at basically the
same visual quality as naively dedicating an OpenGL light source to each
light source in your scene.
You can download or read the multilight.c
source code.
If you want to find more information about using OpenGL for
sophisticated rendering effects, check out these other OpenGL rendering techniques.
- OpenGL Web site