Topic Name: Innovative Computer Graphics Machine that Reduces the Computational Cost of Making Realistic Smoky and Foggy 3-D Images Using Ray Tracing Algorithms
Category: Computer science & technology
Research persons: Wojciech Jarosz
Location: University of California - San Diego, United States
UC San Diego computer scientists have created a fog and smoke
machine for computer graphics that cuts the computational cost of making
realistic smoky and foggy 3-D images, such as beams of light from a lighthouse
piercing thick fog.
By cutting the computing cost for creating highly realistic imagery from
scratch, the UCSD computer scientists are helping to pull cutting edge graphics
techniques out of research labs and into movies and eventually video games and
beyond. The findings are being presented this week at Europe’s premier computer
graphics conference, Eurographics 2008 in Crete, Greece on April 17.
This new work is part of a shift in the computer graphics, film, animation and
video game industries toward greater realism through the use of “ray tracing
algorithms.” Much of the realism in ray tracing technologies comes from
calculating how the light in computer generated images would behave if it were
set loose in the real world and followed the laws of nature.
At the heart of the new UCSD advance are computationally slimmed “photon
mapping” algorithms, which are a subset of the ray tracing algorithms. The
computer scientists found a way to collect all of the pertinent lighting
information in computer generated scenes at once, which made the new photon
mapping approach more lightweight than conventional photon mapping. This
technique is especially good for creating smoky, foggy or cloudy scenes and
producing images that do not have much unwanted visual noise.
“We took an algorithm that is already great and made it more efficient,” said
Wojciech Jarosz, the first author on the new Eurographics paper and a Ph.D.
candidate from the Department of Computer Science and Engineering at UCSD’s
Jacobs School of Engineering.
This approach is an improvement upon the Academy Award winning photon mapping
technique first developed by UCSD computer science professor Henrik Wann Jensen
during his doctoral studies. Jensen is an author on this new Eurographics paper.
UCSD computer science professor Matthias Zwicker is an author on the paper as well.
To date, ray tracing algorithms are primarily used in settings where ultimate
realism is required and where heavy computation can be tolerated – such as
offline environments that do not require real-time image rendering. An
ice-cube-filled drink in the animated movie Final Fantasy is one example of
photon mapping in the movies.
Computational constraints, however, have limited the use of photon mapping and
other ray tracing approaches in places where speed and lightweight computation
are crucial – such as video games. These domains have embraced an approach
called rasterization, which is faster but unable to easily simulate advanced lighting effects.
The new photon mapping approach from UCSD, and related advances, are poised to
increase the reach of ray tracing algorithms – perhaps even into the domain of
3-D animation software for the general public and video games. The Larrabee chip
that Intel is working on is just one indication that ray tracing technologies
may play an increasingly important role in consumer-oriented graphics of the future.
Computerized Fog Costs
Much of the richness in images created with photon mapping algorithms comes from
precise accounting of how much light is in a scene and where that light
is. Photon mapping algorithms provide a way to follow the light around the
scene, as it bounces off various objects and lands on other objects. Photon
mapping can also determine how light will interact with fog, smoke or other
“participating media” that absorb, reflect and scatter some portion of the light
– a task that has been traditionally quite computationally costly to perform
because it requires sampling the light at many locations in order to make sure
that nearly all the light is accounted for.
“Instead of computing the light at thousands of discrete points along the ray
between the camera and the object, which is the conventional approach, we
compute the lighting along the whole length of the ray all at once,” said Jarosz.
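Jarosz's point can be illustrated with a toy comparison. The sketch below assumes a homogeneous medium with constant extinction and a constant source term, a simplification in which the integral along the ray has a closed form; the actual "beam estimate" handles the general photon-mapped case, but the contrast between sampling many discrete points and evaluating the whole ray at once is the same:

```python
import math

def ray_march(sigma, source, length, steps):
    """Conventional approach: sample the medium at many discrete
    points along the ray and sum the attenuated contributions."""
    dt = length / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt              # midpoint of each segment
        transmittance = math.exp(-sigma * t)
        total += source * sigma * transmittance * dt
    return total

def whole_ray(sigma, source, length):
    """Closed-form integral over the entire ray at once, valid only
    for this homogeneous toy medium: source * (1 - exp(-sigma * d))."""
    return source * (1.0 - math.exp(-sigma * length))

coarse = ray_march(0.5, 1.0, 10.0, 8)       # few samples: visible error
fine = ray_march(0.5, 1.0, 10.0, 10000)     # many samples: slow but close
exact = whole_ray(0.5, 1.0, 10.0)           # whole ray, one evaluation
```

Driving the marching error down requires ever more samples per ray, which is exactly the cost the whole-ray evaluation avoids.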
To Movies, Then Milk
This more efficient approach to photon mapping could be extended well beyond
foggy and smoky scenes, because many materials, including skin, milk and plants,
behave like fog or smoke, but on a more limited basis.
“Most natural materials behave like really dense fog because light penetrates
them to a limited extent, so this work has a lot of potential future
applications,” said Jensen, who published work in 2007 at SIGGRAPH on a graphics
model capable of generating realistic milk images based on the fat and protein
content. This research is pushing the field of computer graphics into the realms
of diagnostic medicine, food safety and atmospheric science.
While photon mapping and other ray tracing algorithms that more closely mimic
the natural world are making their way into movie special effects and animated
films, Jarosz does not expect movies and video games to strictly follow the laws of nature.
“In live action movies, the lighting is incredibly controlled. If a character
walks into a shadow, they will add light to the face even if you would never get
that kind of light in a real shadow. The composition on the screen must tell the
story and not distract the viewer. Realism doesn’t always matter. It’s the story that matters,” said Jarosz.
Note for Ray Tracing
Ray tracing is a general technique from geometrical optics of modeling the path
taken by light by following rays of light as they interact with optical
surfaces. It is used in the design of optical systems, such as camera lenses,
microscopes, telescopes and binoculars. The term is also applied to mean a
specific rendering algorithmic approach in 3D computer graphics, where
mathematically-modeled visualisations of programmed scenes are produced using a
technique which follows rays from the eyepoint outward, rather than originating
at the light sources. It produces results similar to ray casting and scanline
rendering, but facilitates more advanced optical effects, such as accurate
simulations of reflection and refraction, and is still efficient enough to
frequently be of practical use when such high quality output is sought. Ray
tracing may also be applied in other areas of science and engineering, such as
in the calculation of radio signal paths.
Optical ray tracing describes a method for producing visual images constructed
in 3D computer graphics environments, with more photorealism than either ray
casting or scanline rendering techniques. It works by tracing a path from an
imaginary eye through each pixel in a virtual screen, and calculating the color
of the object visible through it.
Scenes in raytracing are described mathematically by a programmer or by a visual
artist (using intermediary tools). Scenes may also incorporate data from images
and models captured by means such as digital photography.
Typically, each ray must be tested for intersection with some subset of all the
objects in the scene. Once the nearest object has been identified, the algorithm
will estimate the incoming light at the point of intersection, examine the
material properties of the object, and combine this information to calculate the
final color of the pixel. Certain illumination algorithms and reflective or
translucent materials may require more rays to be re-cast into the scene.
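The per-pixel process described above can be sketched in a minimal form. Everything here (the two-sphere scene, the light direction, the flat Lambertian shading) is a hypothetical toy setup for illustration, not any particular renderer:

```python
import math

# Hypothetical mini-scene: spheres as (center, radius, color) tuples.
SPHERES = [((0.0, 0.0, -5.0), 1.0, (1.0, 0.0, 0.0)),
           ((0.2, 0.0, -3.0), 0.25, (0.0, 1.0, 0.0))]
LIGHT_DIR = (-1.0, 0.0, 0.0)        # unit direction toward the light

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None on a miss.
    Assumes `direction` is a unit vector (so the quadratic's a == 1)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def trace(origin, direction):
    """Find the nearest object hit by the ray and shade it with a
    simple Lambertian (cosine) term; no shadows or recursion."""
    nearest_t, nearest = None, None
    for center, radius, color in SPHERES:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and (nearest_t is None or t < nearest_t):
            nearest_t, nearest = t, (center, radius, color)
    if nearest is None:
        return (0.0, 0.0, 0.0)      # background
    center, radius, color = nearest
    hit = tuple(o + nearest_t * d for o, d in zip(origin, direction))
    normal = tuple((h - c) / radius for h, c in zip(hit, center))
    cos = max(0.0, sum(n, ) if False else sum(n * l for n, l in zip(normal, LIGHT_DIR)))
    return tuple(cos * ch for ch in color)

# A ray straight down the -z axis hits the small green sphere first,
# even though the red sphere also lies along the ray.
color = trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
```

Illumination algorithms that need more rays (reflections, refraction) would recurse inside `trace` instead of returning the cosine-shaded color directly.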
It may at first seem counterintuitive or "backwards" to send rays away from the
camera, rather than into it (as actual light does in reality), but doing so is
in fact many orders of magnitude more efficient. Since the overwhelming majority
of light rays from a given light source do not make it directly into the
viewer's eye, a "forward" simulation could potentially waste a tremendous amount
of computation on light paths that are never recorded. A computer simulation
that starts by casting rays from the light source is called photon mapping, and
it takes much longer than a comparable ray trace.
Therefore, the shortcut taken in raytracing is to presuppose that a given ray
intersects the view frame. After either a maximum number of reflections or a ray
traveling a certain distance without intersection, the ray ceases to travel and
the pixel's value is updated. The light intensity of this pixel is computed
using a number of algorithms, which may include the classic rendering algorithm
and may also incorporate techniques such as radiosity.
Ray tracing's popularity stems from its basis in a realistic simulation of
lighting over other rendering methods (such as scanline rendering or ray
casting). Effects such as reflections and shadows, which are difficult to
simulate using other algorithms, are a natural result of the ray tracing
algorithm. Relatively simple to implement yet yielding impressive visual
results, ray tracing often represents a first foray into graphics programming.
The computational independence of each ray makes ray tracing amenable to parallelization.
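That per-ray independence is easy to sketch: each pixel's computation touches no shared state, so pixels can be farmed out to workers in any order. The `shade_pixel` function below is a hypothetical stand-in for a real per-ray computation:

```python
from concurrent.futures import ThreadPoolExecutor

def shade_pixel(coords):
    """Stand-in per-pixel ray computation: each pixel's result depends
    only on its own ray, never on a neighbouring pixel."""
    x, y = coords
    return (x, y, (x * 31 + y * 17) % 256)   # hypothetical color value

WIDTH, HEIGHT = 8, 4
pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]

# Because rays are independent, they can be dispatched across any
# number of workers with no locks or shared mutable state; map()
# still returns results in pixel order.
with ThreadPoolExecutor(max_workers=4) as pool:
    image = list(pool.map(shade_pixel, pixels))
```

Scanline renderers, by contrast, exploit coherence between adjacent pixels, which is what makes them faster but harder to parallelize this freely.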
A serious disadvantage of ray tracing is performance. Scanline algorithms and
other algorithms use data coherence to share computations between pixels, while
ray tracing normally starts the process anew, treating each eye ray separately.
However, this separation offers other advantages, such as the ability to shoot
more rays as needed to perform anti-aliasing and improve image quality where
needed. Although it does handle interreflection and optical effects such as
refraction accurately, traditional ray tracing is also not necessarily
photorealistic. True photorealism occurs when the rendering equation is closely
approximated or fully implemented. Implementing the rendering equation gives
true photorealism, as the equation describes every physical effect of light
flow. However, this is usually infeasible given the computing resources
required. The realism of all rendering methods, then, must be evaluated as an
approximation to the equation, and in the case of ray tracing, it is not
necessarily the most realistic. Other methods, including photon mapping, are
based upon ray tracing for certain parts of the algorithm, yet give far better results.
Note for Larrabee
Larrabee is the codename for a discrete graphics processing unit (GPU) chip that
Intel is developing as a revolutionary successor to its current line of graphics
accelerators. The video card containing Larrabee is expected to compete with the
GeForce and Radeon lines of video cards from NVIDIA and AMD/ATI respectively.
Intel plans to have engineering samples of Larrabee ready by the end of 2008,
with public release in late 2009 or 2010.
Larrabee will differ from other GPUs currently on the market such as the GeForce
8 Series and the Radeon 2/3000 series in that it will use a derivative of the
x86 instruction set for its shader cores instead of a custom graphics-oriented
instruction set, and is thus expected to be more flexible. In addition to
traditional 3D graphics for games, Larrabee is also being designed explicitly
for general purpose GPU (GPGPU) or stream processing tasks: for example, to
perform ray tracing or physics processing, in real time for games or perhaps
offline as a component of a supercomputer.
Larrabee will not be Intel's first discrete GPU. In the late 1990s, Intel
subsidiary Real3D created an AGP and PCI graphics accelerator, the Intel740.
However, Intel's participation in the graphics hardware market has subsequently
been limited to integrated graphics chips under the Intel GMA brand. Although
the low cost and power consumption of Intel GMA chips make them ideal for small
laptops and less demanding tasks, including general office activities such as
word processing, they lack the 3D graphics processing power to compete with
NVIDIA and AMD/ATI for a share of the high-end gaming computer market or a place
in popular home games consoles, a market Larrabee aims to enter.
Note for Photon Mapping
In computer graphics, photon mapping is a two pass global illumination algorithm
developed by Henrik Wann Jensen that solves the rendering equation. Rays from
the light source and rays from the camera are traced independently until some
termination criterion is met, then they are connected in a second step to
produce a radiance value. It is used to realistically simulate the interaction
of light with different objects. Specifically, it is capable of simulating the
refraction of light through a transparent substance such as glass or water,
diffuse interreflection between illuminated objects, the subsurface scattering
of light in translucent materials, and some of the effects caused by particulate
matter such as smoke or water vapor. It can also be extended to more accurate
simulations of light such as spectral rendering.
The effects of the refraction of light through a transparent medium are called
caustics. A caustic is a pattern of light that is focused on a surface after
having had the original path of light rays bent by an intermediate surface. For
example, as light rays pass through a glass of wine sitting on a table and the
liquid it contains, they are refracted and focused on the table the glass is
standing on. The wine also changes the pattern and color of the light.
Diffuse interreflection is apparent when light from one diffuse object is
reflected onto another. Photon mapping is particularly adept at handling this
effect because the algorithm reflects photons from one surface to another based
on that surface's bidirectional reflectance distribution function (BRDF), and
thus light from one object striking another is a natural result of the method.
Diffuse interreflection was first modeled using radiosity solutions. Photon
mapping differs though in that it separates the light transport from the nature
of the geometry in the scene. Color bleeding is an example of diffuse interreflection.
Subsurface scattering is the effect evident when light enters a material and is
scattered before being absorbed or reflected in a different direction.
Subsurface scattering can accurately be modeled using photon mapping. This was
the original way Jensen implemented it; however, the method becomes slow for
highly scattering materials, and thus BSSRDFs can be used more effectively in such cases.
With photon mapping, light packets called photons are sent out into the scene
from the light sources. Whenever a photon intersects with a surface, the
intersection point and incoming direction are stored in a cache called the
photon map. Typically, two photon maps are created for a scene: one especially
for caustics and a global one for other light. After intersecting the surface, a
probability for either reflecting, absorbing, or transmitting/refracting is
given by the material. A Monte Carlo method called Russian roulette is used to
choose one of these actions. If the photon is absorbed, no new direction is
given, and tracing for that photon ends. If the photon reflects, the surface's
BRDF is used to determine a new direction. Finally, if the photon is
transmitting, a different function for its direction is given depending upon the
nature of the transmission.
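The photon-tracing pass described above can be sketched as follows. Surface intersection and the BRDF are reduced to placeholders (random numbers and fixed probabilities), so this only illustrates the control flow of Russian roulette, not a real light simulation:

```python
import random

def trace_photon(power, rng, max_bounces=16,
                 p_reflect=0.5, p_transmit=0.2):
    """Follow one photon through a toy scene, storing a record at
    every surface interaction. The remaining probability mass
    (1 - p_reflect - p_transmit) is absorption, which ends the path."""
    photon_map = []
    for bounce in range(max_bounces):
        hit_point = rng.random()        # stand-in for a real intersection
        photon_map.append((hit_point, power))
        roll = rng.random()             # Russian roulette
        if roll < p_reflect:
            continue                    # the BRDF would give a new direction
        elif roll < p_reflect + p_transmit:
            continue                    # transmission/refraction direction
        else:
            break                       # absorbed: stop tracing this photon
    return photon_map

rng = random.Random(42)                 # seeded for repeatability
stored = trace_photon(1.0, rng)
```

Russian roulette keeps the estimate unbiased while terminating paths probabilistically, so no fixed cutoff has to be guessed per material.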
Once the photon map is constructed (or during construction), it is typically
arranged in a manner that is optimal for the k-nearest neighbor algorithm, as
photon look-up time depends on the spatial distribution of the photons. Jensen
advocates the usage of kd-trees. The photon map is then stored on disk or in
memory for later usage.
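A minimal version of that arrangement, assuming photons reduced to bare 3-D positions, might look like this: a kd-tree built by splitting on alternating axes, queried with a bounded max-heap that keeps the k nearest photons found so far.

```python
import heapq

def build_kdtree(photons, depth=0):
    """Recursively split photon positions along alternating axes."""
    if not photons:
        return None
    axis = depth % 3
    photons = sorted(photons, key=lambda p: p[axis])
    mid = len(photons) // 2
    return {"point": photons[mid], "axis": axis,
            "left": build_kdtree(photons[:mid], depth + 1),
            "right": build_kdtree(photons[mid + 1:], depth + 1)}

def k_nearest(node, query, k, heap=None):
    """Collect the k photons nearest to `query`. The heap stores
    (-distance^2, point) so its root is the farthest candidate."""
    if heap is None:
        heap = []
    if node is None:
        return heap
    point, axis = node["point"], node["axis"]
    dist2 = sum((p - q) ** 2 for p, q in zip(point, query))
    heapq.heappush(heap, (-dist2, point))
    if len(heap) > k:
        heapq.heappop(heap)             # drop the current farthest
    diff = query[axis] - point[axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    k_nearest(near, query, k, heap)
    # Descend the far side only if the splitting plane is closer than
    # the farthest photon kept so far (or the heap is not yet full).
    if len(heap) < k or diff * diff < -heap[0][0]:
        k_nearest(far, query, k, heap)
    return heap

photons = [(0.1, 0.2, 0.0), (0.9, 0.9, 0.9), (0.15, 0.25, 0.05),
           (0.5, 0.5, 0.5), (0.11, 0.19, 0.01)]
tree = build_kdtree(photons)
nearest = [p for _, p in k_nearest(tree, (0.1, 0.2, 0.0), 3)]
```

The payoff is that lookups skip whole subtrees whose bounding planes lie beyond the current k-th distance, which is why lookup time tracks the spatial distribution of the photons rather than their total count.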
In figure 1, the same scene and the same image rendering time produce very
different results. The noisy and pixelated image on the left was created with
the conventional “ray marching” photon mapping technique, while the image on
the right was created using the new “beam estimate” approach to photon mapping,
which is computationally more efficient.
In figure 2, The above picture from Antelope Canyon Navajo Tribal Park provides
a real life example of the kinds of scenes that the new research can capture in
a computationally efficient manner. The interplay between the dust in the air
and the light pouring down from above creates the same smoky ambiance that
Wojciech Jarosz creates in his cutting edge computer graphics research at UCSD’s
Jacobs School of Engineering.
In figure 3, The light piercing the fog in the top image is smooth, realistic
and computationally light-weight thanks to a new method for gathering light for
computer graphics via photon mapping. This UC San Diego work will be presented
at Europe’s premier computer graphics conference, Eurographics on April 17,
2008. In the bottom image, the very same scene was generated using the
conventional light gathering approach. Both images consumed the same amount of computation time.