



Early graphics
The term
prehistoric art refers generally to the paintings, engravings, and sculptures
created from about 35,000 to 12,000 years ago during the last Ice Age
in Eurasia. The term is also often applied to the art of the succeeding
Mesolithic and Neolithic cultures in this area, as well as to the rock
art produced by various nonliterate cultures from later periods.
PALEOLITHIC ART
The earliest
stone tools date from more than 2 million years ago, but prehistoric art
is a relatively recent development; it first appeared during the Upper
PALEOLITHIC PERIOD, the last division of the Old Stone Age. It is associated
with the remains of CRO-MAGNON MAN (Homo sapiens sapiens), who appeared
in Europe about 35,000 years ago, having gradually replaced the NEANDERTALERS.
It does not follow that no art existed among the Neandertalers; their
burial sites, some of which may date from as far back as 100,000 years
ago, suggest a preoccupation with life after death and possibly with animal
cults. The evidence also suggests that body ornament may have been used,
but so far no direct evidence of Neandertal art has been found. Various
attempts at portraying animals and symbols, however, must have been made
before the Cro-Magnon people executed their remarkably elaborate works
of art.
The Paleolithic
surroundings of 35,000 years ago were harsh, like those of semiglacial
Siberia today. Great herds of bison and reindeer roamed through the plains
of central and western Europe. Extinct species like the mammoth and the
woolly rhinoceros flourished and were successfully hunted; their bones
have been found in the caves where the hunters lived. The general impression
is of small, nomadic bands of hunter-gatherers, moving with their prey,
living in open-air encampments during the summer and taking winter quarters
wherever caves or rockshelters were available.
ADVANCES OF THE 1960s
The next big advance in computer graphics came from another MIT student, Ivan Sutherland. In 1961 Sutherland created a computer drawing program called Sketchpad. Using a light pen, Sketchpad let the user draw simple shapes on the computer screen, save them, and even recall them later. The
light pen itself had a small photoelectric cell in its tip. This cell
emitted an electronic pulse whenever it was placed in front of a computer
screen and the screen's electron gun fired directly at it. By simply timing
the electronic pulse with the current location of the electron gun, it
was easy to pinpoint exactly where the pen was on the screen at any given
moment. Once that was determined, the computer could then draw a cursor
at that location.
Sutherland seemed to find the perfect solution for many of the graphics
problems he faced. Even today, many standards of computer graphics interfaces
got their start with this early Sketchpad program. One example of this
is in drawing constraints. If you want to draw a square for example, you
don't have to worry about drawing four lines perfectly to form the edges
of the box. You can simply specify that you want to draw a box, and then
specify the location and size of the box. The software will then construct
a perfect box for you, with the right dimensions and at the right location.
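The essence of such a drawing constraint can be sketched in a few lines of modern code (a hypothetical illustration; the function name and coordinate convention here are assumptions, not Sketchpad's actual interface):

```python
def make_box(x, y, width, height):
    """Build a perfect box from just a location and a size."""
    corners = [(x, y), (x + width, y), (x + width, y + height), (x, y + height)]
    # Connect each corner to the next, wrapping around to close the box.
    return [(corners[i], corners[(i + 1) % 4]) for i in range(4)]

# The user never draws the four edges; the software constructs them.
edges = make_box(10, 20, 30, 40)
```

The user supplies only intent (a box, at this location, of this size); the software guarantees the geometry is exact.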
Another example is that Sutherland's software modeled objects -- not just
a picture of objects. In other words, with a model of a car, you could
change the size of the tires without affecting the rest of the car. You
could stretch the body of the car without deforming the tires.
These early computer graphics were vector graphics, composed of thin lines, whereas modern graphics are raster based, built from pixels. The difference
between vector graphics and raster graphics can be illustrated with a
shipwrecked sailor. He creates an SOS sign in the sand by arranging rocks
in the shape of the letters "SOS." He also has some brightly colored rope,
with which he makes a second "SOS" sign by arranging the rope in the shapes
of the letters. The rock SOS sign is similar to raster graphics. Every
pixel has to be individually accounted for. The rope SOS sign is equivalent
to vector graphics. The computer simply sets the starting point and ending point for the line and perhaps bends it a little between the two end points.
The disadvantages to vector files are that they cannot represent continuous
tone images and they are limited in the number of colors available. Raster
formats on the other hand work well for continuous tone images and can
reproduce as many colors as needed.
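The rope-versus-rocks analogy can be made concrete with a small sketch (illustrative only, not from the original text): the same horizontal line stored once as endpoints and once as individual pixels.

```python
# Vector form: just two endpoints, like the rope laid between them.
vector_line = ((0, 3), (9, 3))

# Raster form: every pixel individually accounted for, like the rocks.
(x1, y1), (x2, y2) = vector_line
raster_line = [(x, y1) for x in range(x1, x2 + 1)]
```

However long the line grows, the vector form stays two points; the raster form grows pixel by pixel, which is why raster formats can instead spend their capacity on continuous tone and color.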
Also in 1961 another student at MIT, Steve Russell, created the first
video game, Spacewar. Written for the DEC PDP-1, Spacewar was an instant
success and copies started flowing to other PDP-1 owners and eventually
even DEC got a copy. The engineers at DEC used it as a diagnostic program
on every new PDP-1 before shipping it. The sales force picked up on this
quickly enough and when installing new units, would run the world's first
video game for their new customers.
E. E. Zajac, a scientist at Bell Telephone Laboratories (BTL), created a film called "Simulation of a Two-Gyro Gravity Attitude Control System"
in 1963. In this computer generated film, Zajac showed how the attitude
of a satellite could be altered as it orbits the Earth. He created the
animation on an IBM 7090 mainframe computer. Also at BTL, Ken Knowlton,
Frank Sinden and Michael Noll started working in the computer graphics field. Sinden created a film called "Force, Mass and Motion" illustrating
Newton's laws of motion in operation. Around the same time, other scientists
were creating computer graphics to illustrate their research. At Lawrence
Radiation Laboratory, Nelson Max created the films, "Flow of a Viscous
Fluid" and "Propagation of Shock Waves in a Solid Form." Boeing Aircraft
created a film called "Vibration of an Aircraft."
It wasn't long before major corporations started taking an interest in
computer graphics. TRW, Lockheed-Georgia, General Electric and Sperry
Rand are among the many companies that were getting started in computer
graphics by the mid-1960s. IBM was quick to respond to this interest
by releasing the IBM 2250 graphics terminal, the first commercially available
graphics computer.
Ralph Baer, a supervising engineer at Sanders Associates, came up with
a home video game in 1966 that was later licensed to Magnavox and called
the Odyssey. While very simple and built from fairly inexpensive electronic parts, it allowed the player to move points of light around on a screen.
It was the first consumer computer graphics product.
Also in 1966, Sutherland at MIT invented the first computer controlled
head-mounted display (HMD). Called the Sword of Damocles because of the
hardware required for support, it displayed two separate wireframe images,
one for each eye. This allowed the viewer to see the computer scene in
stereoscopic 3D. After receiving his Ph.D. from MIT, Sutherland became
Director of Information Processing at ARPA (Advanced Research Projects
Agency), and later became a professor at Harvard.
Dave Evans was director of engineering at Bendix Corporation's computer division from 1953 to 1962, after which he spent five years as a visiting professor at Berkeley, where he continued his interest in computers and how they interface with people. In 1968 the University
of Utah recruited Evans to form a computer science program, and computer
graphics quickly became his primary interest. This new department would
become the world's primary research center for computer graphics.
In 1967 Sutherland was recruited by Evans to join the computer science
program at the University of Utah. There he perfected his HMD. Twenty
years later, NASA would re-discover his techniques in their virtual reality
research. At Utah, Sutherland and Evans were highly sought after consultants
by large companies but they were frustrated at the lack of graphics hardware
available at the time so they started formulating a plan to start their
own company.
A student by the name of Ed Catmull got started at the University
of Utah in 1970 and signed up for Sutherland's computer graphics class.
Catmull had just come from The Boeing Company and had been working on
his degree in physics. Growing up on Disney, Catmull loved animation yet
quickly discovered that he didn't have the talent for drawing. Now Catmull
(along with many others) saw computers as the natural progression of animation
and they wanted to be part of the revolution. The first animation that
Catmull saw was his own. He created an animation of his hand opening and
closing. It became one of his goals to produce a feature length motion
picture using computer graphics. In the same class, Fred Parke created an animation of his wife's face. Because of Evans's and Sutherland's presence, UU was gaining quite a reputation as the place to be for computer graphics research, which was what had drawn Catmull there to learn 3D animation.
The UU computer graphics laboratory was attracting people from all over. John Warnock was one of those early pioneers; he would later found
Adobe Systems and create a revolution in the publishing world with his
PostScript page description language. Tom Stockham led the image processing
group at UU which worked closely with the computer graphics lab. Jim Clark
was also there; he would later found Silicon Graphics, Inc.
The first major advance in 3D computer graphics was created at UU by these
early pioneers, the hidden-surface algorithm. In order to draw a representation
of a 3D object on the screen, the computer must determine which surfaces
are "behind" the object from the viewer's perspective, and thus should
be "hidden" when the computer creates (or renders) the image.
ADVANCES OF THE 1970s
The 1970s saw the introduction of computer graphics in the world of television.
Computer Image Corporation (CIC) developed complex hardware and software
systems such as ANIMAC, SCANIMATE and CAESAR. All of these systems worked
by scanning in existing artwork, then manipulating it, making it squash,
stretch, spin, fly around the screen, and so on. Bell Telephone and CBS
Sports were among the many who made use of the new computer graphics.
While flat shading can make an object look as if it's solid, the sharp
edges of the polygons can detract from the realism of the image. While
you can create smaller polygons (which also means more polygons), this
increases the complexity of the scene, which in turn slows down the performance
of the computer rendering the scene. To solve this, Henri Gouraud in 1971 presented a method for creating the appearance of a curved surface
by interpolating the color across the polygons. This method of shading
a 3D object has since come to be known as Gouraud shading. One of the
most impressive aspects of Gouraud shading is that it hardly takes any
more computations than Flat shading, yet provides a dramatic increase
in rendering quality. One thing that Gouraud shading can't fix is the
visible edge of the object. The original flat polygons making up the torus
are still visible along the edges of the object.
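The interpolation at the heart of Gouraud shading can be sketched as follows (a simplified illustration: lighting is assumed to be precomputed as one RGB color per vertex, and barycentric weights do the blending across the triangle):

```python
def gouraud_color(p, verts, colors):
    """Interpolate vertex colors at point p using barycentric weights."""
    (x1, y1), (x2, y2), (x3, y3) = verts
    x, y = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    w3 = 1.0 - w1 - w2
    # Blend each color channel by the point's three barycentric weights.
    return tuple(w1 * c1 + w2 * c2 + w3 * c3
                 for c1, c2, c3 in zip(*colors))

verts = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
# The centroid blends all three vertex colors equally.
center = gouraud_color((10 / 3, 10 / 3), verts, colors)
```

Each interior pixel costs only a weighted average of three precomputed vertex colors, which is why the method adds so little work over flat shading.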
One of the most important advancements to computer graphics appeared on
the scene in 1971, the microprocessor. Using Integrated Circuit technology
developed in 1959, the electronics of a computer processor were miniaturized
down to a single chip, the microprocessor, sometimes called a CPU (Central
Processing Unit). One of the first desktop microcomputers designed for
personal use was the Altair 8800 from Micro Instrumentation Telemetry
Systems (MITS). Coming through mail-order in kit form, the Altair (named
after a planet in the popular Star Trek series) retailed for around $400.
Later personal computers would advance to the point where film-quality
computer graphics could be created on them.
In that same year, Nolan Kay Bushnell along with a friend formed Atari.
He would go on to create an arcade video game called Pong in 1972 and
start an industry that continues even today to be one of the largest users
of computer graphics technology.
In the 1970's a number of animation houses were formed. In Culver City,
California, Information International Incorporated (better known as Triple
I) formed a motion picture computer graphics department. In San Rafael,
California, George Lucas formed Lucasfilm. In Los Angeles, Robert Abel
& Associates and Digital Effects were formed. In Elmsford, New York,
MAGI was formed. In London, England, Systems Simulation Ltd. was formed.
Of these companies, almost none would still be in business ten years later. At Abel & Associates, Robert Abel hired Richard Edlund
to help with computer motion control of cameras. Edlund would later get
recruited to Lucasfilm to work on Star Wars, and eventually to establish
Boss Film Studios, creating special effects for motion pictures and winning four Academy Awards.
In 1970 Gary Demos was a senior at Caltech when he saw the work of John Whitney Sr., which immediately sparked his interest in computer graphics. That interest was further developed when he saw work done at
Evans & Sutherland, along with the animation that was coming out of
the University of Utah. So in 1972 Demos went to work for E&S. At
that time they used Digital PDP-11 computers along with the custom built
hardware that E&S was becoming famous for. These systems included
the Picture System that featured a graphics tablet and color frame buffer
(originally designed by UU).
It was at E&S that Demos met John Whitney Jr., the son of the original
graphics pioneer. E&S started to work on some joint projects with
Triple I. Founded in 1962, Triple I was in the business of creating digital
scanners and other image processing equipment. Between E&S and Triple
I there was a Picture Design Group. After working on a few joint projects
between E&S and Triple I, Demos and Whitney left E&S to join Triple
I and form the Motion Picture Products group in late 1974. At Triple I,
they used PDP-10s and a Foonly machine (which was a custom PDP-10). They
developed another frame buffer that used 1000 lines; they also built custom
film recorders and scanners along with custom graphics processors, image
accelerators and the software to run it. This development led to the first
use of computer graphics for motion pictures in 1973 when Whitney and
Demos worked on the motion picture "Westworld". They used a technique
called pixellization which is a computerized mosaic created by breaking
up a picture into large color blocks. This is done by dividing up the
picture into square areas, and then averaging the colors into one color
within that area.
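The block-averaging step described above can be sketched like this (a toy grayscale version; "Westworld" of course worked on scanned color film frames):

```python
def pixellize(image, block):
    """Mosaic an image by averaging each block x block square of values."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            # Average every value inside this block...
            cells = [image[y][x]
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            avg = sum(cells) // len(cells)
            # ...then paint the whole block with that single value.
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = avg
    return out

image = [[0, 0, 100, 100],
         [0, 0, 100, 100],
         [50, 50, 200, 200],
         [50, 50, 200, 200]]
mosaic = pixellize(image, 2)
```

The larger the block size, the coarser the mosaic; at block size 1 the image is unchanged.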
In 1973 the Association of Computing Machinery's (ACM) Special Interest
Group on Computer Graphics (SIGGRAPH) held its first conference. Solely
devoted to computer graphics, the convention attracted about 1,200 people
and was held in a small auditorium. Since the 1960's the University of
Utah had been the focal point for research on 3D computer graphics and
algorithms. For the research, the classes set up various 3D models such
as a VW Beetle, a human face, and the most popular, a teapot. It was in
1975 that Martin Newell developed the Utah teapot, and throughout the history
of 3D computer graphics it has served as a benchmark, and today it's almost
an icon for 3D computer graphics. The original teapot that Newell based
his computer model on can be seen at the Boston Computer Museum displayed
next to a computer rendering of it.
Ed Catmull received his Ph.D. in computer science in 1974 and his thesis
covered Texture Mapping, Z-Buffer and rendering curved surfaces. Texture
mapping brought computer graphics to a new level of realism. Catmull had
come up with the idea of texture mapping while sitting in his car in a
parking lot at UU and talking to another student, Lance Williams, about
creating a 3D castle. Most objects in real life have very rich and detailed
surfaces, such as the stones of a castle wall, the material on a sofa,
the wallpaper on a wall, the wood veneer on a kitchen table. Catmull realized
that if you could apply patterns and textures to real-life objects, you
could do the same for their computer counterparts. Texture mapping is
the method of taking a flat 2D image of what an object's surface looks
like, and then applying that flat image to a 3D computer generated object.
Much in the same way that you would hang wallpaper on a blank wall.
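The lookup at the core of texture mapping can be sketched as follows (a minimal illustration, assuming the surface point already carries (u, v) "wallpaper" coordinates in the range 0 to 1):

```python
def sample_texture(texture, u, v):
    """Look up the texel for surface coordinates u, v in [0, 1]."""
    h, w = len(texture), len(texture[0])
    # Scale the coordinates into pixel indices, clamping the far edge.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A tiny 2x2 grayscale "checker" texture: a surface point in the
# upper-left quarter gets the first texel, and so on.
checker = [[255, 0],
           [0, 255]]
```

A renderer does exactly this (plus filtering) for every visible surface point, so the flat image appears glued onto the 3D object.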
The z-buffer aided the process of hidden surface removal by using zels
which are similar to pixels but instead of recording the luminance of
a specific point in an image, they record the depth of that point. The letter "z" reflects depth, just as y reflects vertical position and x horizontal position. The z-buffer was then an area of memory devoted
to holding the depth data for every pixel in an image. Today high-performance
graphics workstations have a z-buffer built-in.
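A minimal sketch of the z-buffer test (the buffer layout and helper function are assumptions, but the depth comparison is the technique described above):

```python
WIDTH, HEIGHT, FAR = 4, 4, float("inf")
color_buf = [[0] * WIDTH for _ in range(HEIGHT)]
z_buf = [[FAR] * WIDTH for _ in range(HEIGHT)]   # start infinitely far away

def plot(x, y, depth, color):
    # Keep the pixel only if this surface is nearer than what's there.
    if depth < z_buf[y][x]:
        z_buf[y][x] = depth
        color_buf[y][x] = color

plot(1, 1, depth=5.0, color=200)   # far surface drawn first
plot(1, 1, depth=2.0, color=90)    # nearer surface overwrites it
plot(1, 1, depth=9.0, color=10)    # farther surface is rejected
```

Because each pixel remembers only its nearest depth so far, surfaces can be drawn in any order and hidden ones never survive to the final image.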
While Gouraud shading was a great improvement over Flat shading, it still
had a few problems as to its realism. If you look closely at the Gouraud
shaded torus you will notice slight variations in the shading that reveal
the underlying polygons. These variations can also cause reflections to
appear incorrectly or even disappear altogether in certain circumstances.
This was corrected, however, by Phong Bui-Tuong, a programmer at the UU (of course). Bui-Tuong arrived at UU in 1971 and in 1974 he developed a new shading method that came to be known as Phong shading. After UU, Bui-Tuong went on to Stanford as a professor, and early in 1975 he died
of cancer. His shading method accurately interpolates the colors over
a polygonal surface giving accurate reflective highlights and shading.
The drawback to this is that Phong shading can be up to 100 times slower
than Gouraud shading. Because of this, even today, when animators are
creating small, flat 3D objects that are not central to the animation,
they will use Gouraud shading on them instead of Phong. As with Gouraud
shading, Phong shading cannot smooth over the outer edges of 3D objects.
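The difference between the two methods can be sketched with a diffuse-only toy example (an assumed simplification; real implementations add specular terms): Gouraud averages colors that were lit at the vertices, while Phong averages the normals and relights at every pixel.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def diffuse(normal, light):
    # Lambertian term: brightness follows the angle to the (unit) light.
    return max(0.0, sum(a * b for a, b in zip(normalize(normal), light)))

light = (0.0, 0.0, 1.0)
n1, n2 = (1.0, 0.0, 1.0), (-1.0, 0.0, 1.0)   # normals at two vertices

# Gouraud: light the vertices, then average the resulting colors.
gouraud_mid = (diffuse(n1, light) + diffuse(n2, light)) / 2

# Phong: average the normals first, then light the midpoint pixel.
mid_normal = tuple((a + b) / 2 for a, b in zip(n1, n2))
phong_mid = diffuse(mid_normal, light)
```

The midpoint between the two tilted vertex normals faces the light head-on, so Phong recovers the full highlight there (1.0) that Gouraud's averaged colors (about 0.71) miss, which is exactly the lost-reflection problem described above.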
A major breakthrough in simulating realism began in 1975 when the French
mathematician, Dr. Benoit Mandelbrot published a paper called "A Theory
of Fractal Sets." After some 20 years of research he published his findings
and named them Fractal Geometry. To understand what a fractal is, consider
that a straight line is a one-dimensional object, while a plane is a two-dimensional
object. However, if the line curves around in such a way as to cover the
entire surface of the plane, then it is no longer one dimensional, yet
not quite two dimensional. Mandelbrot described it as a fractional dimension,
between one and two.
To understand how this helps computer graphics, imagine creating a random
mountain terrain. You may start with a flat plane, then tell the computer
to divide the plane into four equal parts. Next the new center point is
offset vertically some random amount. Following that, one of the new smaller
squares is chosen, subdivided, with its center slightly off-set randomly.
The process continues recursively until some limit is reached and all
the squares are off-set.
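The recursive subdivide-and-offset procedure can be sketched in one dimension, as a single ridge line across the terrain (a simplified, hypothetical version of the square-subdivision scheme described above):

```python
import random

def fractal_ridge(levels, roughness=1.0, seed=0):
    """Midpoint displacement: insert midpoints, nudge each randomly."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]            # a flat "plane" seen edge-on
    scale = roughness
    for _ in range(levels):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-scale, scale)
            refined += [a, mid]
        refined.append(heights[-1])
        heights = refined
        scale /= 2                  # finer detail gets smaller offsets
    return heights

ridge = fractal_ridge(levels=5)
```

Halving the offset at each level is what gives the result its fractal character: zooming in reveals ever-finer bumps of the same statistical shape.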
Mandelbrot followed up his paper with a book entitled "The Fractal Geometry
of Nature." This showed how his fractal principles could be applied to
computer imagery to create realistic simulations of natural phenomena
such as mountains, coastlines, wood grain, etc.
After graduating in 1974 from UU, Ed Catmull went to a company called
Applicon. It didn't last very long however, because in November of that
same year he was made an offer he couldn't refuse. Alexander Schure, founder
of New York Institute of Technology (NYIT), had gone to the UU to see
their computer graphics lab. Schure had a great interest in animation
and had already established a traditional animation facility at NYIT.
After seeing the setup at UU, he asked Evans what equipment was needed to create computer graphics, then told his people to "get me one of everything they have." The timing happened to be just right because UU was running
out of funding at the time. Schure made Ed Catmull Director of NYIT's
new Computer Graphics Lab. Then other talented people in the computer
graphics field such as Malcolm Blanchard, Garland Stern and Lance Williams
left UU and went to NYIT. Thus the leading center for computer graphics
research soon switched from UU to NYIT.
One talented recruit was Alvy Ray Smith. As a young student at New Mexico
State University in 1964, he had used a computer to create a picture of
an equiangular spiral for a Nimbus Weather satellite. Despite this early
success, Smith didn't take an immediate interest in computer graphics.
He moved on to Stanford University, got his Ph.D., then promptly took
his first teaching job at New York University. Smith recalls, "My chairman,
Herb Freeman, was very interested in computer graphics, some of his students
had made important advances in the field. He knew I was an artist and
yet he couldn't spark any interest on my part, I would tell him 'If you
ever get color I'll get interested.' Then one day I met Dr. Richard Shoup,
and he told me about Xerox PARC (Palo Alto Research Center). He was planning
on going to PARC to create a program that emulated painting on a computer
the way an artist would naturally paint on a canvas."
Shoup had become interested in computer graphics while he was at Carnegie
Mellon University. He then became a resident scientist at PARC and began
working on a program he called "SuperPaint." It used one of the first
color frame buffers ever built. At the same time Ken Knowlton at Bell
Labs was creating his own paint program.
Smith on the other hand, wasn't thinking much about paint programs. In
the meantime, he had broken his leg in a skiing accident and re-thought
the path his life was taking. He decided to move back to California to
teach at Berkeley in 1973. "I was basically a hippie, but one day I decided
to visit my old friend, Shoup in Palo Alto. He wanted to show me his progress
on the painting program, and I told him that I only had about an hour,
and then I would need to get back to Berkeley. I was only visiting him
as a friend, and yet when I saw what he had done with his paint program,
I wound up staying for 12 hours! I knew from that moment on that computer
graphics was what I wanted to do with my life." Smith managed to get himself
hired by Xerox in 1974 and worked with Shoup in writing SuperPaint.
A few years later in 1975 in nearby San Jose, Alan Baum, a workmate of
Steve Wozniak at Hewlett Packard, invited Wozniak to a meeting of the
local Homebrew Computer Club. Homebrew, started by Fred Moore and Gordon French, was a club of amateur computer enthusiasts, and it soon became a
hotbed of ideas about building your own personal computers. From the Altair
8800 to TV typewriters, the club discussed and built virtually anything
that resembled a computer. It was a friend at the Homebrew club that first
gave Wozniak a box full of electronic parts and it wasn't long before
Wozniak was showing off his own personal computer/toy at the Homebrew
meetings. A close friend of Wozniak, Steve Jobs, worked at Atari and helped Wozniak develop his computer into the very first Apple computer. They
built the units in a garage and sold them for $666.66.
In the same year William Gates III at the age of 19 dropped out of Harvard
and along with his friend Paul Allen, founded a company called Microsoft.
They wrote a version of the BASIC programming language for the Altair
8800 and put it on the market. Some five years later in 1980, when IBM
was looking for an operating system to use with their new personal computer,
they approached Microsoft and Gates remembered an operating system for
Intel 8080 microprocessors written by Seattle Computer Products (SCP)
called 86-DOS. Taking a gamble, Gates bought 86-DOS from SCP for $50,000,
rewrote it, named it DOS and licensed it (smartly retaining ownership)
to IBM as the operating system for their first personal computer. Today
Microsoft dominates the personal computer software industry with gross
annual sales of almost 4 billion dollars, and now it has moved into the
field of 3D computer graphics.
Meanwhile back at PARC, Xerox had decided to focus solely on black and
white computer graphics, dropping everything that was in color. So Alvy
Ray Smith called Ed Catmull at NYIT and went out east with David DiFrancesco
to meet with Catmull. Everyone hit it off, so Smith made the move from
Xerox over to NYIT; this was about two months after Catmull had gotten
there. The first thing Smith did was write a full color (24-bit) paint
program, the first of its kind.
Later others joined NYIT's computer graphics lab including Tom Duff, Paul
Heckbert, Pat Hanrahan, Dick Lundin, Ned Greene, Jim Blinn, Rebecca Allen,
Bill Maher, Jim Clark, Thaddeus Beier, Malcolm Blanchard and many others.
In all, the computer graphics lab of NYIT would eventually be home to
more than 60 employees. These individuals would continue to lead the field
of computer graphics some twenty years later. The first computer graphics
application NYIT focused on was 2D animation and creating tools to assist
traditional animators. One of the tools that Catmull built was "Tween,"
a tool that interpolated in-between frames from one line drawing to another.
They also developed a scan-and-paint system for scanning and then painting
pencil-drawn artwork. This would later evolve into Disney's CAPS (Computer
Animation Production System).
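The in-betweening that a tool like Tween performs reduces, in its simplest form, to linear interpolation between matching points of two key drawings (a hypothetical sketch; the real system worked on whole line drawings with correspondences between strokes):

```python
def tween(start_points, end_points, t):
    """Interpolate a line drawing; t=0 gives the start, t=1 the end."""
    return [(x1 + (x2 - x1) * t, y1 + (y2 - y1) * t)
            for (x1, y1), (x2, y2) in zip(start_points, end_points)]

key_a = [(0.0, 0.0), (10.0, 0.0)]    # first key drawing
key_b = [(0.0, 10.0), (10.0, 10.0)]  # second key drawing
halfway = tween(key_a, key_b, 0.5)   # the in-between frame
```

Sweeping t from 0 to 1 over successive frames yields the whole in-between sequence that an assistant animator would otherwise draw by hand.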
Next the NYIT group branched into 3D computer graphics. Lance Williams
wrote a story for a movie called "The Works," sold the idea to Schure,
and this movie became NYIT's major project for over two years. A lot of
time and resources were spent in creating 3D models and rendering test
animations. "NYIT in itself was a significant event in the history of
computer graphics" explains Alvy Ray Smith. "Here we had this wealthy
man, having plenty of money and getting us whatever we needed, we didn't
have a budget, we had no goals, we just stretched the envelope. It was
such an incredible opportunity, every day someone was creating something
new. None of us slept, it was common to work 22 hour days. Everything
you saw was something new. We blasted computer graphics into the world.
It was like exploring a new continent."
However, the problem was that none of the people in the Computer Graphics
Lab understood the scope of making a motion picture. "We were just a bunch
of engineers in a little converted stable on Long Island, and we didn't
know the first thing about making movies" said Beier (now technical director
for Pacific Data Images). Gradually over a period of time, people became
discouraged and left for other places. Smith continues, "It just wasn't
happening. We all thought we would take part in making a movie. But at
the time it would have been impossible with the speed of the computers."
Alex Schure made an animated movie called "Tubby the Tuba" using conventional
animation techniques, and it turned out to be very disappointing. "We
realized then that he really didn't have what it takes to make a movie,"
explains Smith. Catmull agrees, "It was awful, it was terrible, half the
audience fell asleep at the screening. We walked out of the screening
room thinking 'Thank God we didn't have anything to do with it, that computers
were not used for anything in that movie!'" The time was ripe for George
Lucas.
Lucas, with the success of Star Wars under his belt, was interested in
using computer graphics on his next movie, "The Empire Strikes Back".
So he contacted Triple I, who in turn produced a sequence that showed
five X-Wing fighters flying in formation. However, disagreements over financial terms caused Lucas to drop the idea and go back to hand-made models.
The experience however showed that photorealistic computer imagery was
a possibility, so Lucas decided to assemble his own Computer Graphics
department within his special effects company, Lucasfilm. Lucas sent out
a person to find the brightest minds in the world of Computer Graphics.
He found NYIT. Initially the individual went to Carnegie Mellon University
and talked to a professor who referred him to one of his students, Ralph
Guggenheim, who referred him to Catmull at NYIT. After a few discussions,
Catmull flew out to the west coast and met with Lucas and accepted his
offer.
Initially only five from NYIT went with Catmull including Alvy Ray Smith,
David DiFrancesco, Tom Duff and Ralph Guggenheim. Later however, others
would take up the opportunity. Slowly the computer graphics lab started
to fall apart and ceased to be the center of computer graphics research.
The focus had shifted to Lucasfilm and a new graphics department at Cornell
University. Over the next 15 years, Lucasfilm would be nominated for over
20 Academy Awards, winning 12 Oscars, five Technical Achievement Awards
and two Emmys.
Looking back at NYIT, Catmull reflects "Alex Schure funded five years
of great research work, and he deserves credit for that. We published
a lot of papers, and were very open about our research, allowing people
to come on tours and see our work. However now there are a lot of lawsuits
going on, mainly because we didn't patent very much. People then subsequently
acquired patents on that work and now we are called in frequently to show
that we had done the work prior to other people."
Catmull continues, "We really had a major group of talented people in
the lab, and the whole purpose was to do research and development for
animation. We were actually quite stable for a long time, that first five
years until I left. However, the primary issue was to make a feature film,
and to do that you have to gather a lot of different kinds of skills;
Artistic, Editorial, etc. Unfortunately, the managers of the school did not understand this. They appreciated the technical capabilities. So as a group we were well taken care of, but we all recognized that in order
to produce a feature film we had to have another kind of person there,
movie people, and basically those people weren't brought into the school.
We were doing the R & D but we just could not achieve our goals there.
So when Lucas came along, proved that he did have those kinds of capabilities, and said he wanted additional development in this area (of computer graphics),
we jumped at it."
Thus in 1979 George Lucas formed the new computer graphics division of
Lucasfilm to create computer imagery for motion pictures. Catmull became
vice president and during the next six years, this new group would assemble
one of the most talented teams of artists and programmers in the computer
graphics industry. The advent of Lucasfilm's computer graphics department
is viewed by many as another major milestone in the history of computer
graphics. Here the researchers had access to funds, but at the same time
they were working under a serious movie maker with real, definite goals.
In 1976 the ACM allowed exhibitors at the annual SIGGRAPH conference for the first time; 10 companies exhibited their products. By 1993 this would grow to 275 companies and more than 30,000 attendees.
Systems Simulation Ltd. (SSL) of London created an interesting computer
graphics sequence for the movie "Alien" in 1976. The scene called for
a computer-assisted landing sequence where the terrain was viewed as a
3D wireframe. Initially a polystyrene landscape was going to be digitized
to create the terrain. However, the terrain needed to be very rugged and complex, and it would have made a huge database if digitized. Alan Sutcliffe
of SSL decided to write a program to generate the mountains at random.
The result was a very convincing mountain terrain displayed in wireframe
with the hidden lines removed. This was typical of early efforts at using
computer generated imagery (CGI) in motion pictures, using it to simulate
advanced computers in Sci-Fi movies.
Meanwhile the Triple I team was busy in 1976 working on "Westworld's"
sequel, "Futureworld." In this film, robot Samurai warriors needed to
materialize into a vacuum chamber. To accomplish this, Triple I digitized
still photographs of the warriors and then used some image processing
techniques to manipulate the digitized images and make the warriors materialize
over the background. Triple I developed some custom film scanners and
recorders for working on films in high resolutions, up to 2,500 lines.
Also in that same year at the Jet Propulsion Laboratory in Pasadena, California
(before going to NYIT), James Blinn developed a new technique similar
to Texture Mapping. Instead of simply mapping the colors from a 2D image
onto a 3D object, the values of a monochrome image are used to make the
surface appear to have dents or bulges: white areas of the image appear
as bulges, black areas appear as dents, and shades of gray are treated
as smaller bumps or bulges depending on how dark or light they are. This
form of mapping is called Bump Mapping.
Bump maps can add a new level of realism to 3D graphics by simulating
a rough surface. When both a texture map and a bump map are applied at
the same time, the result can be very convincing. Without bump maps, a
3D object can look very flat and uninteresting.
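As a rough sketch of the idea (illustrative only, not Blinn's implementation), a bump mapper derives a perturbed surface normal at each pixel from the gradient of the monochrome height image; the lighting calculation then uses these normals, so flat geometry shades as if it had dents and bulges.

```python
def bump_normals(height, scale=1.0):
    """Convert a grayscale height map (white = bulge, black = dent)
    into per-pixel surface normals by finite differences.

    height: 2D list of floats in [0, 1].
    Returns a 2D list of unit normal vectors (nx, ny, nz); a flat
    map yields the unperturbed normal (0, 0, 1) everywhere.
    """
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Gradient of the height field (clamped at the borders).
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            nx, ny, nz = -scale * dx, -scale * dy, 1.0
            length = (nx * nx + ny * ny + nz * nz) ** 0.5
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals

# A constant (flat) map perturbs nothing: every normal stays (0, 0, 1).
flat = bump_normals([[0.5] * 4 for _ in range(4)])
```

Note that only the normals change, not the geometry, which is why a bump-mapped silhouette still looks smooth at the edges.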
Busy Blinn also published a paper in that same year on creating surfaces
that reflect their surroundings. This is accomplished by rendering six
different views from the location of the object (top, bottom, front, back,
left and right). Those views are then applied to the outside of the object
in a way similar to standard texture mapping. The result is that an object
appears to reflect its surroundings. This type of mapping is called environment
mapping.
In December of 1977, a new magazine debuted called Computer Graphics World.
Back then the major stories involving computer graphics revolved around
2D drafting, remote sensing, IC design, military simulation, medical imaging
and business graphics. Today, some 17 years later, CGW continues to be
the primary medium for computer graphics related news and reviews. Computer
graphics hardware was still prohibitively expensive at this time. The
National Institutes of Health paid 65,000 dollars for its first frame
buffer back in 1977. It had a resolution of 512x512 with 8 bits of color
depth. Today a video adapter with the same capabilities can be purchased
for under 100 dollars.
During the late 1970's Don Greenberg at Cornell University created a computer
graphics lab that produced new methods of simulating realistic surfaces.
Rob Cook at Cornell realized that the lighting model everyone had been
using best approximated plastic. Cook wanted to create a new lighting
model that allowed computers to simulate objects like polished metal.
This new model takes into account the energy of the light source rather
than the light's intensity or brightness.
As the second decade of computer graphics drew to a close the industry
was showing tremendous growth. In 1979, IBM released its 3279 color terminal
and within 9 months over 10,000 orders had been placed for it. By 1980,
the entire value of all the computer graphics systems, hardware, and services
would reach a billion dollars.
ADVANCES
OF THE 1980s
During the
early 1980's SIGGRAPH was starting to really take off. Catmull explains,
"SIGGRAPH was a very good organization. It was fortuitous to have the
right people doing the right things at the right time. It became one of
the very best organizations where there is a lot of sharing and a lot
of openness. Over the years it generated a tremendous amount of excitement
and it was a way of getting a whole group of people to work together and
share information, and it is still that way today."
At the 1980 SIGGRAPH conference a stunning film entitled "Vol Libre" was
shown. It was a computer generated high-speed flight through rugged fractal
mountains. A programmer by the name of Loren Carpenter from The Boeing
Company in Seattle, Washington had studied the research of Mandelbrot
and then modified it to simulate realistic fractal mountains.
Carpenter had been working in the Boeing Computer Services department
since 1966, while an undergraduate at the University of Washington. Around
1972 he started using the University's engineering library to follow
the technical papers being published about computer graphics. He eventually
worked his way into a group at Boeing that was working on a computer aided
drawing system. This finally got him access to computer graphics equipment.
Working there with other employees, he developed various rendering algorithms
and published papers on them.
In the late 70s Carpenter was creating 3D rendered models of aircraft
designs and he wanted some scenery to go with his airplanes. So he read
Mandelbrot's book and was immediately disappointed when he found that
the formulas were not practical for what he had in mind. Around this time
"Star Wars" had been released, and Carpenter, a big fan, dreamed of creating
some type of alien landscape of his own. This drove him to actually
do it; by 1979 he had an idea of how to create fractal terrain in animation.
While on a business trip to Ohio State in 1979, Carpenter ran into a person
who knew quite a few people in the computer graphics field including individuals
like Ed Catmull. This person explained how Catmull had just been hired by George
Lucas to set up a lab at Lucasfilm. Carpenter was immediately interested
but didn't want to send in his resume yet, because he was still working
on his fractal mountain movie. "At the time they were getting enough resumes
to kill a horse" explains Carpenter.
Carpenter continues, "I wanted to demonstrate that these (fractal) pictures
would not only look good, but would animate well too. After solving the
technical difficulties, I made the movie, wrote a paper to describe it
and made a bunch of still images. I happened to be on the A/V crew of
SIGGRAPH 1980, so one of my pictures ended up on an A/V T-shirt. I had
this campaign to become as visible as possible because I wanted to work
at Lucasfilm and when I showed my film, the people from Lucasfilm were
there in the audience. Afterward they spoke to me and said, 'You're in,
we want you.'" Later, in 1981 Carpenter wrote the first renderer for Lucasfilm,
called REYES (Renders Everything You Ever Saw). REYES would eventually
turn into the Renderman rendering engine and today, Carpenter is still
with Pixar.
Turner Whitted published a paper in 1980 about a new rendering method
for simulating highly reflective surfaces. Known today as Ray Tracing,
it makes the computer trace every ray of light, starting from the viewer's
perspective back into the 3D scene to the objects. If an object happens
to be reflective, the computer follows that ray of light as it bounces
off the object until it hits something else. This process continues until
the ray of light hits an opaque non-reflective surface or it goes shooting
off away from the scene. As you can imagine, ray tracing is extremely
computationally intensive - so much so that some 3D animation programmers
(such as the Yost Group who created 3D Studio) refuse to put ray tracing
into their software. On the other hand, the realism that can be achieved
with ray tracing is spectacular.
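A minimal sketch of the recursion described above (an illustration, not Whitted's implementation): each eye ray is tested against the scene, bounces off reflective surfaces, and stops when it hits an opaque non-reflective surface or shoots off out of the scene.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t where the ray
    origin + t*direction hits the sphere, or None on a miss.
    The direction is assumed to be unit length."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def trace(origin, direction, spheres, depth=0, max_depth=4):
    """Follow one ray of light backward from the eye. On hitting a
    reflective sphere, bounce and recurse; otherwise return the hit
    color, or black if the ray leaves the scene."""
    if depth > max_depth:
        return (0, 0, 0)
    best = None
    for s in spheres:
        t = hit_sphere(origin, direction, s["center"], s["radius"])
        if t is not None and (best is None or t < best[0]):
            best = (t, s)
    if best is None:
        return (0, 0, 0)                      # ray shoots off out of the scene
    t, s = best
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((p - c) / s["radius"] for p, c in zip(point, s["center"]))
    if not s["reflective"]:
        return s["color"]                     # opaque non-reflective: stop
    d = sum(a * b for a, b in zip(direction, normal))
    bounce = tuple(a - 2 * d * b for a, b in zip(direction, normal))
    return trace(point, bounce, spheres, depth + 1, max_depth)

scene = [
    {"center": (0, 0, 5), "radius": 1, "reflective": True,  "color": None},
    {"center": (0, 0, -5), "radius": 1, "reflective": False, "color": (255, 0, 0)},
]
# The eye ray hits the mirror sphere and bounces back into the red one.
print(trace((0, 0, 0), (0, 0, 1), scene))  # (255, 0, 0)
```

The cost is easy to see here: every ray may spawn further rays, and every ray is tested against every object, which is why full ray tracing was (and is) so expensive.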
Around 1980 two individuals, Steven Lisberger, a traditional animator,
and Donald Kushner, a lawyer-turned-movie distributor decided to do a
film about a fantasy world inside a video game. After putting together
a presentation, Lisberger and Kushner sought backing from the major film
companies around Los Angeles. To their surprise, it was Tom Wilhite, a
new production chief at Disney, that took them up on the idea. After many
other presentations to Disney executives, they were given the 'OK' from
Disney to proceed.
The movie, called "Tron," was to be a fantasy about a man's journey inside
of a computer. It called for nearly 30 minutes of film quality computer
graphics, and was a daunting task for computer graphics studios at the
time. The solution lay in splitting up various sequences and farming them
out to different computer graphics studios. The two major studios were
Triple I and MAGI (Mathematical Applications Group Inc.). Also involved
were NYIT, Digital Effects of New York and Robert Abel & Associates.
The computer generated imagery for "Tron" was very good but unfortunately
the movie as a whole was very bad. Disney had sunk about $20 million into
the picture and it bombed at the box office. This, if anything, had a
negative influence on Hollywood toward computer graphics. Triple I had
created computer graphics for other movies such as Looker in 1980, but
after "Tron," they sold off their computer graphics operation. Demos and
Whitney left to form a new computer graphics company called Digital Productions
in 1981.
Digital Productions had barely gotten started when they landed their first
major film contract. It was to create the special effects for a Sci-Fi
movie called "The Last Starfighter." In Starfighter, however, everyone
made sure that the story was somewhat good before generating any computer
graphics. Digital Productions invested in a Cray X-MP supercomputer to
help process the computer graphics frames. The effects themselves were
very impressive and photorealistic, but the movie cost $14 million to make
and grossed only about $21 million - a "B" grade picture by Hollywood
standards - so it still didn't make Hollywood sit up and take notice of
computer graphics.
Carl Rosendahl launched a computer graphics studio in Sunnyvale, California
in 1980 called Pacific Data Images (PDI). Rosendahl had just graduated
from Stanford University with a degree in electrical engineering and for
him, computer graphics was the perfect solution for his career interest,
television production and computers. A year later Richard Chuang, one
of the partners, wrote some anti-aliasing rendering code, and the resulting
images allowed PDI's client base to increase. While other computer graphics
studios were focusing on film, PDI focused solely on television network
ID's, such as the openings for movie-of-the-week programs. This allowed
them to carve a niche for themselves. Chris Woods set up a computer graphics
department in 1981 at R/Greenberg Associates in New York. In August of
1981 IBM introduced their first personal computer, the IBM PC. The IBM
PC, while not the most technologically advanced personal computer, seemed
to break PCs into the business community in a serious way. It used the
Intel 16-bit 8088 microprocessor and offered ten times the memory of other
personal computer systems. From then on, personal computers became serious
tools that business needed. With this new attitude toward PCs came tremendous
sales as PCs spread across the country into practically every business.
Another major milestone in the 1980's for computer graphics was the founding
of Silicon Graphics Inc. (SGI) by Jim Clark in 1982. SGI focused its resources
on creating the highest performance graphics computers available. These
systems offered built-in 3D graphics capabilities, high speed RISC (Reduced
Instruction Set Computer) processors and symmetric multiprocessor
architectures. The following year, in 1983, SGI rolled out its first system,
the IRIS 1000 graphics terminal.
In 1982, Lucasfilm signed up with Atari for a first-of-its-kind venture
between a film studio and video game company. They planned to create a
home video game based on the hit movie "Raiders of the Lost Ark." They
also made plans to develop arcade games and computer software together.
Some of Lucasfilm's games included PHM Pegasus, Koronis Rift, Labyrinth,
Ballblazer, Rescue on Fractalus and Strike Fleet. They also developed
a networked game called Habitat that is still very popular in Japan. Today
the LucasArts division of Lucasfilm creates the video games and is a strong
user of 3D computer graphics.
In 1982, John Walker and Dan Drake along with eleven other programmers
established Autodesk Inc. They released AutoCAD version 1 for S-100 and
Z-80 based computers at COMDEX (Computer Dealers Exposition) that year.
Autodesk shipped AutoCAD for the IBM PC and Victor 9000 personal computers
the following year. Starting from 1983, their yearly sales would rise
from 15,000 dollars to 353.2 million dollars in 1993 as they helped move
computer graphics to the world of personal computers.
At Lucasfilm, special effects for film were handled by the Industrial
Light and Magic (ILM) division, yet early on ILM didn't want much to
do with computer graphics. Catmull explains, "They considered what we
were doing as too low of a resolution for film. They felt it didn't have
the quality, and they weren't really believers in it. There wasn't an
antagonistic relationship between us, we got along well, it was just that
they didn't see computer graphics as being up to their standards. However,
as we developed the technology we did do a couple pieces such as the Death
Star projection for 'Return of the Jedi.' It was only a single special
effect yet it came out looking great." For "Return of the Jedi" in 1983, Lucasfilm
created a wireframe "hologram" of the Death Star under construction protected
by a force field for one scene.
The computer graphics division of Lucasfilm was next offered a special
effects shot for the movie "Star Trek II: The Wrath of Khan." There was
an effect that could have been done either traditionally or with CGI.
The original screenplay called for the actors to go into a room containing
a coffin shaped case in which could be seen a lifeless rock. The "Genesis"
machine would then shoot this rock and make it look green and lifelike.
ILM, however, didn't think of that as very impressive, so they went to
the computer graphics division and asked if they could generate the effect
of the rock turning life-like. Then Alvy Ray Smith came back and said,
"Instead of having this rock in front of this glass box why don't we do
what's meant to be a computer simulation and a program showing how it
works for the whole planet." Thus Smith came up with the original idea
and ILM decided to go for it. And so they generated a one minute long
sequence. It was largely successful because it was meant to be a computer
generated image in the movie, so it didn't need to have the final touches
of realism added to it. The effect was rendered on Carpenter's new rendering
engine, REYES. It turned out to be a very, very successful piece. As Smith
would later say, "I call it 'the effect that never dies.' It appeared in
three successive Star Trek movies, in Reebok and other commercials, on the
Sci-Fi Channel - you see it everywhere." Following the "Genesis" effect, Lucasfilm
used computer graphics for the movie "Young Sherlock Holmes" in 1985.
In this movie, a stained glass window comes to life to terrorize a priest.
Tom Brigham, a programmer and animator at NYIT, astounded the audience
at the 1982 SIGGRAPH conference. Tom Brigham had created a video sequence
showing a woman distort and transform herself into the shape of a lynx.
Thus was born a new technique called "Morphing". It was destined to become
a required tool for anyone producing computer graphics or special effects
in the film or television industry. However, despite the impressive response
from viewers at the conference, no one seemed to pay the technique much
attention until a number of years later, when Lucasfilm used it for the
1988 movie "Willow," in which a sorceress was transformed through a series
of animals into her final shape as a human.
Scott Fischer, Brenda Laurel, Jaron Lanier along with Thomas Zimmerman
worked at the Atari Research Center (ARC) during the early eighties. Jaron
Lanier, working for Atari as a programmer in 1983, developed the DataGlove,
a glove wired with sensors that detect your hand movements and transmit
them to the computer. The computer interprets the data and allows you to
manipulate objects in 3D space within a computer simulation.
He left later that year and teamed up with Jean-Jacques Grimaud; together
they founded a company 2 years later in 1985 called VPL Research, which
would develop and market some of the first commercial virtual reality
products. Zimmerman, an MIT graduate who had developed the "Air Guitar"
software and a DataGlove that allowed you to play a virtual guitar, also
joined VPL Research. Zimmerman left in 1989 while Lanier stayed with VPL
Research until November of 1992.
AT&T formed the Electronic Photography and Imaging Center (EPIC) in
1984 to create PC-based videographic products. In the following year they
released the TARGA video adapter for personal computers. This allowed
PC users for the first time to display and work with 32-bit color images
on the screen. EPIC also published the Targa (TGA) file format for storing
these true color images.
Early animation companies such as Triple I, Digital Productions and Lucasfilm
had to write their own software for creating computer graphics; this began
to change in 1984, when a new company called Wavefront was formed in Santa
Barbara, California. Wavefront produced the very first commercially available
3D animation system to run on off-the-shelf hardware, starting a revolution
that would shape the future of all computer graphics studios. Also in that same year,
Thomson Digital Image (TDI) was founded by three engineers working for
Thomson CSF, a large defense contractor. TDI released its 3D animation
software in 1986.
Up until this point, all of the image synthesis methods in use were based
on direct (incident) light, where a light source shines directly on a surface.
However, most of the light we see in the real world is diffuse light
reflected from surfaces. In your home, you may have halogen lamps that
shine light up at the ceiling, and the ceiling then reflects diffuse light
into the rest of the room. If you were to create a 3D computer version of
that room, you might place a light source in the lamp shining up at the
ceiling; but the rest of the room would appear dark, because software based
only on direct light cannot make the light reflect off the ceiling into
the rest of the room. To solve this problem,
a new rendering method was needed and in 1984 Cindy Goral, Don Greenberg
and others at Cornell University published a paper called, "Modeling the
Interaction of Light Between Diffuse Surfaces." The paper described a
new method called Radiosity that uses the same formulas that simulate
the way heat is dispersed throughout a room to determine how light reflects
between surfaces. By determining the exchange of radiant energy between
3D surfaces very realistic results are possible.
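The radiosity equation the paper describes, B_i = E_i + rho_i * sum_j F_ij * B_j, can be solved by simple iteration. The sketch below is an illustration of that formulation, not the Cornell code: a patch that emits no light still ends up lit, purely by diffuse light reflected from a "lamp" patch.

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Iteratively solve the radiosity equation
        B_i = E_i + rho_i * sum_j F_ij * B_j
    where F_ij is the fraction of energy leaving patch i that arrives
    at patch j -- the same exchange formulation used for radiant heat
    transfer between surfaces."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [
            emission[i]
            + reflectance[i] * sum(form_factors[i][j] * b[j] for j in range(n))
            for i in range(n)
        ]
    return b

# Two facing patches: patch 0 is a lamp, patch 1 emits nothing but
# still receives reflected (diffuse) light.
F = [[0.0, 0.5],   # half of patch 0's energy reaches patch 1
     [0.5, 0.0]]
b = solve_radiosity(emission=[1.0, 0.0], reflectance=[0.8, 0.8], form_factors=F)
print(round(b[1], 3))  # 0.476 -- lit although it emits no light
```

The iteration converges because each bounce carries less energy than the last, exactly as reflected light dies away in a real room.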
In January of 1984, Apple Computer released the first Macintosh, one of
the first personal computers to use a graphical user interface. The Mac
was based on the Motorola 68000 microprocessor and used a single floppy drive,
128K of memory, a 9" high resolution screen and a mouse. It would become
the largest non IBM-compatible personal computer series ever introduced.
Around 1985, multimedia started to make its big entrance. The International
Organization for Standardization (ISO) created the first standard for Compact
Discs with Read Only Memory (CD-ROM). This new standard was called High Sierra,
after the area near Lake Tahoe where it was drafted; it later evolved into
the ISO 9660 standard. Today multimedia is a major marketplace for personal
computer 3D animation. In that same year, Commodore launched the new Amiga
personal computer line. The Amiga offered many advanced features, including
optional compatibility with the IBM personal computer line through add-on
hardware. It used Motorola's 68000 microprocessor, ran its own proprietary
operating system, and retailed for $1,295 for the base unit.
Daniel Langlois in Montreal, Canada founded a company called Softimage
in 1986. Then in early 1987 he hired some engineers to help create his
vision for a commercial 3D computer graphics program. The Softimage software
was released at the 1988 SIGGRAPH show and it became the animation standard
in Europe with over 1,000 installations world-wide by 1993.
During this time, Jim Henson of Muppets fame approached Brad DeGraf at
Digital Productions with the idea of creating a digital puppet. Henson
had brought with him a Waldo unit that he had previously used to control
one of his puppets remotely. The device had gotten its name from NASA
engineers years earlier. They in turn had taken the name from a 1940's
Sci-Fi book written by Robert A. Heinlein about a disabled scientist who
built a robot to amplify his limited abilities. The scientist's name was
Waldo. Thus when NASA (and later Henson) built their own version of the
unit, they dubbed it Waldo.
The programmers at Digital Productions managed to hook up the Waldo and
create animation with it, but that animation was never used for a commercial
project. Still, the idea of Motion Capture was born. Today motion capture
continues to be a major player in creating computer graphics. As for Digital
Productions, at the time things were going great. They had purchased a
Cray X-MP supercomputer because it was the fastest computer that money
could buy. They were interfacing film recording and scanning equipment
and they had about 75 to 100 employees. They had just finished their first
big movie project, "The Last Starfighter" and they did some special effects
scenes for the movie 2010 (the swirling surface of Jupiter). They also
worked on "Labyrinth" in 1986. Things were going very well for Digital
Productions, perhaps things went too well.
In 1986 the two largest computer graphics houses in the United States
were bought out by Omnibus Computer Graphics Inc. in hostile takeovers,
Digital Productions (in June) and Robert Abel & Associates Inc. (in
October). The reason this is significant is that both companies had invested
heavily in high-end supercomputers like the Cray X-MP (which cost about
$13 million each); their strategy had been to buy the fastest number
cruncher money could buy and then write their own custom software.
As soon as Omnibus took control of Digital Productions the two co-founders
of Digital, John Whitney and Gary Demos, sued the majority owner of Omnibus,
Santa Clara-based Ramtek, for a portion of the sale proceeds. Omnibus
subsequently locked both of them out of their offices at Digital Productions
in July of 1986. In September Omnibus obtained a temporary restraining
order against Whitney and Demos alleging that Whitney and Demos founded
a competing firm, Whitney Demos Productions, and had hired at least three
employees away from Omnibus and were using software and other information
that belonged to Omnibus. The restraining order required Whitney and Demos
to return certain property temporarily to Omnibus.
In October, Omnibus acquired Robert Abel & Associates Inc. for $8.5
million. However, by March of 1987, Omnibus started defaulting on the
$30 million it had borrowed from several major Canadian creditors. Most
of the debts were the result of acquiring Digital Productions and Robert
Abel & Associates the previous year. In May, Omnibus officially closed
down and laid off all its employees.
According to Gary Demos, "Abel & Associates was sunk just the same
as us. At the time, we were the two largest effects studios, and that
crash fragmented the entire industry. It changed the whole character of
the development of computer graphics." Talented people from both studios
found their way into other animation studios. Jim Rigel went to Boss Films,
Art Durenski went to a studio in Japan, some went to PDI, some went to
Philips Interactive Media (then known as American Interactive Media),
others went to Rhythm & Hues, Metrolight, and Lucasfilm. Whitney Demos
Productions lasted for two years; then the two split up and formed their
own companies in 1988. Whitney formed US Animation Labs, while Demos formed
DemoGraphics.
In the personal computer field, computer graphics software was booming.
Crystal Graphics introduced TOPAS, one of the first high-quality 3D animation
programs for personal computers, in 1986. Over the years, Crystal Graphics
would continue to be a major contender in the PC based 3D animation field.
The following year, Electric Image was founded and released a 3D animation
package for both SGI machines and Apple Macintosh computers. In Mountain
View, California, a new 3D software company was founded under the name
Octree Software Inc. They later changed their name to Caligari Corporation
and now offer 3D animation programs for both the Amiga and PC platforms.
Also in 1986 computer graphics found a new venue, the courtroom. Known
as Forensic Animation, these computer graphics are more geared to technical
accuracy than to visual aesthetics. Forensic Technologies Inc. started
using computer graphics to help jurors visualize court cases. Still creating
Forensic animation today, they have been in the business longer than any
other company. Now they use SGI workstations from RS-4000's up through
Crimson Reality Engines. For their 3D software they exclusively use Wavefront
but have a few interfaces to CAD modeling packages. For 2D animation they
use a program called Matador by Parallax.
In that same year, Disney made its first use of computer graphics
in the film "The Great Mouse Detective." In this first Disney attempt
at merging computer graphics with hand-drawn cel animation, the computer
was used only for some of the mechanical devices such as gears and clockworks.
A Computer Generated Imagery (CGI) department was formed and went on to
work on such films as "Oliver and Company," "The Little Mermaid," "The
Rescuers Down Under," "Beauty and the Beast" and "Aladdin." With the highly
successful results of "Aladdin" and "Beauty and the Beast," Disney has
increased the number of animators in the CGI department from only 2 to over 14.
About this time at Lucasfilm, things were getting a little complicated.
The computer graphics division wanted to move toward doing a feature length
computer animated film. Meanwhile ILM was getting interested in the potential
of computer graphics effects. Catmull explains, "Lucas felt this company
was getting a little too wide and he wanted to narrow the focus into what
he was doing as a filmmaker. Our goals weren't really quite consistent
with his." So the computer graphics division asked if they could spin
off as a separate company and Lucas agreed to do that.
It took about a year to make that happen. Catmull continues,
"One of the last things I did was hire two people to come in and start
a CGI group for ILM because they still wanted CGI special effects capabilities.
So I went out to a number of people but mainly focused on Doug Kay and
George Joblove. They turned us down the first time. We talked to them
and interviewed them and they called up and said 'We decided not to come
up, because we have our own company.' So I put down the phone and thought
'damn I have to keep on looking.' Then that night I called back again,
and said 'Doug, you're crazy! This is the opportunity of a lifetime! Something
went wrong in the interview. Come back up here and let's do this thing
again.' He said 'OK!' so I brought him up again, we went through it all
again, and this time they accepted."
The computer graphics division of Lucasfilm split off to become Pixar
in 1986. Part of the deal was that Lucasfilm would get continued access
to Pixar's rendering technology. It took about a year to separate Pixar
from Lucasfilm and in the process, Steve Jobs became a majority stockholder.
Ed Catmull became president and Alvy Ray Smith became vice-president.
Pixar continued to develop their renderer, putting a lot of resources
into it and eventually turning it into Renderman.
Created in 1988, Renderman is a standard for describing 3D scenes.
Pat Hanrahan of Pixar organized most of the technical details behind Renderman
and gave it its name. Since then Hanrahan has moved to Princeton University
where he is currently Associate Professor of Computer Science.
The Renderman standard describes everything the computer needs to know
before rendering your 3D scene such as the objects, light sources, cameras,
atmospheric effects, and so on. Once a scene is converted to a Renderman
file, it can be rendered on a variety of systems, from Macs to PCs to
Silicon Graphics Workstations. This opened up many possibilities for 3D
computer graphics software developers. Now all the developer had to do
was give the modeling system the capability of producing Renderman compatible
scene descriptions. Once it did this, then the developer could bundle
a Renderman rendering engine with the package, and not worry about writing
their own renderer. When the initial specification was announced, over
19 firms endorsed it including Apollo, Autodesk, Sun Microsystems, NeXT,
MIPS, Prime, and Walt Disney.
An integral part of Renderman is the use of 'shaders' or small pieces
of programming code for describing surfaces, lighting effects and atmospheric
effects. Surface shaders are small programs that algorithmically generate
textures based on mathematical formulas. These algorithmic textures are
sometimes called procedural textures or spatial textures. Not only is
the texture generated by the computer, but it is also generated in 3D
space. Whereas most texture mapping techniques map the texture to the
outside 'skin' of the object, procedural textures run completely through
the object in 3D. So if you were using a fractal based procedural texture
of wood grain on a cube, and you cut out a section of the cube, you would
see the wood grain running through the cube.
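The wood-grain example can be sketched as a tiny procedural shader (an illustration in Python, not Renderman shading language): the color at any 3D point depends on its distance from the grain axis, so the texture exists throughout the volume rather than only on the object's skin.

```python
import math

def wood_shader(x, y, z, rings=6):
    """A tiny procedural (spatial) texture: wood grain defined at every
    point in 3D space. The grain value depends on the distance from
    the trunk axis, so cutting into an object textured this way would
    reveal rings running all the way through it, not just a skin."""
    r = math.sqrt(x * x + y * y)          # distance from the grain axis
    grain = 0.5 + 0.5 * math.sin(2 * math.pi * rings * r)
    # Blend between light and dark wood colors by the grain value.
    light, dark = (222, 184, 135), (139, 90, 43)
    return tuple(int(d + grain * (l - d)) for l, d in zip(light, dark))

# The texture is defined everywhere in the volume: two points at the
# same distance from the axis get the same ring, at any depth.
print(wood_shader(0.25, 0.0, 0.0))
print(wood_shader(0.25, 0.0, 9.9))
```

Because the pattern is computed from a formula rather than stored as an image, it also has no fixed resolution: you can zoom in as far as you like.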
The interesting part, however, was that Kay and Joblove (along with the
other CGI specialists at ILM) became so effective that the CGI group grew
and grew until, today, the CGI group is ILM. It's not merely a major department;
it is ILM. This is viewed by some as one of the most stunning developments
in computer graphics history. One of the reasons the CGI department became
so important is that it succeeded in what it intended to do. They set
goals, budgets and they met them. Meanwhile, back at Pixar in December
of 1988, Steve Jobs stepped down from his post of Chairman of Pixar, and
Ed Catmull took his place. Charles Kolstad, the company's VP of manufacturing
and engineering, became the new president.
Paul Sidlo had worked as Creative Director for Cranston/Csuri Productions
from 1982 until 1987 when he left to form his own computer graphics studio,
ReZ.n8 (pronounced resonate). Since then, ReZ.n8 has continued to be a
leader in producing high quality computer graphics attracting such clients
as ABC, CBS, Fox, ESPN, NBC along with most of the major film studios.
Jeff Kleiser had been a computer animator at Omnibus, where he directed
animation for the Disney feature film "Flight of the Navigator." Before
Omnibus Kleiser had founded Digital Effects and worked on projects such
as "Tron" and "Flash Gordon." As things started to fall apart at Omnibus
he did some research into motion capture. Then when Omnibus closed, he
joined up with Diana Walczak and formed a new company in 1987, Kleiser-Walczak
Construction Company. Their new firm's specialty was human figure animation.
In 1988 they produced a 3-1/2 minute music video with a computer generated
character named Dozo, using motion capture to input all of her movements.
Brad DeGraf, also from Omnibus, joined forces with Michael Wahrman to
form the DeGraf/Wahrman production company. At the SIGGRAPH conference of
1988, they showed "Mike the Talking Head," an interactive 3D
computer animated head. Using special controls, they were able to make
it interact with the conference participants. Later DeGraf would leave
Wahrman and go to work for Colossal Pictures in San Francisco.
The Pixar Animation Group made history on March 29, 1989 by winning an
Oscar at the Academy Awards for their animated short film, "Tin Toy."
The film was created entirely with 3D computer graphics using Pixar's
RenderMan. John Lasseter directed the film, with William Reeves
providing technical direction.
At the 1989 SIGGRAPH in Boston, Autodesk unveiled a new PC based animation
package called Autodesk Animator. As a full featured 2D animation and
painting package, Animator was Autodesk's first step into the multimedia
tools realm. The software-only animation playback capabilities achieved
very impressive speeds and became a standard for playing animation on
PCs.
In 1989 an underwater adventure movie was released called "The Abyss."
This movie had a direct impact on the field of CGI for motion pictures.
James Cameron, director and screenwriter of "The Abyss," had a specific
effect in mind: he wanted a water creature, like a fat snake, to emerge
from a pool of water, extend itself to explore an underwater oil rig,
and then interact with live characters. He felt it couldn't be done with
traditional special-effects tools, so he put the effect up for bid, and
both Pixar and ILM bid on it. ILM won the bid and used Pixar's software
to create it. Catmull explains, "We really wanted to
do this water creature for the Abyss, but ILM got the bid, and they did
a great job on it."
Cameron viewed this effect as a test piece; if it hadn't worked out,
he could have done without it. But it did work, and it worked
so well and had enough of an impact, that it convinced him that CGI could
create a major character in his next film which would be "Terminator 2."
ADVANCES
OF THE 1990s
In May of 1990, Microsoft shipped Windows 3.0. It followed a GUI structure
similar to the Apple Macintosh, and laid the foundation for a future growth
in multimedia. In 1990 only two of the nation's top ten software programs
ran under Windows; just a year later, in 1991, nine of the ten did.
Later that year, in October, Alias Research signed a $2.3 million contract
with ILM. The deal called for Alias to supply state-of-the-art 3D computer
graphics systems to ILM for future video production; ILM, in turn, would
test these new systems and provide feedback.
NewTek, a company founded in 1985, released the Video Toaster in October
of 1990. The Video Toaster was a video production card for Amiga personal
computers that retailed for $1,595. The card came with 3D animation and
24-bit paint software and offered video capabilities such as a 24-bit frame
buffer, switching, digital video effects, and character generation. The
practical video editing uses of the Video Toaster made it very popular,
and today it is used on broadcast television shows such as Sea Quest and
Babylon 5 for 3D computer graphics.
Also in 1990, Autodesk shipped their first 3D computer animation product,
3D Studio. Created for Autodesk by Gary Yost (The Yost Group), 3D Studio
has risen over the past four years to the lead position in PC-based 3D
computer animation software.
Disney and Pixar announced in 1991 an agreement to create the first computer
animated full length feature film, called "Toy Story," within two
to three years. The project was a fulfillment of the dream those early
NYIT'ers had of producing a feature-length film. Pixar's animation
group, with the success of their popular Listerine, Lifesavers and Tropicana
commercials, had the confidence that they could pull off the project on
time and on budget.
"Terminator 2" (T2) was released in 1991 and set a new standard for CGI
special effects. The evil T-1000 robot in T2 alternated between the
actor Robert Patrick and a 3D computer-animated version of Patrick. Not
only were the graphics photorealistic, but most impressive of all, the
effects were produced on time and under budget.
The same year another major film was released in which CGI played a large
role: "Beauty and the Beast." Having had one success after another with
computer graphics, Disney pulled out all the stops and used computer
graphics throughout the movie. In terms of beauty, color, and design,
Disney did things that could not possibly have been done without
computers. Many scenes contained 3D animated objects, yet they
were flat shaded with bright colors so as to blend in with the hand-drawn
characters. The crowning sequence was a ballroom dance in a photorealistic
ballroom complete with a 3D crystal chandelier and 158 individual light
sources to simulate candles.
The effect of these two movies in 1991 on Hollywood was remarkable. Catmull
explains, "So what happened was in 1991 'Beauty and the Beast' came out,
'Terminator 2' came out and Disney announced that they had entered into
a relationship with us to do a feature-length computer-animated film
for them. Beauty and T2 were phenomenal financial successes and all of
a sudden everybody noticed. That was the turning point for all the
groundwork that other people had been doing yet hadn't been noticed before.
It all turned around in 1991, it was the year when the whole entertainment
industry said 'Oh my God!' and it took them by storm. Then they all started
forming their groups and their alliances and so forth."
Early in 1991, Steve Jobs gave the ax to all application development at
Pixar, fearing that selling application software would discourage
third-party developers from writing software for his NeXT computer. He
gave the employees 30 days to try to spin off a separate company to focus
on application software. This, of course, did not prove to be enough time,
and the president of Pixar, Chuck Kolstad, was laid off along with about
30 employees (almost half of Pixar's workforce). Ed Catmull moved back
into the position of president. Pixar lost a lot of talent, including
Alvy Ray Smith, who went on to start a new company called Altamira
(funded by Autodesk) and created a PC version of IceMan, the image-editing
software he had created at Pixar. That product is now commercially
available as Altamira Composer.
A Technical Award was given to six developers from Walt Disney's Feature
Animation Department and three developers from Pixar for their work on
CAPS. CAPS is a 2D animation system owned by Disney that simplifies and
automates much of the complex post-production aspects of creating full
length cartoon animations.
In 1993, Wavefront acquired Thomson Digital Image (TDI) which increased
Wavefront's market share in the high-end computer graphics market. Wavefront
immediately began integrating TDI's products into its own line of
computer graphics software.
Early in 1993, IBM, James Cameron (writer/director/producer), Stan Winston
(special effects expert) and Scott Ross (visual effects executive from
ILM) joined forces to create a new visual effects and digital production
studio called Digital Domain. Located in the Los Angeles area, Digital
Domain hopes to give ILM a run for its money. Not to be outdone, ILM
followed in April with its own announcement: a joint "media lab" with
Silicon Graphics, Inc. called JEDI (Joint Environment for Digital
Imaging). ILM will get the latest and greatest SGI hardware, and SGI will
get to use ILM as a testing facility.
PDI opened their Digital Opticals Group in Hollywood to create special
effects for motion pictures such as "Terminator 2: Judgment Day," "Batman
Returns," and "The Babe." PDI has since become one of the leaders in
digital cleanup work for motion pictures, such as wire removal. Wires
are often used for effects in which people fly or jump through the air.
Sometimes scratches occur on irreplaceable film footage. For "Terminator
2," PDI used image processing to erase the wires that guided Arnold Schwarzenegger
and his motorcycle over a perilous jump. PDI uses software to automatically
copy pixels from the background and paste them over the pixels that represent
the wires.
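The pixel-copying step described above can be pictured in a few lines of code. The following is a hypothetical, much-simplified Python sketch, not PDI's actual software; it assumes a clean background plate and a mask marking the wire pixels are already available:

```python
def remove_wires(frame, clean_plate, wire_mask):
    """Wherever the mask flags a wire pixel, substitute the pixel from
    a clean plate of the same background; elsewhere keep the original
    frame. All three inputs are 2D grids of the same dimensions."""
    return [
        [clean_plate[y][x] if wire_mask[y][x] else frame[y][x]
         for x in range(len(frame[0]))]
        for y in range(len(frame))
    ]

# A 2x2 toy frame: the two "9" pixels are the wire, and the clean
# plate holds the background ("5") that belongs behind it.
fixed = remove_wires([[1, 9], [9, 1]],
                     [[5, 5], [5, 5]],
                     [[False, True], [True, False]])
# fixed is [[1, 5], [5, 1]] - the wire pixels are painted over.
```

In practice the clean plate must be registered to each frame of a moving shot, and the wires are tracked rather than hand-masked frame by frame, but the core substitution is this simple.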
Another edit for T2 involved a semi truck crashing through a wall and
down into a storm ditch. The original shot was made at the wrong angle,
so the director wanted the footage flipped left to right to keep
continuity with the surrounding shots. Normally this would not be a
problem, but in this instance a street sign was in the picture, and the
driver could even be seen through the windshield of the truck. These
elements ruled out the simple flip that any studio could have performed.
To solve these problems, PDI first flipped the footage. Then they cut
the sign from the unflipped footage and pasted it over the flipped sign.
Finally, they copied the driver from the left side of the truck and
pasted him on the right side. The finished sequence looked flawless.
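The fix amounts to two simple raster operations, a mirror and a paste. A toy Python sketch (treating a frame as a 2D grid of pixel values; this is an illustration, not PDI's pipeline):

```python
def flip_horizontal(frame):
    """Mirror each row of a 2D frame left to right."""
    return [list(reversed(row)) for row in frame]

def paste(frame, patch, top, left):
    """Paste a rectangular patch (cut from another shot) onto a copy
    of the frame, with the patch's corner at (top, left)."""
    out = [row[:] for row in frame]
    for dy, patch_row in enumerate(patch):
        for dx, px in enumerate(patch_row):
            out[top + dy][left + dx] = px
    return out
```

Flipping mirrors everything, including lettering on signs; pasting the sign (and the driver) from the unflipped footage back over the mirrored frame restores the details that would give the trick away.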
PDI performed many other sleights of hand for the movie "The Babe," a
biographical film about baseball legend Babe Ruth. A number of challenges
faced the producers,
one of which was that the main actor, John Goodman, is right-handed,
while Babe Ruth was left-handed. As you can imagine, this threw off many
scenes in which Goodman had to pitch the ball. To resolve the problem,
PDI used digital image processing.
To create the effect of a pitch, John Goodman simply mimed it, without
using a ball. Then they filmed a left handed pitcher throwing the ball
from the same position. Then the baseball from the second shot was composited
onto the first shot. However, the actor playing the catcher had to fake
it along with Goodman, and as a result he didn't catch the ball at the
moment it arrived. To solve this, they split the scene down the middle
and merged the catcher from the second shot into the first. The result
was a flawless left-handed fastball. "Cleanup" special effects like this
have become a mainstay for computer graphics studios in the '80s and '90s.
Nintendo announced an agreement with Silicon Graphics, Inc. (the leader
in computer graphics technology) to produce a 64-bit 3D Nintendo platform
for home use. Their first product, the Ultra 64, will debut as an arcade
game in 1994, with a home version to follow in late 1995 at a target
price of $250.
In the early 1990's Steven Spielberg was working on a film version of
the latest Michael Crichton best seller, "Jurassic Park." Since the movie
was basically about dinosaurs chasing (and eating) people, the special
effects presented quite a challenge. Originally, Spielberg was going to
take the traditional route: Stan Winston would create full-scale models
and robots of the dinosaurs, and Phil Tippett would create stop-motion
animation for shots of the dinosaurs running, or any movement in which
their legs left the ground.
Tippett is perhaps the foremost expert on stop-motion and the inventor
of go-motion photography. Go-motion adds motion blur to stop-motion
characters by using a computer to move the character slightly while each
frame is being exposed, which eliminates most of the jerkiness normally
associated with stop-motion. The original "King Kong," for example, used
plain stop-motion and was very jerky; "E.T.," on the other hand, used
Tippett's go-motion technique for the flying-bicycle scene, and the
resulting motion was very smooth. Tippett went to work on "Jurassic Park"
and created a test walk cycle for a running dinosaur. It came out OK,
although not spectacular.
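The reason go-motion looks smoother comes down to motion blur: because the puppet moves during the exposure, its light is smeared across several positions instead of being frozen at one. A toy one-dimensional Python model of a single exposure makes the difference visible (this is an illustrative sketch, not a model of actual film photography):

```python
def expose(width, positions):
    """Integrate a 1-pixel-wide bright object over one exposure.
    Each entry in `positions` is where the object sits during an
    equal slice of the shutter time, so its total brightness (1.0)
    is spread across those positions."""
    frame = [0.0] * width
    for p in positions:
        frame[p] += 1.0 / len(positions)
    return frame

# Stop-motion: the object is frozen at one spot during the exposure,
# producing a hard-edged pixel that strobes from frame to frame.
stop_motion = expose(5, [2])      # [0.0, 0.0, 1.0, 0.0, 0.0]

# Go-motion: the object slides during the exposure, producing a
# dimmer streak - the motion blur the eye expects from real movement.
go_motion = expose(5, [1, 2, 3])
```

Total brightness is the same in both frames; only its distribution differs, which is why go-motion frames blend smoothly when projected in sequence.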
At the same time, however, animators at ILM began experimenting.
Spielberg had decided to cut a scene with a stampeding herd of
Gallimimus dinosaurs from the movie because it would have been
impossible to create an entire herd of go-motion dinosaurs running at
the same time. Eric Armstrong, an animator at ILM, experimented by
building the skeleton of the dinosaur and animating a walk cycle. He
then copied that walk cycle to put ten more dinosaurs running through
the same scene, and the result looked so good that everyone at ILM was
stunned. They showed it to Spielberg, who couldn't believe it and put
the scene back into the movie.
Next they tackled the Tyrannosaurus rex. Steve Williams created a walk
cycle and output the animation directly to film. The results were
fantastic, and the full-motion dinosaur shots were switched from
Tippett's studio to the computer graphics department at ILM.
This was obviously a tremendous blow to the stop-motion animators. Tippett
was later quoted in ON Production and Post-Production magazine as saying,
"We were reticent about the computer-graphic animators' ability to create
believable creatures, but we thought it might work for long shots like
the stampede sequence." As the project progressed to the point where the
CGI dinosaurs looked better than the go-motion dinosaurs, however, it
was a different story. He continues, "When it was demonstrated that on a photographic
and kinetic level that this technology could work, I felt like my world
had disintegrated. I am a practitioner of a traditional craft and I take
it very seriously. It looked like the end."
However, Tippett's skills were very much needed by the computer animators.
In order to create realistic movement for the dinosaurs, Tippett along
with the ILM crew developed the Dinosaur Input Device (DID). The DID is
an articulated dinosaur model with motion sensors attached to its limbs.
As the traditional stop-motion animators moved the model, the movement
was sent to the computer and recorded. This animation was then touched
up and refined by the ILM animators until it was perfect. Eventually 15
shots were done with the DID and 35 shots were done using traditional
computer graphics methods.
The animators at ILM worked closely with Stan Winston, using his dinosaur
designs so the CGI dinosaurs would match the large full-scale models Winston
was creating. Alias Power Animator was used to model the dinosaurs, and
the animation was created using Softimage software. The dinosaur skins
were created using hand-painted texture maps along with custom RenderMan
surface shaders. The final scene, a showdown between the T. rex and the
Velociraptors, was added at the last minute by Spielberg once he saw
that ILM's graphics could produce a realistic sequence. The
results were spectacular and earned ILM another Special Effects Oscar
in March of 1994.
In February 1994, Microsoft Corporation acquired Softimage for $130
million. Microsoft's initial use of the Softimage technology will be
internal, to enhance their multimedia CD-ROM products and interactive
TV programs.
Microsoft also plans to port the Softimage software over to its Windows
NT operating system. This may be the first move in starting a trend for
the shifting of high-end graphics software from workstations to personal
computers.
The summer of 1994 featured blockbusters full of computer graphics. Some
effects, however, were so photorealistic that the computer's role was undetectable.
For example in the movie "Forrest Gump," artists at ILM used digital compositing,
overlaying different video sequences on top of each other, to give the
illusion that the actor Tom Hanks was in the same scene as some famous
American politicians like John F. Kennedy. They also used standard
image-editing techniques to "cut" the legs off an actor who played the
part of a soldier who had lost his legs in the war. They simply had him
wear knee-high blue tube socks. After the film was scanned into the
computer, the artists used Parallax software to copy portions of the
background over the blue tube socks in every frame. The result is that
Tom Hanks picks the actor up off a bed and it looks as if the actor
really has no legs.
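The sock trick is essentially chroma keying: any pixel that is overwhelmingly blue is treated as "not the actor" and filled in from the background. A minimal Python sketch of the idea (the RGB representation, the dominance test, and the threshold are illustrative assumptions, not the actual Parallax method):

```python
def key_out_blue(frame, background, ratio=1.5):
    """Replace pixels whose blue channel strongly dominates red and
    green (e.g. bright blue tube socks) with the background pixel at
    the same position. Pixels are (r, g, b) tuples in 2D grids."""
    return [
        [bg if b > ratio * max(r, g, 1) else (r, g, b)
         for (r, g, b), bg in zip(f_row, bg_row)]
        for f_row, bg_row in zip(frame, background)
    ]

# The saturated-blue pixel (10, 10, 200) is keyed out and replaced by
# the background; the neutral gray pixel (100, 100, 100) is kept.
result = key_out_blue([[(10, 10, 200), (100, 100, 100)]],
                      [[(0, 0, 0), (1, 1, 1)]])
```

For the "Gump" shot, the replacement pixels would come from a painted or reconstructed background plate behind the actor, so the keyed region reads as empty space rather than a blue gap.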
Another major project for ILM was the movie, "The Mask." In this movie,
the computer graphics artists at ILM had full creative freedom in producing
wild and extravagant personalities for the character of the Mask. In one
case, they digitally removed his head and replaced it with the head of
a computer generated wolf. In another scene, they animated a massive cartoon
style gun that the Mask pulls on a couple of criminals. The gun has
multiple barrels, swinging chains of machine-gun bullets, even a guided
missile with a radar lock on the criminals. All of it was created
photorealistically using 3D graphics and then composited onto the
live-action shot.
Considering the quality and realism that we see in computer graphics today,
it's hard to imagine that the field didn't even exist just 30 years ago.
Yet even today SIGGRAPH, the annual conference and exposition, continues
to excite the computer graphics community with new graphics techniques.
And while companies have come and gone over the years, the people haven't.
Most of the early pioneers are still active in the industry and just as
enthusiastic about the technology as they were when they first started.
Many of the pioneers discussed here can be readily reached on the
Internet. This access is akin to an artist being able to pick up the
phone and call Monet, Michelangelo, Renoir, or Rembrandt.