Date: Mon, 18 Apr 2005 20:29:58 -0400
To: WSFA members <WSFAlist at WSFA.org>
From: "Mike B." <omni at omniphile.com>
Subject: [WSFA] Re: SF by computer
Reply-To: WSFA members <WSFAlist at WSFA.org>

At 06:48 PM 4/18/05 EDT, MarkLFischer at aol.com wrote:
>In a message dated 4/18/2005 5:12:18 PM Eastern Daylight Time,
>omni at omniphile.com writes:
>
>>I got the DVD, but haven't watched it yet, and I missed it in the theaters.
>>My main interest in it is the technology used to make it.  Story would be
>>a nice extra... ;-)
>
>CGI is a terrible crutch for lazy filmmakers, for examples see the "Matrix"

It can be, same as any other tool in the box.  If you don't use it
properly, it will scar the results.  Sky Captain, like Tron, Star Wars,
Jurassic Park and others, pushed the limits of what is possible beyond what
had been done before.  I don't expect perfection on a first try, but I'm
interested in seeing what they came up with, because it's a sure thing it
won't be the last we see of this sort of thing.

>channel puts out.  I guess the idea is that if you stun the audience by
>filling the frame with moving detail, they won't notice that the story was
>missing.

My biggest problem with most CGI is that the people doing it are more
artist than engineer.  They don't seem to have ever played ball games as a
kid, or flown kites or model planes or done anything else that would give
them a gut-level feel for how things move, what accelerations occur in
reality, etc.  The result is that their photorealistically rendered,
perfectly textured objects move like cartoons, not real things.  They have
no weight to them, inertia isn't a factor, and they are just generally not
very believable.  It's as bad as watching stop motion animation where the
animator didn't pay attention to center of gravity and has models standing
on one leg, obviously off balance, but just sort of hovering there rather
than falling over as they should be.

There are exceptions, but way too much of the CGI I've seen has this
problem, and it's just sad, since the same computer that calculates the
light beams to do the rendering is more than capable of handling the
physics models to make the motion look realistic too.
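To make that concrete, here's a toy sketch (my own illustration, not anything from a real animation package) of the difference between "cartoon" constant-speed motion and a physically accelerated fall:

```python
import math

G = 9.81  # m/s^2, gravitational acceleration

def cartoon_drop(height_m, steps):
    """'Cartoon' fall: the object slides down at constant speed --
    no weight, no acceleration."""
    return [height_m * (1 - i / steps) for i in range(steps + 1)]

def physical_drop(height_m, steps):
    """Physical fall: y(t) = h - g*t^2/2, sampled over the full drop."""
    t_total = math.sqrt(2 * height_m / G)  # time to hit the ground
    return [max(0.0, height_m - 0.5 * G * (i * t_total / steps) ** 2)
            for i in range(steps + 1)]

# Halfway through the fall time, the real object has only covered a
# quarter of the distance; the cartoon version is already halfway down.
```

The eye picks up on exactly that discrepancy, even when the rendering itself is flawless.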

>Actors are hard-pressed to turn in decent performances when they not only
>can't see what they're supposed to be interacting with, but often have no
>idea what it is.

That's been a problem in SF from the beginning.  They can learn to cope
though.  In Sky Captain there was little or nothing for them to base
anything on.  The scene where whatshername climbs out of the plane, for
instance...all that was there was a box to stand in for the wheel as she
stepped down...everything else other than her was bluescreen.

One potential solution is to do some of the rendering ahead of time, so you
can at least show the actors what it will look like later and they can use
their imagination as they do the scene.  Another solution, especially if
you are going to use fully-rendered characters and just need the motion
capture, is to put things they will touch in the set, a la the box/wheel,
and let them wear virtual reality helmets so they do see something to
interact with...even if it's not full-finished quality.  If you need them
to do the scene without the helmets, they can wear them for rehearsal, and
then do the scene from memory.  Actors are good at repeating things
identically, or with only minor variations.

>CGI for the fan television-maker isn't as straightforward as it sounds.

It isn't all that straightforward for the professionals either, at the
moment.  That's slowly changing though.  As the technology matures, it will
standardize more and more, get better user interfaces, and become
plug-compatible with the other pieces it needs to work with, so the need to
write custom interfaces will decline.  It's far better than it was even 10
years ago.  Compare the "write a script" methods of POV-Ray with something
like trueSpace or Lightwave 3D's real-time 3D interface, for instance.  With
POV-Ray you had to use other scripts to adjust your scene scripts
incrementally to produce multiple images, which you then used other programs
to stitch together into an .avi file.  In trueSpace
you use keyframe animation, with physics models if you like, and go
straight to a finished animation, after watching a 3D preview of the
animation, with shaded solids, though not photorealistic renders.  All that
for well under $1000.  What would take several evenings with POV-Ray I can
do in trueSpace in 10 minutes.  The computers are a lot faster too...what
took hours in 1988 takes seconds today.
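For a sense of what "keyframe animation" means under the hood, here's a minimal sketch (my own illustration, not trueSpace's actual engine): you set values at a few key frames and the computer interpolates every frame in between.

```python
def interpolate(keyframes, frame):
    """Linearly interpolate a value between keyframes.

    keyframes: list of (frame_number, value) pairs, sorted by frame.
    """
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# An object's x-position keyed at frames 0, 30 and 60: it moves for one
# second (at 30 fps), then holds still for the next.
keys = [(0, 0.0), (30, 10.0), (60, 10.0)]
positions = [interpolate(keys, f) for f in range(61)]
```

Real packages use smoother spline curves rather than straight lines, but the workflow is the same: you key the extremes and the machine fills in the rest.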

>Babylon 5 was often straining to get its digital footage rendered in time
>to meet the broadcast schedule, and those were not on the cutting edge of CGI
>photorealism.

Those were rendered on Amiga home computers...a lot of them.  They used
Lightwave 3D from before it was ported to the PClones and NT.  That's
mid-80s technology, and well short of what even that same program can do
today...though it was good enough for TV and mechanical things like space
ships.  Couldn't handle organic things like animals all that well
though...that advance required inventing NURBS, metaballs, muscles and
some other techniques that weren't around in the early 90s, but which were
by the time Jurassic Park came along...one of the first to create realistic
animals.  Lightwave and trueSpace have all that now too...along with
built-in scripting languages to change the scene in synch with the
animation, do lipsynch and other such things.

>Even at NTSC resolution, home computer equipment is not powerful
>enough to render complex, well-lit, ray-traced frames in any kind of
>reasonable time frame unless you use a LOT of computers.

Depends on the complexity, how much you can share from frame to frame
(a la MPEG), and what you consider a "reasonable time frame".  My PC here can
render a frame a lot faster than what was used for B5, for instance.  Most
of the Pixar stuff is done on Unix workstations, which aren't all that much
more powerful than a high-end home PC...and those are remarkably powerful if
you don't load them down with the bloatware called Windows.  Even if you
need a bunch of them, at $1000 or so each, you can get a lot of them for
the cost of one episode of Enterprise.  How long it takes to render a frame
mostly depends on how many polygons there are, how many light sources, and
whether you are using fancy, advanced techniques like tracking reflections
from surfaces onto other surfaces (a white couch next to a blue table, for
instance, picks up some blue reflections from the table).  This
turns all objects into light sources, and really ups the ante on compute
time...but it isn't always needed to fool the eye, and isn't used on lots
of professional work either.  It's also possible to build special purpose
hardware to do this stuff...today that's mostly confined to video cards for
real time 3D rendering in action games, but there's no reason it couldn't
be adapted to support rendering for other uses if there was enough market
for it.
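As a back-of-envelope illustration of why those reflections "up the ante" (my own toy model, not any real renderer's cost function), count the rays a simple ray tracer has to shade per frame:

```python
def rays_per_frame(width, height, lights, bounce_depth):
    """Rough count of shading rays for a simple Whitted-style tracer.

    Each pixel fires one primary ray; every ray is shaded with one
    shadow ray per light, and spawns one reflection ray per bounce.
    """
    primary = width * height
    return primary * (1 + lights) * (bounce_depth + 1)

# NTSC-ish frame, 3 light sources:
no_bounces = rays_per_frame(720, 480, 3, 0)   # direct lighting only
two_bounces = rays_per_frame(720, 480, 3, 2)  # reflections tracked
# two_bounces is 3x the work -- and full inter-surface bouncing, where
# every object effectively becomes a light source, grows far faster still.
```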

For an example of the time frames, check out 405 The Movie.  That was done
in about 6 months, and that includes writing, live action shots, rendering,
compositing, sound effects, music, final edit and putting the web site
together...by two guys and a few friends in their off hours after work
using their own PCs.  They have a lot more CGI in there than you might
expect...watch the film, make notes of what CGI you saw, then look at the
site.

Oh, and that thing was made several years ago now...things have advanced
even since then and will continue to do so.  If you base your future plans
on what is possible today, you will always be behind the curve...look ahead
at where things are going...that's the SF way!

-- Mike B.
--
Maybe if we made a giant badger....