At the SMPTE meeting, Peter Fasciano (a co-founder of Avid), Rob Jaczko (from the music department at Berklee) and Ron Labbe (studio3d.com), talked about the history, issues and lexicon of 3D capture and playback.
Much of it, to be honest, I didn’t completely follow, or it went by so fast I couldn’t write it down, but here are the key things I came away with:
Interocular distance

This is the distance between your eyes. In theory, the distance between the camera lenses should match it (the distance between the camera lenses is referred to as the interaxial distance).
While adjusting this distance will result in “distorted” 3D images, that distortion can be used for effect: moving the lenses closer together makes what you are shooting appear larger, while moving them further apart makes it appear smaller. As an example of the latter, Peter talked about shooting a 3D still of the Grand Canyon with an interaxial distance of 500 feet, and ending up with what looked like a photo of a Grand Canyon ashtray.
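A rough rule of thumb (my own back-of-envelope interpretation, not something stated at the session) is that a hyperstereo shot miniaturizes the scene by about the ratio of the camera baseline to the human interocular distance:

```python
# Back-of-envelope miniaturization estimate for hyperstereo shots.
# Assumption (mine, not from the talk): the scene appears scaled down
# by roughly (interaxial baseline) / (human interocular distance).

HUMAN_INTEROCULAR_IN = 2.5  # typical adult eye separation, inches

def miniaturization_factor(baseline_inches):
    """How many times smaller the scene appears to the viewer."""
    return baseline_inches / HUMAN_INTEROCULAR_IN

# Peter's Grand Canyon example: a 500-foot baseline.
baseline = 500 * 12  # convert feet to inches
print(miniaturization_factor(baseline))  # 2400.0 -> canyon reads as an ashtray
```

A factor in the thousands is consistent with the canyon collapsing to tabletop scale.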
Convergence vs parallel
When you have two lenses, they can either be aligned parallel, or they can converge on a point of focus. It appears that parallel is the preferred format, though curiously the demo (see clip below) used converged cameras. Interestingly, Avatar was shot using convergence, and the new Panasonic 3D HD camera has converging lenses.
My impression is that the advantage with convergence is that you have your final results right away (i.e. the live demo was easier to setup and produce results with converged cameras) whereas parallel requires more post processing, but gives you more options in post processing.
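One common post step for parallel rigs (my illustration; the panel didn’t show any code) is horizontal image translation, or HIT: sliding one eye’s image sideways to choose where the convergence point, and hence the screen plane, sits. A minimal sketch on a single scanline of pixel values:

```python
def horizontal_image_translation(row, shift):
    """Shift one eye's scanline horizontally by `shift` pixels,
    padding the exposed edge with black (0). Sliding the two eyes'
    images toward or away from each other moves the whole scene
    through the stereo window -- this is how a parallel rig's
    convergence is set in post."""
    if shift >= 0:
        return [0] * shift + row[: len(row) - shift]
    return row[-shift:] + [0] * (-shift)

right_eye_row = [10, 20, 30, 40, 50]
print(horizontal_image_translation(right_eye_row, 2))   # [0, 0, 10, 20, 30]
print(horizontal_image_translation(right_eye_row, -1))  # [20, 30, 40, 50, 0]
```

Because the shift is applied in post, you can change your mind about window placement after the shoot, which matches the “more options in post” trade-off above.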
Ron Labbé demos a stereoscopic camera setup using two video cameras and two stacked projectors with polarizing filters.
The 3D Window
In 3D there’s a “window” which is the plane of the screen; things are either behind or in front of that window.
If there’s a lot of “stuff” behind that window, then your eyes want to diverge, and that causes headaches. The “difference” in position for a background object between the left and right images (i.e. the horizontal offset between the left image and the right) should be no more than one percent of the image width (for a 1920-pixel-wide image, about 19 pixels).
If something is too close (too far in front of the window), it causes exophoria.
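The one-percent guideline is easy to turn into a sanity check. This is a sketch of my own, assuming the budget applies to horizontal offset measured in pixels:

```python
def max_background_offset(image_width_px, budget=0.01):
    """Maximum left/right horizontal offset for background objects,
    per the ~1% guideline mentioned at the session."""
    return image_width_px * budget

def within_parallax_budget(offset_px, image_width_px):
    """True if a measured left/right offset stays inside the budget."""
    return abs(offset_px) <= max_background_offset(image_width_px)

print(round(max_background_offset(1920)))  # 19 -- the figure from the talk
print(within_parallax_budget(25, 1920))    # False: risks eye divergence
```

The same check could be run per shot in a QC pass, flagging anything that would force the audience’s eyes to diverge.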
Hollywood thinks it can present 3D movies better than home theater. This is partly because people tend to sit (proportionally) further away from the screen at home, which limits the 3D effect. Unfortunately, I think the history of television shows that people are quite happy to stay home for a poor copy of what they can get in the theater.
Issues when shooting
There’s a language to 3D movies that 2D movie makers are unfamiliar with. Here are just some of the issues that were raised at the two sessions I attended:
- You shouldn’t shoot down, which – as was pointed out – is a problem for sports where many camera positions are above the action
- Fast pans, fast zooms, and fast motion don’t work well
- Placement of the “window,” and of things in front of and behind it, is more complicated than shooting in 2D
- The further an object is from the viewer, the smaller the 3D effect should be. There may be no visible 3D effect at all if something is more than ~50 feet from the camera
- Much of Avatar isn’t really in 3D, or has no 3D effect visible (at least according to some people who saw the movie)
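The falloff of the 3D effect with distance follows from simple geometry. For a parallel rig, on-sensor disparity is roughly d = f·b/Z (focal length times baseline, divided by subject distance), so it shrinks as 1/Z. A quick sketch with illustrative numbers of my own choosing, not figures from the panel:

```python
def disparity_px(baseline_m, focal_px, distance_m):
    """Approximate on-sensor disparity for a parallel stereo rig:
    d = f * b / Z, with focal length already expressed in pixels."""
    return focal_px * baseline_m / distance_m

# Assumed rig: 65 mm (eye-width) baseline, focal length ~1500 px
# (roughly a 1080p camera with a normal lens) -- illustrative only.
for z in (2, 10, 20, 50):
    # Disparity drops roughly tenfold from 2 m to 20 m, approaching
    # the flat, "no 3D effect" regime at long distances.
    print(z, disparity_px(0.065, 1500, z))
```

At 50 feet and beyond, the left/right offset collapses toward a couple of pixels, which is why distant objects read as flat.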
One interesting aside: they showed some different “kinds” of 3D: stereo pair, anaglyph, polarization, and “wigglers.” Wigglers can be interesting. In wiggler stereoscopy, two or more images are displayed in quick succession, letting the viewer perceive relative motion and a sense of depth. You can see an example here: Wiggler example
After seeing the wiggler, I wondered whether the appeal of the constant “jerky-cam” effect used in movies like The Bourne Ultimatum isn’t some crude derivative of the wiggler effect; but on reflection, most of those shots just change what the camera is pointing at rather than moving the camera’s position. That suggests the appeal is simply that it creates discomfort in the audience. But I wonder whether jerky-cam would be even more unsettling in 3D?
Tomorrow: Sony’s take on things in Part II