Catching the Light – Film Technology of the Future

Shooting is finished. During post-production, errors are discovered that had previously gone unnoticed. With light-field processing, this is no problem: the technology opens up new possibilities for film production and allows scenes to be modified even after the cameras have stopped rolling.


The scene is in the can. The film crew has packed up its cameras, rails, floodlights, and monitors. The actors are already on the next set. All takes are safely stored on digital media, ready for post-processing and editing. The director and cameraperson inspect the material and, together with the editor, select the scenes for the initial rough cut. It is precisely here that issues come to light which, for all the meticulous planning, create new doubts in the cold light of the screening room and the post-production suite. Did the camera really travel correctly during the tracking shot? Were there little wobbles and moments of soft focus that should not be in the final cut? When working with one or two cameras, this often means that nothing can be changed, because reshooting would be too expensive and too much effort.

Changes are possible even after shooting

This is exactly where light-field technology comes in: a new kind of tool for recording and, above all, for post-production. The technology uses a variety of different perspectives, recorded on set with multi-camera systems, to alter and creatively adapt sequences. The perspective of a scene can be changed, for example, the depth of field can be shifted, and effects such as virtual tracking shots can be integrated after the fact. All of this, of course, has long been everyday practice in studios for computer-generated scenes. In the Moving Picture Technologies department at Fraunhofer IIS, scientists have been working for many years on applying light-field technology to live-action movie scenes. The head of the department, Dr. Siegfried Foessel, and his colleague Dr. Frederik Zilly are fascinated by the technological and creative challenges that a practically viable solution entails.

"The starting point for our thinking was the many discussions with our customers and partners in the film industry in Germany, Europe, and Hollywood about how expensive, time-consuming pick-ups and re-shoots could be avoided and how unrepeatable scenes could be used in new film versions," recalls Siegfried Foessel. "Increasingly, it was the cost and the generally unsatisfactory results when merging real scenes with virtual, digital effects that drove the search for a solution." Foessel's team has been working closely with the technological and creative side of the film industry on the transition to digital technology since 1998. One of the first digital film cameras suitable for practical use, the ARRI D20/21, was developed in his engineers' laboratories, as was the internationally used easyDCP post-production software, he proudly explains while looking at the screen in the in-house cinema.

3D as a catalyst for new technology

Meanwhile, Frederik Zilly and his team are presenting the first production-ready test versions of post-production tools for light-field and multi-camera recordings. A physicist by training, Zilly has been working on camera set-ups for 3D and special effects since 2007. He has been looking for a way of combining various levels of an image, or various views of a scene, with each other as efficiently as possible in order to create new scene views. "For multi-camera recordings, the so-called depth maps that have to be calculated for each scene are decisive," explains Zilly excitedly. "You need them to be able to generate virtual views in high quality. In most cases this procedure is extremely time- and labor-intensive, and it harbors multiple potential sources of error. Yet depth maps are an absolute prerequisite for effects such as rack focuses, virtual tracking shots, and combining reality with virtual effects. On today's film sets, there is more than just one main camera in use. Using the variety of recordings and views from these cameras for creative post-production work is where light-field technology and light-field processing come into play."
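To make the role of a depth map more concrete, here is a deliberately naive sketch of how disparity – and hence depth – can be estimated from two rectified camera views by block matching. It is a generic textbook approach shown purely for illustration; it is not the method used by the Fraunhofer IIS tools, and the function name and parameters are invented for this example.

```python
import numpy as np

def disparity_map(left, right, max_disp=64, block=7):
    """Naive block-matching disparity between two rectified grayscale views.

    left, right -- 2D float arrays of equal shape (a rectified stereo pair)
    Returns an integer disparity per pixel; for calibrated cameras, depth is
    proportional to focal_length * baseline / disparity.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # Search candidate matches along the same row in the right view.
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((patch - cand) ** 2)  # sum of squared differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Production pipelines refine such raw estimates considerably, because – as Zilly notes – errors in the depth map propagate into every virtual view generated from it.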

Light field – a century-old technology conquers 21st-century film sets

The technique of recording all the rays of light in a scene, and not only those that strike the lens of the main camera, is not actually a new approach – it is just that digital, synchronizable technology has now made it feasible. The French scientist Gabriel Lippmann was working on the light field as early as 1908. "In purely mathematical terms, you define a light field by means of a function that describes the quantity of light falling on each point of a three-dimensional space from all directions," explains Frederik Zilly as he scrutinizes graphs and point clouds on his office monitors. "With autostereoscopic monitors, which allow users to watch 3D movies without special glasses – and whose popularity is spreading quickly, particularly in the high-resolution 3D sector – you need to generate an extremely high number of views of a scene, the more the better. The more complete the set of views, the freer viewers are to change position in front of the screen without perceiving gaps or blurring in the picture."
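Written out, this is the classical plenoptic-function formulation found in the light-field literature; the notation below is the standard textbook one, not necessarily the exact form used at Fraunhofer IIS.

```latex
% The 5D plenoptic function: radiance L observed at a point (x, y, z) in space,
% arriving from the direction parameterized by the angles (theta, phi).
L = L(x,\, y,\, z,\, \theta,\, \phi)
```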

© Photo Fraunhofer IIS/David Hartfiel

Fraunhofer IIS’s technology saves the entire light information for a scene, as symbolized here in the picture by the goalkeeper cradling the illuminated ball.

Program package for post-production of light-field recordings.

»THE CHANGE OF PERSPECTIVE IS THE STAR ATTRACTION OF LIGHT-FIELD TECHNOLOGY.«

Based on these experiences, the results were also applied to shooting live-action movie scenes. To illustrate, Frederik Zilly points to a miniature scene with Lego figures in his studio, which is used for stop-motion films. The stationary stand holds 16 HD cameras in a 4×4 array. After a pre-processing step to synchronize the cameras, all 16 individual recordings are inspected and synthesized. Next, intermediate views are generated and the scenes are color-matched. Embedded in a professional post-production tool, various adjustable effects then become available via a plug-in. With a few simple maneuvers (which in fact conceal a large number of sophisticated algorithms), Zilly shows how he can shift the focus from the front to the back without making any adjustments to the lens settings.

The star attraction, he explains, is the change of perspective. Conventional post-production software usually relies on zooming to simulate a virtual tracking shot. By contrast, the Fraunhofer IIS software calculates a real parallax from the various views, creating the impression of an actual dolly shot. This is particularly important when objects or people pass close to the viewer. To give a specific movie example: a driver sees a pedestrian on the near sidewalk while driving past. If the pedestrian's apparent change in position is not taken into account as the driver's own position changes, we perceive the situation as jarring and unrealistic. It is precisely these kinds of possibilities and effects that make light-field technology appealing for productions that use green screens to prerecord complex scenes for subsequent use. Light-field methods allow pre-produced scenes to be adapted directly to the recordings in front of the green screen. Today this is done only with faraway scenes in order to minimize disruptive artifacts; with light-field technology and intelligent algorithms, close-up scenes become possible as well.
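The refocusing effect Zilly demonstrates can be illustrated with the classic shift-and-add idea: each camera's view is shifted according to its position in the array and a chosen focal plane, then all views are averaged. The sketch below assumes rectified, color-matched views and uses invented parameter names; it is a simplified stand-in, not the Fraunhofer IIS implementation.

```python
import numpy as np

def refocus_shift_and_add(views, baselines, disparity):
    """Synthetically refocus a camera-array capture by shift-and-add.

    views      -- array of shape (N, H, W, 3), e.g. N = 16 for a 4x4 rig,
                  rectified and color-matched
    baselines  -- list of N (dx, dy) camera offsets relative to the array center
    disparity  -- disparity (pixels per unit baseline) of the plane to focus on;
                  larger values pull the synthetic focal plane closer to the rig
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for view, (dx, dy) in zip(views, baselines):
        # Shift each view so the chosen focal plane aligns across all cameras,
        # then average; aligned depths stay sharp, everything else blurs out.
        shift_x = int(round(dx * disparity))
        shift_y = int(round(dy * disparity))
        # np.roll wraps at the image borders; a real tool would crop or pad.
        acc += np.roll(view, shift=(shift_y, shift_x), axis=(0, 1))
    return (acc / len(views)).astype(views.dtype)
```

Varying the disparity parameter moves the synthetic focal plane from front to back – the effect described above – without touching any lens setting.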

First pilot clip in conjunction with Stuttgart Media University

In conjunction with Stuttgart Media University, Frederik Zilly has already tackled real pilot productions. With the help of students, a pilot clip was recorded using a camera configuration made up of nine HD cameras and a high-resolution cine camera on a mirror rig. Processing of the scenes and post-production were then carried out using the tools developed by Zilly and his team. The result was a clip called Coming Home, which featured real actors, various special effects, green-screen recordings, and additional virtual objects, all of which put the algorithms through their paces under the tough, real conditions of an ordinary film set. Particularly impressive was the efficient creation of depth maps, which allowed the students to make depth-based color corrections in post-production – precise adjustments such as changing only the background colors and lighting. Moreover, relighting – adding further light sources to a scene after the fact and editing the existing lighting, including side effects such as shadows and reflections – is also possible thanks to light-field algorithms.
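The depth-based color correction mentioned above can be pictured as a grading operation keyed by the depth map: only pixels beyond a chosen distance are altered. The following sketch is illustrative only – the threshold, the simple per-channel gain, and the function name are assumptions made for this example, not part of the tools described in the article.

```python
import numpy as np

def grade_background(image, depth, threshold, gain=(0.9, 0.95, 1.1)):
    """Apply a color correction only to pixels farther away than `threshold`.

    image     -- (H, W, 3) float array with values in [0, 1]
    depth     -- (H, W) depth map in the same units as `threshold`
    threshold -- depth beyond which the grade is applied (the background)
    gain      -- per-channel multipliers; here a slight cool tint as an example
    """
    mask = depth > threshold                        # background selected via the depth map
    graded = np.clip(image * np.asarray(gain), 0.0, 1.0)
    out = image.copy()
    out[mask] = graded[mask]                        # foreground pixels stay untouched
    return out
```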

A look into the future

Light-field approaches hold huge potential for the future of moviemaking. Although the technology is still in its infancy as far as professional motion pictures are concerned, influential producers and cinematographers are taking an ever keener interest. "As one of the first developers and providers of professionally usable software, we've attracted lively interest," says Siegfried Foessel. "More and more productions are making inquiries about possible collaborations with us to create the first pilot productions. In this way, they want to play a role in advancing the technology." How long will it be before the first production using Fraunhofer technology is shown on our TV screens or in movie theaters? "Multi-camera or light-field technology is always used as a complement to one of the main cameras; accordingly, a pure light-field movie is unlikely ever to happen," explains Siegfried Foessel. However, Zilly and Foessel both expect light-field technology to mature into a serious new processing method in movie and TV production over the next three to five years. To be continued!