Text for the Estaleiro Book, Vila do Conde Workshop Series
Film Post-production: more than the sum of bits and bytes
by Dirk Meier
In the past decade we have witnessed the transition from film-originated movie production to digital acquisition. Some readers may argue that this transition started much earlier, and most believe that it is not over yet. On the acquisition side there has been a great deal of discussion, at least over the past ten years, about when the analogue era will end. This is mainly because there are good reasons to keep analogue film alive as an option for shooting movies.
Conversely, digital postproduction completely took over many years ago and hardly anyone argues about that transition. It seems we only gained options and quality in moving from the flatbed editing table, optical printers and printer-light correction to file-based digital postproduction. As a colorist working on computer-based digital colorgrading systems, I'm not going to argue with this. I do, however, believe that along this transition we have lost something. As with every new technology that replaces an older one, some skills and knowledge are lost. I know of film schools that make their editing students cut at least one movie on a flatbed editing table, and their camera students develop at least some stills themselves in a darkroom. Seeing how some, mostly young, people working on film sets treat flash cards and hard disks holding the original camera files, I believe this kind of training is a good idea.
The omnipresence of digital images from mobile phones and ever-smaller digital cameras on laptops and tablets diminishes the significance of the image, and with it the respect shown to digital media on a professional film production. Just because you can instantly replay a take in the camera, download and copy it losslessly, and downsample, grade and email it from set doesn't make it any less valuable.
The same is true for image postproduction. Technically, you could today edit, finish the VFX, grade and master a feature film in 2K resolution on a single powerful laptop and a handful of hard disks. Compare that to building a film lab, developing 30,000 m of 35mm film and making workprints before you could even start syncing sound. But as we have already learnt, just because everyone runs editing software on his or her home computer, not everyone is an editor. The variety of options doesn't make things easier; it makes them complex. This is the situation today: the analogue-only workflow is gone and instead we have innumerable workflow options, a multitude of analogue (mechanical) and digital cameras multiplied by dozens of internal and external recording options, RAW files with many ways of interpretation (development), at least three aims and methods for colorgrading, and many, many deliverables: digital, file-based, tape-based, analogue film prints, 2D and 3D at different brightness levels… just to name a few.
It is because of these variables that skilled and experienced people are needed for the different steps of postproduction. I am absolutely convinced that a good postproduction supervisor actually saves money on a production and/or improves the quality of the movie. The same applies to DOPs, directors, producers and editors, who should all have a common, basic understanding of the image postproduction process and the possibilities of colorgrading. Grading in particular should be familiar territory for cinematographers, given the amount of image manipulation that is possible today.
During our three-day workshop at the end of May 2012 we spoke a lot about the basics of bits and bytes and how our analogue world actually gets “digitized”. A basic understanding of so-called color subsampling, compression and data rates allows a cinematographer to make a better decision about the camera and/or recorder he or she chooses to use on a film production. And this becomes the foundation of the options we have in colorgrading. The best grading system cannot recover image information that was lost during recording, whether through compression or through over- or underexposure. In the analogue days you needed to know the peculiarities of different filmstocks. Today, unfortunately, your “filmstock” is a combination of many factors interacting with each other: the camera with its particular sensor and in-camera processing, the recorder with its data rates, color sampling and codecs, and, with RAW cameras, even the software and settings used in postproduction to “develop” your digital files.
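To put some numbers behind these recording choices, here is a quick back-of-the-envelope sketch; the frame size, frame rate and bit depth are illustrative values, not tied to any particular camera:

```python
def data_rate_mbps(width, height, fps, bit_depth, samples_per_pixel):
    """Uncompressed video data rate in megabits per second."""
    bits_per_second = width * height * fps * bit_depth * samples_per_pixel
    return bits_per_second / 1e6

# Average samples per pixel for common chroma subsampling schemes:
# 4:4:4 keeps full chroma (3 samples per pixel), 4:2:2 halves the chroma
# horizontally (2 samples), 4:2:0 halves it in both directions (1.5 samples).
for name, spp in [("4:4:4", 3.0), ("4:2:2", 2.0), ("4:2:0", 1.5)]:
    print(f"{name}: {data_rate_mbps(1920, 1080, 24, 10, spp):.0f} Mbit/s")
```

At 10 bits and 24fps in full HD this already comes to roughly 1.5 Gbit/s uncompressed for 4:4:4, and exactly half of that for 4:2:0, which is why recorders subsample and compress in the first place.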
We then discussed a few current digital cameras, such as the Arri Alexa, the Red One and Sony's F3, with their actual specs, comparing official specifications with actual results from my experience and taking a closer look at what is fact and what is myth. Even the Alexa quotes higher pixel numbers on its spec sheet than it is able to use for image capture. All three cameras claim, for example, to have a Super 35-sized sensor, but the Red One only covers 89% of the Super 35 area. In terms of field of view this makes almost the difference between a 32mm and a 36mm lens.
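If the coverage figure is read as a linear width factor of 0.89 (an assumption, but one that matches the 32mm vs 36mm comparison above), the equivalent focal length can be sketched like this:

```python
def equivalent_focal_length(focal_mm, coverage):
    """Focal length that gives the same horizontal field of view on a
    full-size sensor as `focal_mm` does on a sensor covering only
    `coverage` (a linear fraction, e.g. 0.89) of the full width."""
    return focal_mm / coverage

print(equivalent_focal_length(32, 0.89))  # ≈ 35.96, i.e. nearly a 36mm lens
```

The same crop-factor arithmetic applies to any sensor that is smaller than the format its lenses were designed for.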
I also tried my best to explain the ubiquitous term “LUT”. The Look-Up Table is actually a very helpful tool in many areas of image processing. It's a translation table that changes image values, e.g. to hold a simple color grade. That's why people load LUTs into cameras: so that on set you don't have to watch flat, desaturated RAW images but a nicely contrasty and colorful picture. The same applies in the editing system, so that you don't have to edit looking at the RAW images all the time. And in grading we can use a LUT to emulate a certain print stock. It is a versatile tool, and many cinematographers teach themselves the basics of a grading application in order to create their own LUTs.
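In its simplest form, a 1D LUT is just a precomputed table that maps every possible input value to an output value. A minimal sketch for 8-bit values, with an arbitrary gamma curve standing in for a real grade:

```python
# Precompute the transfer curve once into a 256-entry table.
# The gamma value here is an illustrative choice, not any standard.
GAMMA = 2.2
lut = [round(255 * (i / 255) ** (1 / GAMMA)) for i in range(256)]

def apply_lut(pixels, table):
    """Map each 8-bit pixel value through the look-up table."""
    return [table[p] for p in pixels]

print(apply_lut([0, 128, 255], lut))  # → [0, 186, 255]
```

Applying the grade is then a cheap per-pixel table lookup rather than re-evaluating the curve, which is exactly what makes LUTs fast enough to run live in a camera or on-set monitor.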
LUTs can also be used in color management. It was here that we entered a new and weighty topic that is relevant to everyone dealing with images. I introduced the basics of how color management works and we successfully calibrated a few of the participants' laptops, since all color grading becomes worthless when the display you are grading on is not calibrated to a known standard. The minimum standard, which is at the same time an almost worldwide standard, is ITU Recommendation 709: a standard describing the actual colors and the “gamma” (roughly, the contrast) for high-definition television. When your display matches this standard, it is calibrated to Rec. 709 and you know that your images will look more or less the same on any other display that is calibrated to Rec. 709.
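For reference, the Rec. 709 transfer characteristic, the “gamma” mentioned above, is defined piecewise: a linear segment near black and a power function above it. Sketched in Python:

```python
def rec709_oetf(linear):
    """ITU-R BT.709 opto-electronic transfer function:
    scene-linear light (0..1) -> non-linear signal value (0..1)."""
    if linear < 0.018:
        return 4.5 * linear                    # linear segment near black
    return 1.099 * linear ** 0.45 - 0.099      # power-law segment

print(rec709_oetf(0.18))  # 18% mid-grey encodes to roughly 0.41
```

This is the camera-side encoding curve; what a calibrated display does is, in effect, agree on the inverse of it, so that mid-grey looks like mid-grey everywhere.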
From the technical basics to the art of grading...
Knowing the technology and its possibilities allows creative people to use it best and bring their visions to life. As a colorist I depend on the images I'm given by the DOP, the filmmaker and, increasingly, the VFX people. On no other movie have I yet experienced such a successful collaboration with regard to the final images as on “AntiChrist”.
For the Lars von Trier movie I met the DOP Anthony Dod Mantle weeks before the actual shooting, when he was doing test shots with Lars and with the 2nd unit DOP and digital camera supervisor Stefan Ciupek. After Anthony showed me a whole collection of mood images, ranging from old paintings of Dreyer to high-speed photography, we spent a whole day in a grading suite playing around with the footage. And for the sheer fun and joy of it we explored the possibilities of the test footage they had shot with color filters, shift-and-tilt lenses and in slow motion.
For me this laid the foundation to believe in an artistic approach that went beyond the usual terms of enhancing the images. So I graded the dailies during production and worked as a team with Stefan Ciupek on the final grading. He had graded quite a few movies with Anthony before this one and was setting the pace.
But even when we started working in alternating shifts to get through the movie before the Cannes deadline, we spent a lot of time together in the grading suite with Anthony. Some of the so-called “Visualisation Shots”, like Charlotte Gainsbourg walking through the ferns or over a bridge, shot in super slow motion at 1000 frames per second, were just so beautiful, fascinating and important for the story that we spent hours grading these single shots.
Anthony loves to look at the projection screen in the grading suite as the canvas of a painter, and he actually likes to stand in front of it and show us colorists where and how to apply our “brushes”. These particular shots are still my favorite examples of the artistic collaboration between a director, a DOP and the VFX supervisor, in this case Peter Hjorth.
It took a strong visual idea and concept in the script, fulfilled by the right location, framing and lighting on set, and seamless manipulation in VFX, to enter the grading suite in a state that allowed for the final creative touches. Only then can grading go beyond the usual enhancement, such as focusing the viewer's attention or plainly fixing issues of exposure or color matching. That doesn't mean all the other movies I worked on were uninteresting; most of them simply didn't explore the possibilities of color grading in a similarly extensive way. Of course, not every story is suited to such an approach.
So what comes next…
It seems we have already fully accepted that virtually everything is possible now in movie making. Digital image manipulation has no limits anymore, as long as you have enough money to buy render time on the big VFX houses' render farms. The average producer's reality looks different, and luckily not every story needs artificial characters and VFX spectacle to be told. Knowing what is already possible and affordable on a small budget can always help to tell the story. Simply knowing how best to handle your digital files from shooting to delivery helps every producer, director and cinematographer do their job. I see an increasing understanding of what a good grading offers every movie, but I personally wish it were taken more into account during planning and shooting.
The tools for previsualisation are there, on-set look-creation tools are available for free, and good color grading software is also available for free. All of this can help to improve the quality of your movie, as long as you are willing to hire skillful people and educate yourself.
Technically speaking, are we there yet? Is there anything left to improve? Besides the obvious trends of shooting stereo 3D (or converting afterwards) and going up in resolution again to 4K, there are some interesting developments. The “Image Interchange Framework” with its new colorspace called ACES is probably my favorite development at the moment. It allows colorists to mix and match footage from different sources more seamlessly: film and digital, and even different digital cameras. It allows easier integration with VFX and easier exchange between post houses, and it gives you a future-proof master version for archiving. It comes at the price of bigger file sizes at the same resolution and greater processing demands in the grading systems. But I'm convinced this is the future.
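To get a rough feel for that file-size price, compare an uncompressed 10-bit DPX frame (three 10-bit channels packed into a 32-bit word per pixel) with an uncompressed 16-bit half-float OpenEXR frame, the pixel format commonly used for ACES; headers and compression are ignored here, and the 2K frame size is just an illustrative choice:

```python
def frame_bytes(width, height, bytes_per_pixel):
    """Uncompressed image payload of a single frame, in bytes."""
    return width * height * bytes_per_pixel

w, h = 2048, 1080                  # an illustrative 2K frame
dpx_10bit = frame_bytes(w, h, 4)   # 3 x 10-bit RGB packed into 32 bits
exr_half = frame_bytes(w, h, 6)    # 3 x 16-bit half-float RGB
print(exr_half / dpx_10bit)        # → 1.5
```

So at the same resolution the half-float master is about 50% bigger per frame before any compression, and every node in the grading pipeline has to push that much more data.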
And then there are UHD, HDR and HFR, i.e. Ultra High Definition, High Dynamic Range and Higher Frame Rates. The recently standardized successor of HD goes up to resolutions of 7680 by 4320 pixels. Well, I hope my eyes will hold up long enough to get some kind of improvement out of a 4K digital projection, at least when I'm forced to sit in the front rows of a huge theater. I don't see the majority of cameras and lenses, even for cinema production, delivering enough actual resolution for 4K today.
HDR as a special technique will eventually become obsolete, because today's sensors allow a dynamic range in a single exposure that was only achievable with HDR a couple of years ago, and that trend just goes on and on. Higher frame rates, I believe, are more a matter of habit. Once the videogame generation makes up the majority of cinemagoers, they'll be happy with 48 or 60fps shooting and projection. But that won't be for me.
What I wouldn't mind is having the choice, maybe in editing, maybe in grading, to smoothly vary the frame rate within a shot. When a pan is just too fast and judders annoyingly, I'd like to go up to a higher frame rate without the audience noticing. You could then use this artistically as well, separating, to give a simple example, a clean, superficial modern world played back at 60fps from a juddering, frantic 24fps world in a run-down place.
And when shooting in stereo 3D, I at least want to be able to select objects in the grade by their depth. Why do I still have to draw a shape around a body or a face? I just want to click on it and have it automatically selected and tracked throughout the whole shot. So there is still a lot of room for improvement.
(copyright Dirk Meier, October 2012)