I promise, there will be a blog entry – soon – on production. But I wanted to delve into more detail on something that’s very important, and I think underreported: preparing your film for the editor.
In the previous entry I focused on the big post-production picture. Today I’m going to stick to the first three steps I outlined: transcoding, synching, and logging the footage. The goal is to get acquainted with the film you’ve shot (as opposed to the one in your head), save your editor unnecessary headaches (and you unnecessary time and money) hunting for footage, and get your brain thinking about sound, visual effects, titles, music, and other post elements.
Workflow: When To Do This
On a big enough film, your script supervisor would make the continuity book, the 2nd AC and the mixer would write reports, and your assistant editor would transcode, log, and synch, all while you’re shooting. The advantages are fairly obvious: you’ll find out about coverage or technical problems while you’re shooting, and you’ll get to a rough cut that much sooner. However, on micro-budget films, this is a comparative luxury.
On Found In Time, we had no script supervisor, the sound mixer was doing about three other jobs – though he did take very good notes – and we didn’t have an editor in place during the shoot. I’m assuming that many of you are in a similar situation.
Transcoding
The Canon 5D records to an H.264 QuickTime-playable file. H.264 is a highly compressed format that somehow retains a lot of detail despite throwing out an enormous amount of picture information. Part of how it does this is by storing frame data in a long-GOP format. GOP = Group Of Pictures. Essentially, what the codec does is analyze a group of frames (in most cases, 6 or 15). It stores the first frame, then stores ONLY the DIFFERENCES between the first frame and the subsequent frames within the group.
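If a toy example helps, here’s the delta-storage idea in a few lines of Python. To be clear, this is a cartoon of the concept – real H.264 also uses motion vectors, bidirectional frames, and a lot of other tricks – but it shows why pulling out a mid-group frame means reconstruction work:

```python
# Toy illustration of long-GOP storage (a cartoon, not real H.264):
# keep the first frame whole, store only differences for the rest.
import numpy as np

def encode_gop(frames):
    """frames: a list of same-shaped numpy arrays (think grayscale images)."""
    keyframe = frames[0]
    deltas = [frame - keyframe for frame in frames[1:]]
    return keyframe, deltas

def decode_frame(keyframe, deltas, i):
    """Rebuilding frame i means re-applying its delta to the keyframe --
    that reconstruction is the work your computer does at every mid-GOP cut."""
    return keyframe if i == 0 else keyframe + deltas[i - 1]
```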
There’s a LOT more involved than this, but here’s the main point: editing H.264 footage can be difficult. Your cuts are probably NOT going to be on the first frame in a group, which means the computer will have to analyze and rebuild frames every time you cut picture. The result can tax your system, leading to dropped playback frames and a lot of rendering time. It’s also more difficult to do a final conform, render effects, etc. In other words, H.264 is a great origination and online distribution format, but you don’t want to edit with it.
Transcoding the footage from the original H.264 files to an I-frame format (which compresses and stores each frame individually) is thus an easy decision. But there are several software programs that can do the trick:
MPEG Streamclip is preferred by many, and with good reason: it’s fast, user-friendly, free, and can batch-process clips very easily. But the quality of the resulting clips wasn’t quite as good as we were hoping for. It also strips out the original timecode from the file, substituting its own.
Rarevision’s 5DtoRGB, on the other hand, is supposed to do the best overall job in terms of image quality, but lacks a batch feature (at this time; it’s still under development). It also takes the longest to transcode.
We considered Compressor, but we’ve had problems with batch transcodes in Compressor and haven’t been super happy with the results. After going on Creative Cow and talking to a few folks, we decided on Canon’s own Final Cut Pro plug-in, the EOS Plugin-E1. It produced decent results, processed batches of clips without any hiccups, didn’t take too long, and retained the original clip timecode.
The next decision: what to transcode TO. The obvious choice for editing in Final Cut was Apple ProRes, but ProRes comes in several flavors, ranging from Proxy (small file size/lower quality) to HQ (huge file size, better quality). After thinking about it, trolling the forums, and consulting with some experts, we decided on ProRes LT, which is somewhere between the two ends of the spectrum. The data rate is approximately 100Mbps, roughly the same as DVCProHD, and about 2.5x the 5D files’ 40Mbps. In practical terms, this meant we were getting something very, very good – something we could cut into rough shape if we needed to show investors or assemble a festival screener – but we wouldn’t kill our hard drive.
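(A side note for the command-line inclined: these days you could get a comparable batch transcode out of the free ffmpeg tool, whose prores_ks encoder maps profile 1 to LT. The Python sketch below is a hypothetical alternative – the drive paths are made up, and we used the Canon plug-in, not this.)

```python
# Hypothetical batch H.264 -> ProRes LT transcode via ffmpeg.
# We used Canon's FCP plug-in; this is just a command-line equivalent.
import subprocess
from pathlib import Path

SRC = Path("/Volumes/CameraMasters/DCIM")   # made-up paths --
DST = Path("/Volumes/EditDrive/ProResLT")   # adjust to your drives

for clip in sorted(SRC.rglob("*.MOV")):
    out = DST / clip.relative_to(SRC).with_suffix(".mov")
    out.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "1",  # profile 1 = ProRes LT
        "-c:a", "pcm_s16le",                     # uncompressed 16-bit audio
        str(out),
    ], check=True)
```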
The long-term plan, once the film has been picture-locked, is to note the selects (the clips that make it into the locked picture), and re-transcode those camera originals to ProRes HQ using the 5DtoRGB utility.
Setting Up Final Cut For Transcoding
At this point, we set up a new Final Cut project with a sequence default of 1920×1080 23.976p, with 48KHz 16-bit stereo sound. During the shoot the DP created folders by day, adding lettered folders if he had to copy more than one card per day (so we have Day1, Day1b, Day2, etc. folders on the hard drive). We started out by creating camera reel bins to mirror the originals. Within each bin, I created three sub-bins: Scraps (for NG or goofing-off material), Video (for source video clips), and Audio (for source audio clips).
We also created a database in FileMaker (which is cross-platform, by the way), to capture information on each clip. Initially, we just dumped a directory listing of all the clips into a text file then imported that into FileMaker, so we’d have a list of the 840 video clips and 735 sound files (we had a good number of MOS takes).
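(The directory dump itself is a one-screen script; here’s a hypothetical Python version. The drive path is an assumption – point it wherever your masters live.)

```python
# Sketch: dump every video/sound clip on the drive into a CSV
# that FileMaker can import. The root path is hypothetical.
import csv
from pathlib import Path

ROOT = Path("/Volumes/CameraMasters")

with open("clip_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "folder", "size_bytes"])
    for clip in sorted(ROOT.rglob("*")):
        if clip.suffix.lower() in (".mov", ".wav"):
            writer.writerow([clip.name, clip.parent.name, clip.stat().st_size])
```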
During the transcoding process itself, we renamed each clip to "scene-shot-take" format, then also filled in the scene, shot/take, reel (camera reel), angle, and loggingNotes fields. We went in shoot order (as opposed to scene order), and limited the batches to one or two scenes’ worth of material, depending on the number of individual clips. The entire process took about four days, and was highly automated. A good tutorial on it is on Canon’s own site. TWO THINGS TO NOTE: when you name the clip (scene-shot-take), the utility actually renames the transcoded QuickTime file. So if you ever want to go back to your camera masters for retranscoding, make sure to keep a list of the original filename and the new one.
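A few lines of scripting can keep that original-to-new list for you. Here’s a hypothetical sketch – the rename-map CSV is my own convention, not something the plug-in produces:

```python
# Sketch: rename a camera original to scene-shot-take and log
# old -> new, so you can always get back to the camera masters.
import csv
from pathlib import Path

def rename_clip(clip, scene, shot, take, log_writer):
    """Rename the file and record the mapping for later retranscoding."""
    new_name = f"{scene}-{shot}-{take}{clip.suffix}"
    log_writer.writerow([clip.name, new_name])
    clip.rename(clip.with_name(new_name))

with open("rename_map.csv", "a", newline="") as f:
    writer = csv.writer(f)
    # made-up example path and scene/shot/take numbers
    rename_clip(Path("/Volumes/CameraMasters/DCIM/Day1/MVI_0042.MOV"),
                scene="23", shot="A", take="3", log_writer=writer)
```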
Also, the utility REQUIRES that all the clips be inside of a folder called DCIM off the root of the hard drive. That’s because the plug-in is expecting to be reading from an SD card (which uses DCIM as the main folder to put all saved video and still files in). Note that you CAN nest folders inside of the DCIM folder.
Once each batch was done, I moved the transcoded clips into matching day folders on the edit drive. This way instead of having over 800 clips in one folder to sift through, I would only have to look through a few dozen at a time.
Synchronizing
This was probably the most boring part. Anthony, our sound mixer, had wisely named nearly every sound file in the scene-shot-take format. So figuring out which sound take went with which video file was relatively trivial.
This is where PluralEyes, from Singular Software, saved my butt. It’s a standalone program that works with Final Cut sequences and synchs video-to-video footage (in the case of multi-camera shoots) and video-to-audio. It creates a new sequence for each synched clip. So instead of going clip-by-clip, I was able to drag a dozen or so clips at a time into a sequence in my Final Cut file, line them up very roughly with their matching audio, and click "Sync" in PluralEyes. A few minutes later I had a dozen sequences with synched sound. Since we used a slate and had the original camera audio as a reference, PluralEyes rarely had difficulty finding the right sync point. (BTW: the software is free to try for 30 days.)
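(If you’re curious what waveform sync boils down to: cross-correlate the camera’s scratch audio against the recorder’s track and take the peak. The numpy sketch below shows the general idea only – PluralEyes’ actual algorithm is its own secret sauce, and a real tool handles long files much more cleverly.)

```python
# The core idea of waveform sync, in miniature: the peak of the
# cross-correlation gives the offset between the two tracks.
# Conceptual only -- not PluralEyes' actual algorithm, and
# np.correlate is too slow for long clips.
import numpy as np

def sync_offset_frames(camera_audio, recorder_audio,
                       sample_rate=48000, fps=23.976):
    """Both inputs: mono float arrays at the same sample rate.
    Positive result = slide the camera clip that many frames later
    to line it up with the recorder track."""
    corr = np.correlate(recorder_audio, camera_audio, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(camera_audio) - 1)
    return lag_samples * fps / sample_rate
```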
NOW, there was one surprise. For whatever reason, the audio in the original camera file was exactly one or two frames AHEAD of the video – you could tell because the slate clap was ahead of the picture. However, there was no drift. So I had to manually check the sync on each new sequence and adjust by one or two frames – but again, because we had the slates, this was a no-brainer. Other people on Creative Cow have complained of the same problem. There doesn’t seem to be a clear-cut solution, nor does it seem to be universal.
After moving the sound one or two frames, I muted the original camera audio, trimmed the leading and trailing audio so the sequence would start on the first frame of video, and changed the sequence timecode to match the video timecode (so instead of starting at 01:00:00:00 the TC would start at, for example, 18:31:15:00).
Last (but not least), we took the synched clips, along with the source video and audio files and the scrap clips, and put them into scene bins. The scene bins ultimately replaced the day bins we had established, and had the same structure (Audio, Video, and Scraps sub-bins). Synched sequences went into a new sub-bin called Sync.
This process was also fairly mechanical, and took about two weeks (working part time).
Logging
Now I was ready to log the footage. This consisted of two parts: making notes about each clip in my database, and lining the script. Lining the script is a BIG topic, and I’m no script supervisor, but the gist of it is that you want to visually indicate where each individual camera setup begins and ends within each scene, what lines and blocking have changed from script to shoot, what scenes have been omitted or added, and what gaps in coverage you might have. As you can imagine, this is a fairly time-consuming process.
In my database I had the following information already:
* the individual clip name
* the scene, shot, and take number
* the timing (media start, end, and duration)
* the angle (Master, CU John, OTS Jane on Jack, ECU pill bottle, etc.)
* the logging note
* the original (camera source) filename
* the sound take filename
Most of this information I was able to get by exporting a file list from Final Cut, importing it into the database, then going through it quickly to make sure I didn’t miss anything.
To this laundry list of information I added (the combined record is sketched out after this list):
* the first frame of action (usually after the DP calls "frame" or "set" but before you’ve called "action")
* a description of the shot
* some kind of evaluation of the shot
* a list of visual problems in the shot (e.g., boom dips in at 23:04:10)
* sound problems
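Put together, each record looked something like this – a hypothetical Python rendering of the FileMaker schema, with field names that are my paraphrase rather than its literal layout:

```python
# Sketch of the per-clip log record (field names are my paraphrase
# of our FileMaker schema, not its literal layout).
from dataclasses import dataclass, field

@dataclass
class ClipLog:
    clip_name: str             # e.g. "23-A-3"
    scene: str
    shot: str
    take: int
    media_start: str           # timecode, e.g. "18:31:15:00"
    media_end: str
    duration: str
    angle: str                 # "Master", "CU John", "OTS Jane on Jack", ...
    logging_note: str
    source_filename: str       # camera original, for retranscoding
    sound_filename: str        # matching recorder take
    first_action: str = ""     # TC of the first frame of action
    description: str = ""
    evaluation: str = ""
    visual_problems: list = field(default_factory=list)  # "boom dips in at 23:04:10"
    sound_problems: list = field(default_factory=list)
```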
On Found in Time, we shot 840 individual clips. Of these, about 75 or so were complete mistakes, goofing-around shots, slates for MOS series, and otherwise unusable bits. These didn’t take long to log.
I later figured out that I was able to log between 10 and 20 clips per hour, depending on how complex each clip was. I managed to log everything in just over two weeks.
Why Do This To Yourself?
You can get interns to transcode and synch, and maybe even do some of the logging, so why do all this yourself? In my case, it was a way of getting familiar with the film we actually shot (as opposed to the one in my head). This way, I don’t have to waste time having this discussion with the editor: "Don’t we have a shot of…?" No, we don’t.
It also got me thinking about how to solve certain coverage problems, what effects shots I would need, and what kind of sound design/music choices would work. The big thing is that the editor didn’t have to do this work – he was able to just look at the footage and start cutting. That is a huge time and money saver on any shoot.
Okay, so this post has probably been about as fascinating as watching paint dry. I promise, more fun posts to come!