Lessons Learned During Post

2 Chrises
Anthony (Eric Martin Brown) and Two Chrises (MacLeod Andrews) have a stare-down

Quick Self-Promotion: I will be teaching a three-part course on Visual Storytelling at Brooklyn Brainery! The course looks at the tension between showing and telling in films. The first session will focus on existing films. During the second and third sessions, students will bring in works-in-progress (films, scripts, poems, novels, etc.) and discuss ways in which they can show their stories.

Where: The Brooklyn Brainery, 515 Court St., Brooklyn, NY
When: Thursdays, 6:30-8pm, October 6th, 13th, and 20th
Cost: $45
Register on the official site

Post

I’m just starting the promotion/marketing journey, but I wanted to step back for a minute. What follows is a brief look at the lessons I learned during post.

Make Time For VFX in Production

After Dan Loewenthal and I had scanned through the film a few times, and discussed/vetoed/decided on a few tweaks, we figured it was time to lock the picture. Up to that point, I’d put together some very rough visual effects shots, just so we had something to look at, and to give me some idea of what I wanted. I figured that few, if any, of these attempts would survive through the end of post (though a couple did). The VFX shots came in three basic flavors:

  • Hiding/Erasing booms, boom shadows, lights, and other gakk
  • Compositing plates together (there are a few shots where we extended sets and doubled characters)
  • Creative work – adding taser effects, glows, blood, and other things that weren’t there during the shoot

What I’ve learned is that I have to pay more attention during production when setting up VFX shots. Ben Wolf, my DP, is really good at setting up and executing low-budget VFX. But I rushed through the process a bit, creating more work for Vickie later. A good example is a composite shot called "Two Chrises". In the foreground plate, we had Anthony (left) and Chris (right) arguing, then turning around as a second Chris enters the room.

There are several problems with this shot. First, foreground Chris moves into the area that the background Chris occupies. If you’re going to shoot a shot like this without using a greenscreen, then keeping the layers separate is pretty important. Second, the lighting from outside changed slightly between shooting the foreground and background plates, so Vickie and Verne Mattson, our colorist, had to spend more time in post evening up the shots.

Ben did a superb job framing and executing the shot. And the actors’ performances were great – MacLeod Andrews (Chris) and Eric Martin Brown (Anthony) are, after all, reacting to someone who literally isn’t there, and they sell it. The problem is that I didn’t schedule enough time to slow down and make some minute adjustments, so we had to rush through the shots.

On the other hand, this shows you what you can do even without a lot of money or a greenscreen. We could have tried setting up a portable greenscreen, but placing it far enough away from the actors and lighting it properly may have been very difficult in that location (it was a small office).

Regardless of these oversights, Ben, Vickie and Verne were able to put together a wonderful shot. Dan Loewenthal, the editor, broke it up into two pieces and put a reaction shot by foreground Chris in between, to heighten the impact of the shot.

DropBox!

I am not affiliated in any way, shape or form with DropBox. However, I totally swear by it. It is worth it to upgrade to the Pro Version ($10 a month). With Vickie in Queens, Quentin in Brooklyn, and Verne in New Jersey, it would have been very inefficient for me to shuttle files back and forth. YouSendIt is a great option for sound files (Quentin and I used it a few times) but for video files, DropBox is key. It works like a virtual hard drive that synchronizes a directory on your hard drive with its online counterpart. Stick a file in your local Dropbox directory, and it will be uploaded. If you give other people permission to see your account, they can download it. No more shlepping drives and DVDs back and forth.

Amend the Script

After the picture edit is done, you should go back to your script and amend it to reflect the locked cut. You’d be surprised how many differences there are between what you wrote and what was said on set, and between that and how it was cut together. I found little chunks of dialog had been added, others taken away, and some bits rearranged within the same scene. Presenting an amended, as-edited script to your sound designer will help him/her out immensely.

You Can Never Have Enough Drives

I started out with a 2TB internal drive and a 500Gig Camera/Sound Master drive. Since then I bought three 2TB external e-SATA/FW drives – one serves as a backup of the internal drive, a second is for Vickie (and contains everything) and the third is for Verne. I also purchased a second 500Gig "shuttle drive" which went back and forth with me on those occasions when I was meeting with someone and had to grab a file from them or give one to them. I will need another 2TB drive pretty soon, to back up all the behind-the-scenes footage, the various QuickTime exports I’ve made, and the VFX final files. Since space constantly gets cheaper, I only bought new drives as I needed them.

Life After Post

Okay, that’s it for now. There’s a lot going on at the moment – we’re in the process of building a new website for the film, and creating publicity/promo materials. I’ll have more to say about that next time.

Working With Your Editor, Part 2

Snowball. Because pictures of cats are always good to post

In the last entry (wow, that month went by too fast) I talked a bit about the alchemy of editing and the director/editor relationship, and got as far as the rough cut. This time around I’d like to talk a bit about how to get from the rough to the final cut.

The Dead Spots

As I mentioned before, I have a hard time going back to the big picture after a screening. I get caught up in the atomic structure of the film, especially the dead spots. I’m always afraid of boring the audience, or myself. My first instinct was to cut cut cut. Dan never lost his sense of the big picture. He warned me about cutting too much too soon, because we ran the risk of losing the moments that were buried in the middle of the dead spots.

He was correct. The first thing he did after the rough cut was to simply go through the film and trim out small bits from many of the shots. This meant cutting a few frames from the head and tail of a series of shots in a scene, to keep the tension from flagging. Sometimes it meant getting out of a scene a little sooner (again, just a few frames). Sometimes it meant starting a scene a little later, so that the actors were already warmed up or in the frame. These small changes can make big improvements, without requiring you to rethink the work as a whole.

Just by making these kinds of cuts, Dan trimmed about six minutes out of the film. The result was much, much tighter. During this time I made suggestions but mostly stayed out of Dan’s way (at least, that’s what I recall). I started working on putting rough F/X composites and titles together, and thinking about music.

When To Bring the Music In

On Caleb’s Door, I started working with a temp score only towards the very end of the picture edit. Dan suggested bringing music much earlier into the process. This made a lot more sense, particularly given the somewhat extreme state of the character’s realities, and the pacing of the chase/action scenes in the film. Also, as Dan said, a shot that seems overly long without music can sometimes seem fine with it.

Fortunately, we both found common musical ground. Dan’s a big fan of Egyptian music, and I’d been thinking about a score built around a particular instrument – the oud. The oud is a stringed instrument that produces a very bluesy sound, and in some musical forms plays a role similar to that of a guitar in rock music. So we started dropping in temp tracks from an Egyptian composer he’s worked with, and I looked at a bunch of different sources, including artists like Stellamarra, Rabih Abou-Khalil, and others. The initial idea was to use a Middle Eastern theme to underscore the idea that this film was taking place in an altered version of New York.

I should tell you now, DO NOT GET TOO ATTACHED TO YOUR TEMP SCORE. Chances are that unless your composer has specifically written it for you, you’re not going to be able to afford it. I’ve seen it happen more times than I care to recount. The record labels and publishers are only too happy to give you a great deal on a festival license, because they know that you’ll be back once a deal is on the table. At that point they’re counting on you being in a terrible bargaining position – you’ll cave in to the time pressure to deliver the film to a distributor (before you see any money), so you’ll ransom your cats or your unborn grandkids to pay for the score, rather than lose both money AND time on a sound remix.

How Often To Meet

On Found In Time Dan and I generally met a couple of times a week. My ‘homework’ in between meetings was to put together rough F/X composites and titles, and pick out temp tracks. Having things to do in between meetings helped keep me from getting too obsessed. During the actual sessions we’d drop in my temp material, look at cuts that Dan had made, and run the film through (usually from start to finish). We focused a lot on the first half-hour, since that was the most problematic part of the film.

We generally worked for three or four hours during the week, and then a longer session on the weekend. Working this way, we averaged about one cut of the film per week. With each cut we got closer to the target running time – about ninety minutes. We stopped and talked a lot during the process. Not just about the film, but about life, love and film. Far from distracting us, these chats strengthened our working relationship, and helped me get over my anxiety and deal with the film in smaller chunks.

The Feedback Screening

After about nine weeks, we had a feedback screening. It’s an important part of the process, but the feedback should not be taken too literally. There are two important factors: inviting the right people, and taking the right attitude.

You want to invite people who will give you honest, direct feedback, and are willing to get specific. A mix of film and non-film people is good. A small group is better than a bigger one.

The right attitude to take is to be open to everything, to withhold your defensiveness and feedback until after everyone’s gone. The best response to criticism is ‘can you elaborate on that’ or ‘that’s really interesting. What else?’ No matter how ridiculous the suggestion or feedback, look at the person and try to take it seriously. You may know out of the gate that what they’re asking for is impossible – you can’t afford reshoots, you don’t have the material, it would create too many problems in the third act. But what they’re responding to is a real problem that may have a solution that IS within your reach. Plus, these people are spending their precious time with you, so do them the courtesy of being polite and encouraging.

What you’re looking for are patterns. If one or two people have problems with something, then they may be more perceptive than everyone else, or they may have differing tastes than you. But if everyone has issues with the same scenes or characters, then you have an actual problem that needs to be addressed. Often good sound design and music can get people more involved in the story – watching a fine cut without corrected sound is a lot like looking at a really great sketch for a painting. Adjusting the pacing can solve a lot of problems.

What became apparent to me was that the first act was too slow. It took too long to get into the story, and Chris’s problems were over-commented on. So this is where Dan and I concentrated our efforts over the next two weeks.

In the next blog entry, I’ll talk about the transition from picture to sound editing, and how best to think about your score.

The Art of (S)logging

I promise, there will be a blog entry – soon – on production. But I wanted to delve into more detail on something that’s very important, and I think underreported: preparing your film for the editor.

In the previous entry I focused on the big post picture. Today I’m going to stick to the first three steps I outlined: transcoding, synching, and logging the footage. The goal is to get acquainted with the film you’ve shot (as opposed to the one in your head), save your editor unnecessary headaches (and you unnecessary time and money) hunting for footage, and get your brain thinking about sound, visual effects, titles, music, and other post elements.

Workflow: When To Do This

On a big enough film, your script supervisor would make the continuity book, the 2nd AC and the mixer would write reports, and your assistant editor would transcode, log, and synch, all while you’re shooting. The advantages are fairly obvious: you’ll find out about coverage or technical problems while you’re shooting, and you’ll get to a rough cut that much sooner. However, on micro-budget films, this is a comparative luxury.

On Found In Time, we had no script supervisor, the sound mixer was doing about three other jobs – though he did take very good notes – and we didn’t have an editor in place during the shoot. I’m assuming that many of you are in a similar situation.

Transcoding

The Canon 5D records to an H.264 Quicktime-playable file. H.264 is a highly compressed format that somehow retains a lot of detail despite throwing out an enormous amount of picture information. Part of how it does this is by storing frame data in a long-GOP format. GOP = Group Of Pictures. Essentially, what the codec does is analyze a group of frames (in most cases, 6 or 15). It stores the first frame, then stores the DIFFERENCES ONLY between the first frame and all subsequent frames within the group.

There’s a LOT more involved than this, but here’s the main point: editing H.264 footage can be difficult. Your cuts are probably NOT going to be on the first frame in a group, which means the computer will have to analyze and rebuild frames every time you cut picture. The result can tax your system, leading to dropped playback frames and a lot of rendering time. It’s also more difficult to do a final conform, render effects, etc. In other words, H.264 is a great origination and online distribution format, but you don’t want to edit with it.
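To make the long-GOP idea concrete, here’s a toy sketch (nothing like the real H.264 math, which also involves motion estimation and transforms) of storing one full keyframe and then only the per-pixel differences for the rest of the group:

```python
# Toy illustration of long-GOP storage, NOT the actual H.264 codec:
# keep the first frame of each group in full, then store only the
# per-pixel differences for the frames that follow it.

def encode_gop(frames):
    """frames: list of equal-length pixel lists. Returns (keyframe, deltas)."""
    key = frames[0]
    deltas = [[b - a for a, b in zip(key, f)] for f in frames[1:]]
    return key, deltas

def decode_frame(key, deltas, n):
    """Rebuild frame n of the group. Note the decoder always has to start
    from the keyframe -- which is why cutting mid-group is expensive."""
    if n == 0:
        return key
    return [a + d for a, d in zip(key, deltas[n - 1])]

frames = [[10, 10, 10], [10, 12, 10], [11, 12, 9]]
key, deltas = encode_gop(frames)
assert decode_frame(key, deltas, 2) == [11, 12, 9]
```

The point of the sketch: to show you frame 2, the decoder must fetch the keyframe and apply a delta, so a cut landing mid-group forces extra reconstruction work.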

Transcoding the footage from the original H.264 files to an I-frame format (which compresses and stores each frame individually) is thus an easy decision. There are several software programs that can do the trick:

MPEG Streamclip is preferred by many, and with good reason: it’s fast, user-friendly, free, and can batch-process clips very easily. But the quality of the resulting clips wasn’t quite as good as we were hoping for. It also strips out the original timecode from the file, substituting its own.

Rarevision’s 5DtoRGB, on the other hand, is supposed to do the best overall job in terms of image quality, but lacks a batch feature (at this time; it’s still under development). It also takes the longest to transcode.

We considered Compressor, but we’ve had problems with batch transcodes in Compressor and haven’t been super happy with the results. After going on Creative Cow and talking to a few folks, we decided on Canon’s own Final Cut Pro plug-in, the EOS Plugin-E1. It produces decent results, processes batches of clips without any hiccups, doesn’t take too long, and retains the original clip timecode.

The next decision: what to transcode TO. The obvious choice for editing in Final Cut was Apple ProRes, but ProRes comes in several flavors, ranging from Proxy (small file size/lower quality) to HQ (huge file size, better quality). After thinking about it, trolling the forums, and consulting with some experts, we decided on ProRes LT, which is somewhere between the two ends of the spectrum. The data rate is approximately 100Mbps, roughly the same as DVCProHD, and 2.5x the 5D files’ 40Mbps. In practical terms, this meant we were getting something very, very good – something we could put together into rough-cut shape if we needed more investors, or assemble a festival screener from – but we wouldn’t kill our hard drive.
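If you want to sanity-check the storage trade-off yourself, the math is simple. Using the approximate data rates quoted above (real clips vary with content):

```python
# Rough storage math for the transcode decision. The 40 and 100 Mbps
# figures are the approximate data rates mentioned above, not exact.

def gb_per_hour(mbps):
    """Convert a video data rate in megabits/sec to gigabytes/hour."""
    return mbps * 3600 / 8 / 1000  # Mb/s -> MB/s -> MB/hour -> GB/hour

h264_5d   = gb_per_hour(40)    # camera originals: 18.0 GB per hour
prores_lt = gb_per_hour(100)   # ProRes LT transcodes: 45.0 GB per hour
```

So an hour of ProRes LT eats about 45GB – very livable on 2TB drives – where ProRes HQ (around 220Mbps at 1080p) would be closer to 100GB per hour.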

The long-term plan, once the film has been picture-locked, is to note the selects (the clips that make it into the locked picture), and re-transcode the camera originals to ProRes HQ using the 5DtoRGB utility.

Setting Up Final Cut For Transcoding

At this point, we set up a new Final Cut project with a sequence default of 1920×1080 23.976p, with 48KHz 16-bit stereo sound. During the shoot the DP created folders by day, running to lettered bins if he had to copy more than one card per day (so we have Day1, Day1b, Day2, etc. folders on the hard drive). We started out by creating camera reel bins to mirror the originals. Within each bin, I created three sub-bins: Scraps (for NG or goofing-off material), Video (for source video clips), and Audio (for source audio clips).

We also created a database in FileMaker (which is cross-platform, by the way), to capture information on each clip. Initially, we just dumped a directory listing of all the clips into a text file then imported that into FileMaker, so we’d have a list of the 840 video clips and 735 sound files (we had a good number of MOS takes).
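The "dump a directory listing into a text file" step can be scripted. Here’s a minimal sketch that walks the footage folders and writes one tab-delimited row per clip, ready to import into FileMaker or any spreadsheet (the folder path and extensions are placeholders, not our actual drive layout):

```python
# Sketch: generate a tab-delimited clip list for database import.
# The root path and extension list below are assumptions for illustration.
import csv
import os

def write_clip_list(footage_root, out_path, exts=(".mov", ".wav")):
    """Walk footage_root and write folder/filename/size rows to out_path."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["folder", "filename", "bytes"])
        for dirpath, _dirs, files in os.walk(footage_root):
            for name in sorted(files):
                if name.lower().endswith(exts):
                    full = os.path.join(dirpath, name)
                    writer.writerow([os.path.basename(dirpath), name,
                                     os.path.getsize(full)])

# Example (hypothetical path):
# write_clip_list("/Volumes/Masters", "clip_list.txt")
```

A couple of minutes of scripting beats typing 1,500+ filenames by hand, and the same list doubles as a rough backup manifest.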

During the transcoding process itself, we renamed each clip to "scene-shot-take" format, then also filled in the scene, shot/take, reel (camera reel), angle, and loggingNotes fields. We went in shoot order (as opposed to scene order), and limited the batches to one or two scenes’ worth of material, depending on the number of individual clips. The entire process took about four days, and was highly automated. A good tutorial is on Canon’s own site. TWO THINGS TO NOTE: first, when you name the clip (scene-shot-take), the utility actually renames the transcoded QuickTime file. So if you ever want to go back to your camera masters for retranscoding, make sure to keep a list of the original filename and the new one.

Also, the utility REQUIRES that all the clips be inside of a folder called DCIM off the root of the hard drive. That’s because the plug-in is expecting to be reading from an SD card (which uses DCIM as the main folder to put all saved video and still files in). Note that you CAN nest folders inside of the DCIM folder.
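Since the plug-in destroys the original filenames, it’s worth writing the mapping down programmatically before you transcode. A minimal sketch (the filenames here are made up for illustration):

```python
# Sketch: log the camera-original -> scene-shot-take mapping to a CSV
# before transcoding renames everything. Filenames below are examples.
import csv

def log_renames(pairs, log_path):
    """pairs: list of (original_name, new_name) tuples."""
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["original", "renamed"])
        writer.writerows(pairs)

log_renames([("MVI_1023.MOV", "12A-3.mov"),
             ("MVI_1024.MOV", "12A-4.mov")], "rename_log.csv")
```

With that CSV in hand, going back to the camera masters for the final retranscode is a lookup instead of an archaeology project.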

Once each batch was done, I moved the transcoded clips into matching day folders on the edit drive. This way instead of having over 800 clips in one folder to sift through, I would only have to look through a few dozen at a time.

Synchronizing

This was probably the most boring part. Anthony, our sound mixer, had wisely named nearly every sound file in the scene-shot-take format. So figuring out which sound take went with which video file was relatively trivial.

This is where Pluraleyes, from Singular Software, saved my butt. It’s a standalone program which works with Final Cut sequences and synchs video-to-video (in the case of multi-camera shoots) and video-to-audio footage. It creates a new sequence for each synched clip. So instead of going clip-by-clip, I was able to drag a dozen or so clips at a time to a sequence in my Final Cut file, line them up very roughly to their matching audio sequences, and click "Sync" in Pluraleyes. A few minutes later I had a dozen sequences with synched sound. Since we used a slate and had the original camera audio as a reference, Pluraleyes rarely had difficulty finding the right sync point. (BTW: the software is free to try for 30 days).

NOW, there was one surprise. For whatever reason, the audio in the original camera file was exactly one or two frames AHEAD of the video – you could tell because the slate was ahead. However, there was no drift. So I had to manually check the sync on each new sequence and adjust by one or two frames – but again, because we had the slates, this was a no-brainer. Other people on Creative Cow have complained of the same problem. There doesn’t seem to be a clear-cut solution, nor does it seem to be universal.

After moving the sound one/two frames, I muted the original camera audio, clipped the trailing and leading audio so the sequence would start on the first frame of video, and changed the sequence timecode to match the video timecode (so instead of starting at 01:00:00:00 the TC would start at 18:31:15:00, for example).
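The frame-and-timecode arithmetic behind those adjustments is worth spelling out. A sketch, assuming 23.976 material counted as 24 frames per timecode second (the timecode values are examples, not from our actual sequences):

```python
# Sketch of the timecode math behind the 1-2 frame sync slip.
# Assumes non-drop-frame counting: 24 frames per timecode second.
FPS = 24

def tc_to_frames(tc):
    """'HH:MM:SS:FF' -> total frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(total):
    """Total frame count -> 'HH:MM:SS:FF'."""
    f = total % FPS; total //= FPS
    s = total % 60;  total //= 60
    m = total % 60;  h = total // 60
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# Audio sitting 2 frames ahead of a shot starting at 18:31:15:00
# gets slipped back to:
slipped = frames_to_tc(tc_to_frames("18:31:15:00") - 2)
# slipped == "18:31:14:22"
```

The same conversion handles the sequence-timecode change: compute the video clip’s start frame, and set the sequence to begin there instead of at 01:00:00:00.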

Last (but not least), we took the synched clips, along with the source video and audio files, and the scrap clips, and put them into scene bins. The scene bins ultimately replaced the day bins we had established, and had the same structure (Audio, Video, and Scrap sub-bins). Synched sequences went into a new sub-bin called Sync.

This process was also fairly mechanical, and took about two weeks (working part time).

Logging

Now I was ready to log the footage. This consisted of two parts: making notes about each clip in my database, and lining the script. Lining the script is a BIG topic, and I’m no script supervisor, but the gist of it is that you want to visually indicate where each individual camera setup begins and ends within each scene, what lines and blocking have changed from script to shoot, what scenes have been omitted or added, and what gaps in coverage you might have. As you can imagine, this is a fairly time-consuming process.

In my database I had the following information already:
* individual clip name
* the scene, shot and take number
* the timing (media start, end, and duration)
* The angle (Master, CU John, OTS Jane on Jack, ECU pill bottle, etc.)
* Logging Note
* The original (camera source) filename
* The sound take file name

Most of this information I was able to get by exporting a file list from Final Cut, importing it into the database, then going through it quickly to make sure I didn’t miss anything.

To this laundry list of information I added:
* the first frame of action (usually after the DP calls "frame" or "set," but before you’ve called "action")
* A description of the shot
* Some kind of evaluation of the shot
* A list of visual problems in the shot (boom dips in at 23:04:10)
* Sound problems

On Found in Time, we shot 840 individual clips. Of these, about 75 or so were complete mistakes, goofing around shots, slates for MOS series, and otherwise unusable bits. These didn’t take long to log.

I later figured out that I was able to log between 10 and 20 clips per hour, depending on how complex each clip was. I managed to log everything in just over two weeks.
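As a back-of-the-envelope check on those numbers (the clip counts are from above; the part-time hours-per-week figure is my assumption):

```python
# Rough check on the logging timeline. Clip counts come from the post;
# the 25 hours/week part-time figure is an assumption for illustration.
total_clips, scrap = 840, 75
usable = total_clips - scrap              # 765 clips needing real notes

hours_low  = usable / 20                  # at 20 clips/hour: ~38 hours
hours_high = usable / 10                  # at 10 clips/hour: ~77 hours

weeks_low, weeks_high = hours_low / 25, hours_high / 25
# -> roughly 1.5 to 3 weeks part time, which matches how it played out
```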

Why Do This To Yourself

You can get interns to transcode and synch, and maybe even do some of the logging, so why do this yourself? In my case, it was a way of getting familiar with the film that we shot (as opposed to the one in my head). This way, I don’t have to waste time having this discussion with the editor: "don’t we have a shot of…" No, we don’t.

It also got me thinking about how to solve certain coverage problems, what effects shots I will need, and what kind of sound design/music choices would work. The big thing is that the editor didn’t have to do this work – he was able to just look at the footage and start cutting. That is a huge time and money saver on any shoot.

Okay, so this post has probably been about as fascinating as watching paint dry. I promise, more fun posts to come!

Post Production Workflow

The shoot is over. I’m still figuring out all the things I learned, and at some point I’ll integrate it and write a short blog entry on the topic. But at the moment my energy is going towards getting ready for the next step: cutting the film. What follows is a synopsis of the post workflow for Found In Time. It’s based on things I’ve learned while making this film, my experience as post supervisor on previous features, and a lot of consultation with other folks. Many thanks to Josh Apter, head of Manhattan Edit Workshop, Creative Cow Magazine, and as always Ben Wolf.

Don’t Just Start Cutting
The temptation is probably just to dig in and start cutting scenes together, using the camera master footage. This is almost always a mistake. First off, if you’re the director, you have no perspective on the footage. I know I don’t. Secondly, you need to organize both the “physical” files on the drive, giving them a proper reel name and folder to live in; and the names of the clips in your NLE. Thirdly, you need to set up a schedule – what you want, when you want it, and what the end goal is. Hopefully you’ve done this before you shot anything, and now you’re just revising it to match your remaining money/schedule/expectations. But if not, now’s a good time to set it up.

The Schedule
Take a BIG step back. Forget about the footage burning a hole on your hard drive. Think carefully: when can I realistically finish this film? What are the steps I need to take to get there? Who’s going to do those steps?
At this point, post breaks down into nine BIG steps, which generally (though not always) follow the order below:

1. Backup, Transcoding, Logging. In an ideal world, this is happening on a daily basis. Every night the Assistant Editor takes the day’s work (either on cards or drives), backs it up to another drive, then transcodes the footage to the editing format, usually while also logging it into the NLE.

2. Picture Cutting. The film is put together, reel by reel, by the editor.

3. Reshoots/Inserts/Additional Photography. You need it, you didn’t get it. Now go get it.

4. F/X and Titles. As the film nears completion, visual effects artists go to work on the more complex material. In an ideal world, sequences are finalized in time for the online. In many cases, the online has to be pushed back until after the sound mix is done, to give the effects artists more time. Titles are usually done at this point (end credit crawls are often finalized only at the final output phase).

5. Online. The film selects (from the final cut) are retranscoded at the highest possible resolution/setting. The footage is color corrected, basic transitions (dissolves, fades) and effects work (taking out booms, minor tweaks, etc.) are done. F/X and titles are married to the locked picture.

6. Sound Editing. The dialog levels are evened out, and the “sound world” of the film created – effects, foley, music, voice-over, are inserted and brought together.

7. Music. The composer scores the final cut of the film (sometimes this happens during the editing process). Existing music is licensed (don’t do this at home, kids! You don’t have the budget. Trust me.). The music is premixed (ideally).

8. Mix. The various sound elements (dialog, effects, foley, music, ambiance) are brought together and leveled, to conform to both artistic and broadcast standards. The mixer creates final “bounce files.”

9. Final Output. The conformed film is married to the bounce tracks, and the whole thing (all the reels) are output to the “final” master medium (tape or film).

So with this outline in hand, you have to figure out: who’s going to be doing what (personnel)? With what tools (gear)? For how long (timeframe)? And what are the things each step requires (inputs) and what are the results (outputs)?

After doing some research, and thinking about what’s worked best on previous low-budget films, I came up with the following chart.

1. Transcoding; organizing bins; logging clips with scene/shot/take/other info
   Inputs: H.264 clips on drive; sound WAV files on drive
   Personnel: Myself
   Gear: Final Cut; Canon 5D FCP plugin
   Outputs: Final Cut project file w/bins; named ProRes LT clips in folders on drive; logging notes of some kind (database, spreadsheet, something)

2. Syncing
   Inputs: ProRes LT clips; audio files; Final Cut project
   Personnel: Me
   Gear: Final Cut; PluralEyes
   Outputs: Final Cut project file w/bins

3. Script Notes
   Inputs: Final Cut project; script
   Personnel: Me
   Gear: Final Cut Pro
   Outputs: Lined script books with notes; binder with notes, sound reports, production reports, etc.

4. Picture Edit
   Inputs: Final Cut project; binder; hard drive
   Personnel: Editor
   Gear: Final Cut Pro
   Outputs: Sequences in reels

5. Feedback Screenings
   Inputs: Rough or 2nd cut on DVD
   Personnel: Editor, me, trusted friends
   Gear: DVD projector
   Outputs: Notes for next cut

6. Reshoots/Inserts
   Inputs: Wish list of shots
   Personnel: Skeleton crew and cast
   Gear: Basic camera/sound unit; props, set dressing
   Outputs: Video/audio footage

7. F/X and Titles
   Inputs: Final Cut project; F/X footage (shot on location); add’l computer-generated footage
   Personnel: Ben Wolf; me; visual F/X artist; editor (possibly)
   Gear: Final Cut Pro; Photoshop; Motion; After Effects(?)
   Outputs: Locked VFX sequences and titles

8. Transcode for Online
   Inputs: FCP sequences (reels); camera master files
   Personnel: Me
   Gear: Final Cut Pro; 5DtoRGB tool
   Outputs: ProRes HQ (422) or ProRes 444 versions of selects only (clips that made the final cut); notes

9. Conform
   Inputs: ProRes HQ clips; offline Final Cut Pro sequences (reels); VFX and title sequences
   Personnel: Me
   Gear: Final Cut Pro
   Outputs: Final Cut Pro sequences, linked to ProRes HQ clips

10. Color Correction/Basic Compositing
    Inputs: Final Cut Pro sequences (reels); notes
    Personnel: Colorist; Ben (DP); myself
    Gear: Final Cut Pro; Color; Motion; After Effects
    Outputs: Color-corrected reels with all titles and effects in place

11. Prep for Sound Edit
    Inputs: Audio files; Final Cut reels (preferably color corrected, but at least the final conforms)
    Personnel: Me
    Gear: Final Cut Pro
    Outputs: QuickTimes for each reel per the sound designer/composer specs; sound tracks grouped per spec; OMF files per reel; sound design notes in binder

12. Sound Design
    Inputs: OMF files, etc. as above
    Personnel: Sound designer; foley artist?; dialog editor?
    Gear: ProTools or other sound software; Final Cut Pro
    Outputs: Stereo LtRt session files; possibly 5.1 session files

13. Music
    Inputs: QuickTimes and sound notes
    Personnel: Composer
    Gear: Instruments; music mixing software
    Outputs: Soundtrack, broken into reels, premixed

14. Mix
    Inputs: Session files; QuickTimes; soundtrack files (if not already part of session files)
    Personnel: Sound designer; mixer(?)
    Gear: ProTools; mixing hardware
    Outputs: Bounce tracks

15. Final Output
    Inputs: Blank HDCAM and Digibeta stock; Final Cut reels; bounce tracks
    Personnel: Me; post house editor; mixer?
    Gear: Online suite
    Outputs: Projection master; SD tape master; DVD master (QuickTimes)

Some specifics:
1. We picked ProRes LT because it offers the best compromise between file size and quality. H.264 can be difficult to edit with natively – it’s a long-GOP format, which means that Final Cut has to do a lot of math to reconstruct the frames at your edit points. This can cause machines to chug and drop frames during playback, which is not good. The whole long-GOP vs. i-frame discussion is beyond the scope of this article; but I’ll dig up some good resources for you or talk about it more in-depth at some point.

ProRes LT is an i-frame format (individual frames are stored instead of groups of frames), but the file size is manageable.

2. Pluraleyes is a stand-alone program that can take clips in a Final Cut Pro sequence and line them up. Assuming you have camera audio, Pluraleyes can line up your separate-source audio files with your video (with camera sound) files.

3. I’m glossing over a lot of the sound post process (which could have its own diagram); I’ll save that for another blog entry.

So now you’ve got a basic idea of what we’ll be doing over the next few months. Future blogs will focus on the individual steps, with more specifics and how-tos. I’d go into more detail but this entry is getting pretty long as it is. Until next time then!