Take 9 Extended Interviews

TAKE 9, POV BOX: DIALOGUE EDITOR

FULL INTERVIEW: CLAIRE ELLIS

 

Up until now, we have heard exclusively from people involved with music in the film soundtrack.  For the interviews in the next few chapters, it will be useful to learn something about the other two elements of the soundtrack: dialogue and sound effects.  Because dialogue, like music, is built on the human voice, it shares with music certain basic aspects such as pitch and volume.  Our interviewee in this section, Claire Ellis, is a dialogue editor at Molinare, a leading post-production company based in London, England.  For her work as dialogue editor on the documentary feature My Beautiful Broken Brain (2014), Claire received the Golden Reel Award for Best Sound Editing.  Other documentaries that feature her work include Listen to Me Marlon (2015), which won Best Documentary at the San Francisco Film Critics Circle Awards.  Claire spoke to me from her office in London, after another long day of editing dialogue.

 

JH: How did you get into this line of work?

CE: I studied Media Science at university, but at that point I really didn’t know what area I planned to go into.  I started, like most people, as a runner—basically doing whatever anyone higher up doesn’t have time to do, making tea, moving tape machines.

JH: That sounds like the score coordinator I interviewed, Charlene Huang (T5, §3)!

CE: It’s very common, yeah.  So, after a brief stint as head runner, I found myself drawn to the audio department.  In all honesty, it was because I found them the friendliest and most down to earth, and who doesn’t want to spend their career with friendly people?  I moved to another company as an audio assistant and quickly realised I enjoyed “fixing” dialogue edits.  The restoration side of things is something that I taught myself, mostly in a bid to make my job quicker, easier and more efficient.  

JH: As a dialogue editor, where exactly in the schedule of a film’s post-production do you enter the scene?

CE: I come in after the picture edit, meaning the offline picture edit—before the film is online, before it looks fancy.  After the picture edit, I get the sound, and I’m working on that while the music edit is taking place.  The music edit might be tweaked to fit around the dialogue, but that would then be done not by the music editor but by the dubbing mixer, or re-recording mixer to use the American terminology.

JH: Can we talk about an example, let’s say Listen to Me Marlon (2015), which has amazing sound in terms of the dialogue?

CE: Yes, I’ve tried to repress that project since I finished it, because it was so hard!  There’s another one, My Beautiful Broken Brain (2014).  That was wicked, because a lot of it was shot on iPhone.

JH: When you get a film like My Beautiful Broken Brain or Listen to Me Marlon, what is the sound like at that point?

CE: It would have the editor’s version of all the dialogue, crudely edited.  My job, then, is to smooth over what the editor has done, and to choose alternative words and phrases if I feel it’s jarring too much.  Listen to Me Marlon is a weird example because, given the nature of it, a lot of the sound was quite jarring.  There was no way of making that smooth, since Marlon Brando was no longer around!  But in a normal feature documentary, someone would speak for ages and we’d cut that down to ten seconds, and I make those ten seconds sound like that was all they said.

JH: Just to be clear, after you’re done with the dialogue, there’s no more work on it, is that right?

CE: More or less.  After me, it goes to the dubbing mixer (re-recording mixer) for him to balance that against the effects and the music.  He can equalize my stuff as well, to make it punch through the music or to take out anything that he doesn’t like to make it clearer.  But after it leaves me, I think all the problems have gone.  Of course, I would say that!  He’d probably say, “I had loads of work to do.”  But that’s my job, to make the dialogue sound natural and consistent. 

JH: Many people would associate what you do with ADR.  Tell us a bit about that.

CE: I will do some ADR—automatic dialogue replacement—when I’m doing drama.  It doesn’t happen in documentaries, because documentaries are supposed to be natural speech as it was happening during the shoot.  But drama is scripted.  If, say, an actor fluffs their lines, or someone drops a microphone, or a crew member puts a footstep over something, you would get the actor to come in and re-record that line, and then you sync it up to their mouth.

JH: When you’re doing a project where you’ve got to do ADR, is Foley sound already in the mix at that point?

CE: No.  That tends to all come together at the final, re-recording mix stage, when dialogue, Foley and everything else gets handed over.  Or, if there’s a supervising sound editor, they can combine the two and play them together, but that’s not the way we operate here at Molinare.

JH: So, just to be clear, in terms of the timing of everything, you’re doing what you do while the Foley folks do what they do, and the musicians do what they do, all separately, and then later, it will all come together.

CE: Right.  And don’t forget the sound effects people are also doing their thing separately.  The music editor and the composer tend to be a separate body of people in post-production.  They will likely get a really early rough cut, but then they’ll get all the updates and have to keep changing the composition to match these rough cuts.  Sometimes our re-recording mixer will be provided with stems from the music editor or the composer, so that if you’re playing through a scene and the drums clash with the dialogue, for example, they can bring down the drums and keep the other instrumentation as is.

JH: You use the term stem.  What is a stem?

CE: A stem would be percussion, guitar, vocals, drones—any individual element that goes into making the song.  The musical cue is split up into stems so that the volume of any one of them can be adjusted separately.
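
To make the idea concrete, here is a minimal sketch, in Python, of what adjusting the volume of any one stem separately amounts to: a cue is just the gain-weighted sum of its stems.  This is an editorial illustration, not Claire’s workflow (mixers do this on a console or in Pro Tools), and the filenames and gain values are hypothetical.

    # Minimal sketch (not Claire's workflow): sum time-aligned stems with
    # per-stem gain. Filenames and gain values are hypothetical; stems are
    # assumed to be mono WAV files of equal length at the same sample rate.
    import numpy as np
    import soundfile as sf

    stems = {
        "drums.wav": 0.5,      # pull the drums down so they sit under the dialogue
        "strings.wav": 1.0,    # leave the rest of the arrangement as it is
        "vocals.wav": 1.0,
    }

    mix, sr = None, None
    for path, gain in stems.items():
        audio, sr = sf.read(path)            # samples as floats in [-1.0, 1.0]
        mix = audio * gain if mix is None else mix + audio * gain

    mix = np.clip(mix, -1.0, 1.0)            # guard against clipping in the sum
    sf.write("cue_mix.wav", mix, sr)

Bringing one gain down while leaving the others at 1.0 is exactly the “drop the drums, keep the other instrumentation” move Claire describes.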

JH: And the concern is whether or not any one of these instrument groups or stems conflicts with the dialogue?

CE: Whether it goes over the voice, or whether the percussion, for example, is doing the same thing as a sound effect.  Then you have an argument between the sound effects editor and the composer as to who gets front play: whether the music gets dropped or the sound effect gets dropped.  Normally, unfortunately, the sound effect gets dropped.

JH: Really?

CE: Yeah.  The music edit is happening almost throughout, well before they finish cutting and arrive at the director’s cut.

JH: Music trumps sound effects usually?

CE: Well, it depends on how adamant the composer is, and whether he’s there in the mix.

JH: It sounds like things could get political in the final mix!

CE: If there’s a composer in the final mix, you know the mix is going to take a little bit longer.

JH: What’s general practice?  Is it half and half, sometimes the composer is in the final mix, sometimes not?

CE: Sometimes the composer is on to another project by the time of the final mix and they don’t mind—they’re like, “Here are my stems, do what you want.”  Sometimes they only hand over stereo mixes; at other times they hand over a five-one mix.  That’s something you don’t have much control over.

JH: A five-one mix?

CE: Yeah, five-one.  Instead of doing just stereo left-right, they’ll do left, right and center speakers, left surround and right surround, plus the sub, which is the “one” in five-one.  My knowledge of music in the mix is limited to when I have attended mixes, which is not that often.
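
For readers curious about what happens when a five-one music stem has to live in a stereo programme, here is a minimal sketch of a fold-down using the common ITU-style coefficients (centre and surrounds attenuated by about 3 dB, the low-frequency channel usually discarded).  This is an editorial illustration, not something Claire describes doing, and the function and channel names are ours.

    # Minimal sketch, not part of Claire's workflow: fold a 5.1 stem down to
    # stereo with common ITU-style coefficients. Each argument is a 1-D NumPy
    # array holding the samples of one channel.
    import numpy as np

    def downmix_5_1_to_stereo(left, right, centre, lfe, left_sur, right_sur):
        g = 10 ** (-3 / 20)                  # about 0.707, i.e. -3 dB
        out_left = left + g * centre + g * left_sur
        out_right = right + g * centre + g * right_sur
        # The LFE (the "one" in five-one) is usually dropped in a stereo fold-down.
        return np.stack([out_left, out_right], axis=-1)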

JH: It’s possible that the composer would rather not be present at the final mix…  Mark Isham said that very often he preferred not to be there.

CE: Sure.  For each discipline, it can be quite upsetting when you’re sitting there, you’ve worked really hard on this one bit, and that bit gets drowned out, whether effects or music.  Everyone wants their bit to be the loudest!  Sometimes, things work better without music, but you only realize this when you get into the re-recording mixer’s room, you remove the music, and the footage still maintains its pace.  People often assume they need to tell the story through music, and yet quite often they don’t need to.  It’s not until they get to that mix stage, when everything’s clean in the sound, that they realize they can remove that music.

JH: That’s interesting.  Now, you work mainly with three types of sound: sync sound, wild track and then archive sound.  What are these and which one of the three do you work with most?

CE: I would say that I work with sync sound the most.  I would define sync sound as any sound that goes with the picture you are seeing; it is, basically, sound that is in sync with that picture.  That includes dialogue.

JH: That makes sense.  Sync sound is basically the bulk of what you get to work with, then?

CE: Right.  I might look to wild sounds to fill in spaces.  Wild track, or wild sounds, are any extra sounds recorded on the day of the shoot.  After the director shouts “cut,” the sound recordists will stand in that room and record another five minutes.  That’s the wild track.  It’s the atmosphere of that room.  Or if there’s a motorbike sound, they will do wild track passes, pull-ups, stops and pull-aways of that motorbike.

JH: You use wild sounds sparingly, then.

CE: Yeah.  Often, if it’s a challenging environment or there’s a very strange atmosphere in the background, I will go to the wild track to fill in the gaps, just because I can’t fill the gaps with sync sound.  That’s the only time I would use a wild track.

JH: And finally archive sound?

CE: Right.  Archive sound is the other thing I deal with, which is sound recorded before the film was made.  For Listen to Me Marlon (2015), that sound was one hundred percent archive sound.

JH: That makes sense, since it was about Marlon Brando, who had died a decade earlier, in 2004.  Can you walk me through what you do?  How does the sound come to you, and in what kind of file?

CE: I will get the sound as a digital file, usually a WAV file, and open it in Pro Tools.

JH: You use the same program as musicians, Pro Tools?

CE: Yeah.

JH: And what kind of manipulations do you make?  Let’s take Marlon Brando’s voice in Listen to Me Marlon.  He has an extraordinary sounding voice with this lovely high, hoarse quality to it.

CE: Yes.  With Brando I was pitch shifting things by semitones up and down, just to add some energy at certain points.  With dialogue editing, I shouldn’t do a lot of pitch shifting, but I do!  What I do is quite involved: I shift certain minute frequencies within the voice in another program, which is kind of like Photoshop for sound.

JH: Right.  Is this the iZotope RX?

CE: Yes, iZotope RX.  This program allows me to look not at a waveform but at the spectral analysis of a waveform.  It’s quite complicated, but let me try and explain.  Say I need to replace the end of someone’s word, and they used a similar word later on in the recording.  Say it was an “s” sound at the end, but a slightly differently pitched “s” than the one I need.  What I would do is grab the frequencies of that “s,” change the pitch, and then paste that over the top of the original “s,” just the top end of it.
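
The semitone shifts Claire mentions correspond to simple frequency ratios: one semitone up multiplies every frequency by 2^(1/12), roughly 1.06.  As a rough editorial illustration (iZotope RX is a commercial spectral editor, so the sketch below uses the open-source librosa library instead, and the filename is hypothetical):

    # Minimal sketch of a semitone pitch shift on a hypothetical clip.
    # One semitone up multiplies every frequency by 2 ** (1 / 12), about 1.059.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("brando_line.wav", sr=None)            # keep the original sample rate
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=1)  # shift up one semitone
    sf.write("brando_line_up1.wav", shifted, sr)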

JH: Wow.  That sounds extraordinarily finicky.

CE: I think I am the only person in the world that does that!

JH: Why would you go to all this trouble to make such minute alterations in pitch?

CE: Well, if someone goes up at the end of their sentence, for example, and does this every sentence, the listener will eventually notice.  It gets a bit annoying.  I just bring it down a bit to make it less annoying.  Or, to take another example, you might want a word to be just a bit longer if someone’s being thoughtful, so I’ll stretch that word out a little bit to fit that gap.

JH: Sounds a little manipulative.

CE: Yes.  I kind of recompose people’s sentences, in a way.

JH: Do you use any other tools, like equalization (EQ) to adjust frequencies in dialogue?

CE: Yeah.  I tend to use the EQs that are included in Pro Tools.  I am not that much of a kit junkie!  As long as it’s doing what I need it to do, that’s all that counts.  I’ll use the seven-band EQ in Pro Tools, and then I’ll use iZotope to view the frequencies spectrally, so if there’s a whine, say, I can pinpoint it to the hertz and then literally just knock it out.  EQ I’ll use globally, across a whole track, but iZotope for smaller changes, like photoshopping for audio.
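
As a rough illustration of “pinpoint it to the hertz and knock it out,” here is a minimal sketch of a notch filter, assuming a steady whine at 1 kHz.  The frequency, Q value and filenames are hypothetical, and this uses SciPy rather than the Pro Tools or iZotope tools Claire names.

    # Minimal sketch: notch out an assumed 1 kHz whine from a mono clip.
    import soundfile as sf
    from scipy.signal import iirnotch, filtfilt

    audio, sr = sf.read("interview_take3.wav")    # hypothetical mono recording
    b, a = iirnotch(w0=1000.0, Q=30.0, fs=sr)     # narrow notch centred on 1 kHz
    cleaned = filtfilt(b, a, audio)               # zero-phase, so no timing smear
    sf.write("interview_take3_clean.wav", cleaned, sr)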

JH: With all these tools, are you able to completely remove a hum in the background or some other undesirable noise?

CE: Yes.  I’ve actually had some success with removing music from people’s dialogue.  In one documentary I did, people were talking, and while they were talking, they had the radio on.  It turned out we couldn’t get permission to use any of that music.  So that’s one type of instance where you’d use archive sound, to get different music.  I can also make the background music unrecognizable, for clearance issues generally, but then I tell production that it’s up to them whether they want to use it or not.  As far as I’m concerned, if movie audiences don’t notice my work, then I’ve done a really good job.

JH: That’s a very good way of putting it! 

CE: It’s important that sound, and especially music, follow the rhythm of the film’s cutting.  A film editor must also have a sense of sound rhythm, of musical rhythm.  Have you seen the film Baby Driver (2017)?

JH: Yes!  I was watching it just the other day.

CE: Well, every single cut in Baby Driver is made to the beat of the music or to a sound effect.  One of our guys here at Molinare was one of the sound editors on it.  He was editing sound on set alongside the soundtrack, so as to make sure that all the shots were working.

JH: In a film like Baby Driver, where the songs are integral to the plot, it would be really important for the editing of the shots to match the beat of the songs.

CE: Yeah.  If you watch it, many of the scenes in the film play more like a music video than a feature film.  Think of how many cuts are timed to the music, all the way down to car doors shutting and windscreen wipers moving.  None of this happened by accident; it was put together that way.

JH: One last question.  How would you say your job has changed over the years, since you’ve been working as a dialogue editor?  You mentioned to me that you got your start transporting tapes around?

CE: Yeah, that was my main job.  You had to have a really strong back to cart around lots of tapes!

JH: By tapes, what exactly do you mean?

CE: Betamax video tapes.  That would have been twelve years ago.

JH: Twelve years ago, so in the mid-aughts.  Betamax and VHS tapes, so video tapes, were still being used?

CE: Yep.  When I first started, for a voiceover commentary, you recorded to a DAT.  DATs are digital audio tapes, where the sound is recorded digitally even though it’s in a magnetic tape format.  This was basically the missing link between cassettes in the late twentieth century and digital files in the twenty-first.  A DAT was a tiny little tape that you played in a small player, the size of an answering machine.  And when you laid your sound back, all three parts of the soundtrack—music, effects and voice—were recorded as separate tracks on an eight-track tape.

JH: So, in the mid-aughts, the parts of the soundtrack were still recorded on tape for the final mix, not as digital files?

CE: Yeah, absolutely.  Digital files here at Molinare started around 2006.  We weren’t using Pro Tools yet, but we began using audio files with computers.  I got this job because I said I was familiar with computerized audio files.  After my first two years or so, so the late aughts, we transitioned to Pro Tools.  I was lucky that I had used Pro Tools a little bit when I was at university.  Back then, it hadn’t been adopted by the industry.

JH: After doing this for over a decade, you still enjoy your job?

CE: Yeah.  It takes a certain kind of person.  I do spend most of my time in a room alone.

JH: And you spend most of your time listening carefully to other people’s voices.  It must be interesting, learning new things and even new languages. 

CE: Actually, most of the time I can work on the minutiae of dialogue and tune the words out.  I have to, given the difficult content of most documentaries.  Even when I’m working on something in English, what they’re saying never actually computes in my brain.  It’s just words and the rhythm of speech to me.  Quite a lot of the films I work on are heartbreaking.  If I was absorbing that all the time, it would be like watching the news for ten hours a day, the really bad side of the news.  No one wants that!  Remember, quite frequently I see and hear things before they are censored for audiences.  So, in my mind, when I’m working, I process dialogue as gibberish, the meaning of which I tune out.

JH: Well, I can tell you that I have not been tuning you out in this interview!  It’s been fascinating.  Thanks for taking time out of your busy schedule to speak with me, Claire.

CE: Thank you, John.  It’s been my pleasure.

 

 
