Emmy-nominated score mixer, recording engineer and producer Phil McGowan has become a recognised face in the music scoring industry, specialising in recording and mixing music for films, television shows and video games. Over the years, McGowan has collaborated with numerous prolific composers and is the score mixer on Netflix’s Ozark, for which he received an Emmy nomination in 2020. His other work includes the martial arts comedy-drama series Cobra Kai, Roku’s Weird: The Al Yankovic Story and Season 3 of Paramount+’s series Star Trek: Picard. He also owns McGowan Soundworks, a music production company in Culver City, California.
As well as having extensive knowledge of audio production and surround mixing techniques, Phil is an expert in mixing in various formats from mono to Dolby Atmos, orchestral recording, Pro Tools, complex stemming, working with music to picture, and complex file management. We caught up with him to find out how he found his place in the industry and achieved such success, and why LiquidSonics reverbs have become such a staple in his set-up.
So, how did this all kick off? Did you sit in school thinking one day I want to mix?
Oddly enough I think I’m still maybe one of the only scoring mixers I’ve met who wanted to do this from the outset. My parents were both musicians (not professionally). My mum has a music education degree and taught piano lessons for a bit when I was a kid. My dad has been a hobbyist keyboard/organ player for a long time, so I grew up reading my dad’s keyboard magazines as a kid and just thinking in the nineties ‘oh my God, all these racks of synths and stuff, that looks really cool’. At the same time, I was a space nerd, so I always loved control rooms and lots of buttons and knobs.
Then in second grade I started piano lessons and got into music and then switched to saxophone so I could be in the junior high band. Around that time too, (I think I was 12 or so) I started playing around with my dad’s little Mackie mixer for his keyboards, read the whole manual front to back, learned how all the routing works and things like that. I then eventually started to get into music software in high school and realized that’s what I wanted to go into so that’s when I looked at going to Berklee, which is where I eventually went in 2006.
When I started out, I wanted to be a composer who mixed my own music, played my own music – I had this whole pipe dream of doing everything myself. Then, when I got to a place like Berklee where there’s so much talent, I realised I was only a decent saxophone player. I was okay at writing but realised I didn’t want to write professionally, with the deadlines and things like that, and that mixing and recording were really my forte, what I was best at.
I remember as a kid, in the mid-2000s, poking around IMDb and looking at music credits and I was like, ‘oh, there’s an engineer who records and mixes film music, that sounds like a cool gig’. I wanted to be a score mixer and I tried to contact as many engineers as I could, but back then not everybody was on social media so I could barely find anyone’s email. I think I emailed five mixers – two got back to me, including Greg Townley, who gave me a rundown of what I’d need to know if I wanted to go to LA and stressed that ‘LA is the mecca of mixers, it’s gonna be really tough when you come out here’, trying to put the fear of God in me. Then he said if you ever end up coming out, give me a call and we’ll meet up.
I drove out to LA in March 2010 and ended up going to Greg’s studio to grab a coffee and he happened to need an assistant at the time. I really lucked out. He sat me in front of one of his big dual-rig 300, 400 track trailer mix sessions and just had me tell him, ‘oh, this is routing to there, these stems are routed there etc etc’.
He felt like I knew Pro Tools well enough so he hired me on the spot as his assistant, which was really lucky. I worked with him for three months and then he didn’t have any projects for a period of time and couldn’t continue to hire me so I bounced around assisting various people here and there. Then, shortly afterwards I started with Trevor Morris as his composer assistant and it was after he had built a facility in Santa Monica that had a mix room and a bunch of composer studios that I eventually fully moved over to in-house engineering and mixing.
He also started renting the mix room out to other composers and this is really where I learned a ton. I assisted Jim Hill with a lot of Abel Korzeniowski’s music, Chris Beck booked the room a lot, so that’s how I got to know Casey Stone and I learned a ton from Casey assisting him in a bunch of Chris Beck movies. Shawn Murphy came by once or twice, so I got to assist him a couple times. I would manage the room, book the room and then run Pro Tools, run the console and get to watch them mix. I learned so much from those guys and they all work a little bit differently even though they’re all mixing scores. Then, in 2015, I decided to go freelance and I’ve been doing that ever since.
Do you think there was a moment when you were doing all those gigs that there was a certain person you met or a certain gig you worked on that got you that break that you needed to get your name out there and people were thinking about working with you?
I think that’s been happening for the last few years. I met Danny (Bensi) and Saunder (Jurriaans), the guys who do Ozark, a few months after I went freelance. They were doing a film with STX and the head of music at STX was an old friend of Trevor’s. I was wrapping up a synth session with Steve Tavaglione at Trevor’s studio and Jason Markey called Trevor and was like, ‘Hey man, I know you got that great mixing studio, we have these new composers on this, the first movie we’re doing, but it needs to be mixed and we don’t have a lot of money.’ Trevor let him use the studio for free, told them to hire me and that I’d give them a good rate and that ended up being for The Gift, which was a thriller that came out in 2015. I’ve now done 40+ projects with Danny and Saunder since then, I’m their main mixer. Of course, Ozark was, from a PR standpoint, what started to get people to start to notice me, especially after I got my first Emmy nomination for that.
So that was your breaking through the gravity moment?
The beginning of it, yeah. Danny and Saunder and then Zach (Robinson) and Leo (Birenberg), the guys that do Cobra Kai, are the two clients that probably make up 65% of my work these days. The rest is a scattering of other people that find me and hire me.
I often like to meet with people just getting out of college who want to get into post production and try to give them whatever advice I can come up with. I didn’t necessarily plan it this way, but I came out here and decided to spearhead into one specialty. Score mixing really is a pretty niche gig. There are probably only around 150 of us globally doing most of the work, between the folks at Abbey Road and Air, the freelance people in London and then all the freelance people here (in LA). There’s not really a ton of score mixers. So, once you become known as someone who can do the gig, the momentum keeps going because there aren’t many people you can call or trust to fill in.
Stylistically you have to know scores, which can be any genre these days, and then you need to know how to technically set up your templates and be able to stem properly. You’ve got to be able to work quickly. You’ve got to make sure that everything is correct before it’s sent to the dub because the last thing you want is a technical issue that means you have to go back and fix stems for 50, 60, 70 cues. It’s hard to find people who can just get the gig done and also get things sounding good.
Does it take long for you to step out of that rhythm that you’ve got with your regular clients when you get one of the more ad hoc gigs? Do you find it easy to reset and come at something fresh?
It definitely can be nerve wracking sometimes. Some of my clients’ shows stylistically don’t change too much, but right now Zach and Leo are doing this new show called Obliterated which is from the creators of Cobra Kai and it’s like an action comedy set in Vegas. They’re trying to save the world and a lot of the score is EDM with orchestra.
I couldn’t just do the typical Cobra Kai template. I started with that, but they said we need it harder, we need it brighter. I put bus compressors on all my stems and then key-inputted them from all of the source tracks so that I could get that bus compression sound on stems, because we can never rely on a six-mix, eight-mix, whatever, master mix compressor with film – it’s all about the stems. So, I had to figure out this whole new approach and it ended up working really well.
So, you’re not only doing EDM in that score, you are also trying to get orchestra into it as well?
Yeah. There’s a really cool plugin that can basically mimic side chain compression, but on individual tracks. I was putting that on orchestra where they wanted it to really pump, so that was really pushing me. I love dance music. I love EDM so I’m relatively familiar with the genre, but I hadn’t mixed a lot of it myself and this wasn’t supposed to be straight up dance because it was still a score, but it was a really fun challenge and I’m excited for people to hear it.
I’m good friends with those guys but there was definitely a moment where I was a little nervous wondering if they were going to have to find someone else.
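(The plugin Phil mentions isn’t named, but the underlying technique – sidechain, or key-input, compression, where the level of one signal drives gain reduction on another – can be sketched briefly. The Python below is a deliberately simplified, illustrative model, not any particular plugin’s algorithm: an envelope follower tracks the key signal and ducks the target track whenever the key exceeds a threshold, which is what produces the EDM-style ‘pump’ on the orchestra.)

```python
import math

def sidechain_duck(signal, key, threshold=0.2, ratio=4.0,
                   attack=0.005, release=0.1, sample_rate=48000):
    """Duck `signal` based on the level of `key` (sidechain compression).

    Both inputs are lists of float samples in [-1, 1]. The envelope of
    the key signal drives gain reduction on `signal`, so the target track
    'pumps' against the key without the key itself being processed.
    Parameter names and defaults here are illustrative only.
    """
    # One-pole smoothing coefficients for the envelope follower
    att = math.exp(-1.0 / (attack * sample_rate))
    rel = math.exp(-1.0 / (release * sample_rate))

    env = 0.0
    out = []
    for s, k in zip(signal, key):
        level = abs(k)
        # Fast rise (attack) when the key gets louder, slow fall (release)
        coef = att if level > env else rel
        env = coef * env + (1.0 - coef) * level

        if env > threshold:
            # Gain reduction in dB above threshold, scaled by the ratio
            over_db = 20.0 * math.log10(env / threshold)
            gr_db = over_db * (1.0 - 1.0 / ratio)
            gain = 10.0 ** (-gr_db / 20.0)
        else:
            gain = 1.0
        out.append(s * gain)
    return out
```

Fed a sustained pad as `signal` and a four-on-the-floor kick bus as `key`, this attenuates the pad on every kick hit and lets it swell back between hits – the same idea, per track, that a key input on a bus compressor gives you across whole stems.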
Did you feel a bit of imposter syndrome when you first started looking at it?
Yeah. I still get imposter syndrome all the time but it was worse than normal because with most clients, especially if they’re doing scores that are within the broad wheelhouse of what a lot of scores sound like stylistically, I’m pretty confident. I know how to mix that sound and make clients happy. But that one was really pushing me, I really had to think of a new way to approach it but it paid off in the end.
It sounds like you had to reinvent your mix template just for this gig?
Yeah. And it’s fun now. I’m probably going to do a YouTube video on that template because any time I get a score that’s basically pop rock without vocals for the underscore, a slightly more compressed, slightly punchier template like that will work great.
We talked about your journey earlier, but what drew you to scoring? Was it just because you’d had that experience in an orchestra when you were playing an instrument or was it that you just loved that big orchestra thing?
Yeah, I was a huge film score nerd.
I wanted to be a composer for a while but I think it started with video games. The Myst and Riven games were the first time I really noticed music – those are seminal soundtracks, among the first times you actually had moodier music in video games.
I listened to the soundtracks on repeat and then really started getting into JNH (James Newton Howard) and Hans Zimmer and all those sorts of people. I was a soundtrack junkie as a kid and also a DVD bonus feature junkie. This was before YouTube had really started – there was no Pure Mix, no Mix With The Masters, none of that stuff, so for me, if I liked a movie, I would get the deluxe special edition two- or three-disc bonus edition. I wanted that eight-minute featurette on John Williams at Abbey Road for Star Wars or something like that. I would watch those over and over again and try to get whatever I could out of them, and that was very inspirational. I had an interest too in sound design and post sound – I just loved Hollywood sound and music.
So I’m guessing you’re in the box these days. Was there ever a time you weren’t in the box?
Not in my career. I grew up in central Maine, so literally the opposite corner of the country from LA, and there was this guy in my town who did a lot of live sound for local gigs and things like that, and we would volunteer as stagehands at concerts. I think I was 12 or 13 when I started helping him out. I did a lot of live sound – that’s really where I started in audio – and that was all analogue. The Yamaha PM1D had come out by then but I never touched it as I was just assisting at concerts and things like that. So I knew analogue decently well from a live sound perspective, but it was an interesting time to grow up because I started at Berklee in 2006 and that was right in the middle of the big transition to home recording, when DAWs were getting bigger. Oddly enough, my high school band director had a friend in town who was a recording enthusiast. He had just bought his first 192 and an HD1 system and he gifted me his old Audiomedia III card, so I was 16 when I got Pro Tools for free.
I had that PCI card in a PC with Pro Tools 5 and then I eventually traded that in for a 002 rack with Pro Tools 6. That’s how I started out in high school so I was very fortunate that I had an opportunity to work with Pro Tools and things like that before I got to college. Once I got to Berklee, I’d had a little bit of a leg up at first because everyone else was just starting out and recording stuff where I had been playing around with it for a little while.
Another interesting moment is when I started the MP&E program in the spring of 2007 – that was the very first semester in which Berklee stopped requiring students to buy tape. Before then you had to buy a two-inch reel and a quarter-inch reel for all your projects. You would buy it from Quantegy and it would go in the Berklee tape vault, but at that point, because Quantegy had gone out of business, students couldn’t find tape. I did end up taking the newly created analogue recording techniques class at Berklee with Susan Rogers, who is a legend, and it was all about using tape, which was really fun, but that’s the last time I touched tape. I’ve never used it professionally.
So you’ve virtually been in the box all your career and all your life?
I experienced a lot of the hybrid workflows. I’ve done mixing with an analogue console and I still use some outboard Lexicon reverbs here, but that’s the only outboard gear I have. I would see them in studios around here but, yeah, my mixing is all in the box and for a while it was the System 5 console with Pro Tools feeding that and going back and forth, so it was all digital but with a console. Ever since I’ve gone freelance for the past eight years, it’s been all Pro Tools.
You touched on reverbs there, so let’s talk about them. How do you use a reverb?
The first way I would use a reverb is just putting something in a space, trying to get a sound away from me or get some sort of ambience around things and that’s a technique that I’ve only really discovered and refined later in my career as a mixer.
When you first start out, you just dump long reverbs on everything because it’s really fun. As I’ve become more experienced, it’s about trying to hear the space things were recorded in. When I record, I always have at least a stereo pair of room mics up because I want to be able to have some kind of ambience on sounds, and when I use reverbs that way, you almost don’t want to notice the reverb is there because it should just sound like the room or the space things are being placed in. And then, yeah, the other half of reverb is the fun part, the more overt ‘I hear that big beautiful digital hall’ or whatever you’re going to put it in, or a plate or something like that.
So I use that combination a lot. When I do orchestra I usually have a reverb that’s just giving things a little more space. I get a lot of recordings from smaller rooms in Eastern Europe or sometimes smaller studios here, and I just want to give them a little size and then after that I’ll add that big hall reverb, the digital kind of overt verb for vibe.
Do you ever get stuff in that’s come from two or three sources? Let’s say you’ve had a violin recorded in one room and a cello in another – are you using verb to glue them all together?
Yeah, absolutely. Especially with whatever reverb I’m using as a sort of room sound – I’ll feed that at different levels depending on the size of the room things were recorded in, to try to get them to match. We’re often blending orchestral samples with live orchestra, and if the orchestra is recorded in a much bigger room, sometimes having a convolution verb or a room verb on the live orchestra to match really helps to blend them together.
Would you agree with the notion that reverb can take something that you can tell is fake or synthetic and do some of the trickery to fool the audience a bit more?
Absolutely, because I think what really makes a live orchestra come together is having all those people in the same room playing together. Obviously the performance of them actually hearing each other is the biggest part, but from an engineering standpoint, having that cohesive room sound that they’re all in together, with the room mics at an appropriate level and processed a certain way, is how you get that big Hollywood orchestral sound, for sure.
So, let’s talk about specifics. Which of the LiquidSonics are you reaching for and how are you using them?
Cinematic Rooms is definitely a mainstay in my template. It’s on pretty much every stem because I have duplicates of reverbs as each stem has its own set of reverbs for stemming purposes.
As a side note, I think that setup helps to get a little more variety in the mix because I have slightly different verbs for different stems, so it gives things different spaces and helps get some depth and dimension. I’ve also started playing with HD Cart now. I still use my outboard Lexicons – the surround PCM 96S, because there’s no surround plugin version of those. HD Cart is the closest I’ve come to mimicking what those can do. The medium random hall in the Lexicon is just a special thing, which is why that’s the only outboard gear I have – and it’s just a verb. In HD Cart I have a sort of random hall setting that is getting mighty, mighty close to the hardware boxes, but it’s not quite the same.
Cinematic Rooms is so flexible, it can do a number of different things. A lot of the time it’s there in my template and when I’m working on a sound either my template setting works or I have a bunch of presets I’ve made for Cinematic Rooms that I’ll try. Also the stock presets that are built in are phenomenal. It’s flexible enough that I don’t find myself needing to find a different plugin because it can do so much.
You made a selection of artist presets for Cinematic Rooms Pro, are those your go-tos?
I have presets that are already in my set-up, I have some synth ones that I use when I import a synth stem. Because every project has a slightly different stem layout, I’m usually piecing together parts of different templates from different projects I’ve worked on, at least as a starting place, and then I’ll tweak them from there. For my piano reverbs and for sounds like that, it’s definitely all Cinematic Rooms. There’s one preset in there called Taj Mahal, and that was a [Lexicon] 960 preset that I was told by Shawn Murphy is the verb used for that classic Tom Newman/JNH nineties piano sound. I was like, oh, I gotta use that if you want that super verby, nineties cinematic piano sound, so I have a preset I made based on that – I tried to mimic that sort of very bright, long four second piano reverb. So that’s in my template for pianos and things like that.
For percussion, I have a medium chamber that I made which is my usual go-to but if that’s not working then I’ll just tweak it, usually starting with the RT [reverb time] and then maybe the filters, which are right there, so if it’s too bright I can just filter a little high end off or something. I also might change the pre-delay if I want a slightly different effect there. So yeah those presets I have and that’s the starting place for most of the sort of big ‘food groups’ that I have in my template.
Why do you think that LiquidSonics has become the industry standard for reverb now in the post world?
I think it’s because it’s just easy, it just sounds great from the start. You don’t really have to work that hard to get them to sound good.
All of the presets that I’ve provided for Cinematic Rooms were just presets I found already in the verb and I made a few tweaks to them to suit me and then they often just work, they sound great, they blend into the mix well and they don’t build up too much. With a lot of reverbs you have to fight low end build-up or low-mid build-up, or it might be too sizzly. With Cinematic Rooms, I’ll just be thumbing through different presets while something’s on loop and nothing sounds bad per se, I can really just listen to the character and the size of the room, the shape of the room. Nothing usually gives me that ‘oh God, that’s going to muddy up the mix’ feeling or anything like that. Pretty much all the presets I come across, for different sounds, are really good starting places.
In post production, both for scoring and for sound design and re-recording mixing, we often aren’t given a lot of time to really fiddle around with sounds too much because we have a lot of material to get through in a short amount of time, so we need tools that are going to get us to where we want to be quickly.
And talking about going through the Rolodex of presets, do you use the parameter lock much?
Yeah, mostly just for the RT because I roughly know I want a second and a half on this, or if it’s a big cinematic vocal I’m going to want it to be four seconds or something, so I’ll just lock that and then everything else changes with the different presets and I can just shift the tone to my liking.
While we’re talking about larger format mixing, have you found the way that Cinematic Rooms is laid out helpful for Atmos?
Yeah. For me the biggest thing is that all the presets in different formats sound the same because I’m shifting between formats all the time. Every now and then I have to mix in stereo but if they’re going to be dubbing in a surround format, I like to mix the score in that same surround format. I’m in 5.1 and 7.1 a lot so I’m going back and forth between those two formats and then sometimes I’m mixing in Atmos and with Cinematic Rooms I can derive all my different templates into those formats using the same presets and I never really find myself having to tweak them too much between the formats.
If I made a template in 5.1 and then need to build a 7.1 version, all of the reverbs generally translate the same even though they’re in different formats. For a lot of projects being dubbed in Atmos, I’ll just do the score mix in 7.1 because it’s easier for the dub to handle those stems than delivering object stems and pan data. Cinematic Rooms will just recall the same preset in 7.1 or 5.1 or in 7.0.4 for Atmos and they pretty much just work the same way in all formats, which is incredibly useful.
LiquidSonics reverbs have been designed so they fold down absolutely phase accurately. Are you using fold down formats when you’re creating outputs?
Always, yeah. And honestly, even before Covid, 90% plus of my clients were remote, even if they were in LA, because usually, even if they’re near me, they’re too busy writing all the cues I’m going to be mixing days later, so often they don’t have time to come to my studio to review mixes. Therefore, many times I’m just delivering them fold downs of my mixes, putting them on Dropbox or whatever, and then they email me notes and we go back and forth. That’s how I often work with many of my clients. There are obviously more complicated projects where it’s better to have them in the room next to me, but if it’s not as complicated that system works OK.
But yeah, stereo compatibility is hugely important for me. Everything has to work in stereo and I’m also aware that even though I use my own little fold down algorithm for my client review stereos that is similar to what most home theatre receivers or Blu-ray players will do, I know that my surround mixes will eventually get folded down as a part of the whole dub mix to stereo and that’s what most people will hear on their devices. The streaming services might do their fold downs a bit differently than Blu-rays, and DVDs are sometimes using different algorithms as well so fold down compatibility is huge. It has to translate down because I can’t deliver a separate stereo mix that’s going to somehow be dubbed alongside the surround mixes because they’re going to use my surround stems in the final mix and those have to fold down to stereo properly.
Before we had a lot of surround reverbs available, I would have a front and back instance of the same stereo reverb. I would feed both the same input from a single stereo bus and get a sound in the front instance. After getting that vibe I’d copy those settings to the back one and maybe make the RT and the pre-delay slightly longer so that they’re a little bit different, and usually that worked OK and sounded cool in surround. But then I always had to check the fold down meticulously because I didn’t always know if those were going to fold down together correctly, as the back verb is essentially the same preset with the pre-delay and RT tweaked, and the back reverb is going to sum with the front at minus six or whatever the fold down algorithm is doing. This setup would often sound great in surround, but then I’d have to tweak the back one to get it to fold down properly without sounding phasey in stereo.
I know a lot of score mixers were using basically quad reverbs with two stereo engines panned front and back because we just didn’t have many options. Now that we have a reverb that’s flexible between formats, sounds great, and has a lot of different character in surround natively, in many formats, that’s hugely beneficial for anyone mixing in surround.
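(For readers unfamiliar with the fold-down arithmetic Phil keeps checking: a consumer 5.1-to-stereo downmix sums the centre and surround channels into the left/right pair at reduced gain. The exact coefficients vary between services and devices, which is exactly why compatibility has to be verified – the sketch below uses the common -3 dB (≈0.707) figure purely as an illustration, not any specific platform’s algorithm.)

```python
# Illustrative ITU-style 5.1 -> stereo downmix. The -3 dB centre and
# surround gains are a typical choice; real services and consumer
# devices use varying coefficients, so treat these as an example only.
C_GAIN = 10 ** (-3.0 / 20.0)   # centre channel gain, about 0.708
S_GAIN = 10 ** (-3.0 / 20.0)   # surround channel gain, about 0.708

def fold_down_51(l, r, c, lfe, ls, rs):
    """Fold one 5.1 sample frame (L, R, C, LFE, Ls, Rs) to a stereo pair.

    The LFE channel is commonly discarded in consumer downmixes, as here.
    """
    left = l + C_GAIN * c + S_GAIN * ls
    right = r + C_GAIN * c + S_GAIN * rs
    return left, right
```

You can see why a rear reverb return that shares material with the front one is risky: in the downmix its samples land on the same left/right outputs as the front return, and any timing offset between the two becomes comb filtering in stereo.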
Try LiquidSonics Reverbs For Yourself
For transparent music and post reverbs up to Atmos 9.1.6 nothing comes close to Cinematic Rooms. Not only does it sound wonderful, it was built from the ground up to support the modern workflow and fold down requirements of professional composers and mixers.
Don’t forget to check out Phil’s presets in Cinematic Rooms, or for something with a distinctly vintage vibe why not explore the heritage reverbs available in HD Cart? It is the only recreation of a classic vintage digital reverb with extensions to support contemporary Atmos mixers without needing to worry about stereo to surround workflow hacks!