Artist interview

Ron Fish On Top Games Sound, Imagineering, And Reverb

16th January 2024

Ron Fish, a distinguished composer and accomplished sound designer, has earned many awards for his contributions to video games and theme park entertainment. Renowned for his work on blockbuster video game franchises like Batman: Arkham Asylum, Batman: Arkham City, and God of War 1, 2 & 3, Ron’s creative prowess extends across diverse mediums. His notable tenure at Walt Disney Imagineering saw him shaping the auditory experiences of theme park attractions around the world, where he skilfully composed music and designed immersive soundscapes for many of the featured rides.

We had the pleasure of spending some time with Ron to talk about his work, particularly in his role as an ‘Imagineer’ and why he chooses to use LiquidSonics Cinematic Rooms Professional as part of his set-up.

Talk to us about how this all started. Were your parents musicians? Did you watch movies and think, “I want to score them one day”? 

My mother was a classical pianist and she would be playing all day long. As a matter of fact, when I was in the crib, she would put me right next to her and be playing all the time, so I was always hearing the classics: Brahms and Chopin, Beethoven, all those. Not so much twelve-tone music or anything more modern, but a lot of the classics. Then this band came along called The Beatles and I was absolutely stunned by what I was hearing. I went to see them live, which was just unbelievable, and I saw all these women screaming, falling down and fainting. I thought, this music is great, and so is this whole scene here; this is quite remarkable. So I started playing drums, and then my mother insisted that if I played drums, I should also play an instrument that included harmony and melody.

I took piano lessons for many years, I was noodling all the time on the piano, and I ended up playing drums professionally. Then I got accepted to Berklee College of Music and I decided that I needed to start learning more about the theory behind why I was noodling on the piano, why I was playing drums, what I was doing.

I studied there for two and a half years and that was eye-opening, because I wanted to go and study with this guy called Mike Gibbs, who was an arranger for the Mahavishnu Orchestra. Mike Gibbs was amazing. He was stunning as far as arrangement and understanding how we accept music (that really is the word), not just moving chords in block formations or anything like that, but how we actually physically and psychologically understand sound. That was great. I learned a lot from him and then I decided it was time to leave and actually try to make a living doing it.

I moved to Los Angeles, where I worked as a drummer in a band because I could make a living doing that, and I studied computer programming on the side. I studied that for three years and it got me into a thing called sequencing, which at that time was in its infancy. There was this concept that you played using the MIDI language and it would actually record everything, which was great for me because I’m not a great keyboard player. That gave me the option of being able to move music around the way I wanted to hear it, which was really eye-opening for me.

Then the leader of the band I was in suddenly got this gig at someplace called Imagineering. I didn’t understand what it was and he said, ‘It’s the creative hub of all the Disney theme parks. They have an audio division’. I didn’t even know that theme parks had music!

He invited me in and I worked there for eight years. That was a great experience. I was getting my studio chops together, so I was still drumming and I was also working the Disney gig. It was 80-hour work weeks, but I took that money and built my studio.

A ‘studio’ then was not what it is today, with my Mac Pro tower, gigs of RAM, etc. You got one sampler, a Roland sampler, and you were just in hog heaven; the Emulator IV was a biggie. And then GigaStudio came out and I went, ‘this really opens the world, we’ve got to be able to get these bigger computers and then play them and control them’, so that was terrific. While I was at Imagineering, I started writing music. The position did not include music composition, as there wasn’t a department for that. While I was there, my skills as a sound designer were put to good use, but I simultaneously started to write music for some of the shows. From there, I did something called DisneyQuest, which was the very first interactive theme park that Disney decided to do, under Michael Eisner’s regime.

I realised that I had some talent for writing music against more interactive experiences rather than linear only, so I left the company and started working for a video game company called Atari. I did Green Army Men. I did Superman for 3DO when they were around, and I really enjoyed the experience of writing on a blank canvas because, at that time, they didn’t give you the game to play before you scored it. It didn’t happen that way; somebody would tell you, ‘there’s this big guy and you’re fighting him in a defined location. Go write something’.

I was at Disneyland Paris in June and when you walk around Disney, there’s always music playing, like incidental music, and as you transition around the park into each theme area you never feel the joins happening. Was that the kind of stuff you guys were involved in? 

Everything in theme parks is different in this way. You have to be aware of where you are. I’m talking about inside the ride, outside the building, inside the building (but not just in the ride) and also just the general land. Sounds or music that you’re hearing all need to work in concert. When it came to working on, let’s say Animal Kingdom, I was given this directive – ‘go put together music from these different areas in Africa. Go find some artists. Go research this thing for months on end and start putting together these lists in Pro Tools so we can listen to what might work against each other’.

So, what you do is choose different artists that complement each other in the specific area you are in. If I’m in the Serengeti, if I’m in the Congo, whatever part of Africa, you go and find what is distinct to that area, with the thought of, when you’re putting this area against that area, how close are you? What’s the speaker placement? Does one, for instance, clash with another? Clashing is the biggest problem you can have because there’s always bleed of some sort, so you have to look at the speakers, see how far those are going to reach and make sure, or try the best you can, to marry the two so they work within each other. What I try to do is consider how close you are in that speaker system to the actual ride in that world. Tower of Terror, for instance, had the spooky kind of music, obviously, but you don’t want that to be playing right next to Rock ‘n’ Roller Coaster, which is anything but spooky music!
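As a rough illustration of the arithmetic behind that concern – a minimal sketch with assumed figures, nothing like the real coverage modelling Imagineering would use – free-field level from a point source drops about 6 dB per doubling of distance, so you can estimate how loudly one zone’s speakers arrive in the next zone:

```python
import math

# Back-of-envelope bleed check between two park zones. All numbers are
# assumptions for illustration, not real speaker or SPL data.
def spl_at(distance_m: float, spl_at_1m: float) -> float:
    """Free-field SPL at a distance, falling 6 dB per doubling (point source)."""
    return spl_at_1m - 20 * math.log10(distance_m)

bleed = spl_at(distance_m=25.0, spl_at_1m=95.0)  # zone A's speaker, heard 25 m away
zone_b_music = 70.0                              # zone B's own playback level (assumed)
print(f"Zone A arrives in zone B at {bleed:.1f} dB SPL vs B's music at {zone_b_music} dB")
```

In this toy example the bleed lands within a few dB of the neighbouring zone’s own music, which is exactly the clash Ron describes having to design around.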

Do you do things like match keys around certain areas so that they don’t clash with each other when you walk between one area and another?

I do. For me it was very important that we don’t have a key of A flat right next to A. So, you choose the songs or the pieces, orchestral or whatever it is, so that they match.
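This isn’t Ron’s actual process, but the A flat against A problem can be made concrete with a tiny sketch: score how far apart two keys sit on the circle of fifths, a common rough proxy for how badly they clash.

```python
# Illustrative sketch only: distance around the circle of fifths between two
# key centres (0 = same key, 1 = close neighbours such as C and G, larger =
# more likely to clash when both bleed into the same space).
PITCHES = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']

def fifths_distance(key_a: str, key_b: str) -> int:
    semitones = (PITCHES.index(key_b) - PITCHES.index(key_a)) % 12
    fifths = (semitones * 7) % 12   # map semitone steps onto fifth steps
    return min(fifths, 12 - fifths)

print(fifths_distance('Ab', 'A'))   # 5 -> clashes badly, as Ron notes
print(fifths_distance('C', 'G'))    # 1 -> close neighbours, safer bleed
```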

Where does the music start? Is the hero of it the ride music and then you kind of come out in concentric circles?

That’s exactly it, until you bump into the rest of the space that it’s occupying. So, usually, you’re not the very first piece of architecture to come up; you’re not calling all the shots. You’re fitting into something that’s already there most of the time, so you have to know what’s been playing around there. Inevitably you’re bumping into something sonically. That same thought process really plays out in the creation of a soundtrack for the ride. Here’s where a ride is completely different to a video game – a ride occurs in space, a video game occurs in your house, so you’re controlling what you’re hearing in a video game. In the ride, there’s a lot less control. The word ‘bleed’ comes up, and every time you have a conversation with the engineers and the director, you’re considering, ‘I’m this close to this room. Am I going to hear what’s playing here in that room?’ At some point when you’re crossing that area between one room and another, inevitably there is some bleed.

Let’s bring this back to what you love doing, which is writing music. How much music would you have to write for a ride experience? Was it all running on loops or was it long form linear? 

Pacific Rim had linear media and then you moved on, in a moving vehicle. So you would have this experience, you’d have all the sound, you’d have the music playing, etc., and then you parked and you saw this incredible 3D movie on a huge screen, so you experience that and then you move on; that’s linear. When you’re moving through, first of all, time and space, which is not at all like a movie, you’re never sure exactly how long it takes for a vehicle to pass from point A to point B. And why is that? Because you can have three 300lb people sitting in the car or nobody sitting in the car as it’s going, and that’s going to make a difference in how fast the traverse is from point A to point B. So, that’s an interesting challenge. You have to write the music so that almost all of the experience is written, and then there might be a tag at the end of it, you’re not quite sure how long, so you have to make sure that this music is playing from this point to that point in exactly that amount of time, and for any ‘slop’ that might happen afterwards, you have to cover for that. So that’s one thing to know – the time you’re in there expands and contracts, and I don’t mean by minutes, but in music, with BPM and all this stuff, if I know exactly what the BPM is and I’ve missed my point, when do I leave? When do I get out of that track? So, generally speaking, every ride is different; they’re always a challenge.
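As a minimal sketch of the timing arithmetic Ron is describing – tempo and traversal times are assumed figures – you score whole bars up to the fastest plausible traversal, then let a loopable tag absorb the remaining ‘slop’:

```python
# Hypothetical ride-timing sketch: a vehicle takes between t_min and t_max
# seconds to traverse a scene, depending on load. All values are assumptions.
BPM = 120
BEATS_PER_BAR = 4
t_min, t_max = 45.0, 52.0   # fastest and slowest traversal times (assumed)

seconds_per_bar = BEATS_PER_BAR * 60.0 / BPM   # 2.0 s per bar at 120 BPM

# Only whole bars guaranteed to finish before the fastest vehicle exits.
scored_bars = int(t_min // seconds_per_bar)
scored_len = scored_bars * seconds_per_bar

# The worst-case 'slop' the loopable tag at the end has to cover.
slop = t_max - scored_len
print(f"Score {scored_bars} bars ({scored_len:.0f}s); tag must loop for up to {slop:.0f}s")
```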

And the other one is, again, the bleed. So, if I’m in this particular point in time, I want to experience this music, here, now. If I’m in the next room, or the next area, next space, I’m again not going to be writing in A flat and go to A. You want relative keys, unless you want the break. As long as you understand what it is you’re trying to achieve, and you’re talking to the creative director, it’s almost like spotting. But here’s the difficulty, it’s not something you can see on your computer, although you can get a general ride-through if the team is cool and they know how to do it.

So you’re kind of scoring to an animation of sorts?

Yeah, kind of; that’s what I’m doing with a project I’m working on now. There are things firing off in the room that are loops. Because you’re waiting on intent, you’re waiting on data input, so that needs to loop, and it needs to loop fine with what’s going on and what came before. Then there’s one-shots, maybe for a flourish or something really cool; again, that’s dependent on data input. And there’s the other one – once it fires off, then that experience can be a lot easier to predict. What’s non-predictable is data input from anybody. Now, to me, I think we should head towards using the brilliant engines, the middleware for video games. In video games, we’re so far ahead of where we are in theme parks, generally speaking, because these programs can track exactly what the BPM is. You can tell it eighth notes, sixteenth notes, whatever. It’ll track the piece of music and at any point in time, if you need to switch something, it’ll move through a transition state to the next piece. It’s just perfectly smooth, everything is working great. In video games, you don’t want the same thing to play every time if you can avoid it. In theme parks, they want the same thing to play every time.
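Game audio middleware such as Wwise or FMOD handles this by deferring a switch request to the next musical boundary. Here is a minimal sketch of that quantisation idea (greatly simplified – real middleware also manages transition segments, fades and sync points):

```python
import math

# Quantise a music-switch request to the next bar line, assuming the cue
# started at t=0 and runs at a fixed tempo. The boundary could just as
# easily be an eighth or sixteenth note, as Ron mentions.
def next_transition_time(request_time_s: float, bpm: float,
                         beats_per_bar: int = 4) -> float:
    bar_len = beats_per_bar * 60.0 / bpm
    return math.ceil(request_time_s / bar_len) * bar_len

# A switch requested 13.3 s into a 120 BPM cue fires at the 14.0 s bar line.
print(next_transition_time(13.3, 120))   # -> 14.0
```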

Coming back around to the music, you’ve talked about some rides that you’ve done that are pretty bedded into things like movies or games that people will know of. Does that influence the composition side of it? Do you have to go back to that reference point? Let’s say it’s the Batman movie, for example, or the Batman ride. People turning up will generally go, I know what the Batman theme is, or I know what the Star Wars theme is – is that informing your compositional decisions or are you given free rein to just do what you like?

I would say that it all really depends on who the creative director is and what the end goal is. Let’s talk about Batman. I was working as an audio director at Disney and when that project ended I sent out a little flare over to David Cobb, an amazingly creative guy who was working at Thinkwell Entertainment.

He calls me up and says, ‘we’d like to talk to you about doing Batman, but I want you to write up some concepts for this thing’. I was always a fan of Danny Elfman, but I’m not the kind of guy who’s going to transcribe exactly what was in a score, then just change a few notes and it’s now mine. So I started writing what I thought the score was. I don’t need micromanaging, but I’m somebody who has a clear vision of what I think the creative team would like. I asked him for a guide track so I knew what they were thinking, and then I would filter that through my own interpretation. So I did the first 30 seconds or so and I walked in there with all the people in charge of Warner Bros. World, played it and asked, ‘what do you think?’

And David turns to the other people in the room and goes, ‘and that’s why I asked this guy’, because I had captured what Elfman was about to a certain degree, but it wasn’t Elfman. It has the feeling of working in that area of tonality and that feeling, but I’m not dropping in anything of his, and I was really careful not to use any of his themes, not even touching on the theme, but it still gives you a feeling like it’s Batman.

As a composer, what do you think the role of reverb is in your work?

I have had the Lexicon native plugins ever since they developed them. For me, those reverb plugins give you what is the essential Lexicon sound. The Lexicon sound brings me back to earlier recordings that used reverbs with a modulated tail and a certain softness to them. So that reverb plugin and its different presets tend to have some kind of ‘blur’ to them, if you will. There’s a little mud in there that’s built in. It’s okay if you’re just going to send everything through it to be used as sub-groups. On the other hand, if you use reverb plugins for different groups, like the first violins, second violins, violas, cellos, brass, so they’ve each got some kind of plugin on them, not as a group but as individual instruments, the Lexicon plugin tends to blur these instruments and the individual characteristics of these instruments lose their individuality. I’ve found that situation to be similar with other reverb plugins as well.

Two things about LiquidSonics plugins that I really enjoy – one is it’s clean, very clean. The tails are clean. For instance, if I want to de-emphasise the discrete reflections on brass, I can bring down the metallic reflections in the algorithm, which is just wonderful for a clean tail. That gives me the option to use it multiple times in a project without it becoming overbearing with reverb reflections that I don’t want.

That’s one thing. Second of all, I was talking to a group that was working with a composer, and they said, ‘over here we love LiquidSonics’, and I said, ‘I don’t even know who LiquidSonics is, unfortunately; you’ll have to forgive my stupidity here’. And they went, ‘oh, no, no, no, we just think that their Cinematic Rooms is beautiful, we use a lot of the plugins’ and I went, ‘Oh, I’ll check it out in the trial, right?’ So I pulled it down, and I’m working on this 5.1 gig where I have to install and mix and everything. It’s not just write the music and do the sound and all the supervising; I have to actually mix it into the space. It’s going to be 5.1 with five speakers across – not front to back, but an array – and I pulled up all the stuff that I have and none of it worked correctly. Now, what does correctly mean? It doesn’t assign properly across the whole spectrum of the speakers.

The only one that did was LiquidSonics, and I thought, at $400, I wonder if I can get around this? So I pulled up all my surround reverb plugins and not one of them did the job correctly with assigning across all channels. They can get around it by going, here’s two and here’s two. I don’t want to deal with that. I want to deal with one reverb plugin that gives me the depth of field that I want across all speakers, and I want to be able to control that depth of field, which LiquidSonics Cinematic Rooms Professional does. I can keep the reverb nice and tight, or I can make it lengthy, and it doesn’t muddy or murk up the whole piece, so it’s fascinating. I’m a huge fan of Thomas Newman’s string sound and, after much trial and tribulation, the truth of the matter is, in order to get that luscious transparent sound, I’ve found an interesting solution – it’s EastWest Spaces 2 and LiquidSonics Cinematic Rooms. Using con sordino string samples in combination with regular arco, and both of those reverbs together, is the closest I’ve ever got; with that combination I achieved what I was looking for.

How long a reverb plays, how long it actually plays before it closes down, is so important. What I would love for this reverb [Cinematic Rooms] to have is an EQ, a really accurate multiband EQ that is time-related. I would like to be able to apply certain EQ patterns to control that tail and how it drops off in time.
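To make that request concrete – and this is a toy sketch of the general idea, not any plugin’s actual implementation – frequency-dependent tail control amounts to splitting the tail into bands and giving each band its own decay time:

```python
import numpy as np

# Toy sketch of time-related multiband tail shaping: split a stand-in
# reverb tail into low and high bands and decay each at its own RT60.
# The one-pole band split and the noise-burst 'tail' are purely illustrative.
sr = 48000
t = np.arange(sr * 3) / sr            # 3-second tail
tail = np.random.randn(t.size)        # noise burst standing in for a tail

rt60_low, rt60_high = 2.5, 1.2        # assumed per-band decay times (s)
def decay(rt60):
    return 10 ** (-3.0 * t / rt60)    # gain curve reaching -60 dB at rt60 seconds

# Crude one-pole low-pass to split the bands.
alpha = 0.05
low = np.zeros_like(tail)
for i in range(1, tail.size):
    low[i] = low[i - 1] + alpha * (tail[i] - low[i - 1])
high = tail - low

shaped = low * decay(rt60_low) + high * decay(rt60_high)
```

The low band now rings on while the highs die away faster – the kind of control over how the tail drops off in time that Ron is asking for.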

Do you use reverbs both as rooms and as effects, or is there just one way you use reverbs as a composer?

It depends what style I’m writing in. If I’m writing an ambient track that’s ethereal in nature and I’m not trying to copy real instrumentation, then I tend to go with really bizarre reverbs. Interestingly enough, I don’t necessarily consider Cinematic Rooms to be a bizarre reverb. It’s a room reverb. So, when I use reverbs, if I need something that’s just going to have a really wacky tail, and it needs to be talking back and forth to itself via delay lines and everything like that, I’ll use a reverb that doesn’t pretend to originate in the organic world, so it’s not trying to copy anything.

One of the biggest issues I’ve had – since I’m not just doing an orchestra, but often also the dialogue and the sound design, and I’m mixing the whole thing – is that one of the hardest things to emulate is outdoors. Getting an accurate outdoor space is tough: a backyard is tough, an alleyway, a street, pavement on the street, a non-paved street, all these kinds of things in a city, an open field.

I don’t believe there’s been enough attention paid to those. If I’m doing production work, I’m actually trying to match what I’m seeing on the screen; somebody had to do a voiceover and now I have to match that. That’s pretty tough to do.

In Cinematic Rooms, I’ve noticed that I can get a room really well. I can capture a room sound, which is really neat. Then controlling it is the biggest thing. I mean, you can make a reverb tail shorter – yeah, anybody can do that. Controlling how quickly the sound comes at you and then dissipates, and at what frequency, etc., that to me is everything about a reverb. Your question was, am I using it as an effect? In short, it depends on the piece of music being written.

An Extraordinary Journey

Ron’s passion for music, from his early influences to his innovative work in video games and theme parks, is a testament to his immense talent and dedication. His approach, blending traditional methods with cutting-edge technology like LiquidSonics Cinematic Rooms Professional, showcases a relentless pursuit of sonic excellence. Ron’s story is an inspiration, vividly illustrating how a blend of talent, hard work, and a keen ear can lead to remarkable achievements in the world of sound.

Try LiquidSonics Reverbs For Yourself

Whether you’re scoring professionally or just starting out learning the craft, you’ll need a transparent and flexible contemporary room reverb for the job. Cinematic Rooms comes in two editions, standard and professional – both are powered by the same stunning algorithm that has now become a true industry standard in the world of post production and score mixing.

All of the LiquidSonics reverbs are available to try for free for 14 days, so to hear them for yourself just head to our demos page to drop a code into your license manager and pick up the installers from the downloads page.