Artist interview

The Art of Spatial Orchestration: Inside Justin Gray’s Groundbreaking ‘Immersed’ Album

22nd September 2025
Composer Justin Gray leaning on a bass guitar in front of a brown wood background.

Justin Gray isn’t just mixing in Dolby Atmos; he’s composing for it. As one of the industry’s leading immersive audio engineers and educators, Justin has mixed thousands of songs in spatial formats. But his latest project, “Immersed,” represents a fundamental shift in how we might think about music creation: an album conceived, composed, and produced from the ground up for 9.1.6 speaker playback. This is the ultimate “Atmos-first” production, where spatial positioning isn’t an afterthought but the genesis of the creative process itself.

Justin pioneers what he calls “spatial orchestration,” a compositional approach that elevates three-dimensional placement from a mixing technique to a core musical element, as fundamental as melody, harmony, or rhythm. In this new paradigm, the location of instruments in space becomes part of the score itself, with spatial relationships between musical elements carefully composed rather than simply mixed. It’s a radical reimagining of what orchestration means in the 21st century, where the concert hall exists not as a physical space but as a creative canvas that extends in every direction around the listener.

For Justin, LiquidSonics plugins have been essential to realizing this musical vision. From Cinematic Rooms Professional’s ability to create hyperrealistic spaces to HD Cart’s role in adding sparkle without muddying dense arrangements, these tools have enabled him to bridge the gap between captured acoustics and imagined spaces. Working with recording pioneer Morten Lindberg as the project’s immersive producer, Justin has pushed the boundaries of what’s possible when composers think spatially from the very first note. In this conversation, Justin reveals how the future of music production might not be about mixing for more speakers, but rather composing with space itself as an instrument.

Q: Justin, you’ve been quite visible in the world of immersive music mixing lately, but your background is much broader than that. Would you like to give us a quick intro?

A: I’m a Toronto, Canada-based engineer and educator, but also an artist. These days, I’m primarily focused on stereo mastering and immersive audio music production, mixing & mastering. As a composer and musician, I’ve just finished and released a long-term project called “Immersed”: an album composed with immersive audio in mind. It’s about bringing spatial orchestration into the entire production process, spanning composition, arranging, studio recording, and production.

Q: You’ve had such a multifaceted career. How did spatial audio first capture your imagination?

A: The moment that changed everything for me was hearing Howard Shore’s Lord of the Rings scores in their native 5.1 mix on a proper surround system. I knew this music intimately, having studied the scores extensively, but hearing it that way felt like going from black and white to colour. That was my moment.

I then followed the development of Auro-3D and Dolby Atmos releases on Blu-ray, such as those by Morten Lindberg and Stefan Bock. As soon as I understood that Dolby was going to engage with music and binaural delivery around 2015, I knew this could fix the limitations of discrete channel delivery for the consumer. The idea that this would translate from speakers to headphones just made sense to me, even before I’d heard it! So I went all in and set up a 7.1.4 system. There was no music department for Atmos back then. It wasn’t even being advertised. So I hinted that I was starting a cinema dub stage just to get Dolby to actually respond to me.

The Evolution of Spatial Thinking

Q: Was there a moment when you made a conscious decision to move from stereo to immersive for your own music?

A: The first thing I mixed in Atmos using the Dolby Renderer was my 2017 record “New Horizons.” I went back and remixed it from the ground up. Two things happened: First, I realized this is what I’d been hearing all along. It immediately felt more true to how the music feels to me as a creator. Second, I knew how many things I would have done differently if I’d started at the beginning.

The limitation wasn’t just technical; it was in the arrangement itself, the limits I’d put on the music to fit it into two channels. That realization led me to start conceiving the music I’m releasing now as “Immersed.”

Q: Most immersive projects still begin as stereo productions that are later adapted. But “Immersed” was composed from the ground up for immersive formats. How does that change your creative approach?

A: Let me give some context first, because the concept of spatiality in composition isn’t new. There are academic papers identifying spatial approaches in choral music as far back as the 12th century. In the Renaissance, there’s documentation of placing the choir in the rafters or the brass at the back of the concert environment.

Walt Disney built quadraphonic systems across the United States in the 1940s for the release of Fantasia. So we’re building on a long tradition here.

But to answer your question directly: where spatiality really comes in is orchestration. I’m personally fond of density and complexity in music, and the ability to orchestrate in space completely opens up a new parameter that I would describe as spatial counterpoint. When you can distribute sounds in space, the conversations, the antiphony can evolve. The sense of how to build a groove or harmony based on psychoacoustics changes completely.

For this record, I composed mockups using MIDI instruments in my 9.1.6 setup, not even in Atmos initially, just discrete channels. I put that spatial information directly in the scores. Then, when I went into the studio, I already knew where I wanted the instruments and could record them that way.

If I have trombones to the side and trumpets in the back, it’s not just about instrument placement. It’s about the fact that there are fifths across from each other or fourths behind me. I’m trying to find a logic to it. When I’m building a brass part to surround me in the rears, if something’s not thick enough harmonically, I can change it. I can put the unison part across from me and the harmony in the back. The problem’s solved at the compositional level, not in the mix. It is then supported by implementing immersive recording techniques based on the intended spatiality of the music. 

Spatial Orchestration in Practice

Q: Does spatial placement become an additional orchestration tool, like dynamics or timbre?

A: Everything affects everything. When I’m mixing other people’s music, I often use timbre as a clue for where something wants to sit. If something is bright and punchy, it should probably be in front because if I put it behind me, I’m fighting its tonality. It will get darker, be perceived as quieter, and my brain will prioritize it less.

A nice dark, deep brass pad? I put it in the back. It almost feels more natural there because I’m working with psychoacoustic principles. There’s a great engineer named Will Howie who has done amazing postdoc work in a 22.2 format in Japan. He was a McGill graduate and also a teacher there for a while. He and I have had conversations about spatial orchestration, and we both agree on this: low frequencies like to be low, high frequencies like to be high. It’s how we hear the world. When you walk outside, you rarely hear anything with energy below 3kHz above you unless it’s something like an airplane or thunder.

I had pianissimo markings in my scores, but when we were recording with the musicians positioned for the rear speakers, I could hear it in real time in my 9.1.4 setup and communicate very specifically: “We need to be even quieter,” or “That was too quiet, actually; because it’s behind me, you need to punch a little harder.” These are decisions you can only make when you’re thinking spatially from the start and throughout the entire production process.

Q: Did you find it necessary to communicate your immersive vision to the musicians?

A: The short answer is yes, but here’s what’s remarkable: it takes about 30 seconds for a musician of high calibre to sit in the control room, press play, and synthesize an entire universe of information. These are musicians who listen on a deep level, who feel music on a deep level, and are used to reacting to music in live spaces.

I had percussionist Naghmeh Farahmand playing daf and udu on the project. I wanted two daf players having a conversation across the left and right speakers. After she recorded the layers and heard them in the immersive space, without any discussion about technology, she immediately said, “Can you put me back out there?” She went out and instructed me to create a quad arrangement, one in each corner. She internally created a four-part orchestration, imagining what she wanted if there were four players. She came back in, listened, and said, “Yeah, that’s what I would do.” I was jaw-dropped. It was a reminder that it’s not complicated; it’s just an emotional tool.

Immersive engineer standing in front of a microphone array at Humber College, Toronto, where he recorded his "Immersed" album.

The LiquidSonics Workflow

Q: As an owner of several hardware Bricasti M7s, what led you to incorporate LiquidSonics’ plugin reverbs for immersive mixing?

A: I mostly use my hardware Bricasti M7s on close mics, right at the source, to push the instrument back into the speaker just a little bit without creating any more sustain. Then I’ll use Seventh Heaven Professional on, say, the 9.0.6 string array. Now I can take all the quartet layers and push the entire ensemble into a bigger environment.

The tonality differences are crucial. What a Bricasti does so well is those rich early reflections and then absolutely smooth sustain that you want to hear till the very end. I think it responds well to brighter instruments, such as strings. LiquidSonics’ Seventh Heaven captures that perfectly in the plugin. I use it often on acoustic sources when I want to place the sound in a realistic hall with a grand sense of ambience and spatial detail.

I cannot think of going through the thousands of songs I’ve mixed and not finding Cinematic Rooms Professional used somewhere. It is the reverb that I reach for first.

When Cinematic Rooms came out, it solved a fundamental mixing problem. Exponential Audio was already available, and that’s a great reverb; I use it on this project, too, but it has its character. LiquidSonics gave me a reverb for contemporary music, a tool that enabled me to create chambers and halls with a sense of realism but also the parameters to create the surreal, to create warm, dense environments with all the control I desired.

I prefer Cinematic Rooms on drums or vocals. I find it to be warmer overall, more focused, and capable of creating much denser, mid-range-oriented reflections. It sounds more cinematic, hence the name. It sounds like a film score reverb, whereas Seventh Heaven sounds like an acoustic classical reverb.

Sometimes Tai Chi is perfect for adding chorused reverb to organs or Fender Rhodes. It can make weird reverbs and ambiences. For the Rhodes specifically, we tried recording it in quad, but it didn’t sound right. So instead, I used Tai Chi after the fact to decorrelate it and create a three-dimensional version of the instrument. The multiband decay controls in Tai Chi are incredible for this. I can have the low end decay faster while letting the highs ring out, which prevents muddiness while maintaining sparkle.
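
To make that multiband idea concrete, here is a minimal sketch of frequency-dependent decay in plain NumPy/SciPy. It is only a toy illustration of the general technique, not Tai Chi’s actual engine or Justin’s settings: a synthetic tail is split at 500 Hz (an arbitrary crossover chosen for the example) and the low band is given a much shorter RT60 than the highs.

```python
# Toy illustration of frequency-dependent (multiband) reverb decay.
# Not Tai Chi's implementation: a noise burst stands in for a reverb tail,
# it is split into two bands, and each band gets its own RT60 so the lows
# die away quickly while the highs ring out.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 48_000
t = np.arange(0, 3.0, 1 / sr)            # 3-second tail
tail = np.random.randn(t.size)           # dense noise as a stand-in tail

def decay_env(rt60):
    # Exponential envelope that falls by 60 dB over rt60 seconds.
    return 10 ** (-3 * t / rt60)

low_sos = butter(4, 500, btype="low", fs=sr, output="sos")
high_sos = butter(4, 500, btype="high", fs=sr, output="sos")

low_band = sosfilt(low_sos, tail) * decay_env(0.8)    # lows: short decay
high_band = sosfilt(high_sos, tail) * decay_env(2.5)  # highs: long sparkle
multiband_tail = low_band + high_band                 # mud stays short, shimmer rings on
```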

I use Lustrous Plates in pop mixing all the time. So many pop songs use plate reverbs on vocals and for good reason: they’re so awesome. I’ve become pretty good at matching the reverb that I hear in the track or stem in the original production. This record didn’t really use it; it wasn’t a plate reverb kind of record.

Q: Which LiquidSonics plugin surprised you the most in your immersive work?

A: HD Cart surprised me from a sonic standpoint. There’s a way to create surreal, super-modern-sounding, yet really beautiful reflections and tails with that plugin. The depth is incredible, and the tone is just beautiful.

On this record, I didn’t expect to use it as much as I did. Because I have so much capture at the source, the mid-range is dense on this record; HD Cart gets out of the way. It does what the Lexicon does: it gets up into the 12kHz and above with a stunning sparkle. You can create a tail that’s really quite magical with enough detail that you can hear it even in a dense arrangement.

I’d specifically use HD Cart on percussion, guitar layers, and kalimbas when I wanted them to shine. 

Q: Were there any “aha” moments during mixing where a plugin completely unlocked a challenge?

A: Every day, but here’s a big one. The string quartet sounded great with my 11-channel Schoeps array, but moving the entire production to different halls on different days would have cost a million dollars. And the string quartet needed to feel like a film score.

So I overdubbed the quartet, moved them around the array to create different impressions from different places, then fed it into Seventh Heaven Professional. I experimented with different presets, and when I hit the right setting – boom, suddenly I had all the detail I wanted, all the control over balance, and I had the cathedral.

The Morten Lindberg Collaboration

Q: You mentioned working with Morten Lindberg on this project. How did that collaboration shape the final production?

A: Morten Lindberg was an immersive producer for this project. Drew Jurecka is the primary co-producer who was there with me the whole time, but I approached Morten as a colleague and friend to provide mix notes and explore the spatial mixes. We adjusted a couple of things that were subtle but highly impactful to the success of the final production.

One of the biggest was about the center channel. I’d very consciously put the main melodic storyteller, the soloist, in the center channel. It’s designed for cinema. If you don’t have something in the center channel in a 60-foot-wide room, it’ll fall apart for anyone sitting slightly to the left or right. But Morten’s notes were that some of those sources still felt disconnected. It felt like there was this beautiful, full, rich, connected ensemble and then this person stepping out.

We went back and forth. I tried all the reverbs to get that instrument into the space in a way that felt totally real, and I couldn’t do it. Something was missing. I tried divergence, spreading it across LCR. I tried stereo reverbs to decorrelate it.

Q: So how did you ultimately solve that challenge?

A: This is where it gets a little nuts. I went to the Aga Khan Museum here in Toronto. They have a beautiful hall with 100-foot ceilings. I took every soloist track from the record, played them through a speaker into the hall, and captured them with a 15-channel 9.0.6 Schoeps microphone array on the main layer, hung omnis from the rafters 100 feet up, and a five-channel DPA array on the balcony. This process is often referred to as “worldizing.”

When I listen to that recording paired with the close mics, I can literally place the artist on that stage and give the listener a pretty accurate experience of sitting in the 10th row. It’s not just about the reflections; it’s about the fact that all the timing cues are real. Suddenly, the soloist felt like I got to track them in a different room.

This experience taught me something important about the next level of immersive reverb tools: precision in localization. My ability to pan something and have the reverb create reflections that are entirely intelligent about that localization, that’s the frontier.

The Philosophy of Musical Space

Q: You’ve mentioned combining real immersive recordings with artificial reverb design. How do you decide when to stay true to the captured space versus reshaping it?

A: It’s an area where production and composition start to blur. The studio at Humber College, where I recorded, is a gorgeous, perfect-sized room with very clean, tight reflections. Exactly what you need for a project with this many layers. What it doesn’t have is an expansive reverb tail. That’s no problem because we have these tools.

I believe I have every immersive plugin available, and I certainly have every LiquidSonics plugin. They’re all in my template, already mapped to their maximum channel width, because in any mix at any time, I have a relationship with all of them. I had to get a second computer for this record because, at times, there are 15 immersive reverb plugins running simultaneously, plus 700 open tracks. It was absurd. I had to freeze a lot of tracks to free up processing power.

When I listen to the production and think, “I love how the sarangi sounds with these gorgeous height channels positioned in the left wide surround, but I wish it lasted another second as the bow leaves the string,” that’s when I work to imagine what that second sounds like and design the spatial reverb around it.

So I use late reflections, and I might EQ them a little bit to pull back the mid-range so they don’t get in the way of the natural reflections that were captured. Getting the pre-delay set just right also matters a great deal.

Q: Let’s talk about your approach to using stereo reverbs in an immersive context. Do you still lean on them?

A: Absolutely, mono and stereo reverbs still have their place. If I’ve got a UAD reverb like Capitol Chambers and I love the tone, I’m going to use it. I’m thinking of stereo reverbs more in terms of tone design. They’re acting as much as an EQ as a spatial tool. 

I own three Bricasti M7s, and they’re amazing. Do I use the Seventh Heaven Professional immersive version? All the time, but the Berliner Hall setting on my hardware M7 is a very special thing and still has its place. I control all my M7s with the M7 Link plugin by LiquidSonics.

If I’ve already got a spatial capture filling out the entire space, I might not have room for the tail to go out like a sphere. I might just want the tail in the upper channels. So I’ll use a stereo instance of Seventh Heaven, route it just to the height channels, and suddenly you get this lift without cluttering the main soundfield. I would say that, in this project, I’m leaning on immersive reverbs much more than I am on mono or stereo ones.

Q: What makes a spatial effect or reverb feel “musical” to you?

A: It’s all about tone. What draws me to a plugin is when it manipulates tonality in a musical way, and the reverb has to react authentically to the source material. The reverb is not just about reflections; it’s a tone tool. Depending on the volume level, and especially in immersive when it’s so exposed in its own speaker, it can warm up a sound or brighten it. It can solve problems that EQ can’t.

The second thing is control over decorrelation. LiquidSonics clearly understands at a deep level how to decorrelate audio as well as anybody in the world.

But more importantly, they’ve given me the ability to control it, because it’s always contextual. My ability to change the tonality of the rears versus the sides versus the heights, to change the levels intelligently, that’s crucial.

I never want to know as a listener that there’s a reverb plugin. Even if you’re washing something out, you still want to just be in the song. You don’t want to be thinking about plugins. So the ability to control that relationship to the source sound, it’s about amplitude first, but also about being able to say, “I love how this sounds, but let me just dip the mid-range, brighten the heights, or have less in the rear channel.” Not with a fader, but with an intelligent tool that understands the spatial relationship.

Immersive engineer Justin Gray wearing Audeze LCD-5 headphones

Q: How does modulation in immersive reverbs help “bring instruments to life in 3D”?

A: Modulation is intended to help create spatial clarity. It’s not just about making something sound weird; it’s a problem-solving tool. If I have a discrete capture and I want it to feel immersive, spreading it around with reverb can make it difficult to provide a precise, localized experience. We get the source, then reverberation, and the reverberation can be glorious, but we’re immediately putting the thing into a relatively big space.

Pitching the left and right by three cents, the front and back by five cents, and the top and bottom by 10 cents, that’s not enough to make it feel like a weird wobbly chorus, but it is enough for our brain to experience it as “different.” Therefore, it actually brings clarity. You might have been trying to EQ something for clarity when, in fact, you just needed to differentiate it from its original source.
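
For a sense of how subtle those offsets are, cents convert to a frequency ratio via 2^(cents/1200). The quick sketch below simply plugs in the example values quoted above; they are illustrative offsets, not settings taken from Justin’s session.

```python
# Convert the example detune offsets from cents to frequency ratios.
# ratio = 2 ** (cents / 1200); even 10 cents is barely half a percent.
offsets = {"left/right": 3, "front/back": 5, "top/bottom": 10}

for pair, cents in offsets.items():
    ratio = 2 ** (cents / 1200)
    print(f"{pair}: {cents} cents -> ratio {ratio:.5f} "
          f"({(ratio - 1) * 100:.3f}% pitch shift)")

# left/right: 3 cents -> ratio 1.00173 (0.173% pitch shift)
# front/back: 5 cents -> ratio 1.00289 (0.289% pitch shift)
# top/bottom: 10 cents -> ratio 1.00579 (0.579% pitch shift)
```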

Three-dimensional chorus is what we’ve all been hoping for all along. When you’re making an epic, reverberant chorus vibe, you’re trying to immerse someone, envelop them. When it comes from all around you, it’s an instant win. It’s what you hoped to achieve with a mono Roland chorus all along. LiquidSonics’ Tai Chi would be a way to do it in a more contemporary way, where you’re not fussed about a little bit of detuning to create a sense of depth. Really good for keyboards, guitars, or background vocal choirs.


The Future of Immersive Production

Q: What’s one thing you believe many people still misunderstand about immersive music production?

A: The big misconception is that we’re recording in Dolby Atmos. We’re not. We’re recording immersively, or honestly, we’re just recording. Atmos is production, encoding, and delivery software. It’s my favourite, I love it, but accurate language is important.

Another thing: as engineers, it’s important to remember that we owe the speakers nothing. I owe nothing to this technology. All I owe is the art. Whatever I can do to use the technology to augment the music is the correct decision. Sometimes that might mean making a relatively simple spatial orchestration, sometimes it might mean blowing the thing up and making it absolutely huge. I am obsessed with technology, but I am much more obsessed with music.

Q: Where do you think immersive music is headed next?

A: It needs to get to the creative level. Everything is currently targeted at mixers: you should mix in Atmos, remix someone’s song. That’s awesome, I do it all the time. But it cannot and will not sustain itself as an auxiliary arm of stereo forever.

When we transitioned from mono to stereo, eventually the artists got it. Eventually, the recording engineers got it. This is not a technological movement; it’s a creative movement. As soon as it becomes accessible for an artist to think this way, our job as engineers won’t be to remix their songs. It’ll be facilitating their creativity, which is what an engineer’s job is in the first place.

The potential here is for artists to be in the studio thinking, “I’m going to do my background vocals this way because that’s how it sounds.” When an artist of high stature has an experience like I had, where they realize “this is what I’ve been imagining,” they won’t tell their audience to check out my Dolby Atmos mix. They’ll say, “Check out my art!”

That’s the inevitability for any technological invention: it either becomes hyper-practical or creative. Pro Tools isn’t just a DAW anymore; it’s a creative workstation. No one cares about the microphone at the listener level, but they appreciate what the microphone facilitated. That’s what immersive audio needs to become.

Justin Gray's Immersed project album cover

Q: What do you hope will surprise listeners when they experience “Immersed” in full Atmos?

A: I want the listener to feel like they are in the center of a 360-degree orchestra. When I take this production from my studio to a movie theatre, I can stretch the image to that size. I can make a frame drum that I’ve recorded with eight microphones 50 feet wide, and you are in the middle of a 50-foot instrument. That’s not something that could happen in the real world with real musicians positioned in the room.

But deep down, I want people to experience it as music, not as technology. As much as it uses cutting-edge technology, I don’t think of this only as an immersive production. I think of it all as the music itself. The goal is this hyperreal immersive orchestra: a kalimba four feet from you, a string orchestra 20 feet away, a drum set 15 feet out, and an electric bass right in your face.

That’s what I’m trying to create: the feeling of being inside something. The “Immersed” name refers to that feeling of being truly inside the music. And LiquidSonics plugins have been absolutely essential to achieving that vision, bridging the gap between what I can capture and what I can imagine.


A massive thank you to Justin Gray for sharing his immense experience and fascinating insights into the making of “Immersed.”

Justin Gray’s web and social links:

Web | Facebook | Instagram | Buy and Stream “Immersed”

Photography credits: Sean O’Neill