
Eric Horstmann is a German immersive audio pioneer who has been shaping the landscape of spatial music production since the earliest days of Dolby Atmos. As the founder of Immersive Lab, he has established himself as one of Europe’s leading specialists in immersive music mixing, with over 1100 Dolby Atmos mixes spanning electronic music, hip-hop, and experimental genres for artists including Moderat, Rodriguez Jr., Robin Schulz, and Sevdaliza. Horstmann’s unique background combines over a decade in high-end film post-production with deep technical expertise gained through his work as a product specialist for both Avid and Genelec, positioning him perfectly to bridge the gap between cinematic immersive storytelling and music production.
In this first part of our two-part interview, Eric reveals the technical foundations behind his award-winning immersive mixing workflow. Find out why he relies on a purely object-based Pro Tools routing structure, how he strategically uses ready-to-go reverb instances of Tai Chi, Cinematic Rooms, and Lustrous Plates Surround in his template, and how he translates stereo productions into compelling spatial experiences.
Professional immersive mixing engineers will particularly value Eric’s practical advice on efficient session building, the importance of speaker-based monitoring for future-proofing mixes, and how his film background informs his philosophy of filling immersive space meaningfully rather than simply for technical novelty.
Part two, focusing on his groundbreaking work mixing extreme metal in Dolby Atmos with the Changeling project, will follow soon, exploring how immersive techniques can transform even the most demanding musical genres.
Immersive Lab is Eric Horstmann’s Dolby Atmos-certified mix room in Berlin
Q) I’m curious about your educational background and how you got into audio in the first place.
A) Many, many years back—25 years ago, I’m 45 now—I grew up in Heidelberg, which is a small town in southwest Germany. I was always doing music with my bands, mainly punk rock and fusion, then moving into funk and stuff. I wanted to become a professional guitarist, so my goal was to go to university for jazz and popular music, but I found out I’m simply not good enough for that. Then I thought, okay, what else can I do?
I definitely wanted to work in music or with music in some way. I met a guy who was the mix engineer for the Stieber Twins, good old German hip-hop from Heidelberg. He told me he went to a sound school called SAE in Frankfurt. I was like, what is that? So I looked into it and thought, “Okay, let’s try this.” I moved to Berlin in 2001 because it was the city farthest away from Heidelberg, and that was a good choice at that time, being in my early 20s. Try to get away, as far as possible! I came to Berlin to go to SAE, and that’s basically how I started.
I had very big visions of what this industry was like, but I quickly got a reality check that the big studios no longer exist in that way. However, I was lucky enough to get an internship at a well-known Berlin studio called TRIXX. They’d been around since the late 80s. I did fantastic recording sessions at that time, but it was just an internship, and I knew it wouldn’t last forever. I found my way into film post-production very quickly in 2003, working in the technical department of a cinema mixing stage, Elektrofilm at that time. The company doesn’t exist anymore, unfortunately.
But that’s where I learnt the craft, and I learnt it the hard way because very good people were training me on everything—analogue film workflows, working on magnetic tape with 35mm film, and that stuff. So I learnt film sound in the good and proper way. I worked in film post-production for more than 10 years at several different studios. The last one where I worked was Rotor Film in Babelsberg, Potsdam. During that time, I served as both technical director and re-recording mixer, and I found myself in a fortunate position to refurbish a smaller theatrical mixing stage, known as Studio E.
It was during a time when Dolby was in a beta phase under NDA for the upcoming Atmos format. Of course, I got in touch with them about licensing that room for regular 5.1/7.1, as originally planned, but then they said, “Hey, wait, if you don’t mind, please sign this paper—we might have something to tell you.” Then we got into the discussion, and, long story short, we were one of the first Atmos studios to open during the transition phase. Atmos was publicly introduced in April 2013, when we were just about ready to go with the studio up and running. So I got into the Atmos world from the post-production angle, obviously, like everybody in Atmos at that time.
A bit later, I also worked for various pro audio manufacturers as a product specialist. I worked on the Pro Tools immersive ecosystem at Avid from 2014 to 2018, and subsequently joined Genelec as a business development manager. But working in the industry for a manufacturer doesn’t mean you stop being a mix engineer, which is what I am at heart. So I started my own Immersive Lab in 2019, recognising the increased demand for immersive music production.
The goal was to combine my passion for music, which I always wanted to pursue, with the technical knowledge of a high-end theatrical film post environment, and bring that together into something I then called Immersive Lab, where I focused only on immersive music production. Now it’s expanding a bit into 3D live environments, spatial experiences, and so on. But everything is in immersive.
I rarely do stereo, and I try to avoid stereo as much as I can. I simply can’t put all the musical elements into two channels. I need more!
That’s where we are now. With Immersive Lab, so far, I have created more than 1100 mixes in Dolby Atmos, mainly in electronic music. Also a good portion is German rap and hip-hop. And just lately, a bit of death metal, which is a somewhat exotic genre compared to the ones I come from.
Q) With your background in film post-production, how would you say that has shaped your understanding of immersive storytelling through sound, the way you’re practising it now?
A) That’s a very good question because that is the fundamental difference in approaching immersive audio compared to coming from a purely stereo background. In film sound, we’ve known immersive mixing since the mid-to-late 70s, with Dolby Stereo coming in. So having multiple channels, with three channels in the front for the screen, is nothing new to us. However, for many music mix engineers, that is very different, and you need to guide them from the 60s back to modern times.
Immersive storytelling in that way is interesting because you get an understanding of what you can do with all those channels by default. When you create a cinema movie—back in the days when it was only 5.1 or 7.1—you always had to do sound design and compose music and do the foley and everything, even reverbs of dialogue and stuff that fills the room, because you’re working for an audience that is probably a couple of hundred people sitting there in a multiplex cinema. That’s the best case. You gain an excellent understanding of how immersive storytelling works by simply filling those channels with relevant content. This is something I try to incorporate into music. I tend to joke, “Hey, I paid so much money for all those speakers in my studio, so I want to hear them!”
But the reality is, if you create an immersive music mix that relies too heavily on the left and right channels only, then there’s no point in doing immersive. Either you do it right or you leave it, because there are plenty of things out there that sound very good in stereo as well. But in cinema post, you have to fill the room, because otherwise you lose the people. Because of the sheer distance between the seating area and the screen, you need to bring them with you; you have to take them into the movie, even though the dialogue is usually 95% in the centre.
Q) What made you believe in Atmos for music when it was still primarily a cinema technology?
A) One of the main moments I had in that transition phase from film to music was that, alongside all my working adventures, I also have my own music project where I do ambient downtempo chill-out music, called Electrix. It’s nothing dramatic, but over the years, I’ve released four albums and a couple of other projects. I’ve been doing that for almost 20 years now—I’m getting old!
But having my own music content available, in a genre where I think immersive totally makes sense, made it the logical step to try things out and ask, “Okay, what can I do with music only, without a picture?” I was also doing a couple of score mixes during my post-production time, where I always thought I needed more space, more dedicated points in the room where I could place instruments to make them audible. Most of the scores were obviously done in 7.1, but that’s a different thing. My own music project served as a good research field to try out how sound works without a picture, but still immersive. I developed a couple of workflows back then that I still use today in refined form, but that was my starting point.
Then I started showing people a couple of test mixes I had done for certain projects, without any commercial reason, saying, “Hey, listen to this. What do you think?” And they were like, “Wow, it totally makes sense.” I also tried to convince many people, “Hey, let’s just try something. Let’s do a test with a song by an artist or producer that you represent”—talking to labels—“and see what we can do.”
I genuinely believe that immersive audio is a better way to convey emotions than stereo. It’s just more, and it engages people totally differently with a piece of music, no matter the genre.
Obviously, some work better than others, but still, that was basically my initial idea.
Q) What was it like in those early days in terms of the tools that you had available? It must have been so challenging to get started adopting this for music compared to today.
A) When I started with music, the whole Atmos world was already a bit developed. I can very well remember the first movie that we did in 2012/2013—tools-wise, that was… hallelujah. The movie’s called Lost Place, a German indie movie where the threat actually comes from invisible waves. That was a good story, because something that’s not visible can still be audible. We did stuff that, when you listen to that mix today, you would say, “Okay, this doesn’t work”—like rotating the whole orchestra through 360° because the camera was rotating. Yes, the tools are there, you can do that, but I think over time something like a common taste develops, and some best practices develop. Okay, you can do that, but you should not do this.
But when I started with music, the tools were okay. They were still not as well integrated as they are today. Reverbs were always an issue at that time, because everything relied on stereo, or up to 7.1 at best, and that was it. But the most significant change was not necessarily the tools but the creative approach to things. Initially, you were more open to experimentation. “Hey, what happens if I have the kick drum in the front and the snare in the rear?” Yes, you can—whether it sounds good is a question for everybody—but technically it’s possible. With these new technical possibilities, obviously, you always try to push the boundaries and see what you can do. Then, after some time, you feel, “Okay, that’s not how it should be. Maybe we go back to some best practices of stereo music production.”
Q) You were one of the first in the world to work in Dolby Atmos, and you’ve spent a long time on it, working on immersive mixes both creatively and on a technical level for manufacturers. Do you feel your work has had an impact on shaping how Dolby Atmos is used today, especially when it comes to best practices?
A) Maybe a little bit. One particular point was that in 2017, when Atmos was introduced natively into Pro Tools, I was with Avid at that time on the product specialist team, and obviously was also part of the beta testing team. With my little bit of knowledge of how Atmos worked before, I hope I was able to provide feedback on how I thought it should be implemented. Obviously, the developers had done it very well at that time, so there was nothing to criticise, but still. I’m not sure how much of an impact I made, but I was definitely part of a team that shaped this and made it really successful.
What I also actively do is educate artists. That has always been my first goal, because they need to believe in this. They need to listen to Dolby Atmos to understand what this very technical format can do creatively for them. So I did so many demos for artists in an Atmos room to showcase what can be done. I’m grateful that I happen to have excellent content of my own to showcase, rather than the Dolby trailers or something from a cinema that doesn’t translate for a musician.
I give a lot of workshops, and I’m very vocal about spreading the word about how I do things. Just because somebody’s using my Pro Tools template doesn’t mean that they can instantly create an amazing Atmos mix. It still requires knowledge about how to use that template, and the experience is something that nobody can take away from me. But still, we are all in the same boat to make this format work and to make it even bigger than it is now and more widely adopted than it is today. That’s why I continue to spread the word as much as I can.
Q) How did you initially spread the word about releasing music in Dolby Atmos?
A) I tried to get in touch with people who would let me do a mix on spec. I was doing a lot of test mixes for people to make them understand how this works. I’m actually quite grateful that Ralf Kollmann of Mobilee Records, the label of Rodriguez Jr., was here next to me in my studio. He was a very early believer in the format. In 2020, I mixed Rodriguez Jr.’s first Atmos album, “Blisss”, and that got me into electronic music even more. My own music is more like ambient downtempo, and I’d never had anything to do with techno or house. But with that project, I became much more interested in that genre as well. I thought, “This genre is just made for Atmos.” That was the door opener, and I used it wherever I could to spread the word.
Luckily, at that time, I had some very good connections to Universal Music here in Germany, so I ended up doing bulk mixes for them for quite a while. I got to meet a couple of exciting electronic music artists from the underground and Berlin techno scenes. I was happy to work on the last Moderat album in Atmos, which also happened to be a big commercial success, and that obviously opened many doors.
Q) Could you share how you have your Dolby Atmos mixing template set up, what your preferred workflow is, and how it differs from a film session? What are your go-to routing and processing choices?
A) I’m working in Pro Tools 90% of the time; if somebody sends me a Logic session, I have to export it to Pro Tools. But I work purely object-based. I’m not using the Atmos bed, for instance. This is the biggest difference from a film session. In a film session, you typically work with three, maybe four beds because of your stems for dialogue, music, and effects, plus some split objects. But in music, we don’t have this fight between the departments: foley, ADR, sound design, and so on. We can essentially utilise the entire bandwidth that Atmos allows us. And I do that quite heavily.
It’s nothing new — I picked up the object bed technique from Steve Genewick at Capitol Studios in the US. Initially, it was a way to bypass the limitations of the Atmos bed in Pro Tools at the time. Today, there are no technical reasons for it, but there are other very good reasons why you should consider going completely object-based.
I have a generic object bed that is static, split into stereo feeds for speaker pairs. That’s the concept from Steve and others at that time. Then I have a couple more object beds where I bring in, for instance, all the effect returns: they go into a dedicated object bed. Any kind of spatial plugin that you’re using, such as Sound Particles, or anything that creates movement within a bus structure rather than as a dedicated object, is placed in its own object bed. Then all the rest are freely assignable objects that I have split into binaural groups with different binaural distance settings. So I have a group of near-rendered objects, a group of mids, and a group of far. During the mixing process, I then decide which element goes into which group. I don’t care about the content; as long as I know how I want it to be rendered on headphones later, I just throw it into a binaural group.
The good thing about this technique, in my particular case, is that because I work mainly in electronic music with independent labels, we do compilations at the end of the year. Then we might have 20 songs by 20 different artists, all in Dolby Atmos. To create a continuous mix, you just reimport the ADM files into Pro Tools, and you know that all your binaural metadata settings per object are consistent throughout the whole playlist, no matter what instrument it was.
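To make that structure easier to picture, here is a minimal sketch, in Python, of how such a template could be organised. The group names, channel pairs, and track labels are our own illustrative assumptions, not Eric’s actual session; only the overall shape (static object beds plus near/mid/far binaural groups) comes from his description.

```python
# Illustrative sketch of an object-based Atmos template (not the actual
# session). Structure: a static object bed emulating speaker pairs,
# dedicated beds for effect and spatial-plugin returns, and freely
# assignable objects grouped only by their binaural render mode.

# The Dolby Atmos Renderer offers these binaural render modes per object.
BINAURAL_MODES = ("off", "near", "mid", "far")

# Static "object bed": object pairs parked at fixed speaker positions,
# giving a 9.1.6-style bed built from objects.
generic_object_bed = [
    "L/R", "Lw/Rw", "Ls/Rs", "Lsr/Rsr",  # floor pairs (plus C and LFE)
    "Ltf/Rtf", "Ltm/Rtm", "Ltr/Rtr",     # height pairs
]

# Dedicated object beds for returns that live outside the main groups.
fx_return_bed = ["reverb returns", "delay returns"]
spatial_plugin_bed = ["bus-based movement plugins (e.g. Sound Particles)"]

# Freely assignable objects, grouped by how they should render binaurally
# on headphones -- not by instrument or content.
binaural_groups = {"near": [], "mid": [], "far": []}

def assign(track: str, group: str) -> None:
    """During the mix, each element is simply thrown into a group."""
    if group not in binaural_groups:
        raise ValueError(f"unknown binaural group: {group}")
    binaural_groups[group].append(track)

assign("lead synth", "mid")
assign("ambient pad", "far")

# Because the binaural metadata travels with each object, reimporting the
# ADM files of many songs keeps headphone rendering consistent across a
# continuous compilation mix.
```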
Q) Why do the effect returns go into a separate object bed?
A) Because I have a pretty severe mastering chain in place where I do a lot of sidechain saturation, compression, ducking, multiband, you name it. However, I want to keep the effect returns out of that squashy, punchy mix because I want them to be flying around and have real dynamics. So I have a dedicated object bed for the effect returns that is not squashed by my mastering chain.
There are some songs where you have transitions that go into a very quiet part of a song, and even those things go into my effect return bed because I know that there’s no sidechain compression coming up at that moment. Otherwise, it would be pretty complicated to remove it from my chain. I know many plugins on the market do these mastering processes, but I’m still more flexible doing everything manually in my Pro Tools session. The only downside is that I have 59 stereo master tracks in my session that basically do the mastering process for all objects at the same time, all linked together. So it’s definitely resource-hungry. The Changeling death metal album cannot be played back on my laptop on the go. It’s just too much! But it plays fine on my Mac Studio in the studio.
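As a rough illustration of that routing decision, here is a small sketch under the same caveats as before: the chain stages and track names are stand-ins, but the principle (per-object-pair mastering instances, all linked, with the effect returns bypassing them) follows Eric’s description.

```python
# Illustrative routing sketch, not the actual processing chain.
MASTERING_CHAIN = ("sidechain saturation", "compression", "ducking", "multiband")

# 59 linked stereo master tracks: one instance of the chain per object pair,
# all linked so they behave like a single mastering process across the mix.
object_pairs = [f"object pair {i + 1}" for i in range(59)]
routing = {pair: MASTERING_CHAIN for pair in object_pairs}

# The effect-return object bed bypasses the chain entirely, so reverbs keep
# their full dynamics instead of being ducked and squashed with the mix.
routing["effect return bed"] = ()

# Quiet transitions can ride in the effect-return bed too, since no
# sidechain compression will grab them there.
routing["transition elements"] = ()
```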
Q) How did you first come across LiquidSonics?
A) Good question. I can’t remember; it’s been a very long time. But at the beginning of Atmos music, you were kind of searching the whole internet twice to see if you could find something that allowed you to use a native immersive reverb environment. And in 2019 or 2020, there was not much around, literally nothing that allowed you to even fill the bed completely.
Then, by accident, I think I found an ad on Instagram or something, most likely. I’m definitely the target audience for you! So yeah, the ad got served in my timeline, and I said, “Okay, what is that?” I checked it out and was immediately pretty impressed, especially by its ability to go straight to 9.1.6. This was exactly what I then incorporated into my template.
So my object-based template started with 7.1.4, but three years ago I expanded it to 9.1.6, and it definitely came in handy that the LiquidSonics plugins support that channel format. Plus, they sound good! So that’s how I discovered Cinematic Rooms. Maybe also because Cinematic Rooms is popular in film post, which appeals to both of my hearts, film and music. But then I very quickly got the whole bundle and discovered all the other plugins.
Q) Could you provide an overview of the reverbs in your template and explain what you’re using each one for?
A) One thing I have to be very clear about is that the usage of reverb in an Atmos music mix depends very much on the content and the genre, and also very much on the stems or the material that I get from the producer or stereo mixing engineer. I send out a PDF at the beginning of the process just to make sure: “Hey, this is how I’d like you to export your files so I can make a good mix out of it.” And of course I also get the printed reverb stems of the original production, i.e. the stereo production when there is an existing one. Obviously, that is my first reference, because that’s what the artist and the producer were probably working on for a while, so I’m not there to flip it instantly.
What I usually also do is ask for screenshots of plugins so I can rebuild them in immersive. And more and more of the plugin screenshots I get are actually LiquidSonics reverbs from the stereo production! That’s very useful because I can just copy the values into my immersive template and have the same thing. So that works nicely.
In my template, I have three instances of Cinematic Rooms, always in different flavours: a medium hall, a large hall, and a small room. That’s what I have as a standard, each on its own aux track. I have Lustrous Plates, which usually gets used when I have, I don’t know, Rhodes, piano, and vocals. That’s one of my go-to things.
But I fell completely in love with Tai Chi. That is my main reverb for everything synthesiser-based: all my synths in electronic music, whatever that is. The starting point of my Tai Chi adventure was a preset called “Arp in Space”, and from there I went into multiple iterations of my own presets. It felt so perfect. It just made total sense, and it fit my music beautifully.
Q) What do you like most about Tai Chi for synths?
A) It’s a good question. I mean, yes, there are tons of knobs you can turn, but I’m not a big fan of fiddling around with everything all the time just to prove a point. Sometimes you just instantiate a reverb, and it feels good. I don’t know, it just sounds perfect to me. The bit crusher is definitely something great that I haven’t seen on many other plugins, and it sometimes works quite well to make the reverb even more analogue-ish, edgy, grainy, or however you want to describe it. That is definitely something I really like about it. The multiband reverb multipliers I’ve never used; they’re always off, but I know I have to check them out. Every reverb in the bundle has its justification, but that’s basically just what’s in my template.
What I love about a good template is that you can just turn a knob, run an aux track into Tai Chi, and say, “Okay, cool. Yeah, good. 90% we’re there.” For me, it’s often just about the amount of reverb in Atmos. I don’t often need to do crazy tweaks to the reverb, unless I’m doing technical death metal. That’s a different topic that we might touch on later.
But I’m also using Illusion in the stereo version, and I have that as a reverb that is only coming out in the rear channels. Consequently, I’ve named it “rear verb.”
Q) When do you go for something that’s just a stereo reverb, and when do you want something that’s really 9.1.6?
A) So I always have the 9.1.6 reverbs, and I always have them set in Pro Tools to “follow main pan”. So, depending on where the source is located in the 3D space, the reverb is activated from that position. But the “rear verb” is something I often use when I have very intimate elements or intimate moments in the song. Let’s say vocal and acoustic guitar, super mono-esque in the front, where you still want a bit of space, but not the typical reverb space, if that makes sense. Then it’s really cool to have a stereo reverb somewhere else, just adding it to the dry signal.
And in binaural playback, it still works very well because you hear that it’s not completely connected to each other. You hear that there is space in between the reverb and the source. And that works nicely.
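To visualise the idea, here is a tiny sketch of the “rear verb” concept. The coordinate convention and values are our own assumptions for illustration; the point is simply the deliberate gap between a front-placed dry source and a reverb return parked in the rear.

```python
# Sketch of the "rear verb": a stereo reverb return (here, Illusion) parked
# as a static object pair at the back of the room. Coordinates follow an
# assumed convention (x: -1 left .. +1 right, y: -1 back .. +1 front).

rear_verb = {
    "rear_verb_L": {"x": -1.0, "y": -1.0, "z": 0.0},  # rear-left corner
    "rear_verb_R": {"x": +1.0, "y": -1.0, "z": 0.0},  # rear-right corner
}

# The dry source (say, vocal plus acoustic guitar) stays mono-esque up
# front; only the send feeds the rear pair. The spatial gap between source
# and reverb is what creates the sense of space, on speakers and binaurally.
front_source = {"vocal+guitar": {"x": 0.0, "y": +1.0, "z": 0.0}}
```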
Q) Where do you think things are going at the moment with Atmos in terms of the creative end, and where would you like things to go from a storytelling perspective?
A) That’s a very philosophical question because I have very clear ideas about what Atmos should be and what it should not be. And where do I start? Okay, I think as Atmos is perceived today in the industry, or let’s say in the consumer industry, it’s still super niche. People outside our little sound and creative music bubbles, let’s say the regular consumers of a piece of music who have nothing to do with sound at all, might never have heard of Atmos.
We are still in a position to spread the word and let people experience how it works and how it differs from whatever we heard before in stereo. And now comes the biggest problem: there is a lot of content available that doesn’t deliver that proper experience to someone who has no clue, because a lot of it is simply not okay. I mean, I don’t want to judge the work of colleagues, but a lot of stuff is not well made in Atmos. There is stuff that is just without any emotion, merely lip service. It’s just like, “Look, there is a delivery line on the Excel sheet, so okay, somebody needs an Atmos mix, boom, let’s do that. Done.” Some don’t care about the creative and emotional impact that such a piece of music, and this technical asset, can have.
And my wish would be that we educate more young producers in the music industry about how to use the format creatively and what they can achieve with it, and that, hopefully, they don’t deliver a mix that was only mixed on headphones. Technically, it’s possible, but content-wise, I think it’s a bit questionable. My main reason why I think this is problematic: if you create a mix on headphones today because you think, “Hey, everybody’s listening on headphones anyway, so why should I use speakers?”, the problem is that headphone systems will develop drastically and might sound very different in the future. They have already changed in the last four or five years, since Apple Music joined. I mean, the sound has changed dramatically since then. And who says it won’t keep changing at the same pace?
Therefore, we have to find a reference where we can be confident that the mix translates and holds up on future playback systems, and I truly believe that, as of today, this is a speaker-based system. Yes, speakers will also change, but not the way headphones will develop. So my wish would be that more people actually appreciate speaker-based systems for mixing and approvals. I’m not saying that everybody should have 16 speakers in their studio, but get in touch with a studio that has a properly aligned system and just play your mix back and check it. Do a QC, or send it for mastering or something like that, just to try to raise the quality bar on Dolby Atmos altogether.
This is definitely a format where newcomers who might have something to say in the music industry really think, “Okay, that’s something good, let’s do that for my next artist.” But the moment they hear something that is not perfect, they say, “Okay, what am I supposed to do? That doesn’t make sense, it sounds bad, and nobody can listen to it.” This is something I try to be very vocal about and emphasise: “Hey, we have an amazing asset here. We have huge creative potential. Please, let’s all collectively work on making it good and keeping it good.”
Q) With that in mind, is that one of the motivations for you to take on a project like Changeling with Tom Fountainhead?
A) I mean, death metal is not my genre by default, but I thought, “Hey, we have huge potential here to create something that will make a difference, so that people, when they listen to it, say, ‘Wow, okay, that is immersive. I understand. I want that.’” Obviously, that’s the plan with every project, and in electronic music we already have a lot of good stuff available. This morning, I had a chat with Emre Ramazanoglu, who works with Brian Eno, Lana Del Rey, and artists like that in the UK. He is also a LiquidSonics user, and he does amazing stuff. Really high-quality mixes.
So those were the reasons why Tom Fountainhead, the mastermind artist behind Changeling, and I said, “Okay, let’s change the way people listen to hard and extreme metal music with Atmos.”
To be continued…
Stay in the loop and sign up for the LiquidSonics newsletter.
A huge thanks to Eric Horstmann for sharing his extensive experience and pioneering vision.
Eric Horstmann’s web and social links:
immersive-lab.com | Facebook | IG | LinkedIn
Photography credits:
Eric Horstmann photography: René Löffler
Additional photography: Sonic Lighthouse