
Game Music in Dolby Atmos

October 29, 2021 by Kristoffer Larson

This first article will cover the general aspects and benefits of Dolby Atmos for music as well as considerations for planning your workflow.  In the next article we’ll review composition and mixing recommendations for Dolby Atmos and point out anything specific or unique to games.  In the final article I’ll cover exporting content and implementing it into game audio engines.

For a more detailed look at this process, check out my series of videos here.  In these videos I take a few demo tracks through the whole mixing, exporting, and implementing pipeline and provide tips and recommendations along the way.

What does Dolby Atmos mean for music in games?

First off, let’s look at the overall benefits to game music and audio in general when presented in Dolby Atmos.  From a technical angle, the Dolby Atmos format allows for future-proof expandability because your content will automatically adapt as users change, add, or improve their playback systems.  Whether it’s downmixing to 5.1 or stereo, or adding more speakers to a home theater system, the Dolby Atmos Renderer will create the best mix possible for that specific output device.  This also includes soundbars and virtualization renderers, which opens up spatial sound to every user!


Just a few examples of Dolby Atmos rendering endpoints
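To make the fold-down idea a bit more concrete, here’s a minimal sketch of how a 5.1 mix could be collapsed to stereo using commonly cited ITU-style coefficients.  This is purely illustrative: the Dolby Atmos Renderer handles downmixing internally with its own, far more capable logic, and none of the names below come from an actual Dolby API.

```python
import numpy as np

# Hypothetical 5.1 channel order: L, R, C, LFE, Ls, Rs.
# ITU-style fold-down coefficients, shown for illustration only.
DOWNMIX_TO_STEREO = np.array([
    # L    R    C      LFE  Ls     Rs
    [1.0, 0.0, 0.707, 0.0, 0.707, 0.0  ],  # stereo Left
    [0.0, 1.0, 0.707, 0.0, 0.0,   0.707],  # stereo Right
])

def fold_down(block_5_1: np.ndarray) -> np.ndarray:
    """Fold a (6, n_samples) 5.1 audio block down to (2, n_samples) stereo."""
    return DOWNMIX_TO_STEREO @ block_5_1
```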

Of course the most obvious and tangible aspect of Dolby Atmos is the addition of the height axis, so we can pan sounds directly overhead.  Not only does this open up the space directly above the listener, it also makes it possible to pan sounds into the interior of the listening space and dramatically improves spatial resolution across the entire soundstage.  The object-based audio fundamentals of Dolby Atmos bring pinpoint accuracy and run-time flexibility, so you can place sounds precisely where you need them, even if you don’t know where they’re going to be before handing a game controller to your audience.


Dolby Atmos provides for extensible spatial fidelity
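If you’re wondering what “object-based” means in practice, the core idea is that a sound travels as an audio stream paired with positional metadata, and the renderer decides at playback time how to reproduce it on whatever speakers are actually present.  Here’s a rough sketch of that idea; the class and field names are my own invention, not the actual Dolby Atmos metadata schema.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class AudioObject:
    """One mono audio stream plus positional metadata (illustrative only)."""
    name: str
    samples: np.ndarray                    # the object's mono audio
    position: tuple[float, float, float]   # (x, y, z) in a normalized room; z = height

# The renderer decides at playback time how to place each object on the
# listener's actual setup (7.1.4, 5.1, a soundbar, headphones, ...).
overhead_choir = AudioObject(
    name="choir_high",
    samples=np.zeros(48_000),              # one second of silence at 48 kHz as a stand-in
    position=(0.0, 0.5, 1.0),              # centered, slightly forward, fully overhead
)
```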

All of these benefits are great in and of themselves, but they would be meaningless if the end results didn’t sound as good as, if not better than, traditional channel-based deliveries.  No worries there, because the end results sound amazing.  Whether you’re creating a massive tableau or an intimate portrayal, spatial audio allows for a very natural and open sound that brings immersion to the next level.  What that means to you as a content creator is that your artistic intention is preserved, so your audience can hear your work as intended, reproduced in the best way possible on the device of their choice.


Dolby Atmos maintains artistic intention from mix to listener playback

Considerations before you begin

The phrase “music for games” is incredibly broad since we all know that each game is unique, and the role and function of music varies on a per-project basis.  However, the implications of composing, mixing, or implementing music for a game in Dolby Atmos are essentially the same regardless of whether we’re talking about an RPG or an FPS.  The most significant decision will be whether an element is mixed into the bed or carried as an audio object.  We’ll get into the details of beds and objects and this decision-making process in the next articles.
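To give you a feel for the bed-versus-object decision before we dig into it properly, here’s one illustrative way to picture a music cue split between the two.  The structure and names below are hypothetical and don’t correspond to any particular game engine or Renderer format.

```python
# Illustrative only: a rough way to think about beds vs. objects.
# A bed is a fixed channel layout (e.g. 7.1.2) that behaves like a traditional
# stem; an object is a stem plus positional metadata placed at runtime.
music_cue = {
    "bed": {
        "layout": "7.1.2",
        "stems": ["strings_pad", "low_percussion", "room_reverb"],
    },
    "objects": [
        {"stem": "solo_vocal", "position": (0.0, 0.8, 0.0)},   # front center, ear level
        {"stem": "riser_fx",   "position": (0.0, 0.0, 1.0)},   # directly overhead
    ],
}
```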

Tools of the trade

First let’s look at the toolset for creating content.  For the most part this won’t look any different from your regular software tools: a DAW and a monitoring system.  Currently, two of the largest DAWs natively support Dolby Atmos mixing: Pro Tools and Nuendo.  They both feature native panners that work in bed and object mode and write metadata to a Dolby Atmos Renderer.  But even DAWs like Logic, Reaper, Live, etc. can use the Dolby Atmos Music Panner as a VST, AAX, or Audio Unit plugin to pan and write metadata to a Dolby Atmos Renderer.


Some of the more common DAWs currently in use
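Whichever panner you use, what it ultimately produces alongside the audio is positional automation: time-stamped coordinates that the Dolby Atmos Renderer records with the mix.  Here’s a hypothetical sketch of what that metadata boils down to; it is not the actual plugin API or master file format.

```python
# Hypothetical panning automation for one object: (time_seconds, x, y, z).
# A DAW panner writes something conceptually like this to the renderer,
# which stores it alongside the audio in the Dolby Atmos master.
riser_pan_keyframes = [
    (0.0, 0.0, 1.0, 0.0),   # start: front center, ear level
    (2.0, 0.0, 0.5, 0.7),   # rising up and pulling in toward the listener
    (4.0, 0.0, 0.0, 1.0),   # end: directly overhead
]

def position_at(keyframes, t):
    """Linearly interpolate the object's position at time t (illustrative)."""
    if t <= keyframes[0][0]:
        return tuple(keyframes[0][1:])
    for (t0, *p0), (t1, *p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
    return tuple(keyframes[-1][1:])   # clamp past the last keyframe
```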

“What is this Dolby Atmos Renderer of which you speak?”, I hear you asking.  This is a software program that runs in parallel with your DAW to monitor and master Dolby Atmos mixes.  The Dolby Atmos Renderer can run on the same Mac as your DAW or on a separate Mac or PC as a mastering machine.  Nuendo also has an official renderer for Dolby Atmos that is natively integrated into the DAW on both Mac and PC.  More information on the different configurations, video tutorials, and in-depth help articles can be found here.


The Dolby Atmos Renderer and the Nuendo native Renderer for Dolby Atmos

Monitoring

Naturally you’re going to want to monitor what you are composing or mixing in Dolby Atmos.  While it’s certainly possible to compose or create components in stereo or 5.1, you’ll quickly realize the benefit of working in a monitoring environment that allows you to directly pan and mix as intended.  Dolby recommends a discrete speaker array in a 7.1.4 configuration.  That’s the traditional 7 speakers on the horizontal plane, plus a subwoofer (the .1) and 4 ceiling speakers: front and rear, left and right.  This configuration provides all of the spatial fidelity required for mixing to a home entertainment specification.


A 7.1.4 mixing environment for home entertainment
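For reference, here’s one way to spell out a 7.1.4 layout as a channel list with rough nominal angles.  The angles are common conventions shown for illustration only, not a Dolby calibration specification.

```python
# A 7.1.4 monitoring layout: 7 ear-level speakers, 1 subwoofer, 4 height speakers.
# Azimuth in degrees (0 = front center); angles are rough conventions, not a spec.
SPEAKERS_7_1_4 = {
    # ear-level layer
    "L": -30, "R": 30, "C": 0,
    "Lss": -90,  "Rss": 90,    # side surrounds
    "Lrs": -135, "Rrs": 135,   # rear surrounds
    "LFE": None,               # subwoofer, no directional angle
    # height layer (elevated roughly 30-45 degrees)
    "Ltf": -45,  "Rtf": 45,    # top front left/right
    "Ltr": -135, "Rtr": 135,   # top rear left/right
}
```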

What if you don’t have consistent or frequent access to a 7.1.4 mixing environment?  The binaural virtualization of Dolby Atmos for Headphones is an excellent way to monitor your content while composing or doing a rough mix.  This is a feature of the Dolby Atmos Renderer which allows you to monitor a virtualized Dolby Atmos render in real time as you mix.  Even though this virtual renderer sounds very good and gives you a good idea of how your music will translate to a spatial output, it’s always recommended to do your final mix in a 7.1.4 room with discrete speakers.
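In the most general sense, binaural virtualization filters each sound through left- and right-ear responses for its direction so that a spatial mix can be folded into two headphone channels.  The bare-bones sketch below shows that general idea with hypothetical HRIR arrays; it says nothing about how Dolby Atmos for Headphones is actually implemented.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray) -> np.ndarray:
    """Convolve one object's mono signal with left/right ear impulse responses
    (HRIRs) chosen for its direction.  Illustrative only; real headphone
    virtualizers are far more involved.  Assumes both HRIRs have equal length."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right])   # shape: (2, n_samples + len(hrir) - 1)
```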

Now that we’ve covered some of the general topics, in the next article we’ll dive into some recommendations for composition and mixing.
