Game Music in Dolby Atmos - Part 3

December 21, 2021 by Kristoffer Larson

In the previous articles we covered creating and mixing music for games presented in Dolby Atmos, from high-level concepts to specific practical recommendations for creating your mix. In this last article we will discuss mastering and exporting your content so it’s ready to be implemented in a game audio engine. As a reminder, check out my video series found here, which walks through most of this content with tangible examples.

The final stretch

Once you have your instruments panned accordingly and you’re starting to dial in the final mix, there are a few things that you’ll want to consider in order to prepare for recording a master.  If you have a DSP chain that you would typically use on your “master” bus, this can instead be placed on the bed bus.  However, keep in mind that any audio objects will not be affected by this chain because they are individual audio streams that do not go through the bed signal flow.  Objects may have send returns that go into the bed and those will be affected by your bed master chain, just not the original object audio.  If you need the objects to have an effect on the overall bed, you can use DSP that takes a sidechain input and then route a send from the object into the sidechain input.

Another thing to consider regarding the separate paths of the bed and objects is that they do come together in the Dolby Atmos Renderer, whether it’s the mastering application or the output of your game.  So, make sure to monitor your headroom and loudness and try to address it before the renderer input.  The Dolby Atmos Renderer application does provide loudness metering as well as a limiter to catch peaks, but you’ll have more control with your preferred tools in your DAW.
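To make the headroom point concrete, here is a minimal sketch of a peak-level check in Python. The function names and the -1 dBFS ceiling are my own illustrative choices, not part of any Dolby tool, and a real mastering workflow should rely on true-peak and LUFS metering rather than simple sample peaks:

```python
import math

def peak_dbfs(samples):
    """Return the peak level in dBFS for a sequence of float samples (-1.0..1.0)."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return float("-inf")
    return 20.0 * math.log10(peak)

def headroom_db(samples, ceiling_dbfs=-1.0):
    """Headroom remaining below a target ceiling (e.g. a -1 dBFS peak target)."""
    return ceiling_dbfs - peak_dbfs(samples)

# A half-scale 440 Hz tone peaks near 0.5, i.e. about -6 dBFS
tone = [0.5 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(peak_dbfs(tone), 1))  # close to -6.0
```

Running a check like this per stem before the renderer input gives you a rough early warning, but your DAW’s metering plugins remain the tool of record.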

GMDAFigure11_MasterMeters.png

Use your mastering DSP chain on the bed bus. 

When you are monitoring the output of the Dolby Atmos Renderer, it’s a good idea to turn on the Spatial Coding monitoring option in the preferences.  Spatial Coding is used to reduce the dataset of a full Dolby Atmos stream to a size that is appropriate for the current spec of HDMI.  Essentially the algorithm will “cluster” objects that are in close proximity to each other without a perceivable change.  This is a process that happens in the transmission or media authoring stage, so your Dolby Atmos master will not be affected.  It’s also worth noting that this affects objects only, so if your music has been mixed using just the bed channels, Spatial Coding will not apply.

Exporting

Since game audio engines cannot import Dolby Atmos masters or ADM files directly, your content will be imported as a collection of independent files: the bed as a 10- or 12-channel interleaved file and each object as a mono file. It is still recommended to create a Dolby Atmos master, because it can be quite useful for a variety of reasons. You can derive a 7.1.4 bed re-render for importing into your audio engine, or you can re-render any number of different channel-based full mixes for trailers or soundtrack exports. Plus, it’s an excellent archival format for long-term storage or transferring to other facilities.

Bed

As mentioned above, you can create your bed export by re-rendering out of the Dolby Atmos Renderer, either live or from an existing master. You can also opt to export directly out of your DAW, as long as you are able to achieve the correct channel count for your desired implementation. For example, Nuendo can export 7.1.4 tracks, whereas Pro Tools can only go up to 7.1.2. So if you are authoring in Pro Tools but want your bed to be 7.1.4, you will need to create that via a re-render out of the Dolby Atmos Renderer.

GMDAFigure12_DAR_Rerenders2.png

Configuring your re-render options

One thing to keep an eye out for is maintaining the correct channel order when importing interleaved files into your game audio engine. The Dolby Atmos Renderer uses the SMPTE channel order standard, but the Microsoft convention swaps the rear and side surrounds. I tend to export as multi-mono and then use the Wwise Multi-channel Creator to create an interleaved file that follows Microsoft’s channel order.
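The swap itself is just a channel permutation. Here is a minimal sketch of what that remap looks like; the function name and permutation table are my own illustrative choices, and the 7.1 orders in the comments reflect the common SMPTE and Microsoft WAVEFORMATEXTENSIBLE conventions, so verify against your engine’s documentation before relying on them:

```python
def reorder_channels(frames, permutation):
    """Reorder the channels of interleaved audio frames.

    frames: list of frames, each a tuple with one sample per channel.
    permutation: permutation[i] = source-channel index that feeds output channel i.
    """
    return [tuple(frame[src] for src in permutation) for frame in frames]

# Illustrative 7.1 bed mapping: SMPTE order places the side surrounds before
# the rears (L R C LFE Lss Rss Lrs Rrs), while the Microsoft convention places
# the backs before the sides (FL FR FC LFE BL BR SL SR) -- so the two pairs
# swap positions. Height channels (for 7.1.4) would extend this table.
SMPTE_TO_MS_71 = [0, 1, 2, 3, 6, 7, 4, 5]

frames = [(0.1, 0.2, 0.3, 0.0, 0.5, 0.6, 0.7, 0.8)]
print(reorder_channels(frames, SMPTE_TO_MS_71))
# [(0.1, 0.2, 0.3, 0.0, 0.7, 0.8, 0.5, 0.6)]
```

A tool like the Wwise Multi-channel Creator performs this kind of remap for you when assembling the interleaved file from multi-mono stems.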

Objects

Since object metadata is currently not available to import directly into game audio engines, your audio objects will be exported directly from your DAW.  Dolby Atmos objects are mono by nature, so you will not be exporting any panning done in the DAW.  You can use stereo or multi-channel content to import into your game audio engine, however the engine will either collapse that content to mono or assign an object for every channel of content.  This may or may not be what you want, so just make sure you understand how your engine treats object content that is greater than mono.  Also remember that any send returns were captured in the bed, so you’re just exporting the output of that object track.
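As a sketch of the fold-down case, one plausible way an engine might collapse stereo object content to a single mono stream is a simple channel average. This is illustrative only (real engines may apply different downmix coefficients), so consult your engine’s documentation for its actual behavior:

```python
def collapse_to_mono(frames):
    """Average the channels of each frame -- one plausible mono fold-down.

    frames: list of per-frame tuples, one float sample per channel.
    """
    return [sum(frame) / len(frame) for frame in frames]

# Stereo content folded down to one mono stream per frame
stereo = [(0.2, 0.4), (-0.1, 0.3)]
mono = collapse_to_mono(stereo)  # roughly [0.3, 0.1]
```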

GMDAFigure13_NuendoExports.png

Exporting individual objects from Nuendo

Game audio engine resources

The nitty-gritty of implementing your music into your game will depend on a number of variables including the game audio engine and the game engine itself.  Keeping the scope of this article to a high-level overview, here are some links with far more detailed information:

  • Audiokinetic Wwise – This is a great blog that covers the current state of implementing object-based audio in Wwise.
  • Firelight FMOD – Here is their main documentation page where you can find the latest information regarding object-based audio authoring.
  • Proprietary audio engines – If you are using (or creating) a proprietary audio engine, your first stop should be here to get the basics of Microsoft Spatial Sound. Then head over to dolby.com for further resources or to connect with the game team for more info.
  • Lastly, here’s another plug for my video series where I walk step by step through the music mixing and implementation process using most of the tools described here.

GMDAFigure14_FMOD.png

 

GMDAFigure14_Wwise.png

FMOD and Wwise game audio authoring tools

A few parting thoughts

Channel-based music tracks

This series of articles was written mostly from the perspective of creating bespoke music content for your game. But what if you are implementing already-created music, such as licensed tracks or legacy content, that is not in Dolby Atmos? Naturally, it would be ideal if you could get your hands on the original sessions and remix for Dolby Atmos. The next best option may be to upmix the stereo source using an upmixer plugin that supports 7.1.4, such as Nugen Halo or PerfectSurround Penteo 16 Pro. Of course, there’s nothing to keep you from implementing an already great mix in whatever channel format you have. Stereo or surround channel-based mixes still sound great in Dolby Atmos: just assign the channels to the appropriate bed channels and you’re good to go!

Implementation Implications

It’s worth noting that, compared to a stereo or surround game soundtrack, your overall media count and disk footprint will be affected. If your music is mixed entirely in the bed, you’re dealing with an effective channel count of 12 for a 7.1.4 bed, which may be a significant increase in data. On the other hand, if your music relies mostly on objects and/or small channel-based sections that are dynamically mixed in-engine, you may end up with less data. Interactive music approaches are not directly affected or limited by working in Dolby Atmos but will factor into your media data calculus.
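As a back-of-the-envelope illustration of that increase, uncompressed PCM size scales linearly with channel count. The helper below is hypothetical (my own naming), and real games typically ship compressed audio, but the ratio between formats still holds:

```python
def pcm_bytes(channels, sample_rate, bit_depth, seconds):
    """Uncompressed PCM size in bytes for the given format and duration."""
    return channels * sample_rate * (bit_depth // 8) * seconds

# One minute of 48 kHz / 24-bit audio: stereo vs. a 12-channel 7.1.4 bed
stereo = pcm_bytes(2, 48000, 24, 60)
bed_714 = pcm_bytes(12, 48000, 24, 60)
print(bed_714 / stereo)  # 6.0 -- the bed carries six times the data of stereo
```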

 

GMDAFigure15_WwiseIMH.png

Interactive Music editor in Wwise

I hope this has been a helpful and informative series of articles. Please visit games.dolby.com for a ton of other great resources about creating game audio and video content in Dolby Atmos and Dolby Vision, as well as to contact the game developer support team.