'The audio industry needs to move forward': the rise and rise of immersive sound

Phil Ward prepares for the arrival of 360° audio techniques in the live sector, as the rise of immersive sound continues…

The emerging techniques of ‘object-based’ audio, quite apart from encouraging a swathe of fresh claims about who slept with Fraunhofer or IRCAM first, are really going to screw up Nipper the dog. He’s the one who sits at the HMV gramophone horn, convinced of the signal’s authenticity. The signal travels through a cable to a transducer, and emerges from some kind of flare right where you and Nipper would expect. But introduce a means of separating that source from its aperture, and Nipper will be spinning round just like he does when you pretend to throw the ball without actually letting go.

That’s pretty much what the current explorations into so-called 360° audio are doing, albeit to enrich listeners’ experiences and not to confuse household pets. Just as stereo creates a phantom image somewhere between the two sources, multiples of this effect can be used to create potentially hundreds of audio apparitions in three dimensions – and move them around, too.

Live Aid

All of which is just fabulous for VR, AR and other transporting multimedia thrills – as well as the brave new world of headphone-melting audio mixes for the spaced-out connoisseur. It’s also a boon for interactive experiences in art galleries, funfairs and reconstructions of Medieval Winchester. However, milking a live performance into these buckets is a different truckle of cheese. The phenomenon of the open microphone is enough of a challenge to start with, quite apart from the mood-swings of sensitive talent and neurotic architecture. Somehow the live sound industry is going to have to make all this work.

The good news, according to those at the front line, is that the sound engineer is not going to have to reinvent the wheel. Roger Wood is head of software at DiGiCo, the console pioneer that has just completed a joint project with French loudspeaker powerhouse L-Acoustics to integrate the SD range with L-ISA, L-Acoustics’ groundbreaking speaker processing that introduces ‘immersive hyperrealism’. Taking advantage of SD consoles’ use of OSC for remote control, DiGiCo has found a means to plot a course between OSC and L-ISA; the result is about to be launched as a software upgrade called DeskLink.

“We have provided a customised on-screen panel that displays the main L-ISA controllers and a touch sensitive, graphical representation of the pan position on a per input channel basis,” explains Wood. “The controls are automatically mapped to our console’s under-screen work surface rotaries to provide a simple, intuitive method for control of the sources in the L-ISA eco-system. Communication between the console and L-ISA uses OSC commands and the setup is accessible from the SD console’s standard External Control panel.”
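To make the OSC link concrete, here is a minimal sketch of the kind of remote control Wood describes, written in Python with the python-osc library. The IP address, port and address patterns are invented for illustration; they are not DiGiCo’s or L-Acoustics’ documented OSC schema.

```python
# Illustrative only: the addresses, value ranges, IP and port below are invented
# for this sketch and are not the documented L-ISA or DeskLink OSC schema.
from pythonosc.udp_client import SimpleUDPClient

PROCESSOR_IP = "192.168.1.50"   # assumed address of the processor on the shared network
PROCESSOR_PORT = 9000           # assumed OSC listening port

client = SimpleUDPClient(PROCESSOR_IP, PROCESSOR_PORT)

def set_source_pan(source_id: int, pan: float) -> None:
    """Send a pan position for one source (-1.0 = hard left, +1.0 = hard right)."""
    client.send_message(f"/source/{source_id}/pan", pan)

def set_source_distance(source_id: int, distance: float) -> None:
    """Send a normalised distance value (0.0 = closest, 1.0 = furthest)."""
    client.send_message(f"/source/{source_id}/distance", distance)

# Example: place source 1 slightly left of centre and push it back in the mix.
set_source_pan(1, -0.2)
set_source_distance(1, 0.6)
```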

‘DeskLink is very straightforward. The L-ISA processor does all of the hard work under the hood’: Scott Willsallen

Users are excited about this prospect. Scott Willsallen is an Emmy Award-winning sound designer for major international events, and a director of Australian consultancy Auditoria Ltd as well as UK-based Remarkable Projects. “DeskLink is very straightforward,” he says. “The L-ISA Processor does all the hard work under the hood, and all the console is doing is providing touchscreen control via the network – they have to be on the same Ethernet network. Audio connectivity is MADI, either ‘in line’ or as an insert: the processor has three MADI cards and the live configuration provides 96 inputs and 32 outputs at 96kHz. Input 1, for example, becomes an object, say on channel strip 1, and you activate L-ISA within that channel strip. You remove the normal Pan function on that strip and the L-ISA one replaces it – a square with a dot in the middle.

“Open up the full screen and you get five controls: Pan; Distance; Width; Elevation; and Auxiliary Send – just as you would, with different parameters, if you selected and opened up the full EQ for that channel. And you do that on a channel-by-channel basis in the normal way.

“The ‘matrixing’ is inside the processor; all you’re thinking about as the engineer is spatial placement. As far as workflow is concerned you don’t have to think about a delay matrix, just physically where should any specific sound come from. You can adjust it in the horizontal plane, the vertical plane, how close or far away, how wide or narrow that source is – whether it’s a spot percussion sound or a choir, for example – and the processor does the rest. Really, it’s an extension of the traditional techniques of panning left and right. The difference is that the placement is much more precise, because it’s emerging from a single array rather than having to consider amplitude and time difference, plus you get the added three-dimensional elements of height and depth.”
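A minimal sketch of how those five per-channel controls might be modelled as an ‘object’ in software. The value ranges are assumptions for the example, not the L-ISA Processor’s actual parameter scaling.

```python
from dataclasses import dataclass

@dataclass
class SpatialSource:
    """One console input treated as an object, carrying the five per-channel
    controls described above. Value ranges are assumptions for the sketch."""
    channel: int            # console channel strip feeding the processor input
    pan: float = 0.0        # horizontal placement, -1.0 (left) to +1.0 (right)
    distance: float = 0.0   # perceived depth, 0.0 (close) to 1.0 (far)
    width: float = 0.0      # source size, 0.0 (spot source) to 1.0 (very wide)
    elevation: float = 0.0  # height, 0.0 (ear level) to 1.0 (overhead)
    aux_send: float = 0.0   # level sent to the auxiliary/effects path

# A spot percussion hit stays narrow and close; a choir sits wide and further back.
percussion = SpatialSource(channel=1, pan=-0.3, distance=0.2, width=0.05)
choir = SpatialSource(channel=12, pan=0.0, distance=0.7, width=0.8, elevation=0.3)
```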

Willsallen fully expects other console manufacturers to follow suit, not least because of the minimal impact these parameters will have on the engineer’s working day. “It shouldn’t be any different to any other control of external devices, be it a Waves rack or whatever,” he points out. “You treat it like a plug-in, and the key point is that it doesn’t add to the console DSP load as the processing occurs inside the 2U L-ISA box.”

Sound and vision

Guillaume Le Nost is a key member of L-Acoustics founder Christian Heil’s L-ISA team based in North London. As head of R&D, he’s been shaping L-ISA into a flexible format for live sound, studio recording, re-mixing and re-mastering, and is convinced that pro audio needs this kind of treatment to keep up with rising production values. “We’ve seen major improvements in video and lighting design,” he says, “and we want sound back in the forefront of live events. Without sound, there is no show – and we can now achieve a better fusion between what you see and what you hear.”

L-ISA proposes vertical solutions for every point in the sound design chain, from speaker allocation to interface tools and from simulation to show time. It replaces traditional L-R configurations with a minimum of five frontal speaker arrays to cover the ‘Performance Zone’. More can be added to the ‘Extension Zone’ to immerse the audience in a wider soundscape, as with sidefills, with the exact deployment tweaked according to taste and budget. Typically, live sound L-ISA systems – L-ISA Live is the branded product range – will feature fewer arrays than those for fixed installation, which have already reached 24 in some cases to envelop the audience completely.
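As an illustration of that layout, the sketch below enumerates a hypothetical five-array frontal system plus a pair of extension arrays. The names and angles are invented for the example rather than taken from any L-ISA design guide.

```python
# Hypothetical system plot: five frontal arrays covering the Performance Zone,
# plus two Extension Zone arrays used like sidefills. Names and angles are
# invented for the example.
performance_zone = [
    {"array": "Left",         "azimuth_deg": -28},
    {"array": "Left-Centre",  "azimuth_deg": -14},
    {"array": "Centre",       "azimuth_deg":   0},
    {"array": "Right-Centre", "azimuth_deg":  14},
    {"array": "Right",        "azimuth_deg":  28},
]

extension_zone = [
    {"array": "Extension Left",  "azimuth_deg": -55},
    {"array": "Extension Right", "azimuth_deg":  55},
]

print(f"{len(performance_zone)} frontal arrays, {len(extension_zone)} extension arrays")
```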

“It could change the design of consoles and speakers if, in the future, more and more people use object-based mixing for live shows,” reflects Le Nost, “but it’s really achievable right now. At the moment we’ve integrated the control of objects, rather than the mixing of objects, so the workflow has not changed dramatically. It’s what good digital control surfaces are designed to do! You can still use your existing speaker and console inventory to implement a quite revolutionary change. You only need to add the processor, which is just another box in the audio chain.”

The limits to arraying for live sound explain the doubts surrounding the use of ‘360’ as a label for these applications, but for some this misses the point. Sound designers are already realising the benefits of the way these processors can ‘point-source’ multiple signals and correct the uneven wavefront arrival times inherent in plain stereo, dramatically increasing the proportion of the audience that gets to enjoy the so-called sweet spot. They can also bring overall levels down while maintaining coherence and impact. Even with five arrays for a basic rock and roll show, it may not be ‘immersive’ but the sound will certainly be better for more people.
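The arrival-time problem is easy to quantify. The sketch below, using invented speaker and seat positions, shows how an off-centre listener hears the near hang of a conventional left-right system many milliseconds before the far one, which is what collapses the stereo image for most of the room.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature

def arrival_offset_ms(listener, left_speaker, right_speaker):
    """Difference in arrival time (ms) between the left and right hangs at one
    seat. Positions are (x, y) in metres; the geometry is invented for the example."""
    d_left = math.dist(listener, left_speaker)
    d_right = math.dist(listener, right_speaker)
    return (d_left - d_right) / SPEED_OF_SOUND * 1000.0

left_hang, right_hang = (-8.0, 0.0), (8.0, 0.0)

# Centre seat 20 m back: both hangs arrive together and the phantom image holds.
print(round(arrival_offset_ms((0.0, 20.0), left_hang, right_hang), 1))   # 0.0 ms

# Seat 10 m off to the right: the near hang leads by roughly 20 ms, so the image
# collapses towards it (the off-centre problem these processors address).
print(round(arrival_offset_ms((10.0, 20.0), left_hang, right_hang), 1))  # ~19.8 ms
```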

You can use the word ‘soundscape’ when talking about L-ISA, because that’s what it is, provided you pronounce it with a lower case ‘s’. If you capitalise the initial letter you are then talking about Soundscape, the competing immersive audio system made by global rival d&b audiotechnik. The DS100 hardware processor has its own algorithm, and is bolted on to a live rig in much the same way and with an equally minimal impact on workflow. Again, the arrays need to increase by around 30% to begin to achieve the results, but most agree that each array – and each element in the array – can be smaller and lighter almost in inverse proportion to the benefits produced.

“Soundscape is about changing the entire idea of sound reinforcement,” says Ralf Zuleeg, head of sales services and application engineering at d&b audiotechnik. “Things have been done in the same way for decades, and it’s time for something new. The real challenge is not to do with the technology but with education – re-learning. Soundscape allows us to rethink the role sound plays, both in terms of its social and cultural context and in terms of the relationship between the listener and their environment.”

Something a little spatial

The Soundscape interface comprises two software modules, d&b En-Scene and d&b En-Space, each of which opens up within d&b’s existing R1 remote control software, while basic configuration is carried out as normal in d&b’s prediction software ArrayCalc. Now that the processing platform has been released officially, it has also emerged that two third-party developers in show control and theatre have already integrated it: US specialist Figure 53 has incorporated a link within its ubiquitous show control software QLab, and Norwegian protagonist TTA’s Stagetracker II FX tracking and localisation system is also supporting it.

Tony David Cray, head of sound at Sydney Opera House

Tony David Cray, a sound designer for large, outdoor events; a live sound and recording engineer; and currently head of recording and broadcast at Sydney Opera House, has been at d&b’s HQ in Germany recently to examine Soundscape more closely. “Soundscape allows us to create a virtual environment in which we can place close-mic’d sources across a broad space,” he comments, “and more than anything it increases our empathy with the music and enables us to deliver it to the audience in a much richer way. When I first heard it used, on the opera Die Tote Stadt in Sydney Harbour, I wept. The illusion of spatialisation was overwhelming.”

There is another kid on the block. Bjorn Van Munster, formerly at Salzbrenner Stagetec, is now the prime mover behind Dutch manufacturer Astro Spatial Audio (ASA). “For some,” he says, “it’s getting a little unclear about what 3D live sound is, whether it’s stage localisation, immersive or surround audio, or basically 2D solutions for re-distributing the sweet-spots or some other kind of imaging. For us, it is clear: it’s about using the principles of object-based audio to give sound professionals an entirely new tool, to unlock the creative potential of audio.”

Uniquely among the manufacturers of algorithm-filled boxes for plug-and-play ‘360’ live sound, ASA does not also market loudspeakers and is agnostic in this regard. SARA II, ASA’s ‘premium rendering engine’, is a 3U rackmount hardware application of the SpatialSound Wave algorithm, optimised for most types of digital sound reinforcement system and compatible with MADI, Dante or AES67.

“Everybody’s suddenly talking about ‘object-based audio’, but there’s no clear definition of what an ‘object’ is,” continues Van Munster. “For us it’s a virtual speaker behind the physical speaker, in a vertical and horizontal mesh or dome around the whole area of the system. Each one can be moved anywhere, automatically re-calculated for specific positions. It has data about spatial positioning, acoustic characteristics, diffusion, reverberation… everything needed to re-scale it convincingly in a different environment, no matter where you are.”
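A rough sketch of that idea: a grid of virtual-speaker positions on a dome over the audience, with each object pairing a position with some rendering metadata. The resolution, radius and field names here are assumptions for illustration, not ASA’s actual parameter set.

```python
import math

def dome_positions(radius_m=12.0, azimuth_steps=12, elevation_steps=3):
    """Grid of virtual-speaker positions on a dome over the audience, i.e. the
    horizontal and vertical mesh described above. Resolution and radius are
    invented for the sketch."""
    positions = []
    for e in range(elevation_steps):
        elev = math.radians(80.0 * e / max(elevation_steps - 1, 1))
        for a in range(azimuth_steps):
            az = math.radians(360.0 * a / azimuth_steps)
            positions.append((
                round(radius_m * math.cos(elev) * math.cos(az), 2),  # x
                round(radius_m * math.cos(elev) * math.sin(az), 2),  # y
                round(radius_m * math.sin(elev), 2),                 # z (height)
            ))
    return positions

# Each object then pairs a position in the mesh with rendering metadata; the
# field names below are placeholders rather than ASA's actual schema.
vocal_object = {
    "position": dome_positions()[0],
    "diffusion": 0.1,     # how point-like versus spread the source appears
    "reverb_send": 0.25,  # level into the room/reverberation model
}
print(len(dome_positions()), "virtual positions in the dome")  # 36 with the defaults
```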

Algorithm and blues

Ah, there’s the rub: the system does it for you. The L-ISA Processor does all the number crunching and the summing, as do the DS100 and the SARA II, but for Robin Whittaker – co-founder of Out Board, home of the TiMax2 SoundHub delay-matrix, audio show controller and spatial audio processor – there are levels of subtlety only achievable if you are willing and able to roll up your delay-matrix sleeves and do the math.

“There’s more to this than a plug-in,” he states. “When we used TiMax on Alan Ayckbourn’s ‘narrative for voices’ called The Divide at The Old Vic in London, it also had a very thick underscore from an upstage band and choir. We radar-tracked the actors, time-aligning and localising every voice, and left a clear space in the middle of the mix for these to come through. We did this by timing the band and choir back to where they were, but massaging the levels so they were kept out of that middle space, and timing the music into the surround speakers according to the cues. It created a really immersive soundtrack while keeping crystal-clear voices from the stage. The point is: if you throw your mix entirely into the hands of the algorithm, you’d be hard pressed to achieve those two ends simultaneously.”
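For a sense of what ‘doing the math’ involves, here is a simplified source-to-speaker delay matrix: each speaker feed is delayed so the reinforced sound arrives just after the natural wavefront from the performer’s true position, preserving localisation through the precedence effect. The positions are invented, and this shows the general principle rather than TiMax’s own algorithm.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second

def delay_matrix_ms(sources, speakers, haas_offset_ms=2.0):
    """Simplified source-to-speaker delay matrix: each feed is delayed by the
    time the natural wavefront takes to travel from the performer to that
    speaker, plus a small offset, so listeners localise to the stage via the
    precedence effect. Positions are (x, y) metres and invented; this is the
    general delay-matrix idea, not TiMax's own algorithm."""
    matrix = {}
    for src_name, src_pos in sources.items():
        for spk_name, spk_pos in speakers.items():
            travel_ms = math.dist(src_pos, spk_pos) / SPEED_OF_SOUND * 1000.0
            matrix[(src_name, spk_name)] = round(travel_ms + haas_offset_ms, 1)
    return matrix

sources = {"actor_stage_left": (-4.0, 0.0), "band_upstage": (0.0, -6.0)}
speakers = {"pros_left": (-6.0, 1.0), "pros_right": (6.0, 1.0), "surround_rear": (0.0, 25.0)}

for (src, spk), ms in delay_matrix_ms(sources, speakers).items():
    print(f"{src} -> {spk}: {ms} ms")
```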

So time will tell, in more ways than one. But however granular we get, few can argue that if we can implement any greater 3D control of the live mix we are – sorry, Nipper – at least barking up the right tree. Last word to Guillaume Le Nost: “The audio industry needs to move forward with this, so that sound remains creative and does not become just a commodity.”

Source: mi-pro.co.uk