
Almanac
Answer to Yourself
Stereo Rendering Environments - An inquiry into the application of virtual acoustic encoders in music post-production for stereo formats.
Within the field of immersive audio, a unified consensus on adoption and application has remained elusive. This research inquiry focuses on a gap in applied technological context, which has motivated an alternative method of rendering panoramic sound fields. Through the implementation of an audio encoding framework, virtually generated acoustic enhancement forms a proposed post-production approach to immersive audio within the two-channel capacity of standardised & accessible stereo audio.
VA Impulse Response Generation and Observation
To visualise and observe the frequency response of various VAS models and of different placements of HRTF listener / sound sources during the rendering of audio stems, impulse responses have been rendered for this purpose. Comparing these frequency responses against one another draws an insightful distinction between VAS models and their intrinsic timbre, much as between natural acoustic environments.
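As a sketch of this observation step, the magnitude response of a rendered impulse response can be derived with an FFT. The following is a minimal Python / NumPy illustration; the decaying-noise impulse response here merely stands in for a rendered VAS response, and is not one of the session's actual renders:

```python
import numpy as np

def frequency_response(impulse_response, sample_rate):
    """Magnitude spectrum (dB) of a rendered impulse response."""
    spectrum = np.fft.rfft(impulse_response)
    freqs = np.fft.rfftfreq(len(impulse_response), d=1.0 / sample_rate)
    magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)  # offset avoids log(0)
    return freqs, magnitude_db

# Stand-in IR: one second of exponentially decaying noise at 48 kHz
sr = 48_000
t = np.arange(sr) / sr
ir = np.random.default_rng(0).standard_normal(sr) * np.exp(-6 * t)

freqs, mag = frequency_response(ir, sr)  # plot mag against freqs to compare models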
VA Mixing Session - Guitars
The rendering sessions for Track 2 encoded the guitar tracks within the following source array and virtual environment model. Sources in each diagonal direction, combined with two vertical sources, maximise the ray distribution in the upper hemisphere of the listener perspective. All six sources contain ambisonic components, allowing stereo signals to be output from each individual source and generating raycast cues which maintain this stereo image on reflections from the surrounding geometry.
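One way to picture this six-source array is as spherical coordinates around the listener. This is a hypothetical sketch: the azimuths and elevations below are illustrative placeholders, not the session's exact values:

```python
import math

def source_position(azimuth_deg, elevation_deg, radius=1.0):
    """Convert azimuth/elevation (degrees) to Cartesian x/y/z around the listener."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (radius * math.cos(el) * math.sin(az),   # x: left/right
            radius * math.sin(el),                  # y: up
            radius * math.cos(el) * math.cos(az))   # z: front/back

# Four diagonal sources plus two raised vertical sources, all in the
# upper hemisphere (positive y) relative to the listener at the origin.
array = {
    "front-left":  source_position(-45, 35),
    "front-right": source_position(45, 35),
    "rear-left":   source_position(-135, 35),
    "rear-right":  source_position(135, 35),
    "top-front":   source_position(0, 70),
    "top-rear":    source_position(180, 70),
}
```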
VA Mixing Session - Sound Collage
To generate additional texture to accompany the tape cassette EP produced creatively alongside these sessions, a sound collage has been created to fill the noise between track transitions. This explores the abstract creative potential of VA for soundscaping and ambient effects processing.
VA Mixing Session - Virtual Microphones
In considering directions this virtual rendering environment implementation could be taken for further post-production uses, one that comes to mind is a reamping setup where stems can be re-recorded with additional timbre and dynamics. To investigate this practically, this session covers the setup and configuration of virtual acoustic encoding in emulation of the re-amp process.
VA Mixing Session - Multi-Track Mix
This documentation covers the production of a fully encoded multi-track implementation for mixing, using a set of back-catalogue guitar / bass / kit / synth stems as render material. Stems were exported from Logic Pro and imported into Unity Engine as audio assets, with an additional material component asset created specifically for this session.
VA Mixing Session - What Will Happen ?
In this session some novel placements of audio sources will be used to split single stems of instrumentation into varying positions around the stereo field. I will first use a piano stem from an in-progress production remix track - What Will Happen? - with the aim of discovering distinct placements through new experimentation informed by previous implementation research sessions.
VA Mixing Session - Record Store
Going into this iteration with the aim of incorporating tested VA rendering setups into the mixing process, the session began with the 6-channel source arrangement used in the Session 6 audio project. Again working with Logic Pro and Unity Engine bridged via Blackhole for on-the-fly rendering during mixing, this project differs in that the track itself is a VA remix, allowing for more creative flexibility.
VA Mastering Session
Given that this project's creative application focuses on mixing within the post-production stage of audio production, mastering tracks with VA encoding is worth enquiring into, as it falls within the same production stage. Mastering full tracks with VA may present different applications and allows investigation of whether existing stereo-image phase relationships can be taken further into exocentric space.
VA Mixing Session
The aim of this session was to integrate VA rendering into a post-production mixing workflow, with a focus on understanding how this encoding process may sit alongside mixing to create immersive sound from mono / stereo stems in a typical stereo mix session.
VA Mixing Implementation in Virtual Acoustic Space
This session aims to further develop understanding of the interactions between receiver positioning and audio sources within VAS. An arrangement of string instruments will be positioned in a theatre model to synthesise the listening experience of orchestral music. Whilst these sources will remain static, the listening position will be moved to test microphone placement and pickup within the VAS.
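A minimal model of what moving the listening position changes is the direct path's distance-dependent delay and inverse-distance gain. This is a simplifying sketch with hypothetical coordinates; the actual render engine's distance model is more involved:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def direct_path_cues(source_pos, listener_pos):
    """Distance, arrival delay, and inverse-distance gain for the direct ray."""
    distance = math.dist(source_pos, listener_pos)
    delay_s = distance / SPEED_OF_SOUND
    gain = 1.0 / max(distance, 1.0)  # clamp to avoid boosting very close sources
    return distance, delay_s, gain

# Static string source with two hypothetical listening positions (metres)
violin = (2.0, 1.5, 6.0)
stalls = (0.0, 1.2, 0.0)     # close listening position
balcony = (0.0, 5.0, -10.0)  # distant, elevated listening position

near = direct_path_cues(violin, stalls)
far = direct_path_cues(violin, balcony)
```

Moving the receiver towards the balcony position increases both the delay and the distance attenuation of the direct ray, while the static reflections shift relative to it.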
VA Virtual Acoustic Space
As a core part of the generation of virtual acoustics comes from the geometric structure within which the sound sources propagate, this session will run iterations on three models of varying size to determine what distinctions may be apparent between their VAS.
VA Acoustic Geometry Materials
Within the concept of VA there is the ability to define material properties of rendered environment geometry. This is done via Steam Audio's geometry tag component, which assigns absorption / scattering / transmission properties to surfaces, adjusting how incoming rays from sound sources interact with them.
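One plausible way to think about these coefficients is as an energy partition at each surface hit. This is a simplifying sketch using a single wideband value per coefficient; Steam Audio's actual material model works per frequency band and differs in detail:

```python
def surface_interaction(incident_energy, absorption, scattering, transmission):
    """Partition an incident ray's energy at a surface into four components."""
    absorbed = incident_energy * absorption
    transmitted = incident_energy * transmission
    reflected = incident_energy - absorbed - transmitted
    specular = reflected * (1.0 - scattering)  # mirror-like reflection
    diffuse = reflected * scattering           # scattered reflection
    return {"specular": specular, "diffuse": diffuse,
            "transmitted": transmitted, "absorbed": absorbed}

# A moderately absorptive, fairly scattering surface (illustrative values)
parts = surface_interaction(1.0, absorption=0.3, scattering=0.4, transmission=0.1)
```

Raising absorption shortens the rendered decay, while raising scattering trades the specular reflection for diffuse energy without changing the total.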
VA Stereo Transferability Testing
As a key aspect of this type of surround-encoded audio is that it uses only 2-channel PCM waveforms, this testing session investigates whether the encoding is digitally dependent or can also be reproduced over analogue stereo tape. The purpose is threefold: to establish whether the encoded audio reproduces consistently regardless of 2-channel playback medium; to understand whether playback speed has any effect on the encoded audio's spatial properties; and to demonstrate that the spatial and binaural cue properties are fundamental to the waveform, with no additional decoding, processing, or playback setup required.
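The playback-speed question can be reasoned about directly: a binaural lag is fixed in samples within the waveform, so slowing the medium stretches the cue in time. The following is a hypothetical NumPy sketch using a single click per channel, not material from the session itself:

```python
import numpy as np

def itd_samples(left, right):
    """Cross-correlation peak lag; positive means the right channel arrives later."""
    corr = np.correlate(left, right, mode="full")
    return (len(right) - 1) - int(np.argmax(corr))

# A click arriving 8 samples later in the right channel encodes a lateral cue
left = np.zeros(64)
right = np.zeros(64)
left[20] = 1.0
right[28] = 1.0

sr = 48_000
lag = itd_samples(left, right)      # the lag is fixed in the waveform itself
itd_normal = lag / sr               # cue duration at normal playback speed
itd_half_speed = lag / (sr * 0.5)   # half-speed tape doubles the cue in time
```

The interaural cue survives any 2-channel medium because it lives in the sample relationship between the channels, but a speed change rescales its duration, which is why playback speed is worth testing separately.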
VA Pilot Session - Mixing Implementation of VA into Logic Pro X with Unity Engine + Steam Audio
Running through the first implementation gave valuable insights into the recording process of the Steam Audio implementation in Unity Engine. With the initial setup across the Logic Pro X and Unity Engine sessions, the internal routing software Blackhole led to a mismatch between the two project sessions due to unconfigured sample-rate settings, producing slowed audio in the engine and heavily distorted recordings in the DAW. Despite this, the initial setup itself was quick and straightforward, with reasonable CPU and memory demands.
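The mismatch has a simple arithmetic character: audio generated at one sample rate but clocked out at another is stretched in time and dropped in pitch by the ratio of the two rates. The 48 kHz / 44.1 kHz pairing below is an assumed illustration, not the session's logged settings:

```python
import math

ENGINE_RATE = 48_000  # assumed Unity / Blackhole stream rate
DAW_RATE = 44_100     # assumed Logic Pro session rate

# Samples produced at the engine rate but played at the DAW rate run slow
speed_factor = DAW_RATE / ENGINE_RATE                 # < 1 means slowed playback
duration_scale = ENGINE_RATE / DAW_RATE               # audio lasts ~8.8% longer
pitch_shift_semitones = 12 * math.log2(speed_factor)  # pitch drops ~1.5 semitones
```

Matching the sample rate across the DAW session, the engine project, and the virtual routing device removes both the time stretch and the pitch drop at once.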
Virtual Acoustic Space Models
The following VAS model rooms have been created to generate varying audio rendering environments through which the signal stems can propagate, encoding them with the virtual acoustic properties of each. These examples use different stems for each source and vary in VAS size to investigate the breadth of variety possible with this encoding process.
Steam Audio integration into Unity Engine for recording and mixing in Logic Pro X
As the aim of this project is focused on encoding virtual acoustic properties into PCM stereo, the encoding processes need to be brought together within a virtual rendering environment in a way that allows for the generation and recording of virtual acoustic space. This is also summarised in the virtual rendering inquiry as the component functions concept, involving the encoding / rendering functions:
Mixing Implementation Development of Virtual Acoustics
The following pages contain the iterative session documentation produced alongside the primary practice-based research inquiry into the practical and creative application of the encoding method itself. The iterative process has spanned both the development and optimisation of the rendering process and encoding functionality, and the structure and approach of the sessions themselves.
Brief Production Writeup - Ideation Process
In our first week of the module we were tasked with producing an asset pack for a film, consisting of ADR recordings, sound design packs, foley backgrounds, walla / loop groups, and music for four scenes. Working as a production team we divided the assets up, meaning I was responsible for producing the alternatives for foley backgrounds and sound design. The ADR was then scheduled to be recorded in the Neve / Audient studios along with the loop group assets.
Game Audio Implementation Critical Evaluation and Reflection
For the game audio project in this module we were tasked with creating an accompaniment piece to implement into the game environment using FMOD middleware and the Unreal 4 game engine. I used Logic Pro to create the audio files I would need; the sounds produced for the backing ambience were focused primarily on creating the overall atmosphere of the piece.