Sound Capture & Design For Animation

Across the projects I have produced, a range of techniques was used to record and process the audio. Sound design has been a focus for me in this module, so much of the audio processing falls into that category; however, it sits alongside the capture sessions required to gather the audio samples in the first place.

Beginning with the animation audio, the recording approach was built around the three DME categories: Dialogue, Music and Effects. Dialogue was recorded with an sE2200a condenser microphone in the Audient studio, routed through the Audient desk for pre-amp gain and balancing, and captured in a Pro Tools session. There the takes were organised: unwanted takes were deleted and the best takes were promoted to the top of comp playlists wherever the audio matched the visual timing.

Timing was coordinated with three beeps to indicate the start of each phrase. With two characters in the animation and two video-game characters, two voice actors each performed one of each. At the mixing stage the vocals needed to be distinguishable, so the game-character dialogue was bit-crushed to emphasise its in-game sound source, and detuned to set it further apart from the two other characters and to add more character to the in-game troll, which is non-human and much larger than the other in-game character.
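
As a rough illustration of that dialogue chain (a minimal sketch rather than the exact plugin settings used), a bit-crusher can be modelled as quantising the signal to a reduced bit depth, and a simple detune as resampling the audio to shift its pitch down:

```python
import numpy as np

def bitcrush(signal, bits=6):
    """Quantise a [-1, 1] signal to a reduced bit depth for a lo-fi, 'in-game' texture."""
    levels = 2 ** bits
    return np.round(signal * (levels / 2)) / (levels / 2)

def detune(signal, semitones=-3):
    """Crude pitch-down by resampling; also slows the sound, which suits a larger character."""
    ratio = 2 ** (semitones / 12)                 # playback-rate change for the given pitch offset
    positions = np.arange(0, len(signal) - 1, ratio)
    return np.interp(positions, np.arange(len(signal)), signal)

# Placeholder 220 Hz tone standing in for a line of troll dialogue.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)
troll_voice = bitcrush(detune(voice, semitones=-3), bits=6)
```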

For the music in the animation project, much of the sound was recorded using MIDI instruments: a keyboard routed into Logic Pro both recorded the performance and triggered the Alchemy software instrument, which generated the audio. Using this patch I could load both preset and original audio samples for the composition. Since the aim of the composition was to highlight the unease in the scene and build on the eerie atmosphere and setting the characters find themselves in, I set out to create sounds that were airy and subdued, blending with the non-musical backing ambience and creeping out of it with an ambiguous, misty aesthetic. This soundscape style carried the opening scene up to the spaceship encounter, and resumed once the characters were back indoors until the final composition sequence. To achieve some of this in the sound design itself, I used a lot of white noise, either straight from noise generators or blended with harmonically pitched samples, then built it up with echoes and reverbs set to high feedback and density to blend the sound with the on-screen motion.
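
As a loose sketch of that layering idea (my own illustration, not the Alchemy patch itself), the snippet below mixes white noise with a faint tonal layer and then smears the blend with a high-feedback echo:

```python
import numpy as np

def feedback_echo(x, sr, delay_ms=350, feedback=0.75, mix=0.6):
    """Feedback delay: repeats pile up and smear the source into an ambient wash."""
    d = int(sr * delay_ms / 1000)
    y = x.copy()
    for n in range(d, len(x)):
        y[n] += feedback * y[n - d]          # each echo feeds back into the next
    y /= max(1.0, np.max(np.abs(y)))         # tame the build-up
    return (1 - mix) * x + mix * y

sr = 44100
t = np.linspace(0, 4, 4 * sr, endpoint=False)
noise = 0.2 * np.random.randn(len(t))        # airy white-noise bed
tone = 0.1 * np.sin(2 * np.pi * 110 * t)     # quiet harmonic layer to give a sense of pitch
pad = feedback_echo(noise + tone, sr)        # the echo blurs the two layers together
```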

Pure Data (Pd) was also used to create the samples for the spaceship. With the software instrument I designed in this portfolio's supplementary pieces, I recreated a classic, heavily modulating, analogue sci-fi sound which matched the futuristic propulsion I imagined coming from the spaceship.
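
The actual patch is documented in the supplementary pieces, but the core "heavily modulating" idea can be sketched outside Pd as one slow oscillator sweeping the pitch of another (a simplified stand-in for the patch, not a recreation of it):

```python
import numpy as np

sr = 44100
t = np.linspace(0, 3, 3 * sr, endpoint=False)

# A slow LFO sweeps the carrier frequency over a wide range, giving the
# wobbling, analogue sci-fi character described above.
lfo = np.sin(2 * np.pi * 0.8 * t)             # 0.8 Hz modulator
freq = 120 + 80 * lfo                         # carrier swept between 40 Hz and 200 Hz
phase = 2 * np.pi * np.cumsum(freq) / sr      # integrate frequency to get phase
spaceship = 0.4 * np.sin(phase)
```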

Much of the backing ambience needed to provide the sense of space I was after in the final production. To achieve this I used plugins that produce wide stereo images from the audio sent into them. For example, with the footsteps in the spacious, desolate concrete environment, I aimed to have their echoes flutter across a wide stereo field to suggest the large, distant walls; this used the Valhalla Supermassive reverb plugin.

To add distance to the layers of wind I also used reverbs with mostly long reflections, contrasting with the closer dry signal and increasing the depth of the overall sound. The same was done to the synthesizer sound in the final compositional sequence, enhancing the movement of the arpeggio as it unfolds out of the soundscape rather than sounding flat and strictly stereo. The spaceship received further post-processing to exaggerate its section and increase the overall dynamic: as the climax of the audio in the sequence, both its volume and its impact were pushed. To thicken the low end and add to the growl of the rocket boosters, I layered in a high-feedback delay with virtually no delay time, widening the wet signal with spread to fill the sonic field with the rocket sound, as the picture would lead you to expect.
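
One way to picture that widening trick (my own approximation, not the plugin's actual algorithm) is as a pair of very short, slightly different delays feeding the left and right channels, which decorrelates the wet signal and spreads it across the stereo field:

```python
import numpy as np

def widen(mono, sr, spread_ms=12.0, wet=0.5):
    """Spread a mono source by delaying left and right by slightly different short times."""
    dl = int(sr * spread_ms / 1000)            # left-channel offset
    dr = int(sr * spread_ms * 1.7 / 1000)      # right-channel offset, deliberately different
    left = np.pad(mono, (dl, 0))[:len(mono)]
    right = np.pad(mono, (dr, 0))[:len(mono)]
    dry = np.stack([mono, mono], axis=1)
    wet_sig = np.stack([left, right], axis=1)  # decorrelated copies read as width
    return (1 - wet) * dry + wet * wet_sig

# Example: widen a low rumble standing in for the rocket-booster layer.
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
rumble = 0.5 * np.sin(2 * np.pi * 55 * t) + 0.2 * np.random.randn(len(t))
stereo_rumble = widen(rumble, sr)
```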

Finally, there were acousmatic elements in the audio as well as sounds that needed precise placement to match the scene. Two examples of imaging in the scene are the distant, off-screen cry of one of the characters as they go missing at the end, which kept none of the original dry signal and used the widening techniques described above to throw the sound as far to the left as possible and emphasise the effect; and the right-side sample of the fridge being opened and shut, where widening was again used on the spot clip. The fridge clip was also a happy accident: lacking the cabling, I could only record my fridge with my phone, and since the phone has a stereo microphone I angled it so that the fridge sat to the right, matching its position on screen. This gave me the stereo image I needed at source, during the foley capture itself, so the clip needed far less of that extreme panning afterwards.

Before moving on to the foley recordings themselves, one important aspect of positioning all the foley elements at the mixing stage was the use of Logic Pro's Direction Mixer rather than standard stereo balancing. With the Direction Mixer the processed audio keeps its original true-stereo image, so when it is panned it does not simply lose level from the left or right channel; instead the whole image moves to the specified location. This true panning ensured that the full sound of each recording was relocated without any smearing caused by inconsistencies in the stereo balance of individual samples. I could also set the width of a sound source with the Direction Mixer's spread control, using one instance to set the stereo spread and a second one later in the channel strip to set the direction after the spread processing, giving independent width adjustment. That spread control was also used in the method described earlier, alongside delay modules, to throw sound very wide.
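
The difference between the two panning styles can be shown with a small sketch (a simplification of the idea, not Logic's actual Direction Mixer maths): balance panning only turns down one of the existing channels, while a rotation-style pan remixes both channels towards the new position so the whole image travels together:

```python
import numpy as np

def balance_pan(stereo, pan):
    """Balance: pan in [-1, 1] simply attenuates the channel opposite the pan direction."""
    left_gain = min(1.0, 1.0 - pan)
    right_gain = min(1.0, 1.0 + pan)
    return stereo * np.array([left_gain, right_gain])

def rotation_pan(stereo, pan):
    """Rotation-style pan: remix both channels so the full stereo image moves together."""
    angle = pan * np.pi / 4                    # up to +/- 45 degrees of rotation
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return stereo @ rotation.T

# A clip weighted to the left: balance panning right just quietens the left channel,
# while rotation panning carries the whole image towards the right.
stereo_clip = np.random.randn(44100, 2) * np.array([0.9, 0.3])
balanced = balance_pan(stereo_clip, pan=0.5)
rotated = rotation_pan(stereo_clip, pan=0.5)
```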

Finally, to capture the foley samples themselves, I undertook numerous sessions with another sE condenser microphone. Using props that matched what appears on screen (books, bottles, cans, drawers, doors), I acted out the impacts and motions until I was satisfied with the accompaniment. With many of these elements the aim was to match the on-screen action as closely as possible and to keep them embedded so that they do not draw too much attention to themselves while still filling out the space of the soundscape.

Most were recorded with the mono condenser straight into a MOTU 8pre audio interface and captured in Logic Pro; however, some were also given stereo processing through a vocal pedal where appropriate, when the on-screen cue suggested a less directional sound.
