Here is a snapshot of the semi-automatic sound-triggering mechanism I created exclusively for this project. The concept is simple, but the implementation was not, and I refined it over the course of two months (while also working on other projects).
My starting point was that I needed 91 audio files to animate the 13x7 grid of squares in the After Effects project. I also wanted to use unique sounds made specifically for this project.
I could have made a normal composition and used sounds from it to control the squares, but there were several reasons not to do that.
It would have been harder for the sound to be perceived as part of the image if it had the qualities of a pre-composed piece of music.
Avoiding that issue required a certain level of control and determinism over the sounds, but also a great amount of chance.
I enforced my will over the shape of each sound but allowed a certain degree of freedom in the development of events. Here are a few examples of good-looking sounds.
In the current implementation, each of the 7 rows represents an octave with 13 pitches, from C to the C one octave above. For each pitch (note in the octave, or frequency) I made several sounds, and each of them has an equal chance of being played back. This offers some diversity while keeping the overall sound uniform.
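The selection logic described above can be sketched in a few lines of Python. The file names, the number of recordings per pitch, and the `trigger` helper are all illustrative assumptions, not the actual implementation:

```python
import random

ROWS = 7       # octaves
PITCHES = 13   # C, C#, ..., B, and the C one octave above
VARIANTS = 3   # assumed number of recordings made per pitch

# pool[(row, pitch)] -> list of interchangeable recordings for that pitch
# (file names here are purely illustrative)
pool = {
    (row, pitch): [f"row{row}_pitch{pitch}_take{v}.wav" for v in range(VARIANTS)]
    for row in range(ROWS)
    for pitch in range(PITCHES)
}

def trigger(row: int, pitch: int) -> str:
    """Pick one recording for this pitch with equal probability,
    adding variety while keeping the overall sound uniform."""
    return random.choice(pool[(row, pitch)])
```

Note that the pool naturally contains 7 x 13 = 91 pitch slots, matching the 91 squares in the grid.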
The images at the top of the page are examples of rows.
I will also refer to sounds as “samples” - that is not to be confused with samples that represent a digital audio signal (as in 44100 samples per second).
A global control mechanism changes the duration of events for each row, as well as the speed at which the samples are triggered, the gaps between the notes on a row, and the focus on a specific group of sounds on each row.
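As a rough sketch of what such per-row controls might look like, the parameters named above (duration, trigger speed, gaps, focus) can be grouped into one structure. The field names and the `next_event` helper are my own assumptions, not the project's actual code:

```python
import random
from dataclasses import dataclass

@dataclass
class RowControl:
    """Hypothetical per-row settings mirroring the global control mechanism."""
    duration: float  # seconds each triggered event lasts
    rate: float      # triggers per second for this row
    gap: float       # extra silence inserted between notes, in seconds
    focus: range     # subset of the 13 pitches currently favoured

def next_event(ctrl: RowControl) -> tuple[int, float, float]:
    """Return (pitch, time_until_next_trigger, duration) for one row."""
    pitch = random.choice(list(ctrl.focus))  # stay inside the focus group
    wait = 1.0 / ctrl.rate + ctrl.gap        # trigger interval plus gap
    return pitch, wait, ctrl.duration
```

A usage example: `next_event(RowControl(duration=0.5, rate=4.0, gap=0.1, focus=range(3, 8)))` yields a pitch between 3 and 7, a wait of 0.35 s, and a 0.5 s duration.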
An important piece of the Synchresis 3D video is the interaural time and intensity difference (ITD) plugin. It places each sound at an exact point in space, which stays the same for every occurrence of that sound. Hopefully this sustains the illusion of morphing sound and image.