Thursday, 20 March 2014

LiveLight | Collaboration with Oz Collective (Part One)

I collaborated with Oz Collective (Alexandre Pachiaudi and Quck Zhong Yi) as sound designer for their interactive audio-visual installation LiveLight. The work is one of 28 installations at i Light Festival 2014, Marina Bay (Singapore), showing from 7th – 30th March.

Situated underneath the seating by the Marina Bay float (B2), the piece gives the audience a chance to 'paint' using any light source. Every light gesture or movement across the space leaves a visual trace on the 'canvas' in front of the participant. This in turn generates soft sine tones whose character and intensity fluctuate depending on both the position and the amount of light activity.

Process / Methodology

This collaboration was predominantly a distant one, since Zhong Yi, Alexandre and I were in Singapore, Paris and London respectively! Linked by our mutual friend, Danny Kok, ZY invited me into the project in Dec / Jan, at which point the visual element of the programming had already taken shape within Max.

The brief was 'free and easy': to create a stimulating sound component for the installation that sat somewhere between a fixed, non-interactive looped soundscape at one end of the scale and an evolving, interactive soundscape at the other. The timeline was tight (due to our schedules), but I was confident I could at least achieve something halfway along the scale: it'd be an enjoyable challenge to see how far it could develop within the time available!

Prototype 1

First things first: I had to look into the visual element of the patch (program) that ZY and Alexandre had created and see where I could siphon data from. They had been using Vizzie, a set of ready-to-go modules for processing video within Max.

It seemed the easiest way for me to work would be to tap the final 'output' data (from 4MIXR): using this post-processed data, I thought, might make it easier to develop a closer correlation between the sound and the image.

I wanted to begin with something simple: create multiple audio loops that trigger at different levels of video luminosity, so that as the amount of light activity increases, more layers of sound are introduced:

After searching through the Max help files I found the jit.3m object, which allows you to analyse and average RGB / luminosity levels from a source:
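Since a Max patch can't really be shown as text, here's a rough Python sketch of the idea: average the luminosity of a frame (as jit.3m does), then use that mean to decide how many loops to layer in. The frame format and layer thresholds are illustrative assumptions, not values from the patch.

```python
# Sketch of the 'more light = more layers' idea, outside Max.
# Assumed: greyscale frames as 2D lists of 0-255 values; four loops with
# evenly spaced thresholds (both are illustrative, not from the patch).

def average_luminosity(frame):
    """frame: 2D list of greyscale values (0-255). Returns the mean."""
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return total / count

def active_layers(mean_lum, thresholds=(32, 96, 160, 224)):
    """More light activity means more loops sounding at once."""
    return sum(1 for t in thresholds if mean_lum >= t)

frame = [[0, 64], [128, 255]]        # a tiny 2x2 'frame'
mean = average_luminosity(frame)     # (0 + 64 + 128 + 255) / 4 = 111.75
print(active_layers(mean))           # 2 layers (mean clears 32 and 96)
```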

Next I had to find out how to set a different threshold for each audio loop, allowing each loop to fade in / out at a specific value. The past object (and some math functions) provided the solution:

When a certain threshold (720) is exceeded, the volume fades in; when the value drops below 720, the volume fades out.
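The same behaviour can be sketched in Python: each loop has its own threshold, and its gain ramps towards 1.0 while the analysed value stays above it, then back towards 0.0 when it drops below. The 720 threshold matches the patch; the ramp step size is an illustrative assumption.

```python
# Sketch of a per-loop threshold gate with a linear fade.
# The ramp step (0.1 per analysis frame) is an assumption for illustration.

class ThresholdFader:
    def __init__(self, threshold=720, step=0.1):
        self.threshold = threshold
        self.step = step
        self.gain = 0.0

    def update(self, value):
        """Call once per analysis frame; returns the loop's current gain."""
        target = 1.0 if value > self.threshold else 0.0
        if self.gain < target:
            self.gain = min(target, self.gain + self.step)
        elif self.gain > target:
            self.gain = max(target, self.gain - self.step)
        return self.gain

fader = ThresholdFader()
for v in [800, 800, 800, 500]:
    print(round(fader.update(v), 1))   # 0.1, 0.2, 0.3, then 0.2 (fading out)
```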

Once complete, the 'front end' of prototype 1 looked like this:

Prototype 2

After the success of prototype 1, I had the confidence to tackle something a little more challenging: building a polyphonic synth that would respond to the light data created by the audience. Having (very) limited experience of creating synths in Max, I hunted down a few tutorials:
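For readers unfamiliar with the term: the core of a polyphonic synth is just several oscillator voices, each with its own frequency and amplitude, summed into one output. A minimal Python sketch of that idea (sample rate and voice values are illustrative, nothing here is taken from the actual patch):

```python
import math

SAMPLE_RATE = 44100  # an assumed rate; Max's audio settings can differ

def render_voices(voices, n_samples, sr=SAMPLE_RATE):
    """voices: list of (frequency_hz, amplitude) pairs.
    Returns the summed output of all sine voices, sample by sample."""
    out = []
    for i in range(n_samples):
        t = i / sr
        out.append(sum(amp * math.sin(2 * math.pi * freq * t)
                       for freq, amp in voices))
    return out

# Two voices sounding together, as in a (much smaller) poly synth:
mix = render_voices([(440.0, 0.5), (660.0, 0.25)], n_samples=64)
```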

In order to program a deeper level of interactivity into the sound, I would need objects that would enable Max to analyse the position and size of things in front of the camera: motion tracking, essentially. After some googling I came across this tutorial for Jean-Marc Pelletier's cv.jit objects, which gave me a better idea of how to work with them:

It's long... but informative!


When used in conjunction with one another, the cv.jit objects can detect distinct regions of video data in a frame. These are known as 'blobs':

The red crosshair marks the x/y co-ordinate and the green circle indicates the blob size.

You can obtain the x and y position of each blob, as well as its size (in pixels). You can also set a threshold that determines, for example, the minimum and maximum size of blob that can be read.
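To make the 'blob' idea concrete, here's a plain-Python sketch (not cv.jit) of the same concept: find connected regions of bright pixels in a binary frame, report each one's centroid and size, and drop any blob outside a min/max size threshold. The frame format and threshold defaults are assumptions for illustration.

```python
# Sketch of blob detection: 4-connected flood fill over a binary frame.
# cv.jit does this (and much more) natively; this is just the concept.

def find_blobs(frame, min_size=1, max_size=1000):
    """frame: 2D list of 0/1 values. Returns a list of blobs, each a dict
    with centroid ('x', 'y') and pixel count ('size')."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] and not seen[y][x]:
                stack, pixels = [(x, y)], []
                seen[y][x] = True
                while stack:                      # flood fill one region
                    cx, cy = stack.pop()
                    pixels.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy),
                                   (cx, cy+1), (cx, cy-1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and frame[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                if min_size <= len(pixels) <= max_size:   # size threshold
                    blobs.append({
                        'x': sum(p[0] for p in pixels) / len(pixels),
                        'y': sum(p[1] for p in pixels) / len(pixels),
                        'size': len(pixels),
                    })
    return blobs

frame = [[1, 1, 0, 0],
         [0, 0, 0, 1],
         [0, 0, 0, 1]]
print(find_blobs(frame, min_size=2))
# two blobs: centroid (0.5, 0.0) size 2, and centroid (3.0, 1.5) size 2
```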

Once I understood how I could make use of the blob data, I proceeded to make an 8-voice synth that would play a certain note (depending on the y position of a blob) at a certain volume (depending on the blob's size). Later I would integrate panning as well, whereby the x position of a blob would determine the triggered note's position in the stereo field.

Here the y-values of the blobs are 'unpacked', scaled and sent to another patch:

The note number (the scaled y value) and volume (the scaled blob-size value) are received here, triggering the synth.
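The mapping above can be sketched like this: a blob's y position picks a note, its size sets the volume, and its x position sets the pan. The input ranges, note range and pan convention below are assumptions for illustration, not the values used in the patch.

```python
# Sketch of scaling blob data into synth parameters.
# Assumed: a 320x240 frame, MIDI notes 48-84, pan from -1 (left) to 1 (right).

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear rescale from one range to another."""
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

def blob_to_voice(x, y, size, width=320, height=240):
    note = round(scale(y, 0, height, 84, 48))   # higher on screen = higher note
    volume = scale(min(size, 2000), 0, 2000, 0.0, 1.0)
    pan = scale(x, 0, width, -1.0, 1.0)
    return note, volume, pan

print(blob_to_voice(x=160, y=120, size=1000))   # (66, 0.5, 0.0)
```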

After reviewing both prototypes with ZY, we decided to proceed with the latter: we both felt its sound palette and interactive element were more aligned with the visual experience, and more conducive to 'play'.

*Part Two coming soon!*
