Saturday 22 March 2014

LiveLight | Collaboration with Oz Collective (Part Two)



Demonstrating the final version of LiveLight

In order to progress from Prototype Two to the final version (above), I had to find solutions to a few things.

16-note Polyphony

To accommodate more users, I set out to increase the number of notes that could be played at the same time. I tried 24 and 32 instances of my synth patch - synthjuan - but it seemed that only 16 would run smoothly!


Stuck Notes

Some triggered MIDI notes would never receive a 'note off' message and would continue to sound as a constant tone even as new notes were triggered - probably because of the sheer number of notes being fired. After hunting through the Max library, I found the solution in the flush object, which sends a 'note off' message to every MIDI note that has been triggered. I drove it with a timed switch (metro 2000), so that it would 'flush' every 2 seconds:
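For anyone who doesn't use Max, here's a rough Python sketch of the same idea - not the patch itself, just the logic: a timer fires every 2 seconds (matching the metro 2000) and sends a 'note off' for everything still sounding. The class and callback names are made up for illustration.

```python
import threading

class NoteFlusher:
    """Periodically silence every note still sounding - roughly what
    [flush] driven by [metro 2000] does in the patch."""

    def __init__(self, send_note_off, interval_s=2.0):
        self.send_note_off = send_note_off  # callback taking a MIDI pitch
        self.active_notes = set()
        self.interval_s = interval_s

    def note_on(self, pitch):
        self.active_notes.add(pitch)  # remember everything we trigger

    def flush(self):
        # Send a note-off for every held note, then schedule the next flush.
        for pitch in list(self.active_notes):
            self.send_note_off(pitch)
        self.active_notes.clear()
        threading.Timer(self.interval_s, self.flush).start()

# e.g. flusher = NoteFlusher(lambda p: print("note off", p)); flusher.flush()
```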


Quick Fix Filter!

I used a low-pass filter to remove the audible 'clicks' that would occasionally occur when the intervals between triggered notes were very short. Naturally, the LPF also softened the final output by reducing its overall brightness, as well as reducing the audible distortion that occurred when multiples of the same synth note were triggered concurrently:
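To illustrate why the filter helps (this is not the filter used in the patch, just the principle): a one-pole low-pass only lets each output sample move part of the way towards the input, smoothing the abrupt jumps that we hear as clicks.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Minimal one-pole low-pass filter: higher cutoff = brighter sound,
    lower cutoff = smoother (fewer clicks) but duller."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)  # smoothing coefficient
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y  # move only part of the way towards the input
        out.append(y)
    return out
```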


Preventative Measures

One design request made by ZY was to find a solution to the potential 'problem' of the audience shining light straight into the camera lens from a close distance. Doing this would prevent other people within the space from playing, since the whole image would be over-exposed. Prototype 1 had the solution! I integrated the jit.3m object (plus a few operations) to calculate the total RGB value at regular intervals:


Next, using the past object (mentioned previously), a soft crescendo of pink noise would be produced whenever the RGB values exceeded a certain threshold.
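In plain terms the logic is: average the RGB values of each frame, and once that average crosses a threshold, bring up the pink noise. A loose Python sketch, assuming frames arrive as numpy arrays - the threshold value here is illustrative, not the one used in the patch:

```python
import numpy as np

OVEREXPOSURE_THRESHOLD = 600  # illustrative value, not taken from the patch

def noise_gain(frame, threshold=OVEREXPOSURE_THRESHOLD):
    """Return a 0.0-1.0 gain for the pink noise: 0 while the frame is
    normally lit, rising towards 1 as the mean RGB value climbs past
    the threshold (a soft crescendo rather than an abrupt switch)."""
    mean_rgb = float(frame.mean())  # average over all pixels and channels
    if mean_rgb <= threshold:
        return 0.0
    return min(1.0, (mean_rgb - threshold) / (255.0 - threshold))
```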


Variations

To give more structure to the experience of sound within the piece, I decided to integrate variations in tempo, note scales and delay effects, chosen by randomly generated numbers at regular timed intervals (see / listen to the video above):
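Conceptually, every so often the patch rolls a few random numbers and uses them to pick the current tempo, scale and delay setting. A loose Python sketch of that idea - the pools of values below are made up for illustration, not the settings used in the installation:

```python
import random

# Illustrative pools of settings - not the values used in the installation.
TEMPOS_BPM = [70, 90, 110, 130]
SCALES     = {"minor pentatonic": [0, 3, 5, 7, 10],
              "major":            [0, 2, 4, 5, 7, 9, 11]}
DELAYS_MS  = [125, 250, 500]

def pick_variation():
    """Choose one value per parameter at random, as the patch does
    at each timed interval."""
    return {
        "tempo_bpm": random.choice(TEMPOS_BPM),
        "scale":     random.choice(list(SCALES)),
        "delay_ms":  random.choice(DELAYS_MS),
    }

# e.g. call pick_variation() from a timer and apply the result to the synth
```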


The resulting composition - although somewhat simplistic in its treatment of materials - aspires towards being somewhere between generative and interactive music. Though these variations are programmed, there are still unique moments to be heard within the piece, since each audible state depends on both how the user plays with it and the random numbers produced at each interval.
 
Conclusion + Future Development

I feel this project was a success, especially given the amount of time available to execute the work. I very much enjoyed working with Oz Collective, although ideally it would have been better to work more closely together - in the physical sense. Hopefully we can do this if / when we work together again!

It was certainly a challenge to delve deeper into Max and expand my vocabulary incrementally. As with any language (programming or otherwise), as you become more fluent in it, your ability to generate ideas and find solutions to the problems you encounter grows steadily.


This was a back-up solution for 'stuck notes' - producing 1s and 0s to turn the signal gates for each triggered synth note on / off.

Although the overall feeling from this project is a positive one, there are still some things that could be improved in future versions of it. On the sonic side, I would certainly look at going deeper into the 'variation' aspect of the work, thinking about how to make dynamic changes dependent on user input rather than on randomly generated numbers. I would also like to broaden the sound palette so that, instead of solely using staccato notes at varying tempos and pitches, there would also be variation in note durations (perhaps in relation to how long the visual feedback lasts) and more harmonic development (considering chord progressions, etc.). It would also be worth considering how the RGB data could be used independently to form a tighter relationship between sound and image (e.g. each colour could have a corresponding sound palette).

Peripheral light from road (white / yellow-ish blobs either side of the blue lattice)

In terms of the space, one problem encountered was that the road parallel to the installation produced light that was continually read by the camera. This meant the 'canvas' was never blank and, likewise, there were always a few audible tones present, even before any user input.

During the testing phase (within the space itself), it would have been good to measure the approximate height and width of the camera's view at different depths within the space. This would have provided a more accurate way to demarcate the space for the audience, and likewise given precise limits for the y values, as determined by the floor and/or the height reachable by the audience.

It's all a learning process!

*Installation documentation video to follow*

Thursday 20 March 2014

LiveLight | Collaboration with Oz Collective (Part One)




I collaborated with Oz Collective (Alexandre Pachiaudi and Quck Zhong Yi) as sound designer for their interactive audio-visual installation LiveLight. The work is one of 28 installations at i Light Festival 2014, Marina Bay (Singapore), showing from 7th – 30th March.


Situated underneath the seating by the Marina Bay float (B2), the piece gives the audience a chance to 'paint' using any light source. Every light gesture or movement across the space leaves a visual trace on the 'canvas' in front of the participant. This in turn generates soft audible sine tones whose feeling and intensity fluctuate depending on both the position and the amount of light activity taking place.




Process / Methodology

This collaboration was predominantly a distant one, given that Zhong Yi, Alexandre and I were in Singapore, Paris and London respectively! We were connected by our mutual friend Danny Kok, and ZY invited me into the project in Dec / Jan, by which point the visual element of the programming had already taken shape within Max.


The brief was 'free and easy': to create a stimulating sound component for the installation that sat somewhere between a fixed, non-interactive looped soundscape at one end of the scale and an evolving, interactive soundscape at the other. The timeline was tight (due to our schedules), but I was confident I could at least achieve something halfway along the scale: it'd be an enjoyable challenge to see how far it could develop within the time available!

Prototype 1


First things first: I had to look into the visual element of the patch (program) that ZY and Alexandre had created and see where I could siphon data from. They had been using Vizzie, a set of ready-to-go modules for processing video within Max.


It seemed the easiest way for me to work would be to tap the final 'output' data (from 4MIXR): using this post-processed data, I thought, might make it easier to develop a closer correlation between the sound and the image.

I wanted to begin with something simple - creating multiple audio loops that trigger at different levels of video luminosity, so that as the amount of light activity increases, more layers of sound are introduced:



After searching through the Max help files I found the jit.3m object, which allows you to analyse and average RGB / luminosity levels from a source:


Next I had to find out how I could set different RGB thresholds for each audio loop, allowing each loop to fade in / out at a specific value. The past object (and some math functions) provided the solution for this:


When a certain threshold - 720 - is exceeded, the volume fades in; when it drops below 720, the volume fades out.
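The same behaviour in a small Python sketch - each loop gets its own threshold (720 for this one) and its volume is nudged up or down on every control tick; the fade step size is illustrative:

```python
def update_loop_volume(volume, luminosity, threshold=720, step=0.02):
    """Fade a loop in while the luminosity reading is above its threshold,
    and back out when it falls below - the role played by [past] plus a
    volume ramp in the patch. Call once per control tick."""
    if luminosity > threshold:
        return min(1.0, volume + step)  # fade in
    return max(0.0, volume - step)      # fade out
```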

Once complete, the 'front end' of prototype 1 looked like this:




Prototype 2

After the success of prototype 1, I had the confidence to tackle something a little more challenging - building a polyphonic synth that would respond to the light data created by the audience. Having (very) limited experience creating synths in Max, I hunted down a few tutorials:


In order to program a deeper level of interactivity with the sound, I would need to find objects that would enable Max to analyse the position and size of things in front of the camera - motion tracking, essentially. After some googling I came across this tutorial for Jean-Marc Pelletier's cv.jit objects, which gave me a better idea of how to work with them:




It's long... but informative!


Blobs!

When used in conjunction with one another, the cv.jit objects can detect distinct regions of video data in a frame. These are known as 'blobs':

 
The red crosshair is the x/y co-ordinate and the green circle is the blob size

You can obtain the x and y position of each blob, as well as its size (in pixels). You can also set a threshold which determines, for example, the minimum and maximum size of blob that can be read.

Once I had understood how to make use of the blob data, I proceeded to make an 8-voice synth that would play a certain note (depending on the y position of a blob) at a certain volume (depending on the blob's size). Later I integrated panning as well, whereby the x position of a blob determines the triggered note's position in the stereo field.
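The mapping boils down to a few lines. Here's a hedged Python sketch, assuming blobs arrive as (x, y, size) tuples - the frame size, blob-size limits and output ranges are illustrative rather than the exact values in the patch:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear rescale with clamping, much like Max's [scale] object."""
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def blobs_to_voices(blobs, frame_w=320, frame_h=240, max_voices=8,
                    min_size=30, max_size=5000):
    """Turn blob data (x, y, size in pixels) into synth voice parameters:
    y -> note, size -> velocity, x -> stereo pan."""
    voices = []
    for x, y, size in blobs:
        if not (min_size <= size <= max_size):
            continue  # ignore blobs outside the size threshold
        voices.append({
            "note":     int(scale(y, 0, frame_h, 84, 48)),  # higher on screen = higher pitch
            "velocity": int(scale(size, min_size, max_size, 20, 110)),
            "pan":      scale(x, 0, frame_w, -1.0, 1.0),    # left .. right
        })
        if len(voices) == max_voices:  # cap at 8 voices
            break
    return voices
```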

Here y-values of blobs are 'unpacked', scaled and sent to another patch:

The note number (scaled y value) and volume (scaled blob size value) are received here, which then trigger the synth.

After reviewing both prototypes with ZY, we decided to proceed with the latter: we both felt its sound palette and interactive element were more aligned with the visual experience and also more conducive to 'play'.


*Part Two coming soon!*

Wednesday 19 March 2014

Flickering Shard | Collaboration with Simon Ball

 "We live in the flicker"
Joseph Conrad, Heart of Darkness, 1902.

Simon Ball and I will be screening our new work, Flickering Shard, as part of the final event of UCL Urban Lab's DEMOLITION season this evening:

 

Flickering Shard marks our third collaboration, following Olympic Dreams (2012) and Exploit (Bukit Brown Cemetery II) (2013). Without revealing too much (yet): the work is a digitally animated audio-visual composition exploring notions of progress within the city in relation to 'construction' and 'demolition'.

2*

The work is derived from Simon's photographic material - collected during a series of walks between The Shard (London's tallest skyscraper) and Canary Wharf (formerly London's tallest building) - and field recordings I made during journeys stemming from the Thames (plus an additional recording of a construction site in Singapore).

 3*

Tonight's programme also includes Demolishing and Building up the Star Theatre (1901) by Frederick S. Armitage, the Lumière Brothers' Démolition d'un Mur (1896), Nathan Eddy's The Absent Column (2013), Dan Edelstyn's Breaking It Big in Burnley (2013) and an episode of the Channel 4 series Demolition (2005).

Come join us 7pm tonight (Thursday 20th March) at the White Building in Hackney Wick – entry is free.

Thanks to Hilary Powell and UCL Urban Laboratory for organising this event! I wish I could be there in person.
 
4*


* Still images (2, 3 and 4) in this post are from Flickering Shard (2014) - copyright Simon Ball & Zai Tang.