This work is inspired by snapshots of the gutters near my home after the late snowfall in March of 2017. The audio is from samples recorded in my kitchen, processed with a custom granular patch made in Max7. The visual is pixel-sorting algorithms applied to the original snapshot.
I had a wonderful time performing in a couple of concerts last week. These shows were the initial concerts of the New Media + Sound Art (NMSA) performance series, which is in turn part of Time Light Sound at Emily Carr University of Art + Design. The NMSA series highlights the work of students and faculty from the NMSA program at ECU.
Friday night included performances by NMSA faculty Peter Bussigel, Julie Andreyev and Caroline Park (as Tiny Disasters), and myself, as well as two performances by student ensembles.
My work on Friday was a further development of a piece created and first performed with Ava Grayson at Third Space's Sound Room. It is a performed audio-visual piece, with visuals developed in Max7/Jitter and custom granular processing and noise instruments also made in Max7. For this iteration I started to explore an idea for video-reactive audio: in this case, the luminosity of a selected portion of the screen controlled the noise instruments.
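The actual mapping lives in a Max7/Jitter patch, but the idea of video-reactive audio is simple enough to sketch. The Python below (function names and the Rec. 709 luma weights are my own illustrative choices, not the patch itself) computes the mean luminosity of a rectangular screen region and maps it to a control value that could drive a noise instrument's gain.

```python
import numpy as np

def region_luminosity(frame, x, y, w, h):
    """Mean luminosity (0..1) of a rectangular region of an RGB frame.

    `frame` is an (H, W, 3) array of floats in 0..1; the Rec. 709 luma
    weights turn each pixel's RGB into a single brightness value.
    """
    region = frame[y:y + h, x:x + w]
    luma = region @ np.array([0.2126, 0.7152, 0.0722])
    return float(luma.mean())

def luminosity_to_gain(lum, floor=0.0, ceiling=1.0):
    """Linearly map a luminosity reading into a gain range."""
    return floor + (ceiling - floor) * lum

# A fully white patch in the watched region drives the gain to maximum.
frame = np.zeros((120, 160, 3))
frame[40:80, 60:100] = 1.0          # white rectangle inside the region
lum = region_luminosity(frame, 60, 40, 40, 40)
gain = luminosity_to_gain(lum)
```

In practice this would run once per video frame, with the gain value smoothed before reaching the instrument to avoid zipper noise.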
Here is a nice article by Alexander Varty, published in Musicworks, about Biophilia, a work I made in collaboration with Julie Andreyev, premiered at ISCM 2017 "World New Music Days," November 2017, Vancouver BC, Canada.
I'll publish a post about the work as we finish preparing the documentation. There is more information on Julie's site.
A project I am currently working on is "I Am Afraid" — a multiplayer VR interactive artwork that involves playing with sounds and words, with others, in a virtual space. The artwork provides an opportunity for people to play, be curious, and explore as a counter to fear, either personal or general. The project is the work of Dr. Maria Lantin from the S3D Centre at Emily Carr University, Vancouver, BC.
In the VR space the participants can record any sound (usually produced with their voice) and then interact with the sound through gesture, scrubbing through the sounds or looping playback. Gestures can be recorded and set to loop, so patterns and composition can be improvised. Spoken words are recorded and converted to text using Watson so that they can be rendered in scene and interacted with. My role has been to contribute to the design of the sound interaction and implement the audio technology, in particular the live granular processing in Unity that allows for the scrubbing and other effects.
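The real implementation is a native Unity audio plugin, but the core of granular scrubbing can be sketched independently of the engine. In this Python sketch (all names, grain sizes, and the Hann window choice are my assumptions, not the project's code), a sequence of scrub positions, such as might come from a hand gesture, selects short windowed grains from a recorded buffer and overlap-adds them into output audio.

```python
import numpy as np

def make_grain(buffer, position, grain_len):
    """Extract one Hann-windowed grain from `buffer` (a mono float
    array), starting at a normalized `position` in 0..1."""
    start = int(position * (len(buffer) - grain_len))
    grain = buffer[start:start + grain_len].copy()
    grain *= np.hanning(grain_len)        # fade edges to avoid clicks
    return grain

def scrub(buffer, positions, grain_len=1024, hop=256):
    """Render a sequence of scrub positions into audio by overlap-adding
    one grain per position, spaced `hop` samples apart."""
    out = np.zeros(hop * len(positions) + grain_len)
    for i, pos in enumerate(positions):
        out[i * hop:i * hop + grain_len] += make_grain(buffer, pos, grain_len)
    return out

# Scrubbing forward through one second of a 440 Hz tone at 48 kHz.
buf = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
out = scrub(buf, [0.0, 0.25, 0.5, 0.75, 1.0])
```

Holding the positions steady loops a single grain; recording and replaying a position sequence is what lets gestures become loopable patterns.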
The project is built for Google's Daydream VR system in Unity, and can work with a motion capture system to allow in-game mobility. It is very complex, combining multiplayer networking, motion capture, and live audio processing. Although the components built with their service are not currently part of the project, I would like to thank Enzien Audio for their tremendous support in trying to get things working on the Pixel phone. One version of the granular audio processing was built in Pure Data and converted to a Unity plugin with their Heavy Audio tool. Currently we are using Unity's Native Audio Plugin SDK demo, which we managed to build for Android.
More news when it happens.
A couple of snippets from a recording December 28, 2015.
Location: 54°39'12.5"N 127°10'18.1"W — At the end of Averling Coalmine Road on the Telkwa River near Telkwa, BC, Canada.
The audio has been trimmed, the gain increased, and a high-pass filter applied.
The river is subdued by the ice, but still burbles and slaps as it is exposed and then enveloped again. The blue in the images is just the colour of the winter dusk.