Next Laura, Phil and I travelled to Plymouth for the 2013 Media Innovation Awards where dS had been nominated for Best Installation/Exhibition/Live Event. Not only did we scoop this award but we were also delighted to win the judge’s award for Outstanding Contribution to Innovation. This makes the fifth award this year for dS! Here’s a little snap of us at the ceremony, crying and thanking our mums.
Last weekend we attended Seeing Sound, a symposium organised by dS composer Jo Hyde at Bath Spa University, where I presented a paper on dS and, in the evening, we performed Hidden Fields. Since Dave’s move to Stanford I will probably be giving a few more talks on dS, and Seeing Sound was a nice relaxed environment in which to start out. The evening’s dance programme opened with a work-in-progress showing of Unfold to Centre by Yorke Dance Project and closed with Hidden Fields. This was quite a cunning bit of programming by Jo, as Unfold to Centre included visuals from 1978 by computer art pioneer Larry Cuba. This computer-generated video piece, 3/78 (Objects and Transformations), is one of the earliest particle-based visual simulations ever created; with Hidden Fields being one of the latest and most sophisticated, the pairing made for a neat juxtaposition. Larry Cuba delivered a keynote at the symposium and also attended the performance; apparently he really loved Hidden Fields, which is a huge compliment. Our photographer Paul Blakemore took some amazing photographs: here is a great one of Emma and Lisa, with more here.
Finally, on Saturday morning I was on Bristol Community Radio talking about dS with Andrew Parsonage; audio clip below.
On Monday, a piece of music that I helped to create will feature in a talk by Jess Thom at TEDxAlbertopolis at the Royal Albert Hall. Jess will be presenting to a 4,000-strong audience about Tourettes Hero, a wonderful project that she set up to raise awareness of Tourettes by celebrating and sharing its creativity and humour.
The piece of music is a sonification of data that Jess has carefully collected on the short (and sometimes long) episodes of intense ticcing that she experiences on a daily basis. During these ‘ticcing fits’ every part of Jess’s body may move, shake, contort or lock into painful positions. The well-documented data represents a great deal of discomfort for Jess, and in the spirit of Tourettes Hero she asked whether I could convert a year of this data into a piece of music, to transform it into something beautiful.
Sonification is the use of non-speech audio to convey information, and for this particular project the priority was to create a piece of music that was derived exclusively from the data that Jess had shared with me. Working closely with Professor Jo Hyde from Bath Spa University, we developed specialised software and carefully designed sounds to sonify the fits, producing a composition that forms a true representation of the original data: the time structure is accurately preserved and the chosen sounds and their processing communicate the duration, location and intensity of each fit. It’s been a fantastic project to work on and I am really looking forward to developing the piece further in future.
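To make the idea of a “true representation of the data” concrete, here is a minimal illustrative sketch of this style of parameter-mapping sonification. The actual software we built is not shown here; the `TicEvent` fields, pitch table and scaling choices below are all invented for illustration. The key property from the text is preserved, though: onsets and durations keep the original time structure via a uniform scale, while location and intensity choose pitch and loudness.

```python
# Hedged sketch: NOT the actual dS/Tourettes Hero software.
# Event fields and mappings are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class TicEvent:
    start_s: float     # onset time within the recorded period, in seconds
    duration_s: float  # how long the fit lasted, in seconds
    intensity: int     # 1 (mild) .. 5 (severe) — an assumed scale
    location: str      # body part affected

# Illustrative mapping: location chooses a MIDI pitch.
LOCATION_PITCH = {"head": 72, "arm": 64, "leg": 55, "torso": 60}

def sonify(events, time_scale=1 / 86400):
    """Map each event to an (onset, duration, midi_pitch, velocity) note.

    The uniform time_scale compresses real time into piece time, so the
    original time structure is preserved proportionally.
    """
    notes = []
    for e in sorted(events, key=lambda e: e.start_s):
        notes.append((
            e.start_s * time_scale,                 # onset, rescaled
            max(e.duration_s * time_scale, 0.05),   # keep very short fits audible
            LOCATION_PITCH.get(e.location, 60),     # location -> pitch
            min(127, 40 + e.intensity * 17),        # intensity -> MIDI velocity
        ))
    return notes

# Two invented example events: a severe 2-minute fit, then a milder one an hour later.
notes = sonify([TicEvent(0, 120, 5, "head"), TicEvent(3600, 60, 2, "arm")])
```

The resulting note list could then be rendered by any synthesiser; the point of the sketch is only that every musical parameter is derived from the data, with nothing composed freely on top.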
Jess’s talk will be streamed live from the TEDxAlbertopolis website at approximately 15:45 on Monday 23rd September.
I have been working on dS over the last year with an amazing team of artists, choreographers, programmers, composers and dancers headed up by the brainbox chemical physicist Dr David Glowacki (Bristol), who was recently awarded a research fellowship from the Royal Society! I work on the musical aspects of dS with Professor Joseph Hyde (Bath Spa), which was particularly important last night as we used the live audio to control the parameters of the system in real time. We were really pleased with how it turned out, and I hope we get to work on more performances like this in future.
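For readers curious what “live audio controlling the parameters of the system” can mean in practice, here is a tiny hedged sketch of one common approach (not the actual dS code, whose mappings I'm not reproducing here): a smoothed loudness envelope of the incoming audio is remapped onto a simulation parameter each block. All names and ranges below are invented for illustration.

```python
# Hedged sketch: a generic audio-to-parameter mapping, NOT the dS implementation.
import math

def rms(block):
    """Root-mean-square loudness of one block of audio samples."""
    return math.sqrt(sum(x * x for x in block) / len(block))

class EnvelopeFollower:
    """Exponentially smoothed loudness tracker; alpha sets responsiveness."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.level = 0.0

    def update(self, block):
        self.level += self.alpha * (rms(block) - self.level)
        return self.level

def audio_to_parameter(level, lo=0.0, hi=10.0):
    """Map a 0..1 loudness level onto a parameter range (range is illustrative)."""
    return lo + (hi - lo) * min(max(level, 0.0), 1.0)

# One block of constant audio at amplitude 0.5, driving the parameter.
follower = EnvelopeFollower()
level = follower.update([0.5] * 64)
param = audio_to_parameter(level)
```

The smoothing matters in a live setting: without it, the controlled parameter would jitter with every transient in the music rather than following its overall shape.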
The dS homepage is a good place to find out more information about the project. At some point I will add a project page here to expand on some of the work that I do for dS.
December’s Soft Computing is a special issue on evolutionary music and includes an article that I have written on the use of optimisation algorithms to help automate the process of sound design.
When designing sounds using audio synthesisers, musicians require a degree of scientific or experiential knowledge to realise their intentions directly and quickly. Often the relationship between the configuration of buttons and sliders on the synthesiser interface and the sound that is produced is unpredictable and unintuitive.
If a musician has a sound that they would like to play on their synthesiser, how can we go about searching for the configuration of buttons and sliders that most accurately reproduces this target sound? As a computer scientist, I find this a fascinating problem: the topology of the problem space varies dramatically from sound to sound and from one synthesiser to the next.
Using specialised algorithms inspired by the processes of Darwinian evolution and, in particular, the conditions that lead to speciation, I tried to automate this process with some really encouraging results.
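To give a flavour of the approach, here is a deliberately minimal sketch of an evolutionary parameter search. It is not the algorithm from the article (which uses more sophisticated, speciation-aware methods) and the toy two-oscillator “synthesiser”, fitness measure and mutation scheme are all invented for illustration: candidates are scored by how closely their output waveform matches the target sound, and the fittest survive and mutate.

```python
# Hedged sketch: a toy evolutionary search over synth parameters.
# NOT the published algorithm — mappings, operators and constants are invented.
import math
import random

def synth(params, n_samples=256, sr=8000):
    """A toy two-oscillator synthesiser: params = (freq1, amp1, freq2, amp2)."""
    f1, a1, f2, a2 = params
    return [a1 * math.sin(2 * math.pi * f1 * t / sr) +
            a2 * math.sin(2 * math.pi * f2 * t / sr)
            for t in range(n_samples)]

def error(params, target):
    """Fitness: mean squared error between candidate output and the target sound."""
    out = synth(params)
    return sum((o - t) ** 2 for o, t in zip(out, target)) / len(target)

def mutate(rng, params):
    """Small multiplicative perturbation of every parameter."""
    return tuple(p * rng.uniform(0.95, 1.05) for p in params)

def evolve(target, pop_size=40, generations=200, seed=0):
    rng = random.Random(seed)
    rand_params = lambda: (rng.uniform(50, 1000), rng.uniform(0, 1),
                           rng.uniform(50, 1000), rng.uniform(0, 1))
    pop = [rand_params() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: error(p, target))
        elite = pop[:pop_size // 4]   # keep the fittest quarter unchanged
        pop = elite + [mutate(rng, rng.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda p: error(p, target))

# Target sound generated from known (hidden) parameters, then searched for.
target = synth((440.0, 0.8, 660.0, 0.3))
best = evolve(target)
```

Even this toy version shows why the problem space is awkward: the error surface over the frequency parameters is highly oscillatory, so a naive hill-climber gets trapped easily, which is exactly what motivates population-based and speciation-style methods.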
It won’t be long before computers are able to run this search in a matter of seconds, which might help musicians to easily explore the diverse sound spaces lurking within their synthesisers…