Optophono at V&A: Digital Design Weekend 2017

Optophono is proud to take part in the 2017 Digital Design Weekend at the V&A Museum in London. We will be showing projects by:

Una Lee: OĀZE (2017) interactive sound map

Helena Hamilton: Butterflies (2017) for overhead projector and electronics

Pasquale Totaro: Pyramid Synth (2017), the synthesizer that turns objects into sound

Pyramid Synth (2017) by Pasquale Totaro. The synthesizer that turns objects into sound.

Gascia Ouzounian, Christopher Haworth and Julian Stein: Long for This World (2013): create your own music for sleeping

Peter Bennett, Christopher Haworth and Gascia Ouzounian: various apps from our AHRC-funded "musical selfies" project Pet Sounds (2016)

Screenshot from OĀZE (2017) interactive sound map by Una Lee.

Where to find us at the V&A:

Seminar Room 5, Learning Centre, Victoria & Albert Museum

10 AM–5.30 PM, Saturday 23 and Sunday 24 September 2017.

Free entry.

We are grateful to the V&A and to the AHRC Digital Transformations Scheme for their support of this project.

Manifesto for Digital Design Weekend

Optophono: Making Music Interactive

As listeners we have come to think of recordings as something we passively consume. Music is something we press ‘Play’ on—and then do other things while the music ‘happens’. With Optophono we wanted to trouble this dynamic. We wanted to move away from the idea that recordings should be fixed or static objects, and that listeners should have so little creative input into the musical process. Therefore, instead of issuing fixed recordings on CD, tape or vinyl, we created interactive works that we published on hard drives and USBs. These compositions put the listener at the heart of the creative process. Instead of simply pressing ‘Play’ listeners could create their own music using our software and sounds.

Some Optophono projects have practical applications in addition to being musical compositions. Long for this World, for example, doubles as a sleep app. Using our software, listeners can choose how long they want to sleep for—say, twenty minutes or eight hours—and then create their own music for sleeping. They can do this by mixing dozens of audio tracks contributed by various artists, choosing various randomizing functions, or selecting options from a bank of acoustic effects. The complexity and variety of the music that arises means that not only will every listener have an entirely different experience of the music upon each hearing, but that the listeners themselves will determine, to a large extent, what that music comprises.
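To make the idea concrete, here is a rough sketch in Python of how a randomized mix of a chosen length might be planned. The names and numbers are ours, invented for illustration; this is not the actual Optophono software.

```python
import random

def plan_sleep_mix(duration_minutes, track_names, layers=3, seed=None):
    """Sketch of a randomized playlist for a chosen sleep duration.

    Builds `layers` parallel streams, each filled end to end with
    randomly chosen tracks of random length, so that at any moment
    several tracks overlap. Returns (start, end, track) entries
    in minutes, sorted by start time.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(layers):
        t = 0.0
        while t < duration_minutes:
            track = rng.choice(track_names)
            length = rng.uniform(5.0, 15.0)  # segment length in minutes
            end = min(t + length, duration_minutes)
            schedule.append((round(t, 2), round(end, 2), track))
            t = end
    return sorted(schedule)
```

A listener choosing a twenty-minute nap would get a different overlapping sequence of tracks on every run unless a seed is fixed.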

From this brief example we can already see that the concept of the musical work starts to break down. There is no fixed composition, and it’s unclear who the composer is, or whether there is a composer at all. Is the listener the composer? The software designers? The artists who contributed the audio tracks? Is this a collective composition? The usual categories of ‘composer’ and ‘listener’ simply do not apply.

In our project Music for Sleeping and Waking Minds (2010-12) we also questioned the idea of the musical performer. In this composition music is generated by four people who wear EEG sensors and simply go to sleep. As they sleep and awaken over the course of one night their brainwave activity generates an eight-channel audio composition and visual imagery. Audiences are invited to experience the work in various states of attention including sleep and dreaming.

Other Optophono projects have stemmed from notated scores but have evolved in interactive formats. For example in 2013 we released an interactive version of the British composer Cornelius Cardew’s seminal work The Great Learning (Paragraph 7). This work is scored for choir, and recordings normally consist of a single performance by a choir. For our edition, however, we released three versions of ‘Paragraph 7’, including a two-channel video version and an interactive, software-based version. With the software-based version listeners could select which voices in the choir they wanted to hear, position these voices in the stereo field, determine the volume of each voice, and so on. If we have the capability of allowing listeners to create their own versions of recordings—which we do—and if we are able to release multiple versions of the same work, then why don’t we do this?
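As an illustration of the kind of control the software-based version offered, here is a minimal per-voice mixing sketch using NumPy. The function names and the equal-power panning choice are ours, not the actual Optophono code.

```python
import numpy as np

def mix_voices(voices, gains, pans):
    """Mix mono voice tracks into one stereo buffer.

    voices: list of 1-D numpy arrays (mono recordings, equal length)
    gains:  per-voice volume, 0.0 to 1.0 (0.0 mutes a voice)
    pans:   per-voice stereo position, -1.0 (left) to +1.0 (right)
    """
    stereo = np.zeros((len(voices[0]), 2))
    for voice, gain, pan in zip(voices, gains, pans):
        # Equal-power panning: split the voice between channels
        # so perceived loudness stays constant across positions.
        angle = (pan + 1.0) * np.pi / 4.0  # 0 .. pi/2
        stereo[:, 0] += voice * gain * np.cos(angle)
        stereo[:, 1] += voice * gain * np.sin(angle)
    return stereo
```

Selecting voices, positioning them in the stereo field, and setting their volumes then amounts to choosing the `gains` and `pans` for each render.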

The music industry has to catch up to the digital age. We don’t believe the answer to this lies in digital streaming or other means of delivering static recordings to passive audiences. Listeners today are musically sophisticated. An average listener has probably heard more music by the age of ten than the world’s most celebrated composers would have heard over the course of their lifetimes a century ago. Why don’t we tap into this vast musical intelligence?

Scanner module from Pyramid Synth (2017) by Pasquale Totaro. © Pasquale Totaro.

Music could more meaningfully come into dialogue with digital design, software design and object design. Some musicians and engineers are already thinking this way. Pasquale Totaro, for example, has created a synthesizer that senses the properties of small objects—their colour, transparency, weight, texture, etc.—and modulates sound accordingly. Helena Hamilton, a visual artist and sound artist based in Belfast, is creating a new work for Optophono that will enable people to create their own music by drawing.
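Purely to illustrate that kind of object-to-sound mapping, here is a sketch in Python. The parameter names and ranges are invented for the example; they are not Totaro's actual design.

```python
def map_object_to_synth(colour_hue, transparency, weight_grams):
    """Map sensed object properties to synthesizer parameters.

    colour_hue:   0-360 degrees -> oscillator pitch (110-880 Hz)
    transparency: 0.0-1.0       -> filter cutoff (200-8000 Hz)
    weight_grams: 0-500         -> amplitude (0.0-1.0)
    """
    pitch_hz = 110.0 + (colour_hue / 360.0) * (880.0 - 110.0)
    cutoff_hz = 200.0 + transparency * 7800.0
    amplitude = min(weight_grams, 500.0) / 500.0
    return {"pitch_hz": pitch_hz, "cutoff_hz": cutoff_hz,
            "amplitude": amplitude}
```

Under this mapping a heavy, opaque red object would sound low, dark and loud, while a light, clear blue one would sound high, bright and quiet.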

Optophono would like to support the work of such artist-designers. At the V&A Digital Design Weekend we will also showcase work that we recently developed as part of our AHRC-funded project ‘Pet Sounds’, which explored the possibilities of music making using social media. More generally, with Optophono we would like to put pressure on the idea of what a music recording is. In the digital era recordings don’t have to be inert. Recordings can also be alive.

Our logo

The Optophono logo (above) was designed by Ryan O'Reilly, the enormously talented designer behind Rinky Design. The 'O' in Optophono hints at an eye (as in opto/optical), while the 'P' hints at an ear (as in phono/aural).

Our design was inspired by this: a cosmic amplifier by NASA. An actual cosmic amplifier!

After completing our logo design and publishing our first edition we discovered this: a print entitled 'Optophone I' by Francis Picabia, from 1922. Coincidence? Or kismet?

Music and Design

Untitled, Charles Frederick Richards, 2017
Marble and onyx
550 mm x 400 mm | Photographer: C. Richards

One of the highlights for us at the End-of-Year Show at the Royal College of Art was Charles Richards' A.T.O.M.S. (Acoustic Tonalities of Mineral Sound), for which Richards sonifies the material properties of stones and minerals.

As part of his project Richards designed a turntable that resonates slabs of marble, travertine, onyx and clays. He also made an album of original compositions derived from the sonified properties of these materials.

Richards writes in the album notes that he wishes to 'inspire new ways of experiencing the materials that make up our landscapes and built [environments]. A.T.O.M.S. is the magnification of the micro sound worlds that lay semi-dormant within many forms of our earth's DNA, but that, as structural surfaces, also have profound effects in acoustically shaping the sound of our environments'.

Optophono at V&A Digital Design Weekend

Optophono is thrilled to announce that we'll be taking part in the Victoria & Albert Museum's Digital Design Weekend 2017, part of the London Design Festival. The exhibition will take place over the course of two days, 23-24 September. Organisers are expecting over 25,000 visitors, so arrive early! We will show a number of Optophono projects including apps we recently developed as part of our AHRC-funded project 'Pet Sounds'. We hope to see you there!