
By Chris Bell

Future Technologies in Music

A panel chaired by Jarvis Cocker discussed advancing technology in music and where artificial intelligence might lead us. Chris Bell explores more.

On Wednesday 30 January, the Science Museum welcomed an esteemed panel of musicians, artists and technologists to the IMAX Theatre for a discussion, chaired by former Pulp frontman and all-round musical icon Jarvis Cocker, of the many possible futures for music. With technology advancing at an ever-increasing pace, where might things like Artificial Intelligence and Machine Learning lead us? And with ever more sophisticated means of measuring the human body’s physical state, how might these be utilised to create the optimum sonic experience?

Jarvis Cocker, Robert Thomas, Hannah Peel, Sam Potter and Dr Kelly Snook at Future Technologies in Music at the Science Museum

The discussion centred on Ecstatic Data Sets: The Apeiron Chorismos Scanner, newly published by Rough Trade Books and authored by panellist Sam Potter of cult band Late of the Pier. Joining him were two of the technologists whose work directly influenced the book, music producer and ex-NASA scientist Dr Kelly Snook and adaptive/algorithmic composer Robert Thomas, along with artist Hannah Peel, whose studies into the effects of music on dementia patients were the inspiration for her 2016 album Awake but Always Dreaming.

Ecstatic Data Sets is a pamphlet containing a speculative design for an imagined machine that “listens” to the user, taking detailed measurements of the body’s functions and neural activity to provide a tailored soundtrack that elevates the user’s mood and alleviates “adverse” states of mind such as anxiety or paranoia. Access to a cloud-based store of archive music and machine learning-enabled composition-on-the-fly allow it to provide the user with a sort of musically assisted mental equilibrium. With regular use by a large enough number of people, it can eventually work out how to stimulate states of ecstasy – tapping into its ecstatic data sets.

Jarvis Cocker speaking at Future Technologies in Music at the Science Museum

The proposed machine doesn’t yet exist in real life, but as the pamphlet assures us, the technology for it does; and Robert Thomas, Dr Kelly Snook and Hannah Peel are amongst those working at the forefront of the technology and ideas that inspired the speculative design. So exactly how close are we to having a device like the Apeiron Chorismos Scanner?

As Robert Thomas discusses, there are numerous examples of this technology already. The Hear and Now mindful breathing app, developed in part with Thomas, monitors the user’s heart rate during meditation – the algorithmically generated music adapts to the user, creating a different arrangement to direct them towards a meditative state. There are even, apparently, ways to cheat the system by imagining the music that might bring about this state: ‘What it does is set a feedback loop between you and the system so you’re affecting the music; that moves the music to a different position, and because the music has an emotional effect on you, it goes around’. Thomas has also produced algorithmic music for a similar meditation app which, rather than measuring heart rate via a wristband, uses an EEG headset to assess the user’s meditative state.
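A minimal, hypothetical sketch of that loop in Python is below. It is not the Hear and Now app’s own code – the heart-rate “sensor” and the music engine here are simple stand-ins invented for illustration – but it shows the shape Thomas describes: measure the listener, adapt the arrangement, and let the new arrangement feed back into the next measurement.

```python
# Illustrative biofeedback loop for adaptive music (hypothetical stand-ins,
# not the Hear and Now app's actual implementation).
import random

def read_heart_rate(current_bpm: float, music_intensity: float) -> float:
    """Stand-in sensor: the listener's heart rate drifts towards the music's intensity."""
    target = 55 + 40 * music_intensity           # calmer music -> lower target heart rate
    noise = random.uniform(-2.0, 2.0)            # physiological noise
    return current_bpm + 0.2 * (target - current_bpm) + noise

def choose_arrangement(heart_rate_bpm: float) -> float:
    """Stand-in composer: pick a music intensity (0 = sparse/calm, 1 = dense/energetic)
    that nudges the listener towards a resting rate of around 60 bpm."""
    if heart_rate_bpm > 80:
        return 0.1    # very calm arrangement
    if heart_rate_bpm > 65:
        return 0.3    # gently settling
    return 0.5        # listener already settled; hold a gentle level

bpm, intensity = 95.0, 0.8
for step in range(20):
    bpm = read_heart_rate(bpm, intensity)        # measure the listener
    intensity = choose_arrangement(bpm)          # music adapts to the measurement
    print(f"step {step:2d}: heart rate {bpm:5.1f} bpm -> intensity {intensity:.1f}")
```

In a real system the stand-ins would be replaced by a wearable heart-rate sensor (or an EEG headset) and an adaptive music engine, but the feedback structure stays the same.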

Artificial Intelligence in music, meanwhile, is all the rage these days. Nick Cave might have recently proclaimed that ‘an AI will never write a great song’, but you can bet your bottom dollar they won’t stop trying. One could make a compelling case that machine learning already has its fingerprints on mainstream listening habits, with Spotify’s recommended tracks based on subscribers’ prior listening. And at the cutting edge of AI-generated music, companies like Australian startup Popgun are ‘using Deep Learning to develop an Artificial Intelligence to create awesome pop songs’.

With technology advancing at an ever-increasing pace, where might things like Artificial Intelligence and Machine Learning lead us?

More recently, Siri co-founder Tom Gruber announced his new venture LifeScore, an AI with the potential to conjure up musical compositions based on the behaviours and actions of listeners, from how they’re feeling to the pace at which they walk. Not unlike the Apeiron Chorismos Scanner.

The quest to create autonomous music isn’t a new one, of course. In 1987 David Cope developed Experiments in Musical Intelligence, or “Emi”, a program capable of reading music in the form of musical code. Fed any number of classical compositions, Emi could then regurgitate a new composition based on the scores it had previously read. Go even further back and you find the mechanical music boxes of 19th-century European clockmakers, or the autonomous Flute Player built by Jacques de Vaucanson in 1738. Heck, you have to rewind all the way to 9th-century Baghdad to find the earliest example of a mechanical musical instrument – the water-powered organ invented by the Banu Musa brothers and described in their Book of Ingenious Devices.

So we’ve been thinking about autonomous music for well over a millennium, striving all the while to create machines and software that can play something for us to listen to. The intriguing thing now – as Jarvis and co uncover in their fascinating discussion – is that we’re on the cusp of making machine music that listens to us.

Listen to the full discussion here: