Brain Chips and Biometrics: The Future of How We’ll Consume Music

Inside music tech's battle for your brain, your ears, and your emotions.

You wake up. Biometric indicators determine that your mood is ‘Sad’, so the new Blue Ivy Carter album starts to play to help boost your serotonin levels. Music is streamed directly to your brain through your NeuralinkChip™ and it is with you all day, helping to manage negative emotions and automatically changing cadence to match your activity – increasing for your gym workout, lowering when you get a taxi home. You go to sleep. You have barely had to make a conscious decision about the music you have listened to all day.

No, this is not my cliché-ridden pitch for the next series of Black Mirror (that said, Charlie, give me a call), but rather a glimpse into the potential near future of music consumption. I may have been lugging around my 160GB iPod Classic packed full of The Rakes b-sides until last year, but tech companies, via the Internet of Things, have been working to hyper-personalise our relationship with music.

In 2019 Warner Music signed an algorithm (the German app Endel) to a 20-album deal; the app Weav Run adapts music in real time to your running cadence; Neuralink (a neuroscience start-up founded by Elon Musk) is currently developing an audio chip that would let users stream music directly into their brain; Spotify want to suggest songs based on your emotions; and Noveto recently announced SoundBeaming™ technology, which beams, er, sound directly into your head.

“It works using non-audible acoustic waves and beamforming, placing the audio just outside the user’s ears,” Eric Conyers at Noveto explains. “Move your head in any direction and the two sound pockets will magically follow you… without disturbing people nearby.” Magic sound pockets! Seems like the future I’d been told about. But depending on your point of view, all this is either the long-promised utopia of personalisation, or a dystopian nightmare in which choice is removed from music discovery entirely.

By all accounts, the next wave of music consumption will see us ditch our devices in favour of voice interfaces and other screen-less technologies. “Once we thought that the mobile phone was the future of music and those apps would go with us wherever we go, but today it’s about personalised music profiles that rest in the cloud,” explains Juliet Shavit, Director of MusiComms, an association that brings together leaders from companies in the music and technology industries.

You might feel that the way we consume music now is sophisticated enough. We get a personalised playlist delivered to us every Friday, home assistants like Alexa mean we can simply call out for the music we want, and AirPods and other ‘wearables’ have accelerated things further. All these advancements have helped fuel predictions that overall recorded music revenues will double in the next ten years.

Even so, Spotify, Apple Music and Amazon Music are investing heavily in biometric recommendation technology that would enable users to stream music based on their mood and environment. Recently it was revealed that Spotify had been granted a patent for speech recognition tech, which would listen for background noises to see whether you’re alone or at a party, as well as traffic and bird noises to figure out whether you’re at home or out for a walk. It’ll even determine what your emotional state is. Using all that data, it’ll automatically generate a playlist. So if you’re crying, alone, with printer noises in the background, it’ll know that you’re sad and at work and play you Lorde. If you’re excited and there’s a lot of shouting, it’ll know that you’re at a party and play Lorde (Lorde is just right for any emotion, okay?).
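To make that pipeline a little more concrete, here is a minimal, purely illustrative sketch of how detected surroundings and mood might be stitched into a playlist choice. Nothing here comes from Spotify’s actual patent or code: the sound tags, mood labels, helper functions and playlist table are all hypothetical.

```python
# Purely illustrative sketch of a mood-and-context playlist picker.
# None of the tags, labels or thresholds come from Spotify's patent.

def detect_environment(background_sounds: set) -> str:
    """Guess where the listener is from coarse background-noise tags."""
    if {"shouting", "crowd"} & background_sounds:
        return "party"
    if {"traffic", "birdsong"} & background_sounds:
        return "outdoors"
    if "printer" in background_sounds:
        return "office"
    return "home"

def detect_mood(voice_features: dict) -> str:
    """Map crude voice features to a mood label (the fuzzy step the experts quoted below are sceptical about)."""
    if voice_features.get("crying", 0.0) > 0.5:
        return "sad"
    if voice_features.get("excitement", 0.0) > 0.5:
        return "excited"
    return "neutral"

# Hypothetical lookup table: (environment, mood) -> what to play.
PLAYLISTS = {
    ("office", "sad"): "Lorde",
    ("party", "excited"): "Lorde",  # Lorde is just right for any emotion, okay?
}

def pick_playlist(background_sounds: set, voice_features: dict) -> str:
    key = (detect_environment(background_sounds), detect_mood(voice_features))
    return PLAYLISTS.get(key, "Discover Weekly")

# Crying, alone, printer whirring in the background: sad and at work.
print(pick_playlist({"printer"}, {"crying": 0.9}))
```

Even in this toy version the awkward part is obvious: a human has to decide, by hand, which sounds count as which mood, which is exactly where the experts take issue.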

This is currently just a patent – Spotify say they’ve ‘filed patent applications for hundreds of inventions’, only some of which ever become products. But according to Nick Seaver, Assistant Professor of Anthropology at Tufts University, who's writing a book about music recommendation algorithms, if it does go ahead it’s likely to run into problems.

“There's two sides to this,” he explains. “On the one hand is how well you can use emotions to recommend music, and that's been in a fairly goofy position for a while because there's never been an obvious way to do anything with that information. If someone's sad, do they want sad or happy music? What counts as ‘sad’ or ‘happy’ music for a given person?”

He’s right: if you’re feeling shit, can you imagine anything worse than being sent Spotify's Sad Songs playlist?

On the other hand, Seaver says, is how well this system could deduce your emotional state from your voice. Step forward Beth Semel, who has a PhD in the anthropology of labs working on speech analysis. “I don't think it's possible to draw definitive connections between specific vocal qualities and something as fuzzy, varied and nuanced as emotion,” she tells me. “There's no one singular way that ‘anger’ sounds.”

Seaver and Semel share the view that algorithmic recommendation is a lot more human than you think – and that’s the issue. As Semel says, “the relationship between speech sounds and emotional labels that is central to the patent is a highly subjective design choice” – made by the humans who designed it.

But if voice tech isn’t reliable, what about ‘mind-reading’ headsets that match music to your mood? Or a chip that connects your brain waves to an app to create a jukebox controlled by your thoughts? In January, Musk said that Neuralink had successfully implanted a chip into a monkey’s brain, enabling the animal to play video games using only its mind. He claims they plan to begin human trials sometime this year, with a view to creating much more than a music interface. Neuralink ultimately aims to “solve” neurological conditions such as dementia, Alzheimer’s, and spinal cord injuries.

Neuroscience experts have said that while Neuralink’s mission to stimulate brain activity in humans is feasible, their timeline appears overly ambitious (not Elon Musk making wild claims?). However, for Alex Haagaard, Director of Research and Development at disability-led self-advocacy organisation The Disabled List, there is a wider problem with Neuralink.

“For neurodivergent people, our neurodivergence is an integral part of who we are,” they say. “To ‘solve’ our neurodivergence is, fundamentally, to eliminate us as people.”

Haagaard explains that “within mainstream culture, disability is still understood through a primarily medicalised, therapeutic lens, and the ways in which disabled communities have their own vibrant cultures and identities are quite poorly recognised”. That’s why The Disabled List is “interested in ways of thinking about access that supports and amplifies disabled identities and cultural practices.”

For Haagaard and The Disabled List, it’s vital that disabled people are involved in every phase of the design process to develop new technology, otherwise you just create a “disability dongle” – something that feels worthy, but is ultimately pointless. All this innovation feels futile unless it’s developed in the right way and involves the right people. For example, Attitude is Everything, an organisation which works to improve Deaf and disabled people's access to live music, awarded SoundSense a prize in 2019 for its research into a headphone system which can be teamed with vibration pads and rumble vests.

Reflecting on their own experiences as an Autistic person and someone who struggles with auditory processing, Haagaard explains that listening to music is a stim (the repetition of physical movements, sounds or words, common among autistic people) and gives an example of how technology could help improve their experience. “I could easily see a device playing a role in making stimming to music more enjoyable.”

Back in March, Apple Music and Warner Music launched "Saylists" to help young people with speech-sound disorders. The project uses an algorithm to find song lyrics that repeat challenging sounds; Apple Music ran it across the lyrics of the 70 million tracks in its catalogue to pick out the songs that repeat those sounds most often. Haagaard says they “would love to see something like Apple Music or Spotify collaborating with disabled artists to curate the kinds of playlists we share collaboratively within our own communities.” For example, “chronically ill folks sometimes share playlists that help them get through symptom flares, while Autistic folks may share particularly ‘stimmy’ songs with each other.”
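For a sense of how simple the underlying idea is, here is a rough, purely illustrative sketch of ranking songs by how often a target sound recurs in their lyrics. The real Saylists analysis, its inventory of challenging sounds and its thresholds aren’t public in this level of detail, so everything below is an assumption.

```python
# Toy version of the Saylists idea: rank songs by how often a target
# speech sound recurs in the lyrics. The real analysis and its sound
# inventory are not described in this article.

def count_sound(lyrics: str, target: str) -> int:
    """Count occurrences of a challenging sound, approximated here as a letter cluster like 'ch'."""
    return lyrics.lower().count(target.lower())

def rank_tracks(catalogue: dict, target: str, top_n: int = 10) -> list:
    """Return the titles whose lyrics repeat the target sound most often."""
    return sorted(catalogue, key=lambda title: count_sound(catalogue[title], target), reverse=True)[:top_n]

# Toy catalogue mapping title -> lyrics.
catalogue = {
    "Song A": "cherry chocolate chip, chase the chorus",
    "Song B": "la la la, ooh ooh",
}
print(rank_tracks(catalogue, "ch"))  # Song A ranks first
```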

That’s the key. The music we love is a vital part of our identity, and automation risks removing its most meaningful aspect: discovery through the humans and communities involved in it. The issue with ‘hyper-personalisation’ right now is that it feels, well, not very personal at all. Algorithms have diluted the concept of music discovery, a horizon on which crate digging is replaced by data mining, and it’s very difficult for an algorithm to replicate the joy of hearing a record your mate told you about or that song you heard at a festival.

Rather than whittling emotion away from music discovery, though, Shavit believes that technology could actually enhance things. “It’s meant to give us better access and a better experience,” she says. “It should not, and I don’t think it can, replace people sharing their favourite songs and artists, but I will argue that new technology can empower the music industry to think of itself in new ways.”

From Spotify’s “highly subjective design choices” to The Disabled List’s focus on collaboration, human choices and insight are still key to our advancement, even amid the acceleration of algorithms and automation. It’s clear that a completely customised future is coming, but as technology races ahead we need to make sure we use its limitless potential for good. Though admittedly that doesn’t make for a very good Black Mirror script.

@dethink2survive