Between 2017 and 2018, Bob Sturm, an associate professor of computer science at KTH Royal Institute of Technology in Stockholm, was involved in the creation of an album of Irish folk music, Let’s Have Another Gan Ainm. “For six months, we let the album be reviewed by experts in the field,” he says. “During that time, it received radio play, an online review was posted that praised the work, and in general we received a very positive response.”

But there was something the critics didn't know: two-thirds of the songs had in fact been written by deep learning algorithms. The album had an invented backstory (the ethics of which were approved by the university in the name of science) to obscure its true origins.


After six months, the team approached the critics, told them the truth, and asked whether they'd like to change their stance on the work. Surprisingly, few wished to revise their original opinion. "One guy was very positive,” says Sturm. “He was a bit sad that the backstory behind the music wasn’t true, but he said, ‘I still love the music, and will still play it on my radio show.’”

So why the obfuscation? The decision followed a previous media mishap involving the team, in which the Daily Mail featured a human-created clip on its site wrongly attributed to an algorithm. Commenters on the page declared the piece cold, soulless and blatantly robotic. It occurred to Sturm and his team that a white lie might be the only way to gauge genuine reactions to the album.

However, it's important not to overstate the algorithmic input to the album: one third of the material was created purely by humans, and the algorithm-produced elements were also curated by a professional musician and composer, whose role involved combing through thousands of tunes created by the system to cherry-pick the best sequences.

"Some of the computer-generated material appears just as the computer wrote it, with maybe one change or two changes; some of it appears quite differently,” says Sturm. “Like in an effort to form one tune, he took two halves of two separately generated tunes, and put them together."

The album was also played by skilled Irish musicians who were free to add their own interpretation and ornamentation. However, the project does indicate the growing attainability of AI-created music.

But how does it work? The team used an off-the-shelf machine learning structure, and data that exists online in the form of crowdsourced sets of songs. "What the algorithm does is it generates a bunch of symbols, and these symbols can be converted into music notation,” says Sturm. “Then that music can be read and played." 
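The pipeline Sturm describes, a model that emits a sequence of symbols which can then be read as music notation, can be illustrated with a toy sketch. The real system used a recurrent neural network trained on crowdsourced tunes; the stand-in below is a much simpler bigram model over a tiny, hypothetical corpus of ABC-style notation tokens, purely to show the "generate symbols, then read them as notation" idea.

```python
import random

# Toy stand-in for a trained generative model: a bigram model over
# ABC-style notation tokens. The corpus below is invented for
# illustration, not the data the researchers actually used.
corpus = [
    ["|:", "G", "A", "B", "c", "d", "B", "A", "G", ":|"],
    ["|:", "D", "E", "F", "G", "A", "F", "E", "D", ":|"],
    ["|:", "G", "B", "d", "g", "d", "B", "G", ":|"],
]

# Count which token follows which in the training tunes.
follows = {}
for tune_tokens in corpus:
    for a, b in zip(tune_tokens, tune_tokens[1:]):
        follows.setdefault(a, []).append(b)

def generate(start="|:", max_len=12, seed=0):
    """Sample a token sequence; stop at the repeat-end symbol ':|'."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len and out[-1] in follows:
        out.append(rng.choice(follows[out[-1]]))
        if out[-1] == ":|":
            break
    return out

tune = generate()
print(" ".join(tune))  # a short run of ABC-style symbols
```

The generated token string is the "bunch of symbols" Sturm mentions; converting it into playable notation is a separate, mechanical step handled by standard ABC tools.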


But Sturm and his team are not the only ones producing AI-created music. London-based startup Jukedeck has created over one million songs with AI algorithms. The company was founded by two Cambridge University alumni (one a former Googler), who are also musicians.

Co-founder Patrick Stobbs told Techworld that the idea had come from the founder and CEO, Ed Newton-Rex. "He studied a lot of the scientific theory behind what it is we as humans like about music, and why,” said Stobbs. “And he had often thought music is the most mathematical of all the arts, and therefore could computer systems play a role in the creative process?"  

Stobbs says it started as an intellectual endeavour, questioning whether machines could be involved in the creative process, and what this might look like. "I suppose what we've realised over time is that the answer to the first question is yes, and the answer to the second question is that it enables many more people to create music in different ways, and also experience music and listen to music in different ways," he says. 

Aside from creating complete songs, Jukedeck's platform can also write short snippets of melodies, harmonise with pre-written melodies, turn compositions into audio and suggest ideas.

Stobbs says that the aim is to democratise music. "Most people in the world can't make music. They haven't had the privilege of a musical education, or the time to teach themselves how to make music. And we see the role of this as opening up music creation to those people."

But for those who can already create their own music, the team sees Jukedeck as potentially offering a suite of useful tools. "Much in the same way that Photoshop speeds up the workflow of graphic designers, we see this as speeding up the workflow of composers and musicians and creators," says Stobbs.

The biggest client base for the platform right now is made up of YouTube creators looking for ambient background music that won't land them in a fight over royalties. So far, Jukedeck's music has been used in over 50,000 YouTube videos, and the team suspects it has appeared in plenty more whose creators neglected to credit it.

But the role of the platform is evolving, with an increasing number of amateur musicians contacting them to help create backing tracks. K-pop artists have also collaborated with them on songs that are now in the charts in South Korea.

So far though, the startup is focused purely on the creation of melodies, which alone is a very hard problem to crack. "Music is multi-layered,” says Stobbs. “To get an AI to write a book, that really happens on one level: you've got one word coming after the next. But in music, you've got many different parts being played at once. They're all interacting with each other on different levels, and that just multiplies the problem, or the difficulty of creating an effective system."


Another startup, US-based WaveAI, has decided to tackle lyrics as well as melody, resulting in Invisible Tides, the first EP whose lyrics and composition were both written with Alysia, an AI platform. The project was born out of a personal dilemma of WaveAI's founder, Dr Maya Ackerman. While studying computer science, she began developing a hobby as a semi-professional singer, yet struggled to write her own music.

Just as she was beginning to accept that this might never be possible, she stumbled across the field of computational creativity, a small sub-field of artificial intelligence. After attending a conference on the topic, she heard a life-altering sentence: "A computer can be a co-creative partner."

Buoyed up by her research, she began to develop a musical AI platform. After a prototype was produced and a scientific paper published, media attention and interested prospective users soon followed.

Now, after three years of work, the interface is ready for commercial use. Users simply select a background track, produced by human artists, and choose which topics they’d like to write about. The AI then offers suggestions for lyrics and vocal melodies, which the user can accept or reject, perhaps contributing lines of their own.

"It's this sort of co-creative process where you work together with the AI," says Ackerman.  
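That accept/reject workflow can be sketched in a few lines. Everything here is hypothetical: a fixed list of lines stands in for the AI's suggestions, and a simple predicate stands in for the user's taste.

```python
# Sketch of the co-creative accept/reject loop described above.
# The suggestion source and the "user" are both placeholders, not
# WaveAI's actual interface.

def cocreate(suggestions, accept):
    """Walk through AI suggestions, keeping those the user accepts
    and substituting the user's own line otherwise."""
    lyrics = []
    for line in suggestions:
        if accept(line):
            lyrics.append(line)                  # user accepts the AI's line
        else:
            lyrics.append("(user's own line)")   # user writes their own instead
    return lyrics

ai_lines = ["waves roll in", "the tide forgets", "salt on the wind"]
song = cocreate(ai_lines, accept=lambda line: "tide" not in line)
```

The point of the design is that the human stays in the loop at every line, which is why users like Ackerman still feel authorship over the result.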

The lyrics writer was trained on about a million songs, and the melody creator on a database of thousands. For melodies, the system creates about 250 different options, ranks them, and presents the ten best to the user.
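This generate-then-rank pattern is common in computational creativity, and is easy to sketch. The scoring heuristic below (rewarding stepwise motion between notes) is entirely made up for illustration; it is not WaveAI's ranking model.

```python
import random

# Sketch of generate-then-rank: produce many candidate melodies,
# score each, and surface only the top few. The scorer is a toy
# placeholder, not the system's actual model.

NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def generate_candidate(rng):
    """Produce one random eight-note melody (stand-in for a generative model)."""
    return [rng.choice(NOTES) for _ in range(8)]

def score(melody):
    """Toy heuristic: reward small, stepwise motion between adjacent notes."""
    idx = [NOTES.index(n) for n in melody]
    return -sum(abs(a - b) for a, b in zip(idx, idx[1:]))

def top_melodies(n_candidates=250, k=10, seed=0):
    """Generate n_candidates melodies and return the k highest-scoring."""
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(n_candidates)]
    return sorted(candidates, key=score, reverse=True)[:k]

best = top_melodies()
```

Swapping in a learned generator and a learned ranker turns this skeleton into the kind of pipeline the article describes, but the overall shape (overgenerate, score, filter) stays the same.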

What next?

Of course, the elephant in the room when it comes to musical AI production is the question of whether these algorithms will one day replace human artists entirely - something that the people involved tend to deny.

Ackerman says that for her, the platform lets people do what they couldn't do before, and creates joy in that fact. "When you experience it, you feel that you wrote the song,” she says. “And what's really amazing is that, when I use it to write a song, I feel that I wrote the song. Despite the fact that I know exactly how it works."

Sturm has also struggled with accusations of interfering in a sacred form of human expression. "In our research, we're dealing with folk and traditional music that is hundreds of years old,” he says. “It's been a part of the fabric of national identities and of culture for a very long time. And people can be very sensitive about this."

He says that people have asked him why he would train computers to create a type of music that many people make their living playing and teaching. This line of questioning can sometimes make him uneasy. At one event, someone directed a particularly pointed question at him: "'Tell me, what have you contributed to Irish traditional music?'” recounts Sturm. “And that's a question that's haunted me for a while - what are we doing that is bringing value back to the community?"

It's a question that all the creators in this space will have to contend with at one point or another.