Could AI change the music you listen to?

Google is developing AI, called Magenta, as part of its new research division focused on music and art. File picture: Charles Platiau/Reuters

Published Feb 2, 2023

Google’s research team has developed a machine-learning algorithm that generates musical pieces from text inputs.

The AI, called Magenta, is being developed as part of Google’s new research division focused on music and art. The researchers hope to use Magenta to create new musical compositions that are interesting and accessible to listeners.

“AI tools increasingly shape how we discover, make and experience music. While these tools can potentially empower creativity, they may fundamentally redefine relationships between stakeholders, to the benefit of some and the detriment of others,” Google stated in a post on the project.

Magenta uses neural networks similar to those behind AI systems such as AlphaGo and DeepDream, but it also includes additional components that allow it to generate music without relying on existing data sets or models of how music works.

“We initiated conversations with creative teams on the rapidly changing relationship between AI technology and creativity. In the creative writing space, Google’s PAIR and Magenta teams developed a novel prototype for creative writing and facilitated a writers’ workshop to explore the potential and limits of AI to assist creativity,” the company said in a blog post.

The team started by feeding Magenta a large amount of classical music and teaching it what made one piece sound more pleasing than another. They then gave the system text prompts such as “happy” and “sad” and asked it to create songs based on those words alone.

The music is created by combining segments of existing songs, and it can be challenging to tell AI-generated music apart from human work. In one test, researchers played generated pieces alongside human compositions for musicians and asked them to identify which was which. Around half of them couldn’t tell.

The researchers hope that their work could lead to new ways of creating music, as well as help people who are unable to compose their own music due to disabilities or other conditions.

A separate system, called Flow Machines, was developed by a team of researchers at Sony’s Computer Science Laboratories in Paris. That team fed the AI thousands of classical compositions, pop songs, lyrics and scores from Disney movies.

They then asked it to compose music based on prompts like “I’m feeling happy” or “I’m feeling sad”, and it produced pieces of music that matched those emotions.

One example given in a paper presented at an AI research conference this week is a song inspired by Lewis Carroll’s poem “Jabberwocky”, from “Through the Looking-Glass”. When asked to create something sad, the system produced a piece that sounded like an orchestrated version of Adele’s “Someone Like You”.

Another example was composed using lyrics from the Beatles’ song “Here Comes the Sun”.

IOL Tech