Could Your Next Favourite Artist Be A Robot?


We’ve discovered that AI can perform methodically and analytically, but does it have the potential to mimic the human mind’s creativity and produce a unique work of art such as a musical score? Composing music takes imagination; it is a challenge for even the most right-brained people to write something brilliant that a listener has never heard before. It is even more of a challenge for that music to hold feeling and soul – so how could a computer possibly manage such a task when it ultimately has neither?

With film and TV production steadily increasing, musical composition is in high demand. It can take composers years to craft a single score, and yet with new technology like AI it seems this process can be completed within a week, saving time and money for an equally brilliant finished product. One example of this new technology is Luxembourg-based AIVA – Artificial Intelligence Virtual Artist – which has mastered just that! She, as AIVA is referred to, has the ability to compose unique, emotive music for all types of entertainment content. The system is based on stochastic algorithms, meaning compositions are never duplicated. Yet it also works on patterns: AIVA has read and analysed 30,000 of history’s greatest scores, from which, via machine learning, she has learned to predict melody movement, harmony arrangement and rhythmic sequences. These predictions have allowed her to build the mathematical formula for creating the perfect piece. You can find an example of one of her compositions here.
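To make the idea of learning patterns from existing scores and then sampling new, never-duplicated music a little more concrete, here is a deliberately tiny sketch. It is not AIVA's actual system (which is far more sophisticated); it only illustrates the general principle of learning note-to-note transition tendencies from training melodies and then composing stochastically, so the output follows familiar patterns without repeating any input exactly. The note names and training melodies are invented for illustration.

```python
import random

def train_transitions(melodies):
    """Count which note tends to follow which across the training melodies."""
    transitions = {}
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            transitions.setdefault(current, []).append(following)
    return transitions

def compose(transitions, start, length, rng=None):
    """Sample a new melody by randomly walking the learned transition table."""
    rng = rng or random.Random()
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: no note ever followed this one in training
            break
        melody.append(rng.choice(options))
    return melody

# Invented "training scores" – in reality this would be thousands of works.
training = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "C"],
    ["E", "G", "A", "G", "E"],
]
table = train_transitions(training)
print(compose(table, "C", 8, random.Random(42)))
```

Because each step is a random draw, two runs with different seeds give different melodies, yet every transition in the output was observed somewhere in the training data – a toy version of "stochastic, but pattern-driven".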

However, music is subjective, and it needs to be appropriate. For example, ‘The Lion King’ wouldn’t have the same sentimental effect on an audience if it were accompanied by death metal! So AIVA is programmed to respond to 30 category labels such as genre, mood and style, further enhancing the algorithm. Whilst AIVA can compose for content creators with little musical knowledge, for example, YouTubers requiring backing music, she can also inspire composers in their own work, or design beautiful compositions that musicians can bring to life in performance. Either way, this advance in technology can be seen as a wonderful intersection of musical creativity and science.

Another recent example of AI enhancing humans’ experience with music is the AI music tutor. These could be extremely beneficial: a cheaper, more reliable way to learn music that may prove as helpful as a human teacher. The San Francisco-based company Kena.AI plans to launch a personal artificial music tutor application designed to teach people how to pick up and master musical instruments. It would be a revolutionary platform that could change the way we acquire skills, “bridging the gap between learning from human-tutors and being self-taught". Not only is Kena described as being able to educate tutees with clear instructions, it is designed to offer an individual coaching experience: tracking students’ progress by “listening” to their playing, creating personalised learning paths and recommending music tailored to their taste.

The increasing abilities of AI lead us to question whether it could one day replace human skills. However, in the music industry, human musicians and AI are collaborating to experiment with new sounds and produce inspirational work, broadening our perspective on the relationship we have with contemporary technology. In 2018, the “AI DJ Project” by the Tokyo-based AI company Qosmo held live performances during which an AI DJ and a human DJ collaborated on stage. Described as “a dialogue between human and AI through music”, these events were a fascinating opportunity to see how man and machine could perform under very similar conditions: for example, the AI used the same vinyl records and turntables as the human DJ. To ensure that both parties cooperated, they played alternately, one track at a time, each tasked with selecting an appropriate song and mixing it in so that it flowed smoothly from the previous track. The software was trained to become proficient in three skills: music selection, beat-matching and crowd-reading. To select music, neural networks analyse what the human DJ is playing, extract auditory features from that track such as beat or instrumentation, and choose another track of similar style. For beat-matching, via reinforcement learning, the AI DJ determines how to manipulate its turntable’s speed using robotic fingers to align rhythms. Finally, crowd-reading means that the software is designed with a “deep learning-based tracking technique” that infers which tracks encourage the audience to dance the most, helping it with future music selection.
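The track-selection step described above – extract auditory features from what is currently playing, then find a library track of similar style – can be sketched very simply. This is not Qosmo's actual system (which uses neural networks over real audio); it is a minimal illustration of the nearest-neighbour idea, with invented track names and made-up feature values (tempo in BPM, plus energy and brightness on a 0–1 scale).

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_next(current_features, library):
    """Pick the library track whose features best match the current track."""
    return min(library, key=lambda name: distance(current_features, library[name]))

# Hypothetical record crate: (tempo BPM, energy 0-1, brightness 0-1)
library = {
    "deep_house_groove": (122, 0.6, 0.4),
    "ambient_drift":     (80, 0.2, 0.3),
    "peak_time_techno":  (134, 0.9, 0.7),
}

# The human DJ is playing something at 120 BPM with moderate energy:
print(select_next((120, 0.55, 0.45), library))  # → deep_house_groove
```

A real system would extract these features from audio with a trained model rather than hand-label them, and the crowd-reading signal could then re-weight the choice – but the core "choose the closest track in feature space" step looks much like this.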

These examples of AI’s involvement in music production all lead to the same observation: that music, and the way we assemble it, is being rapidly transformed by the emergence of new, intelligent technology. From breakthroughs like these, we will be able to learn, compose and mix music with the assistance of software, or allow a computer to independently compose music for our own enjoyment, opening up more opportunities for the creativity and expression that the arts foster.

Next week, following this blog, there will be a part two discussing AI’s role in music performance: how robots are learning to read, play and conduct their own music and what the live concerts of the future could look like.

Written by Florence Grist

Based in the UK, Florence Grist is a freelance writer who enjoys writing on technology and sustainability issues and especially how AI has the potential to both transform our understanding of the environment and help protect fragile ecosystems.
