Artificial Intelligence at the Service of Musical Creation

Creating new sounds, generating original content, developing learning aids... far from replacing musicians, artificial intelligence (AI) is opening up new horizons for artists. Ninon Devis, a doctoral student at the Institut de recherche et coordination acoustique/musique (Ircam) and at the Sorbonne Center for Artificial Intelligence (SCAI), outlines the hopes and limitations of AI in the world of music. 

What does your research focus on?

Ninon Devis: I am currently working on the development of AI-based musical instruments. The challenge is to design instruments that do not depend on the computing power of a computer, yet are capable of generating sounds in real time, in the same way as traditional instruments.

Together with Philippe Esling, I have completed the very first prototype of an AI-based synthesizer, the "Neurorack," which I am continuing to improve and develop in several variants. Later on, I would like to build others on the same conceptual model, and eventually extend our ideas to other interfaces that could, for example, take the musician's physical movements into account. I am also thinking about the musicality and expressive potential of these synthesizers, as well as the development of a controllable interface that is accessible to artists.

Can you define AI and how it works in a few words?

N. D.: AI is a term that encompasses many things today. In essence, it describes the development of machines capable of imitating certain human intellectual capacities in order to accomplish a task. In my field of study, the more accurate term is machine learning. This branch of AI relies on mathematics, and more particularly on statistics, to program algorithms that solve problems by learning from databases. The goal is to understand and model the relationship between a typically complex input and its associated output.
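By way of illustration only (a generic sketch, not code from Ircam), learning such an input-output relationship from example data can be as simple as a least-squares fit:

```python
# A minimal sketch of supervised machine learning: given example
# (input, output) pairs, fit a model that approximates the mapping.
import numpy as np

# Toy "database": inputs x and associated results y = sin(x) + noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

# Model the relationship with a degree-5 polynomial fitted by least
# squares: a purely statistical estimate of the underlying mapping.
coeffs = np.polyfit(x, y, deg=5)

# The fitted model can now predict outputs for unseen inputs.
print(np.polyval(coeffs, [1.0, 2.5]))  # close to sin(1.0), sin(2.5)
```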

The most impressive results in AI today come from deep learning. What is deep learning?

N. D.: Deep learning is a subfield of machine learning. It allows us to model these input-output relationships with a very high level of abstraction, by stacking non-linear transformations loosely inspired by biological neurons. These are very powerful techniques. Our team used them in a collaboration with the composer Alexander Schubert on the piece "Convergence," which has just been awarded the Golden Nica in the "Digital Musics & Sound Art" category of the Ars Electronica 2021 festival.
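To make the idea of stacked non-linear transformations concrete, here is a minimal sketch in PyTorch (the layer sizes are arbitrary, and this is not the architecture used for "Convergence"):

```python
# A minimal deep-learning model: alternating linear layers and
# non-linearities, the basic pattern behind far larger networks.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64),  # linear transformation of the input
    nn.ReLU(),          # non-linearity, loosely analogous to a neuron
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 8),   # output layer
)

x = torch.randn(1, 16)  # one 16-dimensional input vector
print(model(x).shape)   # torch.Size([1, 8])
```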

When did AI make its entry into music composition?

N. D.: If we stick to the strict definition of AI, the first piece written by a computer dates back to 1957: the string quartet "Illiac Suite" by Hiller and Isaacson. However, it is still very far from today's deep learning techniques: it is essentially a piece based on random generations of musical events, constrained by rules that ensure the aesthetics of the piece. It was only in the 1980s that the first models analyzing existing pieces in order to generate new ones emerged (for example, EMI by David Cope in 1981).

Concretely, what does it make possible in musical creation?

N. D.: As far as musical creation is concerned, the applications are extremely varied. Some models can generate almost any musical content with impressive quality. In this case, the user has to make no effort at all; they can even choose the instrument, the rhythm, and the desired genre. However, this kind of process does not, in my opinion, leave enough creative space, and seems to me less interesting than approaches that function as genuine tools.
For example, some models are designed to help a musician learn an instrument, such as the music education software Yousician. Other software, like OMax (developed at Ircam), plays interactively with the musician in real time.

New ways of composing music are also emerging: in the field of timbre transfer, for example, it is possible to transform a voice into a violin. Finally, tasks that were once nearly impossible are now within reach, such as recovering the synthesizer settings used to create a given sound (FlowSynth). We have also recently developed a completely new type of electronic musical instrument that can synthesize and modify sounds that are almost impossible to reproduce artificially, such as impacts.

What benefits does it bring to composers?

N. D.: AI-based tools open the door to new ways of composing. Beyond new sounds and timbres, they also offer a new way of thinking about and creating music. Musicians and composers can now draw on new alternatives at every stage of the creative process: composition techniques, new instruments, original interfaces, and even assistance with mastering.

Could musical AI go as far as to compete with a human composer?

N. D.: If rivaling is understood as reproducing, then yes. The particularity of an AI is precisely that it produces a new result by analyzing a database and then imitating its characteristics. It would thus be possible for it to synthesize very technical pieces "à la Chopin," for example, if provided with a sufficiently large database of the composer's works. I believe, however, that the real question is one of novelty and originality. It is currently impossible to conceive of an AI capable of producing anything without examples: for the moment, it is therefore incapable of any creative initiative. In this sense, AI will never be able to compete with a composer.
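As a toy illustration of this "analyze a database, then imitate its characteristics" principle, here is a first-order Markov chain, deliberately far simpler than the deep models discussed above:

```python
# Toy illustration of imitation-by-statistics: learn note-to-note
# transition probabilities from a "database" of melodies, then sample
# new sequences that share the corpus's local characteristics.
import random
from collections import defaultdict

corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "D", "C"],
    ["E", "G", "C", "G", "E"],
]

# Count which note follows which across the whole corpus.
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

# Generate a new melody by sampling from the learned transitions.
random.seed(0)
note, generated = "C", ["C"]
for _ in range(7):
    note = random.choice(transitions[note])
    generated.append(note)
print(generated)  # a new sequence mimicking the corpus statistics
```

Like any such model, it can only recombine what the examples already contain, which is exactly the limit on creative initiative described above.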

The other limitation comes from the AI's lack of judgment: it can only answer a mathematically formulated problem. The human being remains essential for any qualitative assessment of the generated results.