The crossroads between artificial intelligence and music production – The Varsity

Can you imagine a world where music is made using artificial intelligence? The webinar series AI as Foils: exploring the co-evolution of art and technology is a new BMO Lab initiative that features discussions with artists and artificial intelligence (AI) practitioners.

The most recent event, “AI as Foil Series: A New Musical Frontier: AI Meets Music,” took place virtually on October 8th. “The goal is to explore curiosity, excitement, as well as fears and concerns about the role AI technologies play in art and creativity,” said Natalie Klym, curator and moderator of the series.

A new series of webinars

In 2019, BMO Financial Group invested $5 million – the largest ever donation to a Canadian institution – in the BMO Lab for Creative Research in the Arts, Performance, Emerging Technologies, and AI. The lab is based in the Faculty of Arts & Science at the University of Toronto and aims to explore and research the intersection between creativity and new technologies, including artificial intelligence.

This month, the BMO Lab invited producer and engineer Annelise Noronha and AI scientist Sageev Oore. Noronha has worked with notable artists such as Dragonette, Jennifer Lopez, Blue Rodeo, and Oscar-winning composer Mychael Danna. She currently works in music composition and has written various music placements for film and television.

Oore is a musician and an associate professor of computer science at Dalhousie University. He holds a Canada CIFAR AI Chair and has also worked on Google Brain’s Magenta project, which applies deep learning to music.

From tape recorders to AI

Noronha started the session with an overview of her unique experience in audio engineering. When she started working in recording studios, she used software that locked tape machines together so they could run in sync.

If you needed to shift part of an audio track within a multitrack recording – a recording made by mixing multiple tracks recorded separately – you would need to ‘bounce’ the audio from one tape machine to another before sending it back to the original machine. “Because we were recording to tape and there were no computers, we were magicians too. People were really impressed,” she said.

Being able to work with retro sound technology is a very specific and transferable skill set, Noronha explained. Now, when people want to record to tape for a retro effect, she can still do it.

Oore comes from a machine learning background and explained that he primarily works with artificial intelligence and machine learning systems to get them to create musical sounds. It is mind-boggling how far the music industry has come, from the traditional methods Noronha worked with early in her career to the AI and machine learning based technology that Oore works with today.

Oore shared software that generates MIDI (Musical Instrument Digital Interface) data – the protocol that allows musical instruments, computers, and hardware to communicate – as well as several 30–40 second videos of classical music performances generated by the software. He explained that the computer did not generate the actual sound waves; rather, it generated the instructions for a sound sampler to play the music, including which notes to play and when.

To learn these instructions, the computer was fed hundreds of samples of classical piano performances. Oore explained that there is an element of randomness when the computer generates instructions, similar to rolling a die, as it selects which notes to play and when. In the future, Oore hopes to build a more controllable model in which the computer has its own intelligence but can also be steered.
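The "rolling a die" idea can be illustrated with a minimal sketch. This is not Oore's actual software; it is a toy example assuming a model whose output at each step is simply a probability distribution over MIDI note numbers, from which the next note is sampled at random:

```python
import random

def sample_note(probabilities, temperature=1.0):
    """Sample one MIDI note number from a dict {note: probability}.

    Generation 'rolls a die': likelier notes are chosen more often,
    but the outcome is random. A lower temperature sharpens the
    distribution, making the choice more deterministic.
    """
    notes = list(probabilities)
    weights = [p ** (1.0 / temperature) for p in probabilities.values()]
    return random.choices(notes, weights=weights, k=1)[0]

# Hypothetical toy distribution over three piano notes (MIDI 60 = middle C).
# A real model would produce a new distribution after each generated note.
dist = {60: 0.7, 64: 0.2, 67: 0.1}
melody = [sample_note(dist) for _ in range(8)]
print(melody)  # eight sampled note numbers, mostly 60
```

Running the sketch twice gives two different melodies, which is the randomness Oore described; steering such a model would mean nudging these probabilities rather than replacing them.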

Future dilemmas

Artificial intelligence is still very new, and no one can predict where it will go. It can take hundreds of years for a musical instrument to be invented and accepted into our culture. Even though history suggests that new technologies will eventually be accepted and that any current rejection or resistance is only a phase, Klym believes that we should at least try to be critical and aware of the impacts of AI technology at all levels.

Ultimately, the goal of AI for music is not the ability to generate music but to create tools that are then used by artists.

Questions naturally arise about the applications and future of artificial intelligence. AI could be used not only to generate melodies, but also to generate lyrics and tempos for new tracks and entirely new genres of music. Yet AI-generated music might not even be playable by humans. There is also the issue of intellectual property: if AI software writes a piece of music, who owns the copyright? The possibility of AI taking over the music industry raises philosophical questions about whether AI-generated music can be considered creative work.

Previous webinars in this series are recorded and posted on the BMO Lab website. Further sessions in spring 2022 will examine AI and voice, as well as AI, writing, and natural language processing.

