During her talk, Artemi presented some of her recent interactive works, which involve mutual real-time adaptation between human musicians and interactive computer music systems. These systems use machine learning to interpret audio data in real time, acting both in response to the musicians' actions and as a result of autonomous generative processes. Beyond the technical challenges of applying machine learning to machine listening tasks, she discussed the aesthetic and conceptual implications of this compositional approach, as well as the potential of machine learning as a creative ideation tool, i.e. its potential to shape musical thinking by opening up new technical and conceptual possibilities.
Artemi is a composer and artistic researcher at the University of Music and Performing Arts Graz (Austria), working in the fields of artificial intelligence, interactive sound art, and participatory sound art.
(Image / Video (c) Johanna Dulnigg, FH Kärnten)