Sound and music surround us all the time. Often we hear all this without really noticing: it simply forms part of the background to our lives. The human ability to listen to sounds is something that is very hard for computers, but we are now beginning to build sound processing methods that can help. In this talk I will discuss some of these techniques, which can separate out different sound sources from a mixture, follow the notes and beats in a piece of music, or show us sound in new visual ways. These "machine listening" algorithms offer the potential to make sense of the vast amount of sound and music in our digital world: helping us to analyze sounds such as heartbeats or birdsong, find the music we want in huge collections of tracks, or create music in new ways.
Prof. Mark Plumbley is Director of the Centre for Digital Music (C4DM) at Queen Mary University of London. His research interests include the analysis of audio and music signals, including beat tracking, automatic music transcription, and source separation, using techniques such as neural networks, information theory, and sparse representations. He is Principal Investigator on several current EPSRC grants, including "Information Dynamics of Music" and "Sustainable Software for Digital Music and Audio Research", and he holds an EPSRC Leadership Fellowship. He leads the UK Digital Music Research Network, chairs the International Independent Component Analysis (ICA) Steering Committee, and is a member of the IEEE Audio and Acoustic Signal Processing Technical Committee.