Machine listening in music

At this workshop, “Machine listening in music: A beginner’s guide”, held on 21st July 2016, Amy Beeston led us through an investigation of how computers hear and process sound.

By understanding how our own ears and auditory systems work, and how a microphone picks up sound and lets a machine ‘hear’ aspects of its acoustic surroundings, we can begin to get the best out of our technology.

We use acoustic instruments alongside digital technology in our music making, e.g. by playing an instrument or singing through a microphone into a computer running some form of digital signal processing software like Cubase or Logic Pro. However, the techniques employed by musical applications to ‘listen’ to sound are far less sophisticated than human listening skills.

Using software such as Sonic Visualiser, Audacity, Pure Data (all available free) and/or Max we explored sound analysis techniques for extracting information that is useful when making music.

In particular, the workshop explored three related techniques: amplitude following (tracking changes in loudness), pitch tracking (recognising the notes in a melody), and timbral description (characterising qualities of a sound such as its ‘brightness’ or ‘noisiness’).
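To give a flavour of what these techniques involve, here is a minimal Python sketch (an illustration only — the workshop itself used tools such as Pure Data and Sonic Visualiser, not this code). It computes a root-mean-square amplitude per frame (amplitude following), estimates pitch by autocorrelation (pitch tracking), and uses the zero-crossing rate as a rough proxy for ‘noisiness’. All function names and parameter values here are invented for the example.

```python
import math

def frames(signal, size, hop):
    """Split a signal into overlapping frames of `size` samples, `hop` apart."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def rms(frame):
    """Root-mean-square amplitude: a simple per-frame loudness measure."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign: a rough 'noisiness' cue."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / (len(frame) - 1)

def autocorr_pitch(frame, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate pitch as the frequency whose lag maximises the autocorrelation."""
    lo = int(sample_rate / fmax)
    hi = min(int(sample_rate / fmin), len(frame) - 1)
    best_lag, best_val = lo, float("-inf")
    for lag in range(lo, hi + 1):
        val = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if val > best_val:
            best_val, best_lag = val, lag
    return sample_rate / best_lag

# Demo: a 440 Hz sine tone sampled at 8 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(2048)]
frame = frames(tone, 1024, 512)[0]
print(rms(frame))                   # near 1/sqrt(2) for a full-scale sine
print(autocorr_pitch(frame, sr))    # near 440 Hz (quantised by the integer lag)
print(zero_crossing_rate(frame))    # low for a pure tone, higher for noise
```

Real tools refine each of these ideas considerably (windowing, interpolation between lags, spectral measures of brightness), but the underlying signal-processing questions are the same ones the workshop explored.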

We were also lucky enough to have filmmaker Angela Guyton attend and make the following short film about the workshop:

Many thanks to the Yorkshire Sound Women Network, Sheffield Hallam University and Catalyst: Festival of Creativity for making this workshop possible.