At the workshop “Machine listening in music: A beginner’s guide” on 21 July 2016, Amy Beeston led us in an investigation of how computers hear and process sound.
By understanding how our own ears and auditory systems work, and how a microphone picks up sound and lets a machine ‘hear’ aspects of its acoustic surroundings, we can begin to get the best out of our technology.
We use acoustic instruments alongside digital technology in our music making, e.g. by playing an instrument or singing through a microphone into a computer running digital audio software such as Cubase or Logic Pro. However, the techniques that musical applications employ to ‘listen’ to sound are far less sophisticated than human listening skills.
In particular, the workshop explored techniques for amplitude following (keeping track of changes in loudness), pitch tracking (recognising the notes in a melody), and describing timbral features of sound such as its ‘brightness’ or ‘noisiness’.
We were also lucky enough to have film maker Angela Guyton attend and make a short film about the workshop.
Many thanks to the Yorkshire Sound Women Network, Sheffield Hallam University and Catalyst: Festival of Creativity for making this workshop possible.
We will investigate how computers hear and process sound in comparison with your ears, to help you make the best use of this information in your own music/sound work.
- Age of participants: 16+
- Level: Beginner. You don’t need to be an experienced producer, but you probably have some experience of making music on a computer.
- Equipment: We can provide a computer for you to use, but if you have your own laptop then please bring it along.
- Venue: DINA, 32 Cambridge St, Sheffield S1 4HP
- Date/time: Thursday July 21st, 6.30 – 9.30 pm – please register here
6.30 pm – 7.00 pm – Pre-workshop warmup (we can help you install software)
7.00 pm – 9.00 pm – Workshop
9.00 pm – 9.30 pm – Refreshments and informal networking
You will attend several short presentations and coding/software demonstrations designed to guide you through the process of capturing the musically relevant information in recorded sound.
This will include a consideration of:
- What goes on in the human auditory system so that we unconsciously maximise our chances of hearing well, even in difficult environments, and
- What attempts have been made to give some of this amazing functionality to machine listening systems.
In particular, this workshop will explore techniques for amplitude following (keeping track of changes in loudness), pitch tracking (recognising the notes in a melody), and describing timbral features of sound such as its ‘brightness’ or ‘noisiness’.
Time permitting, we will also consider how these techniques can be used as sonic controllers within the music/sound systems that you already use or are familiar with.
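These three measurements are simple enough to sketch in a few lines of code. The sketch below is not from the workshop materials; it is a minimal illustration, assuming NumPy, of one common approach to each task: an RMS envelope for amplitude following, autocorrelation for pitch tracking, and the spectral centroid as a rough ‘brightness’ measure.

```python
import numpy as np

def rms_envelope(signal, frame_size=1024, hop=512):
    """Amplitude following: RMS energy per frame tracks loudness over time."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def autocorr_pitch(frame, sample_rate, fmin=50.0, fmax=1000.0):
    """Pitch tracking: estimate the fundamental frequency by finding the
    strongest repetition period (autocorrelation peak) between fmin and fmax."""
    corr = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return sample_rate / lag

def spectral_centroid(frame, sample_rate):
    """Timbre: the spectral centroid, often heard as 'brightness' --
    the amplitude-weighted average frequency of the spectrum."""
    windowed = frame * np.hanning(len(frame))
    mags = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(windowed), d=1.0 / sample_rate)
    return np.sum(freqs * mags) / np.sum(mags)

# Try the three measures on one second of a 440 Hz sine tone at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)

print(rms_envelope(tone)[:3])              # steady loudness frame to frame
print(autocorr_pitch(tone[:2048], sr))     # roughly 440 Hz
print(spectral_centroid(tone[:2048], sr))  # roughly 440 Hz for a pure tone
```

On a pure tone the envelope is flat and both the pitch estimate and the centroid sit close to 440 Hz; real instrument recordings are far messier, which is part of what makes machine listening an interesting challenge.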
We will have computers available for you to use, but if you have your own laptop then please bring it along. That way you can leave the workshop with your own machine all set up and ready to go.
It would also be helpful if you could bring some headphones – but again, don’t worry if that’s not possible.
We’ll be using the software listed below. If you are bringing your own machine and have time, please download and install it in advance. We will also be available half an hour before the workshop begins to help with any installation queries.
- Audacity (free software) – http://www.audacityteam.org/
- Sonic Visualiser (free software) – http://www.sonicvisualiser.org/
- Praat (free software) – http://www.fon.hum.uva.nl/praat/
- Max (30-day free demo) – https://cycling74.com/downloads
If you have any questions about the workshop please email Amy Beeston on email@example.com
Born into a musical family in Edinburgh, I spent much of my childhood listening to and playing various musical instruments before studying Music Technology (at the University of Edinburgh, 2001). I began building interactive sound installations at around this time, and subsequently focused on sonic control for interactive audio installations during my masters degree in Sonology (Royal Conservatory, The Hague, 2005). More recently, my PhD (at the University of Sheffield, 2015) helped me understand some of the challenges I had faced at those times when moving sound installations between practice studios and performance spaces. It allowed me first to examine how human listeners compensate for reflected sound in everyday listening environments and, second, to develop machine listeners that exploit principles of the human auditory system to deal with the reverberation present in real room recordings.
I am now a researcher in the Speech and Hearing Research Group, Department of Computer Science, University of Sheffield. My research is typically interdisciplinary and collaborative, and primarily involves developing bio-inspired digital sound processing methods to derive control data from specific parts of audio signals. I have worked on two projects developing computer-assisted language learning software (first creating a pronunciation training tool for Dutch school children learning English, and second developing software that promotes strategies for cochlear-implanted listeners to handle overlapping talk in conversation). In my current role I am developing software for acoustic detection and assessment of snore sounds recorded overnight in the home via users’ smartphones. And when time and opportunity coincide, I enjoy applying these human- and machine-listening skills in musical applications too!
This event is organised by Lucy Cheesman and Amy Beeston. For any additional information on this workshop or any other of our Catalyst: Festival of Creativity events, please contact us on firstname.lastname@example.org.