"Machine listening" is one common term for a fast-growing interdisciplinary field of science and engineering which uses audio signal processing and machine learning to "make sense" of sound and speech [Cella, Serizel, Ellis]. Machine listening is what enables you to be "understood" by Siri and Alexa, to Shazam a song, and to interact with many audio-assistive technologies if you are blind or vision impaired [Alper]. As early as the 90s, the term was already being used in computer music to describe the analytic dimension of 'interactive music systems', whose behavior changes in response to live musical input [Rowe, Maier]. It was also, of course, a cornerstone of the mass surveillance programs revealed by Edward Snowden in 2013: SPIRITFIRE's "speech-to-text keyword search and paired dialogue transcription"; EViTAP's "automated news monitoring"; VoiceRT's "ingestion", according to one NSA slide, of Iraqi voice data into voiceprints. Domestically, machine listening technologies underpin the vast databases of vocal biometrics now held by many prison providers [ref] and, for instance, the Australian Tax Office [ref]. And they are quickly being integrated into infrastructures of development, security and policing.
"Machine listening" is one common term for a fast-growing interdisciplinary field of science and engineering which uses audio signal processing and machine learning to "make sense" of sound and speech [Cella, Serizel, Ellis]. Machine listening is what enables you to be "understood" by Siri and Alexa, to Shazam a song, and to interact with many audio-assistive technologies if you are blind or vision impaired [Alper]. As early as the 90s, the term was already being used in computer music to describe the analytic dimension of 'interactive music systems', whose behavior changes in response to live musical input [Rowe, Maier]. It was also, of course, a cornerstone of the mass surveillance programs revealed by Edward Snowden in 2013: SPIRITFIRE's "speech-to-text keyword search and paired dialogue transcription"; EViTAP's "automated news monitoring"; VoiceRT's "ingestion", according to one NSA slide, of Iraqi voice data into voiceprints. Domestically, machine listening technologies underpin the vast databases of vocal biometrics now held by many [prison providers](https://theintercept.com/2019/01/30/prison-voice-prints-databases-securus/ "Prisons Across the U.S. Are Quietly Building Databases of Incarcerated People’s Voice Prints") and, for instance, the [Australian Tax Office](https://www.computerworld.com/article/3474235/the-ato-now-holds-the-voiceprints-of-one-in-seven-australians.html "The ATO now holds the voiceprints of one in seven Australians"). And they are quickly being integrated into infrastructures of development, security and policing.
![Automatic speech recognition](audio:static/audio/kathy-reid-intro-to-ASR.mp3)[^kathy_audio_1], transcription and translation - targeted keyword detection [[i](https://theintercept.com/2015/05/05/nsa-speech-recognition-snowden-searchable-text/ "How the NSA Converts Spoken Words Into Searchable Text")] - vocal biometrics and audio fingerprinting [[i](https://www.nice.com/engage/real-time-technology/voice-biometrics/ "NICE leverages voice biometrics for safer and more secure customer authentication"), [ii](https://www.acrcloud.com/audio-fingerprinting/ "What Is Audio Fingerprinting?")] - speaker identification, differentiation, enumeration and location [[i](https://theintercept.com/2018/01/19/voice-recognition-technology-nsa/ "Finding Your Voice"), [ii](https://patents.google.com/patent/US20100235169A1/en "Google Speech differentiation Patent")] - personality and emotion recognition [[i](https://www.youtube.com/watch?v=86I3-VYIvAM "callAIser in action: Call Center agent gets desperate over angry customer")] - accent identification [[i](https://www.theverge.com/2017/3/17/14956532/germany-refugee-voice-analysis-dialect-speech-software "Germany to use voice analysis software to help determine where refugees come from")] - sound recognition - audio object recognition - audio scene analysis - intelligent audio analysis[^intelligent_audio_analysis] - audio event analysis - audio context awareness - music mood analysis - music identification - music playlist generation - audio synthesis - speech synthesis - musical synthesis - adversarial music [[i](https://arxiv.org/abs/1911.00126 "Real World Audio Adversary Against Wake-word Detection System")] - audio brand recognition - aggression detection [[i](https://www.audeering.com/what-we-do/automotive/ "Cars take care of their passengers")] - depression detection - laughter detection - stress detection - distress detection - intoxication detection [[i](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3872081/ "Intoxicated Speech Detection: A Fusion Framework with Speaker-Normalized Hierarchical Functionals and GMM Supervectors")] - scream detection - lie detection - hoax detection [[i](https://amp.abc.net.au/article/12568084 "University of Southern Queensland gets $300k for hoax emergency call detection technology")] - gunshot detection - autism diagnosis - Parkinson's diagnosis [[i](http://www.canaryspeech.com/ "Using voice to identify human conditions sooner.")] - COVID-19 diagnosis [[i](https://app.surveylex.com/surveys/5384d6d0-6499-11ea-bc3a-b32c3ca92036 "We are launching an initiative to collect your voices with a goal to be able to triage, screen and monitor COVID-19 virus.")] - machine fault diagnosis - psychosis diagnosis [[i](https://www.sciencedaily.com/releases/2019/06/190613104552.htm "The whisper of schizophrenia: Machine learning finds 'sound' words predict psychosis")] - bird sound identification [[i](https://voicebot.ai/2020/06/26/voice-match-is-for-the-birds-new-google-competition-seeks-avian-audio-ai/ "Voice Match is for the Birds")] - gender identification - ethnicity detection - age determination - voice likeability determination - risk assessment [[i](https://www.clearspeed.com/ "Clearspeed: Using the Power of Voice for Good")]...
Digital voice assistants - voice user interfaces - state and corporate surveillance...
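Concretely, most of the capabilities listed above share a common technical skeleton: digitise sound, compress it into numerical features, and fit a statistical model that maps those features onto labels. What follows is a deliberately minimal and hypothetical sketch of that skeleton, assuming the Python libraries NumPy, librosa and scikit-learn and using synthetic clips in place of real recordings; it is not a description of how any of the systems cited above is actually implemented.

```python
# A toy "machine listening" pipeline: audio in, features out, a label predicted.
# Two synthetic classes of sound (a steady tone vs. broadband noise) stand in
# for real recordings, purely for illustration.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16000  # sample rate in Hz


def make_clip(kind: str, seconds: float = 1.0) -> np.ndarray:
    """Generate a one-second toy clip: a 220 Hz tone or white noise."""
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    if kind == "tone":
        return 0.5 * np.sin(2 * np.pi * 220 * t)
    return 0.5 * np.random.default_rng().standard_normal(t.shape)


def features(clip: np.ndarray) -> np.ndarray:
    """Summarise a clip as the mean of its MFCCs, a standard compact audio descriptor."""
    mfcc = librosa.feature.mfcc(y=clip, sr=SR, n_mfcc=13)  # shape: (13, frames)
    return mfcc.mean(axis=1)                               # shape: (13,)


# Build a tiny labelled dataset: 0 = tone, 1 = noise.
X = np.stack([features(make_clip("tone")) for _ in range(20)] +
             [features(make_clip("noise")) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

# "Training" here is just fitting a linear classifier to the descriptors.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Classify an unseen clip.
print(clf.predict([features(make_clip("noise"))]))  # expected: [1]
```

Scaled up, with vastly larger datasets and learned representations in place of hand-crafted descriptors and a linear classifier, the same template underlies everything from wake-word detection to vocal biometrics, which is precisely why questions about whose sound is collected and how it is labelled are unavoidable.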
As with all forms of machine learning, questions of efficacy, access, privacy, bias, fairness and transparency arise with every use case. But machine listening also demands to be treated as an epistemic and political system in its own right: one that increasingly enables, shapes and constrains basic human possibilities; that is making our auditory worlds knowable in new ways, to new institutions, according to new logics; and that is remaking (sonic) life in the process.
Machine listening is much more than just a new scientific discipline or vein of technical innovation, then. It is also an emergent field of knowledge-power and cultural production, of data extraction and colonialism, of capital accumulation, automation and control. We must make it a field of political contestation and struggle. If there is to be a world of listening machines, we must ensure it is emancipatory.
## ~~Machine listening~~
Machine listening isn't just machinic.
Materially, it entails enormous exploitation of both human and planetary resources: to build, power and maintain the vast infrastructures on which it depends, along with all the microphones and algorithms which are its most visible manifestations [Crawford and Joler]. Even these are not so visible, however. One of the many political challenges machine listening presents is its tendency to disappear at point of use, even as it indelibly marks the bodies of workers and permanently scars environments and the atmosphere [Joler interview].
Scientifically, machine listening demands enormous volumes of data: exhorted, extracted and appropriated from auditory environments and cultures which, though numerous already, will never be diverse enough. This is why responding to machinic bias with a politics of inclusion is necessarily a trap [HL excerpt]. It means committing to the very system that is oppressing or occluding you: a "techno-politics of perfection" [^Goldenfein].
Because machine listening is trained on (more-than) human auditory worlds, it inevitably encodes, invisibilises and reinscribes normative listenings, along with a range of more arbitrary artifacts of the datasets, statistical models and computational systems which are at once its lifeblood and fundamentally opaque [McQuillan]. This combination means that machine listening is simultaneously an alibi or front for the proliferation and normalisation of specific auditory practices *as* machinic, and, conversely, often irreducible to human apprehension, which is to say the worst of both worlds.
[^kathy_audio_1]: Interview with [Kathy Reid](https://blog.kathyreid.id.au) on August 11, 2020.
[^andre_audio_1]: Interview with [André Dao](https://andredao.com/) on September 4, 2020.
[^Goldenfein]: Jake Goldenfein, *Monitoring Laws: Profiling and Identity in the World State* (Cambridge University Press, 2019), https://doi.org/10.1017/9781108637657.
[^halcyon_audio_1]: Interview with [Halcyon Lawrence](http://www.halcyonlawrence.com/) on August 31, 2020.