"Machine listening" is one common term for a fast-growing interdisciplinary field of science and engineering which uses audio signal processing and machine learning to "make sense" of sound and speech. [^Cella, Serizel, Ellis] Machine listening is what enables you to be "understood" by Siri and Alexa, to Shazam a song, and to interact with many audio-assistive technologies if you are blind or vision impaired [Alper]. As early as the 90s, the term was already being used in computer music to describe the analytic dimension of ['interactive music systems'](https://wp.nyu.edu/robert_rowe/text/interactive-music-systems-1993/chapter5/), whose behavior changes in response to live musical input.[^Rowe, Maier] It was also, of course, a cornerstone of the mass surveillance programs revealed by Edward Snowden in 2013: SPIRITFIRE's "speech-to-text keyword search and paired dialogue transcription"; EViTAP's "automated news monitoring"; VoiceRT's "ingestion", according to one NSA slide, of Iraqi voice data into voiceprints. Domestically, machine listening technologies underpin the vast databases of vocal biometrics now held by many [prison providers](https://theintercept.com/2019/01/30/prison-voice-prints-databases-securus/ "Prisons Across the U.S. Are Quietly Building Databases of Incarcerated People’s Voice Prints") and, for instance, the [Australian Tax Office](https://www.computerworld.com/article/3474235/the-ato-now-holds-the-voiceprints-of-one-in-seven-australians.html "The ATO now holds the voiceprints of one in seven Australians"). And they are quickly being integrated into infrastructures of development, security and policing.
![Automatic speech recognition](audio:static/audio/kathy-reid-intro-to-ASR.mp3),[^kathy_audio_1] transcription and translation - targeted keyword detection [[i](https://theintercept.com/2015/05/05/nsa-speech-recognition-snowden-searchable-text/ "How the NSA Converts Spoken Words Into Searchable Text")] - vocal biometrics and audio fingerprinting [[i](https://www.nice.com/engage/real-time-technology/voice-biometrics/ "NICE leverages voice biometrics for safer and more secure customer authentication"), [ii](https://www.acrcloud.com/audio-fingerprinting/ "What Is Audio Fingerprinting?")] - speaker identification, differentiation, enumeration and location [[i](https://theintercept.com/2018/01/19/voice-recognition-technology-nsa/ "Finding Your Voice"), [ii](https://patents.google.com/patent/US20100235169A1/en "Google Speech differentiation Patent")] - personality and emotion recognition [[i](https://www.youtube.com/watch?v=86I3-VYIvAM "callAIser in action: Call Center agent gets desperate over angry customer")] - accent identification [[i](https://www.theverge.com/2017/3/17/14956532/germany-refugee-voice-analysis-dialect-speech-software "Germany to use voice analysis software to help determine where refugees come from")] - sound recognition - audio object recognition - audio scene analysis - intelligent audio analysis[^intelligent_audio_analysis] - audio event analysis - audio context awareness - music mood analysis - music identification - music playlist generation - audio synthesis - speech synthesis - musical synthesis - adversarial music [[i](https://arxiv.org/abs/1911.00126 "Real World Audio Adversary Against Wake-word Detection System")] - audio brand recognition - aggression detection [[i](https://www.audeering.com/what-we-do/automotive/ "Cars take care of their passengers")] - depression detection - laughter detection - stress detection - distress detection - intoxication detection [[i](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3872081/ "Intoxicated Speech Detection: A Fusion Framework with Speaker-Normalized Hierarchical Functionals and GMM Supervectors")] - scream detection - lie detection - hoax detection [[i](https://amp.abc.net.au/article/12568084 "University of Southern Queensland gets $300k for hoax emergency call detection technology")] - gunshot detection - autism diagnosis - Parkinson's diagnosis [[i](http://www.canaryspeech.com/ "Using voice to identify human conditions sooner.")] - COVID diagnosis [[i](https://app.surveylex.com/surveys/5384d6d0-6499-11ea-bc3a-b32c3ca92036 "We are launching an initiative to collect your voices with a goal to be able to triage, screen and monitor COVID-19 virus.")] - machine fault diagnosis - psychosis diagnosis [[i](https://www.sciencedaily.com/releases/2019/06/190613104552.htm "The whisper of schizophrenia: Machine learning finds 'sound' words predict psychosis")] - bird sound identification [[i](https://voicebot.ai/2020/06/26/voice-match-is-for-the-birds-new-google-competition-seeks-avian-audio-ai/ "Voice Match is for the Birds")] - gender identification - ethnicity detection - age determination - voice likeability determination - risk assessment [[i](https://www.clearspeed.com/ "Clearspeed: Using the Power of Voice for Good")]...
Machine listening isn't just machinic.
Materially, it entails enormous exploitation of both human and planetary resources: to build, power and maintain the vast infrastructures on which it depends, along with all the microphones and algorithms which are its [most visible manifestations](https://anatomyof.ai/ "Anatomy of an AI System").[^Crawford and Joler] Even these are not so visible however. One of the many political challenges machine listening presents is its tendency to disappear at point of use, even as it indelibly marks the bodies of workers and permanently scars environments and the atmosphere [Joler interview].
Scientifically, machine listening demands enormous volumes of data: exhorted, extracted and appropriated from auditory environments and cultures which, though numerous already, will never be diverse enough. This is why responding to machinic bias with a politics of inclusion is necessarily a trap [HL excerpt]. It means committing to the very system that is oppressing or occluding you: a "techno-politics of perfection." [^Goldenfein]
Because machine listening is trained on (more-than) human auditory worlds, it inevitably encodes, invisibilises and reinscribes normative listenings, along with a range of more arbitrary artifacts of the datasets, statistical models and computational systems which are at once its lifeblood and fundamentally opaque [McQuillan]. This combination means that machine listening is simultaneously an alibi or front for the proliferation and normalisation of specific auditory practices *as* machinic, and, conversely, often irreducible to human apprehension; which is to say the worst of both worlds.
Moreover, because machine listening is so deeply bound up with logics of automation and pre-emption, it is also recursive. It feeds its listenings back into the world - gendered and gendering [YS], colonial and colonizing, ![raced and racializing](audio:static/audio/halcyon-siri-imperialism.mp3),[^halcyon_audio_1] classed and productive of class relations - as Siri's answer or failure to answer; by alerting the police, denying your claim for asylum, or continuing to play Autechre - and this incites an auditory response to which it listens in turn. The soundscape is increasingly cybernetic. Confronting machine listening means recognising that common-sense distinctions between human and machine simply fail to hold. We are all machine listeners now. This must become the starting point of a contemporary politics of listening.
But machine listening isn't exactly listening either.
Technically, the methods of machine listening are diverse, but they bear little relationship to the biological processes of human audition or psychocultural processes of meaning making. Many are fundamentally [imagistic](https://medium.com/@krishna_84429/audio-classification-using-transfer-learning-approach-912e6f7397bb "Audio classification using transfer learning approach"), in the sense that they work by first transforming sound into spectrograms. Many work by combining auditory with other forms of data and sensory inputs: machines that [listen by looking](https://www.wired.com/story/lamphone-light-bulb-vibration-spying/ "Spies Can Eavesdrop by Watching a Light Bulb's Vibrations"), or by cross-referencing audio with geolocation data. In the field of Automatic Speech Recognition, for instance, it was only when researchers at IBM moved away from attempts to simulate human listening towards statistical data processing in the 1970s that the field began making decisive steps forward [^airplanes]. Speech recognition needed to untether itself from "human sensory-motor phenomenon" in order to start recognising speech. Airplanes don't flap their wings. [^airplanes]
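That imagistic move can be made concrete. A minimal sketch, using only NumPy, of how a sound becomes a spectrogram, the time-frequency "image" that vision-style classifiers then operate on (the frame size, hop length and 440 Hz test tone here are illustrative choices, not drawn from any particular system):

```python
import numpy as np

def spectrogram(signal, frame_size=512, hop=256):
    """Turn a 1-D audio signal into a 2-D magnitude spectrogram:
    rows are time frames, columns are frequency bins."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    # Slice the signal into overlapping windowed frames
    frames = np.stack([
        signal[i * hop : i * hop + frame_size] * window
        for i in range(n_frames)
    ])
    # Real FFT of each frame gives one row of the "image"
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a 440 Hz sine tone, sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (time frames, frequency bins): sound rendered as an image
```

From here, the audio is no longer treated as sound at all: the 2-D array is handed to the same convolutional architectures used for photographs, which is what "listening by looking" means at the level of the code.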
Even if machine listening did work by analogising human audition, the question of cognition would still remain. Insofar as "listening" implies a subjectivity, machines do not (yet) listen. But this kind of anthropocentrism simply begs the question. What is at stake with machine listening is precisely a new auditory regime: an analog of Paul Virilio's "sightless vision", [^Virilio] the possibility of a listening without hearing or comprehension, a purely correlative listening, with the human subject decentered as privileged auditor.