

---
title: "James Parker"
status: "Auto-transcribed by reduct.video"
---
Jasmine Guffond (00:00:21) - ... , this is Computational Listening with Jasmine Guffond. This first edition includes an interview with academic, writer and curator James Parker, about his research into Machine Listening, followed by a selection of electronic music. A track listing is available. Enjoy. Hi
Jasmine Guffond (00:00:56) - James, thanks for your time this evening.
James Parker (00:00:58) - It's a pleasure.
Jasmine Guffond (00:01:00) - So you're a senior lecturer at Melbourne Law School, which is part of Melbourne University. And you're also an associate curator, uh, with Liquid Architecture, which is an Australian organization that curates artists working primarily with sound.
James Parker (00:01:17) - Yeah, so I mean, you know, there's not that many people in law working with sound and Listening. Um, so, you know, I've carved out a little niche for myself. I wrote this book about the trial of Simon Bikindi, um, who was a Rwandan singer, musician, popular figure who was accused of inciting genocide with his songs. So what I was trying to do in that book really was just to think about, you know, what would that really mean? What would you have to do in order to think seriously about the possibility that someone had incited genocide with his songs? And what I say basically in the book is that the International Criminal Tribunal for Rwanda doesn't really take seriously the kinds of questions about the nature of sound, about the nature of music, or Rwandan music, or Rwandan music on the radio in a specific Listening context, that it would need to in order to, you know, seriously grapple with that question. So, you know, I subsequently moved on to think about law in relation to Sonic weapons and the weaponization of sound. And then subsequently, in my work with Liquid Architecture, I've been working on eavesdropping. So I've kind of been thinking about the law and politics of Listening and being listened to, you know, particularly in the kind of surveillance context. And that's what led me to my most recent work on Machine Listening, which is sort of only just getting underway now.
Jasmine Guffond (00:02:37) - Okay. Could you explain what you mean when you use the term Machine Listening and perhaps give one or two concrete examples? So for people who haven't heard of the term before
James Parker (00:02:48) - Yeah, sure thing. I mean, I think in some ways everybody's kind of familiar with, you know, the idea of Machine Listening, anybody who's engaged with a voice assistant or a smart speaker, you know, or even a call center where, you know, it asks you to say your name or respond yes or no, or say a number or something. Right. So, you know, Machine Listening on one level is just engagement with a Machine that seems to understand you, right. And they're getting increasingly good at understanding you. So, you know, that's one sort of simple way of thinking about Machine Listening. It's engaging with machines that seem to listen. But actually it's a term that's being used in the scientific literature as well. So there's a sort of an increasingly large number of researchers working on audio and AI, basically. And they sometimes refer to what they're doing as Machine Listening. They also use a whole number of, uh, synonyms. So people might be familiar with automatic speech recognition, right, which is the kind of very sort of speech oriented version of this, which is one of the oldest disciplines of Machine Listening actually. Um, but there's also sub fields like audio event detection, which is about training, um, algorithms, neural networks, machine learning systems, to be able to understand Sonic events like the shattering of glass or a crying baby or a gunshot, or, um, what have you, and audio —
Jasmine Guffond (00:04:24) - And would that be used to try and say locate you? Like what room that you're in or if you're in a shopping mall or at home, or,
James Parker (00:04:31) - Yeah. So, um, from my understanding, uh, what you're describing is more like audio scene analysis. I mean, I don't want to get too technical, because I think in some ways they overlap, and, you know, in order to understand a Sonic space like a mall or something, you need to be able to detect speech, for example. And so, you know, it's not like there's no speech analysis going on in audio scene analysis, and so on and so on. But yeah, audio event detection is about identifying very specific sounds. So one company, for example, Audio Analytic, which is a UK based company,
James Parker (00:05:04) - Sells various security systems that can determine, for example, you know, the sound of a window shattering, and the idea is that then they can trigger an alarm or, um, trigger some kind of automated system that would alert police or, um, alert the user via, you know, their smartphone or what have you. You know, they have a product that, um, would be in cars, right? So that it would be able to hear the sound of a bell or an engine or a driver yawning. So audio event detection is about identifying very specific sounds, whereas audio scene analysis is about recognizing an ambient environment and then sort of determining, automating, what to do with some kind of an ambient environment. So audio scene analysis is the kind of thing that you have embedded in your smart headphones. If you've got smart headphones, maybe you're sensible not to, uh,
James Parker (00:05:53) - But they're not just devices anymore, um, you know. So headphones were once for Listening, and now headphones listen, in order to supposedly improve the Listening experience for you. So they can tell whether you're listening in a noisy environment or a quiet environment and adjust the noise canceling accordingly, or they can tell whether somebody is trying to speak to you, or this is the claim anyway, and then they can turn on ambient microphones embedded in the headphones that then can sort of transmit the speech of the person speaking to you via the microphone. Now, like, how useful that is, how much it improves the Listening experience, whether it's just, um, another way of making a product smart in order to extract more data and more information, that's a slightly different question. But so, to return to your original question, you know, what's Machine Listening? It's a field of science and technology that people are increasingly familiar with via things like smart speakers, voice assistants, but it's much, much more than that. Smart headphones, gunshot detection, voice prints, emotion detection. I don't know if you read about this new device launched by Amazon, Amazon Halo, that's kind of like a Fitbit, but that listens to you as well, and supposedly tracks your emotions throughout the day in order to give you feedback on whether you were particularly happy or sad or depressed, or what have you, and then
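The audio event detection James describes, flagging discrete sounds like shattering glass or a gunshot against an ambient background, can be sketched in a few lines. This is a deliberately minimal toy (an energy-threshold detector run on synthetic audio), not a claim about how commercial systems such as Audio Analytic's actually work; real products use trained neural networks, and the sample rate and thresholds here are assumptions.

```python
import numpy as np

SR = 16000  # sample rate in Hz (assumed for this toy example)

def detect_events(signal, frame_len=512, threshold=10.0):
    """Flag frames whose energy far exceeds the median (background) frame energy."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    background = np.median(energy) + 1e-12  # robust estimate of ambient level
    return np.flatnonzero(energy > threshold * background)

# One second of quiet ambience with a single sharp transient in the middle.
rng = np.random.default_rng(0)
audio = 0.01 * rng.standard_normal(SR)
audio[8000:8256] += np.sin(np.linspace(0, 80 * np.pi, 256))  # the "event"

print(detect_events(audio))  # the frames covering samples 8000-8256
```

A real detector would replace the energy threshold with a classifier that also labels *which* event occurred (glass break, cry, gunshot), which is where the training data James mentions comes in.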
Jasmine Guffond (00:07:23) - Via the vocal quality from listening to your voice. Yes.
James Parker (00:07:28) - Supposedly, and there are, I mean, there's all sorts of other, uh, similar applications. Aggression detection for security systems, where the idea is that they'd be able to identify in advance, you know, an argument as it's brewing, from the sounds of voices in relation to a sort of normal, um, ambient environment. Also age detection, gender detection, supposedly ethnicity detection, depression detection, Alzheimer's diagnosis. One American company, um, does vocal risk assessment. You know, what they think risk is, is a fascinating question, but, you know, they claim to be able to, by means of a two- to ten-minute-long phone call, determine whether or not you're, um, a risk to the company that's buying their product in some way, in military contexts. So as a screening mechanism for taking on contractors or, you know, employing, you know, soldiers or, um, security agents, or what have you. But also, you know, they're selling their products to banking, supposedly to detect fraud, all from the sound of people's voices.
James Parker (00:08:32) - You know, so there's many, many, many uses, or putative uses, of Machine Listening techniques, and they're growing by the day. And I don't think that the friendly face, um, not that they have faces, of things like Siri and Alexa, they take up a lot of the attention, whereas I think actually this kind of field of Machine Listening is growing and making itself ubiquitous more than people tend to realize. And so for that reason, I think we should think of Machine Listening not just as a field of science and technology, but also as a kind of a field of power, a field in which politics is happening. You know, that's connected to other systems, exploitative and oppressive systems. You know, it's obviously connected in some way to capitalism, to data colonialism, you know, even white supremacy and patriarchy and things, right? So this is an emerging system of power, not unlike computer vision, not unlike facial recognition. Um, when people think about the politics of, you know, search and other algorithms, they might begin, I hope, to think about the politics of something like Machine Listening too, because I think it's a bigger deal than people are giving it credit for at the moment. And it's going to keep growing.
Jasmine Guffond (00:09:47) - You are certainly painting a dystopian image, I think, where each and every one of our devices is listening to us and then using that data to predict how we might act in the future, and in a way sort of assuming that
Jasmine Guffond (00:10:03) - We could be acting negatively, like aggressively. I mean, I imagine there must be so much room for error. Like, you could just be annoyed because you stubbed your toe walking down the street, and is that going to then be misconstrued?
James Parker (00:10:16) - Well, that's one of the interesting things about this. So any kind of AI system is political both to the extent that it works, and there's a kind of omniscient surveillance system, you know, that knows too much and knows too well. And, just as problematically, to the extent that it doesn't work, to the extent that it embeds and, uh, normalizes and makes seem neutral or objective existing biases, like racial biases, gender biases, and many, many others, but also entrenches and produces kind of new and more arbitrary biases and errors of its own. Right? So the problem comes both ways. It's a challenge politically to the extent that these systems work, we might want to think about whether we want the systems to work in the way that they claim to. And it's a challenge to the extent that they don't work, and sometimes what we're being sold is simply snake oil. You know, that the promise of AI as a sort of a marketing term outstrips the actuality. And so I think that there's many companies that are claiming to be able to do things that they simply can't do, and with very little kind of scrutiny and regulation.
Jasmine Guffond (00:11:37) - Yeah. I mean, it seems like one of the issues with AI is its inability to be able to tell context. So AI might hear the sound of breaking glass, but that could just be you knocking over a glass at home. It's not someone breaking into your house, for example.
James Parker (00:11:53) - Yeah. But part of the problem with that argument is that you can just imagine, like, the cogs turning in the AI technician's mind, right. Because the immediate answer is, well, what we need is we need more context
Jasmine Guffond (00:12:06) - ... or more data to define context.
James Parker (00:12:10) - Right? Exactly. And it's the same when people point out biases. Right? So for example, there's a great scholar in the U.S. called Halcyon Lawrence, who's written about the racial biases of Siri and other voice assistants and the way in which they are unable to, or sort of refuse to, hear accented speech, and the way that taps into a long history of oppression and imperialism via and in relation to language. And there are all sorts of questions about what you do about that problem. So there are obviously no financial incentives for Amazon or Google to go after minority markets, right? Minority speech markets, you know, you might say, like, heavily accented speech from whatever communities. But you know, if they were doing AI for good, maybe they would. Um, and so, you know, you have this logic where you go, well, let's make it so that these assistants can understand all forms of speech, as being inclusive.
James Parker (00:13:06) - So, you know, a politics of inclusion that nonetheless wants to understand everything better and perfect itself constantly, you know, it tends towards there being no limit to the kinds of data, or the people and the places and the contexts, that warrant data extraction. So there's a bit of a sort of a trap in, you know, pointing sometimes to the errors and the problems with the systems, because the immediate response is, well, we can correct that if only you'll allow us to have more data. But what's never questioned is the frame whereby we need the system in the first place.
Jasmine Guffond (00:13:46) - Well, speaking of frames and no limits, would you be able to give an example of Machine Listening being used in health applications? So particularly in the current context of the COVID-19 pandemic,
James Parker (00:14:00) - I can have a go, but I guess I want to preface what I'm going to say by pointing out that I don't really know to what extent these things are actually happening already. So one of the things with the pandemic context is that, you know, things are happening extremely fast. I mean, it's clear that big tech is using the pandemic as a way to sort of expand its tendrils and tentacles into more and more spaces. So one of the moves that you're seeing big tech do is move into health. You know, all data becomes health data after COVID, you know, um, in the same way that big tech's moving into education, uh, in homeschooling and home universities. There's an opportunity, a market opportunity here. So people are trying to move into the health space. That's really, really clear. And one of the ways in which they're trying to do it in the Machine Listening context has been COVID voice diagnostics. So this is the idea that we could tell, if we just had enough data and trained our algorithms
James Parker (00:15:05) - Correctly, that you have COVID based simply on a voice sample. So it might be your cough, or it might be, you know, a sample of speech. And, you know, this responds to the intuition that we have that, you know, you can sort of tell when somebody's got a cold, right. You know, they don't seem to be speaking normally, they sound stuffed up. The idea is that, well, even though I couldn't tell you whether you specifically have COVID, you know, you'd just sound a bit funky, I wouldn't know, maybe the machine learning system can tell the truth of your voice, you know, beyond the limits of human Listening. And so there's all sorts of companies and universities, um, and entities that have tried to move into this space. And the reason I was a bit cagey at the beginning is that it's not at all clear to me which, if any, of these things work, to what extent they work, whether they're actually being deployed already. But I think we're going to see it happen pretty soon. So to give some examples, there's an organization called voca.ai, which is ordinarily, um, an Israeli company that provides voice assistants in sort of call center contexts. So they're sort of AI driven voice assistants. They partnered up with some researchers at Carnegie Mellon, I think by March, to gather data to train a system in order to detect
Jasmine Guffond (00:16:31) - COVID. So they're jumping right in there
James Parker (00:16:34) - Right in, they were sort of ready to go. And so they were getting hundreds and hundreds and thousands of samples really quickly. They had all this language, like volunteer your voice, you know, to help, uh, fight against COVID-19
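The diagnostic systems discussed here are typically framed as supervised classification: extract acoustic features from each labeled voice sample, then fit a model that separates positives from negatives. The sketch below is purely illustrative, using synthetic "voices" (a sine tone, with added broadband roughness standing in for a pathological voice) and a tiny hand-rolled logistic regression; it makes no claim about the actual features, data, or models used by voca.ai, Carnegie Mellon, Cambridge, MIT, or Sonify.

```python
import numpy as np

rng = np.random.default_rng(1)

def spectral_features(signal):
    """Crude features: log-energy in four bands of the magnitude spectrum."""
    spec = np.abs(np.fft.rfft(signal))
    return np.log([band.sum() + 1e-9 for band in np.array_split(spec, 4)])

def make_sample(positive, sr=8000):
    """Synthetic stand-in for a one-second voice sample."""
    t = np.linspace(0, 1, sr, endpoint=False)
    voice = np.sin(2 * np.pi * 120 * t)  # rough vocal fundamental
    if positive:
        voice = voice + 0.5 * rng.standard_normal(sr)  # broadband "roughness"
    return voice

# 40 labeled samples, alternating negative / positive.
X = np.array([spectral_features(make_sample(i % 2 == 1)) for i in range(40)])
y = np.array([i % 2 for i in range(40)])

# Tiny logistic regression fit by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-np.clip(X @ w + b, -30, 30)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

preds = (np.clip(X @ w + b, -30, 30) > 0).astype(int)
print((preds == y).mean())  # training accuracy on this toy, cleanly separable data
```

The point of the toy is James's caveat in miniature: the classifier "works" perfectly here only because the synthetic data was constructed to be separable, which says nothing about whether real voices carry a reliable COVID biomarker.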
Jasmine Guffond (00:16:48) - Actually, I'll probably play that Sonify ad so that listeners can hear an example. Um, so the company's asking for people to voluntarily, um, give over voice recordings for their data set to train their algorithm. Yeah.
James Parker (00:17:03) - And this idea of, like, voice donation, I think, is really interesting. And, you know, with voca.ai, I don't want to sort of say anything that will get me into trouble, but, like, I think I can say at least that it's not clear that they won't use your data to train their call center agents, you know, uh, beyond, um, whether or not anything comes out of the COVID voice diagnostic attempt. So that's one, that's a private organization. Cambridge University has got a team working on something similar. MIT has got a team, and that one's a bit different. They were trying to train their algorithm on newsreaders that they gathered from YouTube. So they had recordings of newsreaders speaking once they'd had a COVID diagnosis, and then they went back and trawled through their history and found recordings of them prior to getting COVID.
James Parker (00:17:55) - And then they sort of, you know, did a compare and contrast, and, you know, they produced a study that says, oh, it's very promising, and so on and so on. And then there's companies like Sonify, which are claiming they're already able to do this. And the Sonify ad, you know, um, says that they're seeking FDA approval right now. Yeah. Uh, now, I find that really interesting. I just want to know what's going on, uh, what paperwork and what conversations are being had. It's not that I don't think it's conceivable that COVID voice diagnostics could work, but I'm at least a little bit skeptical. And I'm also really concerned about, you know, what systems such a thing would be embedded into, and who's scrutinizing it, and what measures there are for false positives. And then, in my mind, you know, you said before that, you know, it's dystopian.
James Parker (00:18:54) - I think I've got a sort of a dystopian tendency. And I just imagine a world in which your health is always being monitored by means of your voice. And it's not a fantasy. Google has it in patents, or maybe it's Amazon, actually. You know, it's Amazon, because they have a patent that's a few years old now, that recognizes you coughing when you're speaking to Alexa, and then it begins to offer you, would you like me to, you know, buy some cough medicine and get it shipped to your house? And then you can imagine a world, right? Because, you know, one of the promises of voice diagnostics like this is that they can tell you have the thing before the symptom is kind of fully realized. So before you're even coughing, you know, it might be able to hear kind of the pre-cough. And, you know, you could imagine a world in which, you know, the cough medicine arrives before the cough, uh, you know. That's the end point of the logic. So, you know, what would it mean to live in a world in which it's not just that the content of one's speech is being monitored, but the manner of one's speech, the manner of one's health? You know, it's like, you know, the embedding of
James Parker (00:20:03) - A stethoscope into every surface and environment. I don't know if you saw recently that Amazon's just launched this thing called Amazon Residential, or maybe it's Alexa or Echo Residential or something, whereby they're sort of working with landlords to embed, um, Amazon Echo and Alexa devices sort of throughout entire apartment complexes
Jasmine Guffond (00:20:26) - And it connects to the police, right? Is that, do the police get it?
James Parker (00:20:30) - Not sure, to be honest. Yeah. You know, um, uh, that one I just read, like, a couple of days ago. But, you know, sort of on one level the details, I don't want to say that the details don't matter, of course the details matter a lot, but Google Nest, you know?
Jasmine Guffond (00:20:47) - Yeah, Google does. And Amazon has one that does as well, like, um, is it called Neighbors? I forget now, but they definitely,
James Parker (00:20:55) - You know, that's the horizon, at the very least. The sort of system of power and control whereby, like, walls monitor your health continuously, and they have a continually expanding ability, which you don't really have control over, because you never do, uh, over what kinds of things these devices or these ambient systems know about you or are capable of interpreting. You know, I think that concerns me.
Jasmine Guffond (00:21:33) - Yeah, totally. Um, I'm just gonna quickly jump back to FDA, because the first time I saw the Sonify ad, I actually had to ask someone, what's the FDA? So it's the federal drug authority. Is that right? Yes. Okay. Yeah.
James Parker (00:21:47) - And I have no idea what ability an organization like that has to think about political questions in relation to sound and AI systems. So who knows what's going on there.
Sonify Ad (00:22:00) - COVID-19 is potentially the single greatest problem facing humanity today. The inability to rapidly, safely and accurately test for COVID-19 is one of the most challenging factors that has led to its infection rate. Sonify has developed a voice detection algorithm using machine learning and AI to identify very specific health characteristics in the human body. Our team of scientists and technologists have found a way to tell if a person has COVID-19 by simply analyzing their voice on a mobile device. Since we have identified the biomarkers of COVID-19 in the voice, we are now working with the FDA to seek clearance for our technology, and need your help. We need to provide the Sonify machine learning algorithm with more validated voices of people positive with COVID-19. If you or someone you know has recently tested positive for COVID-19 within the last two weeks, sharing 30 seconds of your voice at ... dot com slash R can help us get this app to the world. By sharing your voice, you can save lives.
Jasmine Guffond (00:23:13) - So one of the concerns that emerges from your research that I'm particularly interested in is that Machine Listening bears, I'm quoting from you now, "little relationship to the biological processes of human audition." So for us to really comprehend how machines listen, we need to move away from thinking about Machine sensing in terms of anthropocentric modes of perception. Like, they don't listen in the way we as humans listen. And so you've come up with a couple of terms to explain Machine Listening and how it's different from human Listening, such as Listening effects and operational Listening. If you could explain what you mean by Listening effects and operational Listening, and also, do you think it's misleading to continue to use the term Listening in relation to machines?
James Parker (00:24:02) - Hmm. Oh, there's so much in that question. It's a great question. So where should I begin? Just to unpack the question a little bit, you said one of my concerns is that Machine Listening bears little relationship to, you know, the biological processes of human audition. I just want to clarify
Jasmine Guffond (00:24:18) - That emerged.
James Parker (00:24:19) - Yeah. I just want to clarify that it's not that I don't care. It's not like they should, it would be better. I'm not sort of, you know, a humanist in that sense. Um, but I think we need to understand that that's what's happening. Um, you know, actually the early kind of experiments with automatic speech recognition were really trying to ape, uh, and replicate human methods of Listening. And there's a great scholar called Xiaochang Li, in the U.S., at Stanford currently, who's done some work on the history of automatic speech recognition. And one of the things she talks about is how it was the kind of abandoning of trying to model human Listening, in favor of statistical modeling, that kind of provided the breakthrough, at IBM, I think in the seventies, although I could be wrong about that. I can't quite remember. That's the breakthrough moment, when we abandoned trying to listen like a human would, you know. So, okay, to be a bit flippant, you know, what's the problem with framing all of this in terms of Machine Listening? Well, um, it's not really machines doing it on one level, and it's not really Listening either. So why talk about it in terms of Listening? As a matter of politics, or sort of advocacy, I think there's something helpful about thinking in terms of Machine Listening.
James Parker (00:25:36) - So I think people understand what you're talking about, and if you swap in computer audition, you know, which is another term that's used in the scientific field, people are probably going to translate it into something like Machine Listening in their head anyway. I like the fact that Machine Listening kind of sounds a bit like machine learning, and machine learning has already got kind of some political traction, and so on. So it's an analog to machine learning. But at the same time as you identify it as a political problem, I think we immediately need to move on to say, okay, it's not working in exactly the same way as, or anything like, human Listening. And that has important technical and political consequences. We need to follow those and see where they lead.
Jasmine Guffond (00:26:24) - Okay. I was just going to say, the reason I ask is because I agree with you. I think it's really important to understand the nature of Machine Listening and that it's, um, it's essentially data extraction. Because there's one great example, for me, when Zuckerberg spoke before the US Senate about Cambridge Analytica. Yeah. And, um, he was asked by, um, I think it was Senator Gary Peters, do you use microphones to obtain personal information about people? And Zuckerberg could very easily just say, no, we don't. And he also said, that's this, like, conspiracy theory that gets passed around, that we listen to what's going on with your microphones. And so he was kind of able to sidestep the fact that they do gather so much information about us. Um, and they don't need a microphone necessarily to do that, though of course they do also have patents to use microphones. And that's why I think it's really important that policy makers and politicians at least understand the nature of Machine Listening, because otherwise they're not able to ask the right questions.
James Parker (00:27:34) - Yeah, no, I agree. Um, I don't know if I understand the nature of Machine Listening. I mean, you know, it's like with computer science, it's sort of a grab bag of techniques, you know. On one level, I'm trying to name something as a political or, say, socio-technical object, uh, sorry, that's a bit jargony, but a social and a technical object, rather than just simply a technical one. I want to say that Machine Listening is a system of power, rather than just a technique of Listening. It's a whole range of techniques that intervene in the world by methods that are analogous to Listening, in the sense that they use auditory data and they sort of comprehend or analyze it in some way, and are experienced as Listening. Um, so that's one of the things that I was trying to get at when I talk about Listening effects, right?
James Parker (00:28:31) - So it is really important to focus on what precisely the Machine is doing. But it's also really important to understand that it matters that we experience ourselves as being listened to. It matters that one moves through the world and feels that, you know, the walls have ears, or doesn't feel that they do, you know. And one way of responding to the question is just to say, well, we could also ask what it means to be in the world and feel that, you know, you're being listened to. That's important too. When I talk about operational Listening, I mean, and I should say that I borrowed that in the first instance from a guy called Mark ... , who's an amazing, um, American scholar. He's based in Melbourne right now. Um, he's just written a book called Automated Media, and he's borrowing it in turn, or expanding it, from Harun Farocki and Trevor Paglen's ideas of the operational image or operational vision. What Farocki and Paglen are getting at is the idea that images are increasingly being created for Machine eyes, or only visible at all to Machine eyes, which are not eyes. So there's a whole world of images being created which totally bypass
James Parker (00:29:54) - Human perception, that are created by machines for machines. And so Mark Andrejevic says, well, that idea of operationalization, where the image production or the sound production is just there to do something within a system. It performs an operation. It does something, right? It's not for aesthetic perception or consideration. There's no understanding. An image is produced, it performs, uh, an effect in a machinic system that apprehends it on some level, and therefore produces some kind of result. Well, we can draw that out in relation to Listening too, and say that sound is, uh, increasingly operationalized, right? That it's not there for understanding. Listening is not, like, a relevant consideration. So if we say operational Listening, we're meaning a form of Listening that is trying to do something rather than understand something. So that when Amazon Echo listens to you, it's not Listening.
James Parker (00:30:55) - It's just trying to produce some kind of result, which is often going to be to try and sell you something, you know. So it's a kind of a Listening without understanding, a purely correlative Listening, as opposed to a kind of comprehending Listening. It's a Listening that is just an operation within a machinic system. And so that's a way of responding to the same question that says, okay, well, actually we do need to think about the sense in which this is not really Listening in the way that we understand it, as kind of comprehension or knowledge. It's bypassing that. It's something different. There are sounds being apprehended in a way that's quite particular and exclusive to machines, and increasingly sounds being made by machines that are only for machines as well. I don't know if you've heard about adversarial audio, and adversarial
James Parker (00:31:50) - Audio is basically audio that is produced by, typically, some kind of algorithm, um, I think exclusively by an algorithm, that's meant to be apprehensible only to some other Machine Listening device. So basically, the reason it's adversarial is that you can play audio that just kind of sounds like noise to a human, or maybe it sounds like part of a piece of music, or you can sort of overlay it or embed it in, you know, your next album, Jasmine, uh, but, um, you can trick the Machine Listening system in, you know, a smart speaker or some other kind of system to understand it as, you know, a trigger. So there are examples, if you Google adversarial audio, you'll see these, you know, you can click play and it'll show you what the Machine understands as having been said.
James Parker (00:32:52) - So for example, buy me five packets of whatever and deliver it to my house immediately, or email such and such, or whatever it might be. These are, in other words, sounds whose content is not available to human ears, but which are audible to machines. And so that can be used to intervene in the soundscape in a way that bypasses human comprehension. So: sounds made by machines, for machines, right? And obviously there are espionage and hacking capabilities there.
Jasmine Guffond (00:33:34) - It makes me think of ultrasonic beacons, where your TV or radio, or say YouTube content, will emit an ultrasonic frequency above the human hearing range. And if you have an app on your device that can receive it, it'll receive it and then send that data back to the advertising company, usually what you're watching and at what time. And it's also,
James Parker (00:33:58) - This is not ultrasonic, but you're quite right, it's doing something similar, isn't it?
Jasmine Guffond (00:34:06) - So it's a means of communication. Yeah. And it's also used, apparently, to follow you around supermarkets and sports stadiums. Right. But I think you have to go soon, so I'll jump to the last question. Over the course of these three radio shows, I'm asking different kinds of practitioners about strategies for addressing ubiquitous and pervasive modes of contemporary surveillance. So how would you propose to challenge invasive Machine Listening, or what would be an approach?
James Parker (00:34:41) - I've spent the last few days trying to think about that exact question in relation to a project that I'm working on at the moment with liquid architecture, that's going to be launched at unsound in October. There's a part of it called Lessons in How Not to Be Heard. And so I was trying to write that over the last couple of days, but I'm going to bracket that question, because what I ended up saying in that section is that there's groundwork to be done first. We need to identify Machine Listening as an object of political contestation and of artistic, aesthetic and activist
James Parker (00:35:13) - inquiry. And, you know, to some extent it is, but comparing it to something like facial recognition or computer vision, we're way behind. So one of the purposes of the unsound project is to begin to grow a community of people, activists, academics, artists, who produce Machine Listening as an object of concern, and to produce a network around it. And so we're calling the project with unsound Machine Listening, a curriculum, because it's early days for the research. So for us it's kind of the beginnings; we're studying it now, live, in real time. We don't have the answers, and we want to work with people like you and others to begin to produce answers to those questions.
James Parker (00:36:03) - So it's a process of collective study and sharing. The curriculum is going to be open source and available to everybody, and it'll keep growing and expanding, and keep coming together around various different events after unsound. And so, yeah, I want to answer the question by saying: I don't know the answers yet, but we want to get together with people and just begin to pose that question, and orient Machine Listening as an object of political concern. And I hope that in a few years' time we'll have got a little way down that path and be able to give you a more satisfactory answer. So: constituting a community, and a set of questions, to begin with.
Jasmine Guffond (00:36:51) - And if people want to take part in your unsound Machine Listening curriculum, what would they have to do?
James Parker (00:36:58) - There are all sorts of different phases of the launch, but if you go onto the unsound website to begin with, unsound.pl, you'll find some information about it there. And likewise if you go to liquidarchitecture.org.au, because the project is really a co-production of unsound and liquid architecture. So you can find information about it on either website, and eventually we'll have a separate website up at machinelistening.exposed that, if you're Listening after October, you can go and check out live.
Jasmine Guffond (00:37:30) - Yeah, thank you. This is my first show, so perhaps I can put the link to the Machine Listening curriculum on unsound. I'll ask Noods Radio if that's possible.
James Parker (00:37:41) - There's also a Facebook page, although I don't know if I'm allowed to promote Facebook in the context of what we've just been discussing.
Jasmine Guffond (00:59:41) - Thanks for tuning in to computational Listening. This was the first of three radio shows as part of my residency at Noods Radio. The next edition will feature an interview with sound artist Helen Hesse, and will be aired at the same time on the 19th of October.