Browse source

Merge branch 'master' of git.metadada.xyz:machinelistening/curriculum

master
Sean Dockray, 3 years ago
Parent revision: 46270987a2
14 files changed, with 1130 additions and 52 deletions
  1. +1 -1      PUBLISH.trigger.md
  2. +21 -0     content/interview/halcyon-lawrence.md
  3. +20 -0     content/interview/kathy-reid.md
  4. +21 -0     content/interview/liz-pelly.md
  5. +20 -0     content/interview/mark-andrejevic.md
  6. +22 -20    content/interview/shannon-mattern.md
  7. +21 -0     content/interview/thomas-stachura.md
  8. +6 -31     content/topic/interviews.md
  9. +234 -0    content/transcript/andrejevic.md
  10. +132 -0   content/transcript/lawrence.md
  11. +128 -0   content/transcript/mattern.md
  12. +200 -0   content/transcript/pelly.md
  13. +172 -0   content/transcript/reid.md
  14. +132 -0   content/transcript/stachura.md

+1 -1  PUBLISH.trigger.md  View file

@@ -1,4 +1,4 @@
don't gaslight me bro
don't gaslight me bro111
a2`121321111
jp 3 1d2131
zoe publishing new interview . k11


+21 -0  content/interview/halcyon-lawrence.md  View file

@@ -0,0 +1,21 @@
---
title: "Halcyon Lawrence"
Description: "[Halcyon](http://www.halcyonlawrence.com/) talks us through some of her work on the politics of voice user interfaces: in particular accent bias, 'Siri discipline' and the ways in which smart speakers reproduce and hardwire longstanding forms of linguistic imperialism."
aliases: []
author: "Machine Listening"
date: "2020-08-31T00:00:00-05:00"
episode_image: "images/ml.gif"
explicit: "no"
hosts: ["james-parker", "joel-stern", "sean-dockray"]
images: ["images/ml.gif"]
news_keywords: []
podcast_duration: "62:53"
podcast_file: "https://machinelistening.exposed/library/Halcyon%20Lawrence/Halcyon%20Lawrence%20(18)/Halcyon%20Lawrence%20-%20Halcyon%20Lawrence.mp3"
podcast_bytes: ""
youtube: ""
categories: []
series: []
tags: []
transcript: "http://git.metadada.xyz/machinelistening/curriculum/src/branch/master/content/transcript/lawrence.md"
---


+20 -0  content/interview/kathy-reid.md  View file

@@ -0,0 +1,20 @@
---
title: "Kathy Reid"
Description: "[Kathy](https://github.com/KathyReid) talks to us about her work with [Mycroft](https://mycroft.ai/), [Mozilla Voice](https://voice.mozilla.org/) and now 3AI on open source voice assistants and the technics and politics of automatic speech recognition, along with a couple of utopian possibilities."
aliases: []
author: "Machine Listening"
date: "2020-08-11T00:00:00-05:00"
episode_image: "images/ml.gif"
explicit: "no"
hosts: ["james-parker", "joel-stern", "sean-dockray"]
images: ["images/ml.gif"]
news_keywords: []
podcast_duration: "59:04"
podcast_file: "https://machinelistening.exposed/library/Kathy%20Reid/Kathy%20Reid%20(27)/Kathy%20Reid%20-%20Kathy%20Reid.mp3"
podcast_bytes: ""
youtube: ""
categories: []
series: []
tags: []
transcript: "http://git.metadada.xyz/machinelistening/curriculum/src/branch/master/content/transcript/reid.md"
---

+21 -0  content/interview/liz-pelly.md  View file

@@ -0,0 +1,21 @@
---
title: "Liz Pelly"
Description: "[Liz](https://lizpelly.neocities.org/) talks to us about the cultural politics and political economy of Spotify. We talk through some of the ideas in her [amazing column](https://thebaffler.com/authors/liz-pelly) for The Baffler, along with some of the listening experiments she's conducted on Spotify's algorithms (and herself), before turning to her argument for ['socialised streaming'](https://reallifemag.com/socialized-streaming/)."
aliases: []
author: "Machine Listening"
date: "2021-03-23T00:00:00-05:00"
episode_image: "images/ml.gif"
explicit: "no"
hosts: ["james-parker", "joel-stern", "sean-dockray"]
images: ["images/ml.gif"]
news_keywords: []
podcast_duration: "59:15"
podcast_file: "https://machinelistening.exposed/library/???????.mp3"
podcast_bytes: ""
youtube: ""
categories: []
series: []
tags: []
transcript: "http://git.metadada.xyz/machinelistening/curriculum/src/branch/master/content/transcript/pelly.md"
---


+20 -0  content/interview/mark-andrejevic.md  View file

@@ -0,0 +1,20 @@
---
title: "Mark Andrejevic"
Description: "[Mark's](https://research.monash.edu/en/persons/mark-andrejevic) recent book [Automated Media](https://www.routledge.com/Automated-Media/Andrejevic/p/book/9780367196837) considers the politics of automation through the 'cascading logics' of pre-emption, operationalism, and 'framelessness'. We talk through some of these ideas, along with the limits of 'surveillance capitalism' as an analytic frame, 'touchlessness' in the time of Covid, 'operational listening', what automation is doing to subjectivity... and how all this relates to reality TV."
aliases: []
author: "Machine Listening"
date: "2020-08-21T00:00:00-05:00"
episode_image: "images/ml.gif"
explicit: "no"
hosts: ["james-parker", "joel-stern", "sean-dockray"]
images: ["images/ml.gif"]
news_keywords: []
podcast_duration: "50:26"
podcast_file: "https://machinelistening.exposed/library/Mark%20Andrejevic/Mark%20Andrejevic%20(part%201)%20(16)/Mark%20Andrejevic%20(part%201)%20-%20Mark%20Andrejevic.mp3"
podcast_bytes: ""
youtube: ""
categories: []
series: []
tags: []
transcript: "http://git.metadada.xyz/machinelistening/curriculum/src/branch/master/content/transcript/andrejevic.md"
---

+22 -20  content/interview/shannon-mattern.md  View file

@@ -1,20 +1,22 @@
---
title: "Shannon Mattern"
Description: "Leading off from [Shannon's](https://wordsinspace.net/shannon/) essay [\"Urban Auscultation; or, Perceiving the Action of the Heart\"](https://placesjournal.org/article/urban-auscultation-or-perceiving-the-action-of-the-heart/), which addresses machine listening in the pandemic, we talk about the stethoscope, the decibel and other histories of machine listening, along with its epistemic and political dimensions and artistic deployments."
Description2: "Leading off from Shannon's essay \"Urban Auscultation; or, Perceiving the Action of the Heart\", which addresses machine listening in the pandemic, we talk about the stethoscope, the decibel and other histories of machine listening, along with its epistemic and political dimensions and artistic deployments."
aliases: []
author: "Machine Listening"
date: "2020-08-18T00:00:00-05:00"
episode_image: "images/ml.gif"
explicit: "no"
hosts: ["james-parker", "joel-stern", "sean-dockray"]
images: ["images/ml.gif"]
news_keywords: []
podcast_duration: "00:55:00"
podcast_file: "https://machinelistening.exposed/library/Shannon%20Mattern/Shannon%20Mattern%20(19)/Shannon%20Mattern%20-%20Shannon%20Mattern.mp3"
podcast_bytes: ""
youtube: ""
categories: []
series: []
tags: []
---
---
title: "Shannon Mattern"
Description: "Leading off from [Shannon's](https://wordsinspace.net/shannon/) essay [\"Urban Auscultation; or, Perceiving the Action of the Heart\"](https://placesjournal.org/article/urban-auscultation-or-perceiving-the-action-of-the-heart/), which addresses machine listening in the pandemic, we talk about the stethoscope, the decibel and other histories of machine listening, along with its epistemic and political dimensions and artistic deployments."
aliases: []
author: "Machine Listening"
date: "2020-08-18T00:00:00-05:00"
episode_image: "images/ml.gif"
explicit: "no"
hosts: ["james-parker", "joel-stern", "sean-dockray"]
images: ["images/ml.gif"]
news_keywords: []
podcast_duration: "55:00"
podcast_file: "https://machinelistening.exposed/library/Shannon%20Mattern/Shannon%20Mattern%20(19)/Shannon%20Mattern%20-%20Shannon%20Mattern.mp3"
podcast_bytes: ""
youtube: ""
categories: []
series: []
tags: []
transcript: "http://git.metadada.xyz/machinelistening/curriculum/src/branch/master/content/transcript/mattern.md"
---



+21 -0  content/interview/thomas-stachura.md  View file

@@ -0,0 +1,21 @@
---
title: "Thomas Stachura"
Description: "Thomas is CEO of [Paranoid Inc](https://paranoid.com/#paranoid), which makes devices that block smart speakers from listening. The company's mandate 'earn lots of money by increasing privacy, not eroding it' imagines an emerging privacy industry, as data mining and surveillance continues to become the dominant business model in silicon valley and elsewhere."
aliases: []
author: "Machine Listening"
date: "2020-08-28T00:00:00-05:00"
episode_image: "images/ml.gif"
explicit: "no"
hosts: ["james-parker", "joel-stern", "sean-dockray"]
images: ["images/ml.gif"]
news_keywords: []
podcast_duration: "55:29"
podcast_file: "https://machinelistening.exposed/library/Thomas%20Stachura/Thomas%20Stachura%20(20)/Thomas%20Stachura%20-%20Thomas%20Stachura.mp3"
podcast_bytes: ""
youtube: ""
categories: []
series: []
tags: []
transcript: "http://git.metadada.xyz/machinelistening/curriculum/src/branch/master/content/transcript/stachura.md"
---


+6 -31  content/topic/interviews.md  View file

@@ -4,6 +4,8 @@ has_experiments: []
---
# Interviews

{{% interview "liz-pelly.md" %}}

{{% interview "feldman-li-mills-pfeiffer.md" %}}

{{% interview "lauren-lee-mccarthy.md" %}}
@@ -24,39 +26,12 @@ has_experiments: []

{{% interview "vladan-joler.md" %}}

## Halcyon Lawrence

[Halcyon](http://www.halcyonlawrence.com/) talks us through some of her work on the politics of voice user interfaces: in particular accent bias, 'Siri discipline' and the ways in which smart speakers reproduce and hardwire longstanding forms of linguistic imperialism.

![Interview conducted on 31 August, 2020](audio:https://machinelistening.exposed/library/Halcyon%20Lawrence/Halcyon%20Lawrence%20(18)/Halcyon%20Lawrence%20-%20Halcyon%20Lawrence.mp3)


## Thomas Stachura

Thomas is CEO of [Paranoid Inc](https://paranoid.com/#paranoid), which makes devices that block smart speakers from listening. The company's mandate "earn lots of money by increasing privacy, not eroding it" imagines an emerging privacy industry, as data mining and surveillance continues to become the dominant business model in silicon valley and elsewhere.

![Interview conducted on 28 August, 2020](audio:https://machinelistening.exposed/library/Thomas%20Stachura/Thomas%20Stachura%20(20)/Thomas%20Stachura%20-%20Thomas%20Stachura.mp3)


## Mark Andrejevic

[Mark's](https://research.monash.edu/en/persons/mark-andrejevic) recent book [Automated Media](https://www.routledge.com/Automated-Media/Andrejevic/p/book/9780367196837) considers the politics of automation through the "cascading logics" of pre-emption, operationalism, and "framelessness". We talk through some of these ideas, along with the limits of "surveillance capitalism" as an analytic frame, "touchlessness" in the time of Covid, "operational listening", what automation is doing to subjectivity... and how all this relates to reality TV.

![Part 1: Interview conducted on 21 August, 2020](audio:https://machinelistening.exposed/library/Mark%20Andrejevic/Mark%20Andrejevic%20(part%201)%20(16)/Mark%20Andrejevic%20(part%201)%20-%20Mark%20Andrejevic.mp3)

![Part 2: Interview conducted on 21 August, 2020](audio:https://machinelistening.exposed/library/Mark%20Andrejevic/Mark%20Andrejevic%20(part%202)%20(17)/Mark%20Andrejevic%20(part%202)%20-%20Mark%20Andrejevic.mp3)
{{% interview "halcyon-lawrence.md" %}}

{{% interview "thomas-stachura.md" %}}

{{% interview "mark-andrejevic.md" %}}

{{% interview "shannon-mattern.md" %}}


## Kathy Reid

[Kathy](https://github.com/KathyReid) talks to us about her work with [Mycroft](https://mycroft.ai/), [Mozilla Voice](https://voice.mozilla.org/) and now 3AI on open source voice assistants and the technics and politics of automatic speech recognition, along with a couple of utopian possibilities.

![Interview conducted on 11 August, 2020](audio:https://machinelistening.exposed/library/Kathy%20Reid/Kathy%20Reid%20(27)/Kathy%20Reid%20-%20Kathy%20Reid.mp3)




{{% interview "kathy-reid.md" %}}

+234 -0  content/transcript/andrejevic.md  View file

@@ -0,0 +1,234 @@
---
title: "Mark Andrejevic"
status: "Auto-transcribed by reduct.video with minor edits by James Parker"
---

James Parker (00:00:00) - So, I mean, uh, it would be great if you could just kick things off by telling us a bit about yourself, um, and your work, you know, how you get to the questions that you're currently thinking about.

Mark Andrejevic (00:00:12) - Thanks. Thanks so much. It's great to talk to you. And, um, the, you know, the, the longer background story is I started my academic career nominally in television studies, um, because I wrote a book called Reality TV: The Work of Being Watched, and, you know, what I was really interested in, in talking about reality TV, was the way in which I thought it modeled the online economy. And the question, really, that book came out of, uh, had to do with the discussions around interactive media at that point. So I'd been taking a look at interactive media art, you know, talking about like late nineties.

Mark Andrejevic (00:00:50) - So, um, you know, hypertext novels, participatory artworks, the promise of, um, reconfiguring the relationship between Taryn dater. Uh, and, uh, I remember, as is often the case, I think, in the aesthetic sphere, the kind of empowering and interesting and original characteristics of the medium are being explored. What's going to happen when capital takes it up? And the answer, you know, I was thinking, like, where's the space where I can see that happening, and reality TV, which was just getting big at that time. And it's, you know, in its iteration that took off in the two thousands was, you know, they had some successful formats. The Real World and Road Rules, those kind of MTV formats in the US, and Big Brother followed immediately on that. Um, and I looked at that and I thought, that's it, that's one of the ways in which the culture shows us what happens when capital takes up the logic of interactivity.

Mark Andrejevic (00:01:47) - It's about offloading, uh, portions of the labor of production onto the audience in the name of their self-empowerment and self-expression, and equating willing submission to being watched all the time with those forms of self-expression and self-empowerment and self-knowledge, and reality TV staged that so interestingly. And as a media studies scholar, I was really interested in the ways in which the media kind of reflect back to us our moment, and reality TV became, um, the, the, I don't think of it as a genre, I think of it as a kind of mode of production. It became the mode of production. It reflected back to us what was happening with mass customization via the interactive capability to collect huge amounts of information, and to use that to rationalize production, marketing and distribution processes. Um, so I got my first job in television studies, but I think people understood I wasn't in television studies when I sent that book out, you know, they circulated the manuscript to TV studies authors, and, you know, the first set of comments I got back from the first publisher I sent it to was, you're not really engaging with television, and it was true.

Mark Andrejevic (00:03:01) - So, fortunately, I found a publisher who was okay with that, um, because it was really a book about the political economy of interactivity. Uh, and then my subsequent work has really, you know, it has moved away from reality television and, and looked at that, those questions of the political economy of, of interactivity, the ways in which, um, the economy that we're building is reliant on mobilizing that interactive capability of digital media, uh, for the purposes of information collection and the forms of economic rationalization and risk management and control, uh, that flow from that. Um, I remember, you know, early on making that argument in 2000 and, uh, it was 2001, I was in, uh, in, um, a job search. And, you know, I was trying to make the argument that this is the basis on which the online economy is being built. Uh, and at that time I didn't have a very receptive audience, but times have changed since, since then, you know, now that's become kind of a commonplace that that's what the interactive economy is built on. So my subsequent work has really focused on the relationship between digital forms of monitoring and data capture and different spheres of social practice, uh, and how, uh, forms of, uh, power and control are being amassed by those who have access to the means of collecting, processing and mobilizing the data.

Sean Dockray (00:04:32) - When you think people started catching up with, with that observation that you made, like, what, what is it that happened in culture that, that made it apparent to people?

Joel Stern (00:04:41) - If your argument was it when the reality TV star became president?

Mark Andrejevic (00:04:47) - So that's, um, uh, it was, I, you know, I th th the big shift looked to me yes, right around that moment. Um, you know, I think there'd been a lot of coverage of, um, you know, how much information is being collected about you and what that means for privacy.

Mark Andrejevic (00:05:05) - So the privacy stuff started to kick in, you know, not too long after that, I would say, you know, mid two thousands, mid-noughties. Um, but the moment where things really started to galvanize would be I think the Snowden revelations, which made the connection between the commercial data collection and, um, state use of that information and, you know, made it quite palpable. So that Snowden moment was a big episode. And then I think probably the other big episode is the Cambridge Analytica moment and, you know, Brexit and the 2016 election, you could really just see the whole time switched. And all of a sudden these companies that had been, you know, floating on some kind of halo that differentiated them from other big corporate industries, you know, I remember talking to undergraduate students and, you know, the way they thought about, for example, the finance industry, you know, it was so different from how they thought about the tech sector.

Mark Andrejevic (00:06:03) - You know, the tech sector was where you'd go, if you wanted to express yourself and event and you could be young and do all kinds of cool creative things. And, you know, what had been the big, you know, kind of instrumental economic sector to go in, in the 1980s, which was finance, that was the state boring, you know, um, bad capital and tech was kind of good, cool capital. Um, but that really got reconfigured. I think, you know, uh, all of a sudden, you know, the ability of Google to say things like do no evil, it just didn't work anymore. Couldn't get it, couldn't get away with that. And now we live in this moment where, um, I don't know, there's something interesting going on with the way, um, quote unquote, legacy media frame, digital media, and, you know, in a way they get to blame digital media for a lot of the pathologies that they participate in and digital media, because, you know, the tech sector now becomes the villain. Uh, you know, I, I, I think it's, it's not the only villain and those, those who hold it up as the villain or are participating in, uh, in what they're imagined they're critiquing. Uh, but yeah, I would say that shift happened, happened around them and of course, Shoshana Zuboff surveillance capitalism, right. Like hit right at that moment, that sweet spot and just gave a vocabulary to, um, the, the kind of critical moment in the popular culture and in the punditry realm. And that just took off,

James Parker (00:07:32) - You've got this new book, um, automated media, which is obviously very concerned with, or responsive to, you know, a similar kind of paradigm or, you know, contemporary, contemporary situation that the Zuboff is concerned with. But, but you frame it quite differently. I mean, of course you're concerned with capital. Um, but you, you tend to talk about, um, what you call the biases of automation, which sounds like a kind of it's, is it a political logic or, or, uh, sort that, that seems like a, sort of a really key and foundational concept in the book. Could you say a little bit about what the bias of automation is, how it's different from other forms of bias, algorithmic bias, or other forms of bias in tech? What sort of, what sort of object is it?

Mark Andrejevic (00:08:20) - You know, I, I, when, when Surveillance Capitalism, when that book got so much attention, I remember kind of kicking myself and going like, damn, why didn't I use that term? And, you know, cause I've been writing about this stuff for 19 years or so beforehand. And, uh, and then I realized it, you know, I thought would it have occurred to me and I realized it probably wouldn't have occurred to me because I see surveillance capitalism as, um, a redundancy, a pleonasm, right. You know, it's, uh, capitalism and surveillance go hand in hand. So that, so that seemed to me to be a given. Um, and, uh, the idea that there was some special subset of capitalism that was surveillance capitalism, distinct from other forms of capitalism, just wouldn't quite occur to me because, you know, historically all the work that I'd been doing had been looking at the continuity, uh, between very early forms of, you know, enclosure and wage labor, and the forms of monitoring that, that enabled, that actually made, um, you know, industrial capitalist production possible, that the idea that this was a kind of distinct rupture, um, just wasn't in the historical stories as I saw it.

Mark Andrejevic (00:09:31) - So too bad for me because it would've been a useful, useful term to mobilize. But so the, the book Automated Media, a little bit of, I don't know, just a little bit of a backstory was it started as a book that might still, that I'm still kind of working on, that was called Drone Media. Um, because I really, I wanted to write a book about how automation, under the social conditions in which it was being developed, operates as a form of violence.

Mark Andrejevic (00:10:00) - And I liked, um, the figure of the drone as an avatar for automation, uh, and a lot of the logics that I then described as biases, uh, and incorporated into that book, Automated Media, came out of spending a fair amount of time looking at how drones, uh, reconfigure the, I mean of conflict and what it, you know, the forms of asymmetry that are, that they rely on, the forms of kind of always on, uh, every space. Every space is a potential site of conflict at any particular moment. Uh, the way the frames of space disappear with drone conflict, uh, those informed what I call the biases of automation that I described in the book. Um, and what I mean with that term bias, that comes out of North American, specifically Canadian, uh, you know, media studies research. Harold Innis, uh, writes a book called The Bias of Communication.

Mark Andrejevic (00:11:01) - And he's, he's really interested in large acts of empire. The bias of communication for him refers to the tendencies that different media technologies, uh, reinforce or reproduce within particular social formations. So, uh, you know, it can sound a little bit technologically determinist in the sense that it's the technology that carries the bias, but I, the way I read his work is that it's the technology within a particular social context, that those two are connected. Uh, and w and, you know, for example, he thinks about, um, media technologies that, uh, lend themselves to, um, rapid transmission across space are used to control, um, imperial programs that reach across space. Uh, whereas those that are durable through time lend themselves to a different concept of, of, uh, control of information through time, you know, durable stone tablets versus portable papyrus, or these types of things. But what interested me is the notion that in a particular context, a media technology could carry, the very choice to use.

Mark Andrejevic (00:12:11) - It could carry logics of power within it. That was the insight that I thought was interesting, because I tried to figure out, if you're writing about digital media technology, and you want to critique the power relations that it reproduces and in which it's embedded, at what level do you do that? Uh, and you know, one of the difficulties in writing about digital media is things move so fast. Uh, and so my goal was to see, you know, what are some of the tendencies or patterns that we can abstract from the fast movement, um, that will allow us to maybe anticipate logics that are going to emerge over time, uh, and provide some form of critical purchase so that we exist, not in a purely reactive relationship to the technological developments, but in a more knowing and anticipatory relationship to it. We can see where these things are headed. And the goal of extracting biases was to, to suggest that these particular tendencies, that flow from the choice to use a particular technology, media technology in this case, allow us to, to, to generate some understanding of where we're headed as we take these technologies in hand. Um, and so that's what I tried to do in the book was isolate some of those tendencies. Um, yeah.

James Parker (00:13:27) - And what are some of those tendencies, are you, you're not talking for example about, you know, racial bias. I mean, you're not just talking about bias in that sense, you're talking about a different kind of tendency.

Mark Andrejevic (00:13:38) - Yeah. So, so bias in this sense refers really to kind of tendential logics that flow from the choice to use a particular technology. I mean, probably the best, I don't know, if you think about the apparatus of the market, for example, you know, what would be the, one of the biases of the, of, of market exchanges is, uh, they're biased towards the quantification of value, you know, understanding value in purely, um, quantifiable terms, but you're right, when it, when you're talking about bias in digital media, the ready correlation that people make is to the ways in which automated systems are biased on protected categories, um, you know, race, gender, skin color, et cetera. Um, and, and I think that work is, is super important. And I think there's also an interesting connection or question to ask about the type of biases that I'm interested in, uh, logics and those biases. Cause I think there may be a way to make the connection that under particular social circumstances, those are connected, but the biases that I'm thinking about, uh, and I'll I'll name a few of them, have to do with, again, the kind of social imperatives that are carried through the technologies that are implemented. And one of them that I talk about is framelessness. Um, and what I mean by framelessness is the ambition basically to reproduce the world in datafied form. Um, and, uh, the term framelessness flows.

Mark Andrejevic (00:15:03) - It comes, really, from a particular example that I came across a while ago, it was an ad for, uh, uh, the lifestyle cam that you wear on yourself all the time. And it records your life, and the advertising promotional material for that device said, uh, you know, did you know that the average human only remembers, I can't remember, it was like 0.9% of, you know, what they encounter every day. That's about to be fixed by the life cam, now you can remember everything. And that ambition to record everything, to be able to remember everything, seemed interesting and telling from the perspective of automated media, because it reproduces the ambition of, uh, digital media technologies mobilized for commercial purposes or policing purposes or state surveillance, uh, to, quote, and this is a quote from the former chief technical officer of the CIA, collect everything and hold onto it forever.

Mark Andrejevic (00:16:03) - Um, because the idea there is that if humans are trying to make sense out of information, you have to do, you have to engage in what's, what they call, search and winnow, um, surveillance or data collection. You collect what you can, and then you get rid of what's not relevant, but if you have automated systems that are processing the data you collect, um, what you want is as much data as possible, because the logic is the machines can go through this at speeds that are superhuman, and they can detect correlations that are, uh, undetectable to humans. And any piece of information that you leave out may form part of a pattern that would be undetectable if you discard it. So the winnowing process is, is, um, is reversed. The, the way the CIA chief technical officer described it was: we want to connect the dots.

Mark Andrejevic (00:17:00) - Um, and sometimes the pattern of the dots emerges only when you get later dots, if you throw away some of the earlier dots, you won't get to see the pattern. So that's why, potentially, you'll have to hold onto the data forever. Um, and to see the full pattern, you need as much data as you can get. So that's why you need all of, um, and so I use that notion of the frame to, to think about the, uh, you know, the way in which a particular picture or a narrative, uh, is framed, basically understands that pictures and narratives are always, as, as our subjectivities, they're always selective, you know, the subjectivity that of, of me who remembers my day, um, if I remembered everything in a sense, I lose myself. Part of what constitutes our identity are those things that we remember and the gaps that we don't, um, the idea that you could fill all of that in, and that would be perfecting yourself, I think is tantamount to saying, well, that would also be obliterating yourself.

Mark Andrejevic (00:18:01) - Um, so I use that. I try to put, I put a lot of weight on that figure of the frame, because it, it talks about what gets left out. Um, and of course it's a visual metaphor, right? In framing the picture. But, you know, I try to extend it to thinking about it in informational terms, frames as, uh, defining where the information collection would stop, where the use would stop, um, where the motivation would be restricted or limited. So, um, if you think about, I don't know, no, the average marketing researcher, I imagine asking them what type of data that you could collect about people and what they do would not be relevant to, you know, your marketing initiative. And I imagine the answer, right, it'd be, it's all potentially relevant. There is no frame that would stop it and say, you just stop here. And the other thing that interests me about that concept of the frame is the way in which it gets reproduced actually in forms of representation, because representation, one thing that I would argue about it is that it's always framed in a certain way, like a representation has to leave something out, but the, the, the generation of, um, forms of information capture and forms of information representation that don't leave anything out is, is something that's familiar in our moment.

Mark Andrejevic (00:19:21) - So 360-degree cameras, virtual reality, that there's, um, I was really interested, I don't know what's happening to this technology, in a data collection technology that I think of, you know, it's trying to imagine what would, what would a frameless form of, uh, yeah, I can imagine kind of, a kind of a frameless form of representation, and essentially digital reality that was inf, sorry, um, virtual reality that's infinite, that reproduces actual reality, right? Um, that you get in there and there isn't any frame, the only frame would be provided, uh,

Mark Andrejevic (00:19:54) - You know, what are the things, but what's the information capture, um, medium to collect all of that. Uh, and I don't, I don't think there's any particular answer, cause I think these are all impossible goals, but I think they're also tendencies that, that we can discern. So the fact that the goal is impossible, it doesn't mean that the tendency to move in that direction is not emerging. And, uh, this was the, the smart dust, um, moment that came, it came, uh, came out of the, um, uh, second Iraq war. Uh, they were trying to figure out how can you monitor urban spaces in ways that get around the problem of, of what urban warfare poses, which is, you know, walls and hidden spaces and things that, you know, can't be seen through. And they, one of the technologies that they came up with was smart dust sensorized particles that could be distributed through the air, uh, and that, and that would relay their position to one another. And the idea was that the air itself would be permeated with these particles. So the air would become the medium of information capture. So you could see internal spaces if there was a person behind the wall, the air would have to flow around them and that information would be captured by the dust particles. And so you could isn't that sound? Yes, exactly.

James Parker (00:21:11) - Yeah, I was, um, I was thinking, as you were speaking about how, you know, you said you started off with the frame of visual metaphor and then you sort of, you're, you're sort of tendency is expansive towards logics of automation in general, biases of automation in general. And, you know, we framed this project in terms of, you know, we're sort of advocating or trying to think through a politics of Machine Listening um, you know, what, what would it mean to take that as an object? And I'll be, I've been struck from the beginning about the way in which that's sort of important because, you know, facial recognition, computer vision, um, search, uh, sort of the smart city more generally, you know, these things get a lot of press, but a lot of the audio technologies, which are actually very pervasive and there's many more companies than people seem to realize, sort of get missed.

James Parker (00:21:57) - And so there's something about, you know, signal boosting or consciousness raising or something to say, you know, Machine Listening is a thing too, but I'm really conscious that there's a risk that comes with that of, you know, embedding and privileging, I don't know if it's just anthropocentric or, you know, uh, sense-specific logics in a way when, when the data is kind of agnostic to sensory input on some level, although perhaps, you know, there are certain biases, you know, that that are, or affordances from Listening or audio technologies specifically that, you know, you might put performance, some logics. I'd be interested to know, like, what do you think about the politics that say of sense agnosticism or framing things around specific sensory modalities?

Mark Andrejevic (00:22:44) - Well, I, you know, I think the specificity of the sensory modality is, is crucial to understanding the affordances of these monitoring processes. There, there is kind of a, you know, a synesthetic quality of some of this, you know, it's on the one hand, there's a real temptation in the data monitoring approach to just, as you say, collapse it all into data. Um, but that does bulldoze some of the, uh, kind of specific technological affordances and some of the synesthetic stuff is really interesting, right? This is stuff that I think you, you know, more much more than I do, but, you know, the, the, you know, those Listening processes that rely on ultra high speed cameras, um, that can capture vibrations, right. And so kind of can transpose auditory information into visual information and then back into auditory information again. Um, and what that, to me points to is the specificity of the modalities of those, of those affordances.

Mark Andrejevic (00:23:46) - In other words, you can't, you can't see what people are saying, unless you can capture the traces of the, of the sound waves. And so, I don't know, maybe maybe a better approach than, you know, undifferentiated convergence is synesthesia, you know, the, um, the ability to, to find ways, how can you use one medium to capture the information from, from another medium, uh, and then how can you retranslate it or translate it back. But, but I think your, the, the point that you make about between sound particulate capture, um, I mean, it, that's so interesting, right? Because you know, the, the challenge then posed, you know, something like basically what the smart dust is really trying to do is to take something very similar to sound, which was the, you know, the disturbance of the, of the particles, uh, but make it travel.

Mark Andrejevic (00:24:40) - You know, piggyback it on because these things are supposed to relay. It doesn't work. Right. But it's the notional idea they're supposed to relay this information electromagnetically in data form so that you could hear through spaces that a more direct propagation of sound waves, you know, wouldn't be able to reach, but what if you could make sound that would, you know, travel through all the byways that you need to get it to go to, to get back to you. But I think, I mean, I, I think probably the, one of the things that you're addressing and I think it's super important is the default vocabulary for monitoring is so visual. And that's one of the challenges that one faces when we look at a realm in which visual monitoring practices are just one subset of, of all the forms of information and data collection that, that participate in this kind of interactive form of data capture I've been talking about.

James Parker (00:25:38) - I mean, one of the things that occurs to me with sound and voice specifically is this idea of, you know, the disappearance of the interface. And, you know, that's a, quite a specific audio phenomenon. You described it to us once recently as touchlessness, the tendency of the interface to sort of disappear, and the way in which COVID and the pandemic contexts and ideas of hygiene are sort of accelerating that kind of that discourse. And it seems like touchlessness and voice are quite a specific subset that have their own sort of politics and tech and biases baked in.

Mark Andrejevic (00:26:19) - I mean, the, the stuff on touchlessness came out of, you know, spending some time looking at the facial recognition technology and then watching them pivot in response to the pandemic moment. And, you know, they saw this immediately as an opportunity and the opportunity was a whole range of kind of transactional informational interfaces that rely on some form of physical contact in a moment when physical contact was stigmatized because of its relationship to contagion. Um, and so I was at an NEC online conference, they're a big developer of, among other things, uh, facial recognition technology. And they're coming up in partnership with other, um, commercial providers with a range of, you know, touchlessness solutions. And because of the way in which, and, and, you know, this is, I guess again, thinking about the affordances of the different sensory technologies, but they saw, uh, facial recognition as providing solutions to touchlessness.

Mark Andrejevic (00:27:16) - So everything from a security room where, where, you know, you may have had to do a fingerprint or a card swipe, replacing that with facial recognition, to access to, um, mass transit, um, to access to stadiums, uh, to, you know, clearance for entrance to buildings. All of those could be transposed into facial recognition registers. So shopping was, was one of the big ones, you know, you don't have to touch those dirty screens that others have touched. Um, uh, you know, there's, I think there's a, there's a deeper tendency here that, um, it's probably worth noting, it's something I spent some time in Automated Media, yeah, kind of going on a screed about, um, but it's, it's the kind of anti-social fantasy of digital media technologies and the pandemic moment really. And, you know, that, that gets phrased in, in, you know, I don't know, certain kind of popular discourses around, you know, like does Facebook depress you, or that, it's not really at that level that I'm, I'm thinking about it, it's more on the level of what it means to see oneself as a kind of hermetic sphere and others as potential threats to a hermetic system.

Mark Andrejevic (00:28:26) - Um, it's been real interesting to see all of the discourses around the dirtiness of the other, right? Like the, you know, here are the particles that come out of the mouth, here are the things that they leave. Here's what happens when they flush the toilet. It's not like some really, um, you know, nasty embodied, like just wherever we go, we leave these organic particles that are potentially contaminants. And it reminds me a little bit of, you know, the futurist Ray Kurzweil, his kind of scorn for the flesh, you know, like, yeah. You know, the thing about proteins are, is that, you know, they're like really temperature sensitive and, you know, very fragile, like carbon polymers are much better. So if we could, you know, upload ourselves into carbon polymers, you know, like the flesh is dirty and weak and, and mortal and, you know, yuck. And that really fits with a particular tech aesthetic. Right. You know, like the clean Machine. But I think of those early days, you know, um, going back when I started reading about digital media of the, the, all of the fantasies that were socially hermetic, you can live and they were very privileged, right. You know, you can live in your mountain cottage and never leave. And anytime, like for work, you just zip on your corporate virtual workspace, this is, was a thing, the corporate virtual workspace. And, you know, you'd find yourself in a haptic space.

Mark Andrejevic (00:29:39) - We're with virtual reality. And you know, I'm not that different from what we're doing now, but you know, it's still a few dimensions, uh, extra, but the fantasy was one of stasis and the fantasy was one of the, um, diminution of the social. And it was very hermetic, right? Bill Gates, his early fantasies were very similar. You know, if you read, um, the road ahead, not only would you be able to pick, I want to live on this tropical Island and you know, I don't have to go anywhere because I can go there virtually, but all the media that I can consume, I thought this was such a telling moment. He said, you know, like you're watching your favorite movie, you'll be able to customize it. So you are the star. And it was interesting to me, that was the first thing that came into his head.

Mark Andrejevic (00:30:20) - Right. Like everywhere I look, I see only me. Um, the extension of that into, into informational space is hyper customization and hyper targeting. Right. You know, um, yes, everywhere you go, you will see only you, the movies you want to watch. The news you want to hear. Um, this is the fantasy of digital customization. It's a kind of, um, complete background of the social processes that make those decisions. Instead, the promise to reflect back to yourself, yourself in perfected form. To me, those two things seem to fit, right? This kind of desire for kind of a pure, um, I, you know, hermetic individual, um, understood is kind of your own fully defined entity unto yourself. If only enough data can be collected about you and this kind of reaction, that's really brought out by the pandemic, um, to, you know, like the threat of others. Um, you know, just, just walking down the street that, that weird feeling that you get when in the pandemic moment, you know, somebody comes to close, somebody makes a little cough, right. You know, it's that, it's that kind of, to me, fully terrifying moment of, um, other, you know, of the others, the literalization the threat of the other,

Joel Stern (00:31:40) - I was just gonna say, uh, of sort of social exposure, but it's also, you know, it's, I was thinking about, as you were talking about, um, the particles that sort of come off other people, the sort of synthesized voice of Siri or Alexa as a, as a kind of voice, you know, without breath, without contagion as a kind of, uh, an, a lot of bodies, you know, and on the other end of that, the sort of, um, co COVID diagnostic tools that are sort of coming up through, through voice analysis and recognition where you sort of cough into, you know, the telephone or a microphone and get diagnosed. So this sort of set in a way, um, these sort of voice interfaces, as ways of mediating this kind of fear, fear of contagion, or, you know, producing a voice that is, um, dangerous or, or infectious.

Mark Andrejevic (00:32:36) - Yeah. That's a great, great connection. Sorry. I dropped out for a little bit, so I just keep it, um, but yeah, that's really nice. Voicelessness and breathlessness. Um, yeah, the breath is such a threat, you know, for somebody who's obsessive compulsive a little bit, like, I, it's really weird to watch these, you know, as a kid, you know, I really had that journal really weird to see it's like the neurotic fantasies of, of my child, the contemporary sort of it's terrifying.

Sean Dockray (00:33:15) - Um, I wanted to ask a question about, um, paranoia, um, just cause you know, like this kind of relationship between like sort of imagining everything else is connected and somehow that you're subject to that kind of vast conspiracy around you, but not actually part of it, you know, that kind of follows from what you were just saying about this kind of crisis of the self and the hermetic self and everything. And I guess one thing I had wanted to ask was just, just about the role of like paranoia and the mobilization of paranoia, both in like critiques and also even in the marketing of, of, of a lot of the, the new devices. So that was one direction that I was hoping to go down and then another direction, um, it was just in the, in the touchlessness, um, and maybe even related to that previous point, it's just this like ultimate disappearance of the, um, of the computer at this moment in history, you know, that we sort of imagined that the computer appeared on the scene and it's like more and more a part of our life. But actually it seems like the computer is a historical artifact, which was kind of around from like the eighties to the two thousands. And then it's gone, like that. The computer as a figure is, uh, you know, in the workplace a little bit, like it was in the sixties, but otherwise it's something that's disappearing. Right. And we're back to, like, the, the, the kind of domestic space again, you know, everything is different. Everything is kind of haunted by the computer, but it's, uh, so I guess it was just, that was another area I was sorta wondering about, just in relation to what you were talking about, about cleanliness and the aesthetics of that.

Mark Andrejevic (00:34:56) - The questions are both so great. The, the paranoia one is one that I think about a lot, you know, I started, I started listening to that podcast about, um, Q Anon, uh, and it, it fascinates me. The conspiracy theory is tough. The formulation that you made to me is really, uh, you know, about the kind of, I think about it as the kind of suppression and misrecognition of interdependence in the social, that's lined up with the offloading of the social work of constructing individuality onto automated systems, right? So, you know, if we can see from the start that, you know, whatever our conception is of the, of the individual is, uh, constituted by the social relations in which we exist. The work of a, of a kind of, um, emphasized individualism is to misrecognize that, that role, uh, and the role of automated systems that individualize us for us is to participate in that fetishization.

Mark Andrejevic (00:35:52) - So th so that we can really start to imagine, yes, I am that constituted, um, individual. One of the moments in the book, that's kind of, one of the defining starting moments for me is when trying to imagine that he can reconstruct his father in his entirety, by getting all of the data traces that his father now deceased, left behind, uh, and then reconstruct them AI that he can converse with, and that it would be just like conversing with his father and somebody presses him on that and says, but would it be really like, um, you know, being with your father? And he said that bot would be more like my father than my father was to me. That's a, that's a really interesting formulation because it, it suggests what the fantasy of total totalization via datafication is that you can actually be specified to be more like yourself than you are.

Mark Andrejevic (00:36:45) - Um, the book has a kind of full psychoanalytic, um, framing for out it, right. But you can see why that's really appealing from a psychoanalytic perspective as a, as a pathological observation, because what it, what it attempts to do is to take a subject it's constituted through from a psychoanalytic perspective, um, lack, split, and say, no, we can make you whole, and we can make you whole through filling in the lack with all of, all of our data. Um, but how that, I think connects to paranoia is precisely what, what you've been saying is this, um, once you lose the coordinates of, um, the social and you kind of forward this fantasy of kind of pure autonomy, um, it's actually the breakdown between the relationship, the individual and the social that makes paranoia seem possible. Cause the social is part of the constitution of subjectivity.

Mark Andrejevic (00:37:39) - But if you can't recognize that, um, it looks like an external force, um, from which one is disconnected and disembedded, and the fact that it has its logics starts to look like, you know, starts to foster the forms of paranoia that you're talking about. There's something happening here. It's systemic, I'm on the outside. Um, but that outside does two things. One, it makes me feel excluded, but two, it gives me that kind of outside position. Uh, aha, I can see now the patterns, um, and that, that kind of megalomaniacal fantasy of the conspiracy theorists. Like I see something that nobody else sees because I've, uh, I'm on the outside. All of you are on the inside.

James Parker (00:38:19) - I'm, right, trying to write something about this company, Clearspeed, um, right now, um, who are very clear that the snake oil, I mean, sorry, technology they're selling is, uh, not voice stress analysis, but what it does is it it's a, um, a risk assessment tool that, uh, I mean, it's been very widely sold, as far as I can tell, in, um, something like, yeah, 13 languages, 12 countries, 23 industries, and the technology offers to vet for fraud, security and safety risks based on a two to 10 minute long phone call with greater than 94% accuracy. And now they're selling, uh, COVID specific ones, uh, sort of, you know, you can buy the product now for, um, um, dealing with COVID, um, financial fraud. So, you know, a very F so like the, the moment like the pandemic comes along, they suddenly think, well, think of how many money grubbers there are out there who are desperately going to be exploiting welfare systems and, you know, loan schemes, um, and whatever. And that w and, and now we can sell our anti-fraud vocal, uh, technologies. And, and they're very clear that it's not stress analysis, so whatever they were, I mean, clearly that stuff got a bad rap, you know? Um, and they, they, they talk about macro biomarkers and things like this, um, not micro biomarkers.

Mark Andrejevic (00:39:50) - Can you buy that tech and turn it back on them because they're, they're going to be their own undoing,

James Parker (00:39:55) - Right. I mean, there's, no, you have to buy it. It's like, uh, you know, the sort of that classic pyramid scheme thing where you only get to see what you have to buy in, in order to be able to see how, or whether the technology works. Like it's, it's fully black box, right. So it's like, it's once you've already bought in, it's sort of too late, uh, you know, because you've, you've sort of committed, you know, financially and sort of spiritually to the, the, the thing. And so it's completely unclear what science, if any, it's based on, uh, there's nothing there just, it's like, we'll tell you about it. If you buy the product,

Sean Dockray (00:40:33) - I'm sort of, like, imagining this penal colony conclusion to it, right. In their demonstration and kind of talking about the product, the product is turned back on them. It, uh, reveals them to be a fraud and that casts them, right.

James Parker (00:40:51) - They made, uh, millions of dollars and, um, um, prevented hundreds or thousands of people from accessing social services that they desperately need.

Sean Dockray (00:41:02) - Well, one thing is what you do bring up, which is kind of interesting. We were talking about agnosticism, like data, you know, that whether it's whatever image capture of voice capture, you know, it kinda all ends up in the same place or represented in this kind of format. Uh, but there's also the similar, like a tool agnosticism or something where it developed some products that, you know, um, will tell if, if, uh, you know, if you're nervous or not, and then, uh, you know, COVID comes along and like, Oh, my, my nervousness tool is also really good for like, diagnosing. If you've got, you know, this, this, this disease that's incredibly difficult to test for. So the fluidity of these solutions, it kind of goes alongside the other thing. Yeah. The portions of various tech industries and other ones who've seen the pandemic moment as an economic opportunity is fascinating, but especially the tech sector, because, you know, it's, they've got that.

Mark Andrejevic (00:42:08) - I don't know, you know, tech solutionism capability built into them or that promise built in, but also because of all the forms of mediation that are associated with managing the pandemic, but with the voice stuff, I, you know, th there was a moment that I, I noticed, that it's, it may extend to the present, but I was really interested in for a, in the, the role that body monitoring systems were taking in promising to bypass the vagaries of discourse. The idea that at, at a time, and this speaks to the question about paranoia that Sean brought up, but, you know, at a time when, uh, forms of, you know, social reconfiguration and the breakdown of social trust in various ways, pushed in the direction of the savvy subject who knows that discourse is manipulative and, um, politicians are lying and everybody could be lying, and so on, the promise of some modicum of authenticity to be got at, through various ways of reading the body. And this was in part my interest in, you know, reality television, because of its, it's got that kind of.

Mark Andrejevic (00:43:22) - Promise of authenticity through forms of contrivance, right. You know, put people in the lab, experiment with them and then extract some, uh, authentic emotion from them. And what's authentic is not the situation, but the, but the reaction, but reality TV really quickly incorporated some of this technology. Uh, and so there was, uh, an MTV dating show. I can't remember what it's called now. Um, I don't know if you ever came across this, but it might be interesting historically for the work that you're doing, where you'd be set up on a blind date. Uh, and you'd be doing an interview to kind of like, it was, it was actually, I think you, there were like three candidates that you would interview. Uh, and then at the end you decide who you wanted to go out on the date with. And the interview would be conducted with a mic, because, you know, it's a TV show, but the mic was connected to a voice stress analyzer in the van.

Mark Andrejevic (00:44:17) - And there would be like the buddies of whoever was doing the interviews and they'd switched the gender roles. You know, it seems to be a woman interviewing three guys or a guy interviewing three women. And then back in the van, they'd analyze the results, you know, and they'd go like, Oh, she's lying now. You know, when asked a particular question, you know, like, Hey, so do you think I'm cute? Oh yeah, you're beautiful. In the back of the van, they'd be like, no, that's a lie. Um, and then before deciding who to go on a date with the Interview, we would go consult with the, you know, the backstage crew, but it had all the, you know, the kind of cinematic vocabulary of the surveillance apparatus, you know, like the people in the panel van with, with the machines, you know, watching the, you know, something from the conversation, um, you know, watching the voice stress.

Mark Andrejevic (00:45:04) - Um, and then there was a, there was a really sadistic one. I wish I could remember what it was called. It was one of these, uh, I can't remember it's called w where you'd be asked a series of questions. And if you answered truthfully all the way to the end, as measured by some combination of biometric devices, it was like, I think it was voice stress, but also, um, you know, um, pulse rate and skin galvanometer, all this stuff that they use. Right. Um, but they'd ask you that, you know, these kind of horrible questions in front of, you know, they'd dig up things about your past and then ask you about things that were potentially very embarrassing. But if you, if you told the truth all the way through, you'd win the million dollar price, but, you know, like, have you ever had an affair there you are with your family in front of you? Um, you know, if you tell the truth, you'll get the million dollars, but if you lie, you'll be found out. So there was, or at least that's what they led you to believe, but it was, it was, uh, you know, it was kind of a bodily torture show, but an interrogation show anyway, but that notion that the body tells the truth, uh, even though the speech lies, but it's not.

James Parker (00:46:16) - So the body tells the truth, the body tells the truth and only the machine can know the truth of the body. Because, you know, there's an intuition that we can sort of tell from somebody's tone of voice or whatever, but it's sort of not verifiable or something. So, you know, obviously you can trace that history back to the stethoscope, uh, and the sort of, you know, mediate auscultation and so on. But the specific paradigm now seems to be that we sort of intuit that the body probably tells the truth, that we might maybe think sometimes, a little bit, that we can discern it, but the machine therefore must definitely be able to discern the truth of the body. And that's just simply an act of faith, a faith that immediately becomes kind of profoundly political as these technologies get taken up and sold in every imaginable context.

Mark Andrejevic (00:47:16) - Yeah, it's true. I've been thinking for a while, it'd be interesting to do a genealogy of the oracle as a way of thinking about the decisions that are made by these automated systems, a kind of non-negotiable decision that's nonetheless actionable, you know, we'll act on it. Um, and as you say, there's a kind of faith-based commitment, um, based on, I suppose, they have all kinds of literature, right, you know, we've done this research, we show this. Um, but it's something, I don't know, I'd have to go back and look through this, but it's going to come to rest at some point on self-report, right. Um, so there becomes a kind of circular element in it. But there was, you know, there was a proliferation of these shows, I can't remember what they're called now. The Tim Roth show where he takes Paul Ekman's, um, micro-expression stuff.

Mark Andrejevic (00:48:18) - And, um, here's the guy who can read the micro-expressions, so he becomes the machine character. Um, he's been trained for years in this, and so somebody speaks and he can see the twitch in their throat or, you know, the micro-expression there. Um, but he becomes the machinic figure. And then there was the Australian actor who was in one of them, The Mentalist, and there was a third show, I can't remember, but they're all about people that were kind of highly trained in body reading and were therefore able to kind of see through the deceptions of discourse. But they were kind of miraculous, machinic figures, right? Impossible figures.

Sean Dockray (00:48:54) - One thing I was thinking also, to connect this, that there's a truth to the body that's not necessarily, you know, perceptible through language, or what you were saying about bypassing the vagaries of discourse, is this thing that we've come across around what James has labeled wake-wordlessness.

James Parker (00:49:15) - Riffing on framelessness. Yeah. Wake-wordlessness.

Sean Dockray (00:49:20) - Which is, uh, up until now the wake word has been this kind of, you know, moment of oral consent, where you sort of speak and you say, like, I'm prepared, I'm ready, I expect and I know that I'm going to be listened to for some period of time. And, you know, this started in this one, um, patent from Apple, wasn't it, for the Apple Watch, that it would begin recording once it had registered some vigorous oscillations, you know, to register whether you're washing your hands enough. Uh, so it's some COVID kind of monitoring, you know, it's wrapped up in all of that stuff. But in a sense, it's the movements of the body, and that becomes a wake word. And you can sort of imagine that the wake word is a kind of necessary thing for people to accept that these devices are going to be with them, but it's temporary, right? That there's this general chipping away and erosion of the wake word, uh, towards different forms of consent than that kind of ... one. And so I guess, you know, some of what you're talking about, I think, could connect to different possibilities for, like, well, your body consented, even if you didn't say anything, your body sort of consented to us to begin recording, or the moment consented, you know, that consent sort of might take all these different forms.

James Parker (00:50:44) - I'll just read out precisely what the vice president of technology at Apple says about this thing. So it says the new feature would automatically detect when a wearer started washing their hands: our approach here is using machine learning models to determine motion which appears to be hand-washing, and then using audio to confirm the sound of running water or squishing soap in your hands; during this, you'll get a little coaching to do a good job. So I think there's also biofeedback, or haptic feedback or something, through the watch, or a sort of aural thing. So, yeah, just as a kind of specification of what Sean was saying there.

Mark Andrejevic (00:51:24) - Yeah, that's fascinating. I mean, I think you're right. The tendency is to, I, this is, this has intrigued me about another book project that I'm working on is, is called the fate of interactivity. Uh, and I'm, I'm interested in, I guess, one of the things that prompted it was thinking about the ways in which interactivity at first invited forms of active participation. And then it reached such a fever pitch of, um, requirements to interact that interactivity had to be automated. Uh, and the automation of interactivity raises some interesting questions about the so-called, you know, the promise of interactivity, which was kind of a participatory one, but this idea that, you know, it's maybe asking, it's asking too much to ask us to interact as much as these systems and devices require. And so they developed strategies for figuring out how to automate interactivity and this kind of reading the body and inferring, well, we know that you're signaling something by this, so we're not gonna rely on the conscious register.

Mark Andrejevic (00:52:35) - Um, we'll just, you know, use your implied bodily, uh, actions as a, as a form of interaction. Um, and that's you telling us, you know, to prompt you to wash your hands correctly, but all of these ways in which interactivity gets automated, I kind of maybe belies some of the rhetoric that underpinned the promise of interactivity, which was, you know, a kind of conscious participatory, um, I don't know, collective form of interaction, but yeah, but I, I think you're right, right. The tendency is just thinking back to those Google patents that fascinate me about the smart speaker and the smart home. There's very little.

Mark Andrejevic (00:53:17) - If any reference to the system being off it's just on all the time. And it's on all the time in terms of, you know, for diagnostic purposes, for example, in order to be able to engage in early forms of diagnosis, it's almost got to be on all the time because it has to capture the full pattern of, of activity. And because it's diagnosis before you know that you have any symptoms, you don't have the ability to say what time counts for symptom monitoring. It's got to have, it's got to be able to infer that itself, but it's not

James Parker (00:53:56) - Just the, you know, the duration of on-ness; it's sort of the quality of on-ness, right? So when the mic is on, even if you've said the wake word, now Siri or whoever or whatever can listen to the body in a way that you weren't expecting. And it's an always expanding sort of litany of ways in which it can listen to and about and around the body, right? So it's the sort of constantly reiterating and expanding, um, paralinguistic analytics or context awareness or what have you. So what it means for a mic to be on is always necessarily sort of beyond what you anticipate and could possibly know or have consented to. It's not just that the wake word is disappearing. It's that the frame of what's being listened to within the frame of the wake word is always changing and expanding. Yeah.

Mark Andrejevic (00:55:04) - Yes. The awareness or the wokeness of the device is expanding in ways that, um, continue to develop, and for many purposes the goal is not necessarily to inform you of how it's listening. And, you know, that connects with maybe this concept of operationalism, which I've been kind of grouping in as one of the tendencies of automation. And that one's a pretty vexed concept, because I receive a fair amount of pushback against it. But, um, you know, what I'm trying to get at with that is the distinction between what you say and what you mean, which is, I mean, that's a distinction that's made in certain psychoanalytic contexts to mean something a little bit different from how I'm using it now. But, you know, Google isn't interested in what you mean. When it scans your email or listens to what you're saying in the house, it's interested in whatever it can glean from what you say, which could be anything from recognizable words to voice tone, to verbal hesitations, uh, to even the change in configuration of the sound of particular words, um, what that says about you in a way that's actionable.

Mark Andrejevic (00:56:36) - And so the meaning... it fits really well with a certain data science paradigm. So the guy Alex Pentland, who's one of the big data scientists at MIT, his big shtick is: don't listen to what people say, look at what they do. And it's not a nice thing to say about people, because, I mean, he's not interested in communicating with people, right. He's interested in analyzing and predicting their behavior. Uh, and for his purposes, what they say, he's not interested in. I mean, I'm sure at a cocktail party he is, right. But what are his, uh, you know, what are his primary interests?

James Parker (00:57:20) - Would you maybe say a little bit about why you would describe this phenomenon you're talking about in terms of operationalism? Um, where's the operation? Could you sort of rewind a bit?

Mark Andrejevic (00:57:32) - Well, yeah, so the backstory for that term is, you know, it's borrowed from Trevor Paglen, who borrowed it from the filmmaker Harun Farocki. Farocki's Eye/Machine series develops his approach to what he calls operative images, which are images that are created to be part of an operation. And this is the series of films where he goes and looks at machines and the images that they generate to provide some type of human interface for what they're doing. So machines like, I can't, you know, things like rivet detectors or, you know, metal utensil strength indicators, you know, at one point they had these screens where you could kind of see the scanning results of.

Mark Andrejevic (00:58:18) - Of the machine. And what interested him was, you know, these weren't meant, as he described it, to be representational, although for the humans looking at them, you know, clearly they did have some representative function, but really they just showed some operation that was taking place in the machine. And Trevor Paglen goes back a little bit later and tries to produce, you know, an update to the Eye/Machine series. And what he claims he finds is that machines don't generate those images anymore, because they're doing things that don't have any really useful visual interface for humans, because the operations are too complex or too multidimensional, or there's no really meaningful visualization. And, you know, he says these images have kind of disappeared into the operation. And that notion of an operational image is an interesting one, an operative one, because it's really not an image anymore.

Mark Andrejevic (00:59:19) - It's the image that obliterates itself. You know, the example that I always think about is the machine vision system that he trains to recognize objects, and he trains it on the classic image of representation, which is, you know, the Magritte, 'Ceci n'est pas une pipe', but he uses the apple one, there's a very similar one, 'Ceci n'est pas une pomme'. And he just shows how the vision machine, he trains it on the image that says this is not an apple, and the machine says, this is an apple. And what he's getting at is the reflexive space of representation. Um, it was kind of raising the question, you know, the space that's missing from the machine is the reflexive operation, operation may not be the word I want to use here, the reflexive recognition of the representational character of the image, which is to say the recognition of the non-identity between image and, you know, indexical referent. The image is, for all practical purposes, the thing; that's what it acts on, uh, and the idea of some kind of space, um, between that. I mean, I shouldn't say that's what it acts on; that's what it acts upon as an input, right, if that makes sense. So what I mean by that operational approach is the kind of lack of that reflexive moment around the shortcomings of representation. Sorry.

James Parker (01:00:54) - So when you say reflexive, do you mean something approaching cognition or thought? Or are you trying to bypass that problematic, you know, which sort of sweeps up so much conversation, you know, is the AI really an intelligence, whatever. But it seems like when we're talking about representation and reflexivity, we're sort of unavoidably getting at that question. So, is the vision a vision, and would operational vision be vision without sort of a reflexive understanding? When you say reflexivity, is that where that's going?

Mark Andrejevic (01:01:33) - Yes. I mean, yes, it is, right. It's raising, I think it raises the really big question, which is why it's a really vexed set of claims. But it's that big question of, again, because of the kind of psychoanalytic framework, I tend to frame it in terms of desire. Um, but it has to do with, um, yeah, notions of what it might mean to think of the machine's conception of the meaning of the image. There is, you know, imagining that there's a gap, um, a gap constitutive of representation, where the representation is always non-identical with what it represents. Um, and how does one locate that space of representation? I suppose if you take that apple image, right, I suppose you could train the machine, I don't see why this would be technically impossible, to recognize the representation of the apple versus an apple. So you could probably train Trevor Paglen's vision machine to say that's a picture of an apple, you know, you'd have to develop whatever the capacity is for it to recognize, I guess, two-dimensional versus three-dimensional or something like this. But could you train it to recognize, you know, the difference between, you know, a three-dimensional wax apple and an apple, uh, and, you know, how would you.

Mark Andrejevic (01:03:14) - Recognize that difference? How would a human do it? I'd want to take a bite out of it, or, you know, I'm sure the machine could insert a probe, right. But in the end, what you're really getting at is: why do you want to know what an apple is? Why does it matter? What is an apple to you? Um, and why does it matter to represent an apple? And when you're doing it, who are you doing it for? But those are the questions to me that get to that constitution of, you know, the relationship between desire and subjectivity.

James Parker (01:03:45) - So, operational because the machine has no ability or interest or capacity to even inquire about that. It's simply a matter of receiving the data and then producing a kind of output, which is the declaration 'apple'. Yes. Yeah. So how would this work? You've talked about operational listening; could you walk through the steps of what operational listening would mean?

Mark Andrejevic (01:04:12) - Oh yeah, I guess by operational listening I was thinking about the ways in which, um, I mean, I've got to reconstruct what was at stake in the whole operationalism thing, I spent so much time thinking about operationalism. I think, to get at that question of what operationalism is, is really to imagine the difference between a form of sense datum that kind of results in an automated outcome, versus a form of sense datum that's the object of particular types of, I dunno, reflection or contestation or deliberation. Um, but, you know, operational listening, I guess what I mean by that is thinking about how captured information enters into a chain of automated responses. And, you know, one of the things that you pointed to was this formulation I use of the cascading logic of automation.

Mark Andrejevic (01:05:09) - And what I mean by that is, um, once you start developing systems for automated data collection, you generate so much data that you need to process it in an automated way. And once you start processing in automated ways, then you work almost automatically towards some type of automated response. And so, you know, if you imagine forms of listening that get implicated in this type of cascading logic of automation, um, the ways in which they get incorporated into particular types of responses. So I guess I was thinking even of these trivial applications, like the car that listens to you, um, and decides whether you're sounding stressed and then, you know, modulates the controls accordingly, or, you know, feeds you music that's meant to de-stress you. Uh, but that is responding in automated ways to the automated data that it collects about you.

Mark Andrejevic (01:06:13) - And, you know, I think one of the elements of operationalism, uh, or I don't know if it's an element, but one of the connections that I tend to keep thinking about when I look at operationalism, is its relation to preemption. And I guess another way to get at the character of operationalism is to juxtapose it to representation, right? So that's the kind of opposition I'm interested in: an image that represents, you know, an image that says something or shows something, or, um, a sound that indicates, you know, that carries meaning, uh, versus, um, a sound that does something, um, or an image that does something. And to give you a somewhat extended example, one of the areas that I work in is surveillance studies, kind of broadly construed, and the general dominant paradigm in surveillance studies for quite a while.

Mark Andrejevic (01:07:18) - It's been, I think, a representational paradigm, one in which the standard Foucault panopticon model functions. And it's a very representational model in the sense that it relies on the internalization of symbolic meanings. So the standard operation of kind of panoptic logic is, you know, that you could be being watched at any time, so you internalize the imperatives of the watchers, presumably because there's some potential sanction if you don't. So that form of surveillance relies on representation, um, as is indicated by, you know, in the malls, those domed things in the ceiling, and you can just buy those domes and stick them up, right, and they can serve as symbols of surveillance. So you get those signs, smile, you're on camera, right? The symbolic reminder that you could be being watched. And the panopticon itself, right, Bentham conceived of it as also symbolic, it was a spectacle: all the surrounding folks were supposed to come and look at the powerful apparatus. Uh, and in that sense, it, you know, it operated as a kind of symbol of the power of.

Mark Andrejevic (01:08:29) - The surveillance operation. Uh, but what's interesting to me is the way in which certain forms of surveillance no longer rely on that symbolic logic. It doesn't care whether, you know, you know you're being watched, or whether you internalize, in a disciplinary sense, the imperatives of the watchers, right? Because the paradigm is actually: maybe we don't even want you to know or think that you're being watched all the time. We don't want you to change your behavior in the way that the disciplinary model of surveillance would, because we need to know what your actual behavior is, actual in quotes, right, your non-disciplined behavior, in order to find out the truth about what your future action is. And then we control you not by, uh, getting you to internalize things. So that was a very Bentham, utilitarian idea, right? How can you do it with the least harm?

Mark Andrejevic (01:09:22) - Right. You know, if people actually internalize this, then they'll all behave, and then maybe you won't even have to punish them after the fact: least harm, most good, you know, utilitarian logics. Now the logic is quite different, right? We watch you and you don't have to internalize anything. You don't need to have any relation to the symbolic, but we know what you will do in advance, and we will stop you before you do it. So if there was a kind of hands-off logic to disciplinary surveillance, there's very much a hands-on logic to what might be called, I don't know, post-panoptic surveillance. And I can give you a very concrete example. Peter Thiel has funded this company called Athena that's meant to address the problem of school shootings in U.S. schools, because, you know, on a legislative level, they're not able to address that.

Mark Andrejevic (01:10:13) - And it uses these cameras equipped with machine learning to be able to detect suspicious behavior and to act preemptively in response to it. So if somebody is not behaving right, the school doors get locked down, the authorities get called, but the idea is: stop the action that's going to happen before it can happen. And the marketing literature for that, one of the things that it claims to be able to do is to detect when somebody's going to throw, when somebody, after they've started throwing a punch but before it lands, they can identify that a punch is being thrown. And that space is a really interesting space, I think, because it's a space of automated intervention, right? Like there's nothing that humans can do in that space. Once they're notified by a machine, it's all over. But it creates a space for machinic intervention.

Mark Andrejevic (01:11:04) - And I hope that makes sense as a way to think about the difference between representation and operation. In one case, representation functions in a symbolic way to instill a kind of behavioral response on the part of a conscious subject. In the second, you don't need the conscious subject or the symbolic representation. So you don't need either side of the symbolic or representational relationship. All you need is an input that's collected, that enters into a machinic operation, that then intervenes before something can happen. And it's the difference between what you might call deterrence and preemption. And deterrence is a logic of stasis, right? You know, I know I'm being watched, I better not do this, nothing happens, the crime doesn't happen, the intervention doesn't happen. Whereas preemption is a logic of ongoing, constant, active intervention.

Mark Andrejevic (01:12:05) - Stop this here, stop that there, arrest this person, shoot that person, right? So one is really, in a sense, asymmetric and super active. Um, and the other is a little bit more symmetrical and static. Uh, and you can see the drone logic in there, right? It was the figure of the drone that got me thinking about that, the difference between kind of cold war stasis and drone hot war. Um, with the drones we can go in, we can preempt, you know, find out who the potential risks are using data profiling, take them out before they do anything, because the claim in this logic is that the opponent is not amenable to the symbolic or representational intervention in the way that the kind of major powers of the cold war were, right, in the kind of active version.

Sean Dockray (01:12:55) - Could you talk a little bit about the kind of recursive dimension, or maybe, in other terms, the feedback, to automated intervention? To prevent something from happening, some other measure, you know, happens that reshapes the kind of situation, uh, so that the one thing doesn't happen, so it chooses another path, but then, you know, that creates a new ground on which, you know, future actions might happen. I guess I was just thinking of that as a kind of recursive element.

Mark Andrejevic (01:13:35) - And this is where I think the drone stuff comes in, right, because the somewhat terrifying logic is a rampant acceleration as a feedback effect of what's being described. Right? So in the absence of what's made possible through the process of representation, what's substituted for representation is preemption, but preemption has to be, I think, increasingly generalized, right? So you imagine the presupposition of preemption is a kind of ongoing process of, I don't know, social breakdown, which requires ongoing preemption and, in the end, total preemption. Um, the drone logic, of course, right? This is the critique of drone warfare: if you go in and you preempt, you're actually engaged in a process of asymmetrical warfare that doesn't even look like warfare, there's no battlefield.

Mark Andrejevic (01:14:37) - There are just people who've been, you know, statistically or for various profiling reasons considered to be potential future actors, who are taken out. Uh, and then the result, of course, is ongoing resistance, which in turn feeds ongoing preemption. And the same might be said for once you automate this process of preemption: you kind of presuppose the breakdown of the symbolic, which then seems to feed into, okay, if there is no symbolic, then in a sense all things go, which means that whoever's in the position of authority has to engage in kind of escalating forms of monitoring and surveillance. Ben Anderson, who writes about drones, has a really nice phrase. He says, you know, if risk is everywhere, then surveillance has to be throughout life, without limit. That gets to framelessness, as we discussed earlier. But if you attach to that the element of preemption, it also gets to kind of preemption, um, throughout life, without limit. And there's a weird death drive in there, right. You know, somehow, as your antennae get more sensitive to risk through various sensor apparatuses, the fantasy is that risk can be managed more effectively this way, but higher sensitivity detects more and more risks, which demands more and more preemption. And eventually you come to the conclusion that, you know, life is risk. Uh, and how do you preempt that?

James Parker (01:16:06) - Elimination. So I've got a kind of a dumb question in mind, um, which is basically: what's the problem with automation? Or what's the problem with whatever synonym for surveillance capitalism or tech, you know, the technological present, that we want to use. In other words, what's the problem, is the dumb question. The reason I'm asking that question is because I guess a lot of the literature that I've been reading recently, um, you know, Zuboff's book, um, that book on data colonialism, uh, and, you know, most of the things you read, they end up with the problem being autonomy, even if they don't think they're being liberals. The problem is basically that, you know, whether it's preemption or, you know, these sort of recursive interventions in your life, where you're sort of in their cybernetic loop with the machine systems or what have you, um, or just simply kind of advertising and shaping how you're going to vote or what you're going to buy.

James Parker (01:17:26) - And so on. They always seem to end up with autonomy as the problem. And I've just got a feeling, a suspicion, politically speaking, that that isn't enough, that we're not going to overcome it, that's not enough to, you know, break up Facebook or to break up the systems that are in place here. And then, at the same time, I think, well, the sort of other main alternative critique is effectively capital. So you basically say, um, well, Jeff Bezos has earned, sorry, precisely not earned, has somehow increased his personal wealth by $76 billion since the start of the pandemic. And, you know, we have unprecedented, um, you know, I think Apple just reached a $2 trillion valuation, you know, we have insane levels of inequality and corporate power, and it's effectively extractive technologies and their kind of present paradigm that's allowed that to happen. But that critique doesn't really seem to get at the kind of epistemic, you know, the kind of real paradigm shifts which do seem to have taken place in what it means to know, and what it means for technical systems to know and to act in the world, the sort of forms of power that extend beyond just capital accumulation and corporate power and so on. So I just don't find either of those two critiques that satisfactory. I mean, obviously you could combine them together, but just thinking in terms of, like, how do we frame the problem in such a way that it's going to get some political purchase? And I feel like neither the autonomy critique nor the critique of capital on their own are really going to do it.

Mark Andrejevic (01:19:30) - Huh. It's a big, yeah, it's a big, big problem and a big question. I mean, I don't know, maybe there's a way to think, when you put those two together, I suppose there is a configuration in which they're really two sides of the same position, if one critiques capital also from that position of autonomy. I think where the work that I've been doing, you know, the direction that it points in, is really about a concern about the fate of the political, uh, and the fate of sociality, and probably wrapped up in that the fate of collectivity. So I don't know, you know, whether or not that's going to have political purchase; it's always a wager, the way politics is. Um, but yeah, I agree with you about the autonomy critique. I mean, again, to me it comes to rest all too heavily on the notion of a kind of hermetic subject, which, as we've discussed earlier, is incoherent, I think, politically, um, and leads to political incoherence.

Mark Andrejevic (01:20:40) - And, you know, the moment that we're in, when I look at the US these days, um, this is one of the places where I worry about political incoherence, right? You know, the institutions and practices that made possible a kind of, you know, a sense of political action and activity have dissolved to the point that politics has become, I don't know, it's become something that's inseparable from conspiracy. And as soon as you start to meddle in it, you find yourself caught up in those logics. So what does that mean in concrete terms? Ha, I don't know. You know, one of the things that I've mentioned before is that I'm really concerned about the deployment of automation in the contemporary context, so maybe I could line up what some elements of the critique of automation would be. Certainly one of them, to me, would be, um, the backgrounding and misrecognition of the social.

Mark Andrejevic (01:21:40) - So very often the decision-making processes that come out of automated systems are displacing or replacing processes that, through participation in them, reinforced the sense of the political and, uh, the sense of the social. Um, so just to give, again, a kind of silly concrete example, there was that guy at MIT whose stuff I like to trot out every now and then, Cesar Hidalgo. He's not there anymore, but, um, he was trying to imagine a solution to the political problem of information, which in, you know, various forms of social theory, um, is nothing new, but the idea is that in a democratic society, um, it's just hard to be informed enough about the issues to participate meaningfully, uh, and, you know, there are ongoing debates over what that means. But his solution was, unsurprisingly in the current context, get a bot that figures out for you what your political commitments and concerns are.

Mark Andrejevic (01:22:41) - Um, that bot has the capacity to go through all the available information, well, probably not all, but, you know, whatever it decides is the credible, meaningful information, and then decide for you who you should vote for, and then maybe eventually just vote for you. And that, to me, is misreading what the problem is, right? The problem has to do with what it means to engage politically and exercise a process of collective judgment. Um, and the idea that a machine could do that for you better than you could completely misses the point. It's the wrong solution to whatever it perceives to be the problem. I don't know, some of the other things that I've thought about when it comes to automation, um, and this really does get to kind of, you know, power imbalances, um, that seem worth.

Mark Andrejevic (01:23:31) - Thinking about, and I'm not really sure what to do with this problem, but to the extent that automated systems can generate knowledge that's actionable but not explicable, that complicates a particular paradigm which imagines that explanations should be subject to forms of transparency that make them comprehensible. But let's just take it as an assumption for the moment that it's actually true in some cases that these systems can generate this type of actionable knowledge. Then you kind of have an asymmetry over who has access to that knowledge and is able to ask the questions and has the apparatus to turn the system towards asking those questions. And that knowledge looks fundamentally, you know, if you're going to call that knowledge, in quotes, it's non-sharable, right, in ways that other forms of knowledge are supposedly, or meant to be, or understood to be shareable.

Mark Andrejevic (01:24:27) - And what does it mean to have that form of knowledge monopolized by particular groups? I mean, I think in that sense, the concern about who has ownership and control over the apparatuses for collecting the data and querying it and generating insights from it seems to me to be a huge political question. I don't know what the answer is, but I think that for that type of knowledge to be concentrated in the hands of a few very powerful, basically unaccountable commercial organizations is incompatible with, you know, forms of democratic civic life. That might mean that those forms of civic life are on the verge of extinction. But, you know, I'm beholden to them. So, I don't know. In practical political terms, I think that means challenging, you know, the ownership and control over those resources, and also challenging the version of the economy that we've created that runs on that.

Mark Andrejevic (01:25:34) - I don't know if that gets to, you know, how do you generate... I mean, in a sense, the thing that I keep bumping up against is that the processes that I'm talking about need to be contested by the practices that they threaten, obviously, but the resources to contest them are increasingly undermined by their ongoing exercise. I'm not sure of the way out of that. The book ends with, um, you know, when you write a critical book, you've got that moment at the end, right, there you go, but there's hope. Uh, and, you know, because why would you write a critical book if there were no hope, right? Why would you spend all your time doing that? But it kind of ends with that gesture of there's always hope, right, because there's always history, until there's not. But it was hard for me to conceive of it other than as a kind of rebuilding on ruins. You know, the ruins, look, I don't know, at this moment it looks kind of inevitable, going through that process. So, yeah, I don't know.

James Parker (01:26:38) - I was gonna ask, in that case, the question that I keep coming back to, which is: is the politics of automation, or should the politics of automation, effectively be abolitionist? So, of course, in the context of machine listening, there are, as you said, applications and use cases that are valuable and hard to take issue with, but they just seem to be so dwarfed by a sort of seemingly unstoppable logic, which will systematically use those use cases as its wedge, for starters, and just overwhelm them. In any case, I keep finding myself pushing up against a kind of abolitionism, in that I just can't imagine a world of listening machines, beginning from anywhere close to where we currently are, that isn't a dystopia. And so it seems to me that it might just be best to try to say let's smash the whole thing, but, I mean, maybe that's just, um,

Mark Andrejevic (01:28:02) - Um, yeah, I mean, I guess the framework that I've been thinking about is one in which it's, you know, it's the relationship between the technology and the social context and, um, there's no, there's no way to meaningfully change the deployment. I shouldn't put it that strongly. It's going to be difficult to meaningfully change the deployment of the technology without significantly transforming the social context.

Mark Andrejevic (01:28:32) - So there's got to be a change in the way, um, you know, power is allocated and controlled and reproduced, if we're going to imagine some kind of change in the way in which these automated technologies service power. Um, and, you know, failing that, it's just hard to see how these technologies don't continue to consolidate and concentrate existing forms of power. Uh, which, I mean, I guess I'm not against, I mean, it would be interesting to try a series of, uh, you know, if it were possible within the existing social context... One of the things I worry about a lot, because I'm, you know, mostly in media studies, is the role that the media play in all of this, in kind of the degradation of the social, the tendencies that I'm thinking about. And here, when I talk about media, I'm meaning it a little bit more narrowly, um, in terms of, you know, what we think of as mass media and social media, communications media.

Mark Andrejevic (01:29:39) - And, you know, I think one of the real pathologies of the contemporary media system is the way in which automation exacerbates the hyper-commercialization with which it's continuous. So whenever I see, I don't know, the Fairfax press ripping on social media, I think, you know, it's great for them to have a certain kind of safe scapegoat, but, okay, social media is continuous with the development of the commercial media; they feed on the same logics and they feed back into each other. Um, but what if you imagined a different model for the media, you know, a kind of public service social media platform, um, one that didn't rely on the forms of data collection and hyper-targeting, ones that didn't algorithmically privilege, you know, engagement and provocation over, you know, accuracy and deliberation. I'd be game for trying something like that.

Mark Andrejevic (01:30:41) - Um, you know, that would require a kind of wholesale reinventing of the economic model that we use to support our communication systems. But that seems not inconceivable, even within the current political arrangements. I mean, very difficult, yeah, nearly impossible, but maybe not inconceivable. It does seem possible, if you reach a point where political dysfunction has, um, galvanized people to respond in ways that require recovering the ability to function politically in meaningful ways, that you could take those types of actions. Uh, you know, abolition, I mean, I agree with you, abolition looks like... I'm really tempted sometimes just to think automation is violence and therefore, um, you know, you get an abolitionist stance. Um, but, you know, I do have a commitment to the idea that automation can function differently in different social contexts. And abolition just looks impossible as a political stance. It looks less impossible than fundamental transformation of social relations.

James Parker (01:32:00) - I think those are

Mark Andrejevic (01:32:01) - Two rocks,

Sean Dockray (01:32:04) - Uh, there's a Fred Moten quote which, applied to automation, would be like: it's not about the abolition of automation so much as of a society that makes automation possible, right? So I think the abolitionist stance is a very, very sensible one in that sense, but typically the boundaries, the parameters, the horizon of abolition is set on particular problems, like this device that's listening to us, let's abolish listening devices. And yeah, I really agree that if those are your horizons, then they're going to fall hopelessly short. Uh, so I'm quite committed to abolition as a, as a, okay, in a certain sense,

Mark Andrejevic (01:32:52) - Abolition of the society that makes automation possible.

Sean Dockray (01:32:56) - I think that's a perfectly reasonable horizon, uh, but in the interim, um, you know, I always feel sort of yanked in two directions, which is, like, to maintain a certain horizon, a certain, yeah, that I honestly don't have a huge amount of hope of seeing in my lifetime. And then, alternatively, like, you know, what do we do in the interim? And those kinds of things would, I guess, take some very non-abolitionist forms, like political engagement and, um, you know, arguing for certain regulatory powers, all this kind of stuff. Um, so I often find it really hard to engage in this kind of conversation, just because I feel so bifurcated, you know, from the get-go. But, um, yeah, and I would also agree with what you were saying, Mark, about, like, if this kind of profit-driven, exploitative society ever does come to the point of abolition, it's not going to be a pretty process. Um, it will take us through some pretty dark and difficult places, collectively as a world, and some more than others. And, you know, I think in that sense some of the work we can do now is sort of building structures to recognize those moments when things are falling apart, to activate a new and better set of relations between each other, a better world.

Sean Dockray (01:34:50) - You know, I feel like the people who are most prepared to capitalize on things falling apart are precisely the wrong people. They're, like, the ones who are all jumping into the pandemic with lots and lots of answers. Um, but yeah, there was one thing, it's a real huge change of subject, it goes back to the paranoia question. One thing I wanted to talk about was, um, throughout the entire project, whenever we're talking to people, whenever I'm thinking about machine listening, however I approach the subject, I just feel really paranoid when I start having conversations about it. Like something about participating in a conversation around machine listening instantly makes me feel like a paranoid subject. Like, I don't know, I'm fantasizing about this overwhelming power of this thing I don't really know and understand, and I'm connecting so many dots between invisible players, and, um, I think James can describe it more articulately than that. But, um, I guess I just want to return to the paranoia discussion a little bit, particularly in terms of how do we even think and talk about machine listening without becoming paranoid lunatics? Or, if we're going to have that subject position, then what do we do with it? How do we talk about machine listening, I guess?

Mark Andrejevic (01:36:14) - Huh. Um, I mean, my experience of talking about these forms of monitoring is that, from relatively early on, I was considered a kind of, you know, paranoiac. Um, you know, who was it, Hofstadter? Was it the paranoid style in American politics? Uh, early on a friend at the University of Iowa said, you know, you practice the paranoid style in academic discourse. And, um, you know, that feeling... I wrote stuff early on that was, to my mind, really dystopian and really extrapolated from the logics that I was seeing. When I look back at that stuff, for which I was, you know, I felt kind of disciplined in academic contexts, like, well, you're dystopian, you know, there are those tech people who are dystopian or utopian, but it's more complicated than that, and you're really on the dystopian side. I look back at those things, and what I predicted was actually relatively tame compared to what happened.

Mark Andrejevic (01:37:16) - So, you know, those moments when I thought, like, man, should I put this in? This sounds really crazy. And I'd go, okay, I'll roll that one back. And then 10 years later, it's like, I should've put that in, because, you know, that's what happened. Um, at least if I look back on that relatively short period of time, it's really hard to overestimate the dystopian potential of these technologies. And when you're told that you're overestimating it, very often the people that tell you that are incorrect. Uh, so what does that mean for paranoia? Um, the danger is, uh, and this is the one that really freaked me out, relatively early on when I was talking about this stuff, I was doing some lectures in Europe, in Slovenia, uh, and the students, after listening to me very nicely, it was like a compact series of lectures, like for hours, brought me, um, this series of movies. I can't remember what they're called now. Uh, they were pretty big then, and they said, we think you'd be interested in this. And they were like the equivalent of what QAnon is now, this conspiracy theory mashup that just pulled together everything: Illuminati, Rothschilds, um, you know, 9/11, currency, everything. Uh, and so I started to watch it, and then I realized, wait, this is just conspiracy theory mash. And I asked them, why did you give me this? And they said, well, it just sounded like what you were saying. And the line between critique, as I understood it, and conspiracy was completely non-existent for them. I thought I was saying something completely different from what this movie was saying. They thought it was the same stuff. And, uh, that inability to distinguish between conspiracy theory and critique looks to me to be like the impasse of our moment, you know, because I started to see why they thought that. You know, I imagined that what distinguished what I was doing from conspiracy theory is that, um, my stuff was, you know, evidence-based and therefore refutable, you know, potentially refutable, whereas conspiracy theories, you know...

Mark Andrejevic (01:39:34) - But, um, but that inability is something that seems to me to be symptomatic of the breakdown of, you know, the, the kind of institutions for thinking and evidence giving and, um, verification that we relied on from a social perspective to adjudicate between those two things between conspiracy theory and critique, when those institutions break down, when they're no longer functional, um, you can't have recourse to something like, well, yeah, mine's true. And yours is crazy. It just doesn't mean anything. Um, what means something is the institutions that are, and the practices and the, uh, and the shared, um, dispositions that allow you to make those kinds of claims. And when those are gone, the two become indistinguishable. Uh, and, and, and so I guess what I'm trying to say is I think that feeling of paranoia that you have is less maybe a function of, um, your own subjective disposition than of the conditions under which you're trying to make that argument,

James Parker (01:40:40) - Just to give a concrete example of that, I mean, I don't know if it's really a problem of scale. Uh, you know, when you say conditions under which, I can't remember the exact turn of phrase, but: you go on, you find your way onto the website of a company that does machine listening, and you'd never heard of them before, and it turns out that you can quickly find that they got X million dollars in venture capital funding only six months ago. Uh, and then you go on the partners section of their website, and it turns out that there are 15 other partners, um, several of which are connected to universities, um, and you think, you read an article by one of the researchers, who seemed a relatively benign, uh, you know, academic, but it turns out that they, and one of the other companies that they're related to, are in fact funded by the Israeli military. And then, and it's not very hard for you to spend, uh, your evenings going down these black holes, um, of true and real networks of funding, research, you know, capital flows, um, government contracts to deliver COVID voice detection tools.

James Parker (01:42:04) - I mean, that's what happened yesterday, basically. I found an article saying that the Mumbai government was going to be able to deliver a COVID voice detection app soon. And I was like, what, how? And then it turned out that it was this US company, and, you know, then I followed the threads, right. That experience is sort of, in terms of the kind of, almost like the gamer experience of QAnon fandom, it's really similar structurally, it's really similar. I went on the internet, I found a link to a thing, and when I got to the thing, I was like, oh my God, this thing's crazy. And then I get linked to another thing, and, you know, you're drawing connections, and, you know, to me that's research. But you only have to change a couple of the nodes in the network that I was mapping for myself, and suddenly the question you're following is, you're certain that Bill Gates is going to inject nanobots in a, you know, in a vaccine. And, um, so it's almost like the structure is partly entailed by, you know, internet and hypertext reality or something, at scale, or by sort of trying to draw links between flows of funding and research. It seems like there's.

James Parker (01:43:33) - Yeah. Part of my experience, my paranoid experience, in researching machine listening is an experience of web surfing which I imagine to be really, really similar to what QAnon folks are doing every night.

Mark Andrejevic (01:43:48) - Yeah. You point out something which I think is really interesting about the internet, right? One is the structural, you know, the web structure that you invoke, the kind of chain of connections. Uh, but the other is, uh, I don't know if this is right, but, um, the world is a really messed up place in terms of power connections, and, you know, folks who we think of as being in one position actually have a whole range of subterranean connections. All that is true and real. Prior to the internet, it was so much harder to extract and find, um, that we had this kind of potentially useful symbolic fiction, where we could imagine things were better and people were better and structures were better than they were. And, you know, the truth is they're not. And how do you deal with the reality of that truth, which, you know, once you see those kinds of relationships between power, money, um, I don't know, research, implementation, once you trace all of those... If to some extent there was a functional, symbolically efficient fiction, uh, that allowed us to imagine that we're better than we are, and to behave in ways based on that, um, the internet breaks that down and means that you actually have to then confront something different. You know, the conspiracy theory response is that it's kind of intractable and ungovernable, um, and it looks intractable and ungovernable, right?

Mark Andrejevic (01:45:32) - Because, you know, you call it a rabbit hole, and, um, how can you possibly cognitively map that set of relations? It's too much. Um, and I suppose that's the concession of the conspiracy theorists, right? If it's too much, then in a sense you can just choose the story that works for you, and that way you give yourself a kind of filter through which to make sense of all of these things. But I think that's one of the aspects of the, I don't know, social transformations that seem to be really connected to the media environment, which is precisely that many of these connections are true, and that, you know, many of the disillusioning realities of the world are foregrounded in ways that, you know, were papered over in other ways before. Yeah. So, I mean, I don't know, I'm really committed to the difference between critique and conspiracy theory, but I don't quite know what to do with the fact that the chain of connections that you're talking about is so large as to be probably not fully mappable or governable.

James Parker (01:46:42) - I mean, we have to let you go, but we were speaking to Vladan Joler, um, recently, about the Anatomy of an AI System project, where he had literally attempted to map, uh, you know, an AI system, this is the one with Kate Crawford, and it looks really similar to the conspiracy theory maps that you see going around on the internet. And he said, you know, one of the problems is that these are pretty much not mappable anymore. Well, of course, of course they were never mappable, but the technical difficulty of doing it, even for people as well resourced as those two, and as smart as them, it's just not possible. So the mapping exercise then assumes a slightly different kind of role; the mapping is a kind of symbolic thing, it's meant to point you in some directions, but it's not, it's not true. I mean, well, there's no such thing as truth anyway. I actually should probably wrap up now, because I think I'm barking down, uh, uh... I can't even speak, is the point.

Mark Andrejevic (01:47:54) - I don't think we should worry too much about the fact that it's formally or structurally similar to the conspiracy theory stuff, since that's the whole point: conspiracy theories adopt appropriate forms of representation, like maps, or like proper institutions, you know, they'll adopt forms of respectability and all that. So I don't think we should run away from it because it's structurally similar. That's sort of, yeah, that's the point.

+ 132
- 0
content/transcript/lawrence.md Voir le fichier

@@ -0,0 +1,132 @@
---
title: "Halcyon Lawrence"
status: "Auto-transcribed by reduct.video with minor edits by James Parker"
---

James Parker (00:00:00) - Well, thanks so much Halcyon. Could you begin by just telling us a little bit about yourself and your work?

Halcyon Lawrence (00:00:08) - Sure. So, well, my name is Halcyon Lawrence, and I have a PhD in technical communication and information design, and I'm constantly trying to figure out what all of that means. My formal schooling was done both in Trinidad and Tobago, where I am from, as well as in the US, where I did a master's and a PhD. I've always taught in STEM, whether it's in engineering or computer science; this is sort of the first time that I've been teaching in the humanities, so that's been an interesting transition. My work specifically, and the work that we're talking about, is that I do research into speech technology use. I'm very interested in the way that we don't often think about design: that speech and sound is a medium that can be designed, that needs to be designed carefully for a specific audience and a set of users.

Halcyon Lawrence (00:01:23) - And I found that tension coming out of my PhD program, because the field of technical communication and information design has developed this wonderfully rich set of standards and guidelines for the creation and deployment of written text and visual text for users. And yet when I look around, I don't see any commensurate work done in the area of speech and sound. And yet if you went to the airport today, you'd still have difficulty hearing what's being said on the PA system, or if you're on the train and the announcement comes on, it's unintelligible. Many of the devices that we use that employ speech or sound are problematic, and no standards for design exist. So I'm interested, very broadly, in how we can start thinking about the design of speech and sound.

Halcyon Lawrence (00:02:34) - I'm specifically interested in how voice assistance devices like Siri and Alexa, what I term disciplinary devices, refuse to engage with everyone. Although I am, for example, a native speaker of English, these devices have a lot of difficulty understanding me, and unless I make changes to the way that I speak, I don't get to engage with them the way others might. So that's sort of the more specific research that I do, but more generally, I'm very interested in how we think about the design of sound as an information medium.

James Parker (00:03:16) - That was a fantastic introduction, thank you. You know, just the idea that sound needs to be designed, that speech and voice interfaces need to be designed: it's such a provocative way already to think about, or framing for thinking about, voice assistants. I don't think I've ever heard anybody put it that way before.

Halcyon Lawrence (00:03:39) - And that came out of a personal experience. You've probably heard me share it at some of my talks, but because I was living in the US and my parents were back in Trinidad, my parents were handling sort of my financial responsibilities for me. I got this panicked call one day from my mother who said, your card, you know, I got a call. It was one of those automated voice assistants, it was from the bank, and it said that you had eight transactions on your account. And I was like, surely I've been hacked, because I wasn't using my card. And so I went through the process of having to make an overseas call, logged into the system, and heard the same announcement that she had. And what I heard was 'a transaction', a single transaction. It was not a Trinidadian voice. It was very clear to me at that stage, I had been doing a lot of work in linguistics, and I understood why my mother would misunderstand 'a transaction'. But it occurred to me, it was one of those aha moments, that that entire message could have been designed differently to avoid any confusion. 'One' would have been an appropriate choice instead of 'a transaction', for example.

Halcyon Lawrence (00:05:03) - And so when I thought about all of the inconveniences that went along with it, it was clear to me that this is not something that we're thinking about, and that very often the concern that we have about the design of speech technologies is about whether it works or not. And we aren't asking the questions about for whom: who gets to participate, who gets left out, who gets represented, who gets heard, whose voices don't, and the kinds of inconveniences that people who are acting on the margins of these devices experience.

Sean Dockray (00:05:45) - Do you get the feeling that part of the reason that's given for not thinking about the design of these very fully, in the way that you've described, is that the technologists are just sort of saying, it's so hard to even get it to work in the first place, why are you making me think about all these other things?

Halcyon Lawrence (00:06:06) - I can't say, Sean! I mean, yes, it may be more complicated than that, but I remember, so I did my postdoc at Georgia Tech in Atlanta, and for three years I taught a capstone course. It was a co-taught course: I was teaching tech comm, and I had a computer science colleague in the classroom with me. I don't remember what the subject matter was that day, but speech technologies came up, and my colleague, who had been teaching for years in computer science, said, well, but we've got speech technology figured out. And I knew what he meant was that the technology works, and there is something to be acknowledged in the fact that we have devices that speak. But I think that often gets conflated with being revolutionary, as if it somehow signals this major, major change in the way that we interact with technology.

Halcyon Lawrence (00:07:21) - But I don't know if you've come across the work of Mar Hicks, a historian of science and technology. She says something that I use almost as a way of asking myself about technology every time I encounter it: if a technology reinscribes bias, it cannot be revolutionary. And so to simply say the technology works is not enough. For whom is it working, and for whom is it not? Who is on the fringes? Those questions have to be answered. And I have seen time and time again, in my own experience in the classroom with the next generation of Google developers and Amazon developers, that there's a lack of willingness to engage with those questions, and that the fact that these devices work is often seen as sufficient. And that in itself, I think, is very problematic.

James Parker (00:08:23) - Can you give us an example of a specific form of interaction with a voice user interface, or an assistant, that is problematic in the way that you describe, and perhaps speak to some of the specific political and technical dimensions of that?

Halcyon Lawrence (00:08:45) - Well, I mean, I think about a couple of years ago, this is probably maybe five, six years ago now: I was in Trinidad, and I was listening to a friend interact with Siri. She asked for some information, and Siri just gave sort of the standard non-response. And I think my friend sort of just caught herself, and she repeated the command using an American accent, and Siri came to life. And I was flabbergasted, because I couldn't pull off an American accent if I wanted to. It was one of those shocking moments for me, because there's so much tied up in identity and accent and language. And it was one of those moments when, if there had been a dialogue between Siri and that user, it would have been: I'm not going to respond to you until you speak to me in the way that I have been designed.

Halcyon Lawrence (00:09:55) - And there's no negotiating that happens in those interactions. And so there's a politics there. I think, one, for me, it was sort of sitting down in Trinidad and realizing that this was a new form of imperialism. This was an export of cultural standards in the form of a device. And having grown up in and inherited a colonial experience, the issue of language and language discipline is a real one for me. You know, Trinidad and Tobago has gone through Spanish occupation, French occupation, British occupation, right up until the time of independence. And so you go through my island and you see the markers of the languages of colonizers. And so to have this single device do all of that work for you in such a short interaction was just absolutely shocking.

Halcyon Lawrence (00:11:07) - And when I say I'm concerned, I'm concerned about the way in which those of us who don't get recognized by these devices are robbed of identity in trying to interact with them, and that they replicate the same kinds of subtle biases that exist in society with regard to language and accents. The story that I tell is that in Trinidad, and I think the term might be used in other islands in the Caribbean as well, we have a term called 'freshwater Yankee'. The term is derogatory, in that you refer to anybody who sort of goes away and comes back speaking with an accent other than a Trinidadian and Tobagonian accent as a freshwater Yankee. But that was something that happened when you left and came back home, you know; there was a leaving and a returning. To sort of see and hear that freshwater accent in your living room, from a device, was just so problematic for me.

Halcyon Lawrence (00:12:13) - I have a niece who studies in the US, and she has said on a number of occasions that she moves between accents as a matter of survival, but that very often it feels so inauthentic. And I imagine that is the experience we have once we have to switch between those accents to engage with these devices: there's an inauthenticity, and I think a robbing of identity, that I find problematic.

Sean Dockray (00:12:56) - Example that you gave a little while ago about your interaction also with Alexa, I'm just trying to set your alarm, right. And then in the end you have to give up, uh, and just wake up 15 minutes earlier than you were intending. And just thinking about that due to its inability to, uh, recognize your voice is that you were drawn into this, uh, extra process of negotiation. And I just think about that also the proliferation of bureaucracy, you know, that we often find ourselves in, but, uh, you know, it's also quite, um, uneven in the way it's distributed it, you know, it disproportionately affects people of color too, you know, and I think of like traffic, traffic infringement, um, you know, just the way that almost that these sort of minor infringements, I just sort of weaponized to take up people's time. Um, and just, uh, thinking about how all that extra red tape comes with all these personal costs. And I realized it's a very mundane example, just like Alexa, not setting the alarm properly, but to me, it's sort of connected to this whole regime of just, um, you know, the way that bureaucracy just, um, yeah, it comes with all these personal costs, uh, for certain things

Halcyon Lawrence (00:14:09) - And mundane for some, but not for others. And I think that's one of the arguments that I try to make: that there are people for whom speech and sound become sort of a primary mode of communication. And so if these devices don't get it right, there are more personal costs; there is more at risk. And so I'm always careful about the examples that I use, because I am not dependent on Siri and Alexa to, you know, navigate my daily life. But for some people, these speech devices, not necessarily just voice assistants, but devices to complete a banking transaction, or to book a flight, or any of these devices that use speech and sound, they are dependent on their good design.

Sean Dockray (00:14:59) - Just on that question of good design: I think we'd expect that when it starts to go wrong, there's some feedback, or some recognition, or some way of communicating with a person. Could you describe a little bit what happens when the speech interaction goes wrong? How does the device tell someone that it's going wrong?

Halcyon Lawrence (00:15:20) - Well, I think that's the disciplinary nature of the device: there is no negotiating. If you and I were in a conversation, and this happens in my daily human-to-human interactions, if I said something that perhaps is confusing, or you don't understand my accent, even before you say something, maybe there's a facial expression that says you didn't quite get that. And I can repeat it, I can say it another way; we all have these strategies. For example, like I said, I couldn't conjure up an American accent if I wanted to, but having lived here for 12 years, I understand clear speech strategies. I can slow my speech down, I can hyper-articulate, all of these things that I can do that allow me to continue communicating, without it being put upon me, or demanded of me, that I change my accent so that we can continue to communicate.

Halcyon Lawrence (00:16:22) - I don't get to negotiate with Alexa. I mean, five thirty, five fifteen, five fifteen: we went on, and I lost that battle. I lose that battle every single time. And she's so polite, and she never gets frustrated like I do. So it's just this constant losing of a battle, with a very polite, I'm sorry, I don't understand what you mean. I think the other thing that's striking for me, and I think I'm going to get into hot water for saying this, is those moments when you begin to see the unintelligence of the devices. Like, I'm going grocery shopping tomorrow morning: I need to check that list before I leave here, because I guarantee you, if I pull that list up in the grocery store, I'll have no idea what Alexa has put on that list for me.

Halcyon Lawrence (00:17:25) - Um, and I have, you know, sort of screen captures and docu, um, what you recordings of, of who puts in whole human beings on my grocery shopping list, without question like, John, do you really want to order John? Um, there's something so unintelligent about that if you and I were talking and I said, Sean, you know, can you put John on, on the grocery list? It's like, what's going on? And I'll see, you know, and so there's this, there is more months when she doesn't work for me, that signal V artificial unintelligence of these devices

James Parker (00:18:03) - Do you happen to know, as a technical matter, whether any of the companies that produce these things can work out that they can't understand you? Does somebody at Google or Amazon receive that data and then go, okay, well... I'm imagining how that might play out, and it almost certainly doesn't, but, you know, maybe there's a market opportunity, if only we could... Because one of the sort of political responses to the accusation of bias is always, oh, we'll just un-bias it. And that has a politics too, because inclusion also means a kind of proliferation, in a certain kind of way. So that if we could just make every accent in the world and every dialect comprehensible to machines, then we would have, you know, eradicated bias. That seems sort of impossible on its own terms, especially in the context of the unintelligence you're talking about, and it's political; it's economically probably not 'rational', quote unquote, for these companies.

James Parker (00:19:14) - So how do you respond? How do we confront, as a matter of education, advocacy, policy, the problems that you're identifying? I'm not suggesting you have all the answers, but without resorting to a sort of omnivorous 'all of the accents, all of the dialects' kind of politics, or a kind of politics of perfection, in response to accent bias and other forms of language and interaction bias.

Halcyon Lawrence (00:19:44) - That's probably my biggest question mark, in part because I'm not doing work in industry and I don't have insight into how these decisions get made. My tracking of the development of these languages allows me to

Halcyon Lawrence (00:20:03) - See certain kinds of humans, for example, what English has get developed, um, and what dialects within the English language get shoes, um, suggest to me that the market is driving on the one hand, many, maybe not all of the decisions with the fact that standard UK standard, Australian standard, British all of, you know, sort of the Englishes that have already been developed for markets that are far larger than a trend out in English or Jamaican English. You know, you sort of recognize the different market is the decision. If market is going to be driving the decision, then, um, I probably will never see a Trinidadian accent, um, which is one of the reasons I, I argue that those sort of dialects are going to be developed by independent developers who are interested in hearing, uh, and being represented. And that in itself is problematic because of the massive undertaking, um, to be able to develop accents and dialects for these devices.

Halcyon Lawrence (00:21:18) - Um, but it's not just about the market is because when you look at some other devices and you see far smaller populations that speak sort of like the Singaporean English, for example, that I think Google has developed for it clearly is about sort of buying. I mean, it's not just about the size of the population is what I'm seeing. That there's something else that's going on, male, um, that we have to examine. I don't have an answer for you. I do. And part of what I am trying to, to figure out, I'm trying to answer the question for myself. Do we need the fact that we're able to walk around in some ways myself undetected as a minority in the U S may not necessarily be a bad thing. So that while it's an annoying thing to have to wake up at five 15 in the morning, instead of five foods, if you run my voice through unmatched, it's the some civilians device, you know, is it a bad thing that I don't get detected? Or I do it? No, if, while I critique it, I don't know if that is the Google a lot is what I should be advocating for, for people of color. I knew I did not see a question

James Parker (00:22:39) - That's a fantastic point. And I was just thinking, as you were saying it, about how that kind of audibility and inaudibility plays out so differently in different contexts. You know, I'm thinking of the voice biometric databases that have begun to grow up around prison populations, which we know are heavily racialized, but not just racialized, and the way that those produce a certain form of audibility to machine listeners that is different to, but profoundly related to, the kinds of audibility and inaudibility that you've been talking about. And as machine listening gets embedded through the so-called smart city, and becomes less a matter of consumer toys, these problems compound and proliferate.

Halcyon Lawrence (00:23:32) - Yeah, I wrestle with that. On the one hand, you're quite right, you're starting to see these devices being used as surveillance in prison populations. I'm also aware that there have been some schools that have been using them as emotion detectors, to sort of anticipate angry speech or more highly emotional speech. Do we want to be represented in this? What happens when we aren't, and what happens when we are? I don't have answers to those questions, and I'm very, very concerned. What is sort of the threshold of representation? Added to which, we are also dealing with the development and the proliferation of these technologies in a highly unregulated environment, where, at least it seems to me, lawmakers in the US have no idea how to keep up with what's going on. And so, you know, there are no laws, there are no regulations; it's the wild, wild West for developers out here. So where it goes is anybody's guess. And I think the further challenge that compounds the concern for me is that we invite these devices, very often, into our spaces, that we are very often complicit in the invasion of privacy, and we are sort of okay with giving up certain kinds of rights and privacy for the convenience of services. And we aren't often thinking about the fact

Halcyon Lawrence (00:25:10) - That we're providing data for is data that can be aggregated. Um, and that can expose patterns about ourselves that we may not even be aware of or aggregated and expose patterns of what communities that were at palazzos. So I think there's a lot think about I'm burdened by that. It's one of the questions that I'm bitten by. Um, do we represent, is it better that we all underrepresented as people of color? Is it, is it problematic that beyond?

Sean Dockray (00:25:42) - I was wondering, just following up on that, whether it's interesting to look at how companies have come to address problems of bias. I think it's only been in the last few years that it seems to have come out as a publicly acknowledged problem for machine learning, to the point that even the companies are recognizing: yes, we recognize that the datasets we're training our models on have a latent bias within them. I imagine that some companies have tried to address that over the past few years, since it's been identified as a problem. Have you found that in your research, that companies have tried?

Halcyon Lawrence (00:26:32) - I think one of the challenges, even before we get to the point of companies identifying and acknowledging the need to address bias, and I think we're sort of there now, though not enough, is that the makeup of these companies, who gets to make decisions and who codes and who designs, is problematic in and of itself. And so the choices that we see manifest themselves in the design of these devices are, I think, a direct outcome of who gets to sit at the table. I think matters of inclusion and diversity are just as much about who's sitting at the table thinking about design as they are about the decisions that then get filtered down. And so to address the design of the device feels almost impossible unless we get to the real issue of what's going on in tech, and that conversation,

Halcyon Lawrence (00:27:42) - I think, um, it's coming to the forefront. I think of work by, for example, Sophia noble. Um, who's been exploring and talking about, um, um, radical, radical change that's needed in Silicon Valley. And, um, in the way I've been thinking a lot about, uh, in our education system. So I work with a colleague of mine, um, Dr. Liz Hutter, we are having, we both with the Georgia tech for a couple of years, and we were sort of struck by, uh, this orientation, this language of paternalism that emerged in our classrooms. And we're curious as to where students, um, that there's almost a devaluing of anything that isn't, um, tacky, you know, teams like pre like primitive or really basic. Um, but we will communities, um, that they're willing to, the students are willing to dismiss, um, because there's a technological solution. Um, I'm considering them and Dr. Hudson, I noticed this, that, um, sort of a reductionist approach to problems that somehow we can take a communication that's so messy and so problematic that we, as humans will have difficulty figuring it out and to reduce that, uh, a series of ones and zeros just feels so arrogant to me. Um, and yeah, so I, I think until we can address the human problem of, um, hiring and training and education, I don't know that we can any, anything with regard to the divine design of the device view superficial to me. Yeah.

Sean Dockray (00:29:39) - That already answers my follow-up, which was thinking that if the way these companies are recognizing and addressing bias leads to the ability to recognize more voices, then the dialectic of inclusion and exclusion just shifts, because okay, there are more voices that are recognized, but there are still exclusions. And you're identifying that those exclusions are happening just in this question

Sean Dockray (00:30:11) - Of the design process in the prior to even, you know, thinking what does this device need to do? You know, who's asking the question who gets to ask the questions and pose those questions and be at the table. Um, so that was a great,

Halcyon Lawrence (00:30:26) - Yeah, probably we and how do we keep, how do we keep the needs of the use at the scent of design very often at, I think that was my, my greatest challenge in teaching in computer science that, um, very often, um, the people for whom these devices will be designed on consultant, they aren't considered the, the history and needs there. You know, everything about the design just felt. So acontextual the example that Dr. Hutter and I, uh, uh, have been using, but also sort of now researching, cause it keeps coming up over and over again is the design of sign gloves that do signing. It just seems to be a pet project and computer science people, our students constantly win prizes and I give them money. And every time it comes up, somebody from the deaf community will come forward and say, that is not signing.

Halcyon Lawrence (00:31:27) - Who is this for? And they, and if you ask them what is a solution, they will say to you learn sign language. Um, and the, these communities are often not looking for technical solutions. Um, but there's something in the orientation to computer science and coding that suggests that everything has a technological, a technological solution. And it's problematic for me to say that out loud is so shocking. I grew up around technology. I didn't realize that was a privilege. My dad worked at IBM when I was a child growing up in 1981. I had a PC in my home. I didn't realize other people didn't have a PC in my home. Um, so I've always been comfortable around technology, but in some ways the things that I'm seeing in Legion makes me feel like a Luddite. I'm always sort of just no stop. Do we need this?

Halcyon Lawrence (00:32:24) - You know, I have, I sort of tag new, um, ideas as has another solution, the problem that doesn't exist. Um, and I'm not something that we still, we would see that students create problems for which they need to then design a solution. So as a, as a teacher in the humanities, I think it's one of the, was the marginalization of the humanities in these fields, I think is particularly problematic because we no longer students no longer get to ask those questions about the value of, um, these contributions and who benefits and who's who's who and, and who else has access. Even if you have the best intentions, how can these devices be used in ways that you have not necessarily thought about?

James Parker (00:33:21) - Well, I was thinking I was going to ask you a question about the humanities, because you said that you had just started teaching in a humanities context. And as I was listening to you, I thought: I always think of this problem, or some of these problems, quite humanistically, basically, that if only we had broad humanities-based educations... I mean, the humanities are under attack in Australia right now, very explicitly, in pricing signals that the government has just introduced; anyway, it's a whole thing. But I wonder if you could just talk about your experience, because I'm teaching from the humanities and I want to defend them, but... We've just established a new, supposedly interdisciplinary center for, I think it is, AI and design ethics. And it's meant to be, you know... but when I look at it, I don't know if that's the solution or part of the problem: ethics-washing and so on. So I just wonder if you could elaborate a little bit on your experience of moving into, or around, the humanities, because it seems like a lot of hope is being placed in them at the same time as they are being dismantled.

Halcyon Lawrence (00:34:47) - Yeah. I, one of the things that I really enjoyed, um, in the last couple of years was being able to sit in the humanities and to engage with technologists and recognize that we speak a very different language. We think about problems differently. Um, one of the things that I love about the field of technical communication.

Halcyon Lawrence (00:35:12) - Is that we become almost to these advocates who do the work of, uh, negotiating on behalf of the user with technical experts. And so I do think I do you think there is a rule for technical communicators, for example, doing this kind of the advocacy work, which is why I raised the alarm in my discipline, that if we aren't paying attention to speech and sound design this week is happening without us, that the rollout of these technologies are happening without our intervention. So I think that's one of the, one of the ways, um, but the humanities can help mitigate these issues. Um, if I sit at a table with a technologist, they don't fit to me. These don't feel like basic questions, but I'm always amazed to see people stumped. When I ask the question, who is this for that sometimes they've gotten so far away from just a discussion around audience and people and communities, or maybe not even have that conversation that having somebody, um, sort of just interrupt that world and ask other questions.

Halcyon Lawrence (00:36:36) - I think, um, an important intervention, I think the other kind of intervention at the level of education, um, how do we begin dismantling these patriarchs? Okay. Thinkings around technology and technology development months. Um, how do we, I think ethics, I think ethics actually in some ways has created some of these problems. Um, particularly patriarchal thinking. There's an article, very short article. It was very provocative. Uh, I forget it's by Williams and she's actually looking at patriarchy in engineering and suggests that part of the, the, where she sees perhaps the beginning of patriarchal thinking is wrapped up in these codes of ethics, that engineering and computer science codes of ethics suggest that these disciplines are here to see, to protect, to look after the human good, you know, to produce these devices that will better the lives of people. And so you come into an inherit, um, this thinking about technology that suggests that you are the savior, you are the patriarch, um, and the guardian of, of humanity, and that your devices are going to always go out and do good in this world. And so I think only about ethics and not think about issues of diversity who is inclusion, where's the power distributed. I don't know that ethic, just sort of thinking about ethics is, is going to be sufficient. Um, and that kind of training has to start if, if our students are being taught to code at four and five years a little, I think those conversations have to start at the C major as well, right. Alongside the skills we have to have conversations about who benefits from these, these technologies.

Joel Stern (00:38:44) - Thank you for that answer, Halcyon. Just to return to the quandary that you were grappling with earlier, about whether our imperative is to improve these technologies so that they cater for a more diverse group of users, or whether to resist them, in the understanding that their disciplinary and policing implications are, at the end of the day, worse rather than better, especially for different communities. And I suppose one of the things I was thinking about, in relation to the question of ethics and inclusion, is that we're talking about devices produced by some of the wealthiest companies in the world, Amazon and Apple, et cetera, who have risen to the top of a sort of capitalist system, a platform capitalism, and have reaped an unimaginable amount of money from extracting data from communities. And so, going back to the earlier thought you were having about what would constitute a revolutionary technology: would it necessarily be a technology not produced by a company like Amazon or Apple, but rather a voice user interface that was grounded somehow in a kind of not-for-profit context? So I wonder if you could say something about how voice user interfaces could be more revolutionary, not just in their design, but in their political grounding.

Halcyon Lawrence (00:40:45) - There is a tension there, Joel, in that, on the one hand, I would make the argument that a revolutionary voice interface would come from within a community of color, for that community of color. The challenge and the tension, of course, is that the resources needed to develop that kind of interface are often not available to communities of color. So the fact that we've been training our voice interfaces on a Midwestern accent for the last, what, 50 years is part of the challenge. Where do we begin to start collecting the voices of communities of color, and not have that initiative co-opted? I also think, even before we get to asking that question, a revolutionary device is one that a community has said they will benefit from, where there's no company or organization making those kinds of decisions for them, and the community itself makes the argument that this would revolutionize the way it does something or accomplishes X.

Halcyon Lawrence (00:42:22) - Um, so that, that I do think it needs to emerge from within. And that's one of the reasons why I think it's, I think that sits with the initiative of individual developers who, um, perhaps wants to hear their, um, their voices represented their community's voices represented and not to is problematic. That that doesn't necessarily mean that an individual developer understands what's that risky. Um, but something about being pulled out of that community suits and me getting set dress, some of the concerns that we have about inclusion and, and representation, I think there's an economic cost to the development of these devices in a weave that, that other kinds of software might not cost. And so I think, I, I think the community fleets has to figure out if that is something that they want. Um, and to then as a community, they begin until, um, two week towards the realization of how that might might happen at that feels like a cop out answer.

Halcyon Lawrence (00:43:34) - I don't have a better one, I don't have a better one for you; I'm thinking about it. And I return to the work of Ramsey Nasser all the time, who has been diligently just plugging away at developing a programming language in Arabic. That is the work of an individual who wants to be able to code in his native language. That is never going to be the concern of a large company unless it is economically beneficial, and even then there's the politics that leads to certain kinds of decisions about what languages get developed. So I think it sits with these individual renegades who want to upset the status quo, but to do that is such personal sacrifice.

James Parker (00:44:35) - I was just thinking, as you were speaking, about how the discourse of revolutionary technologies is so profoundly problematic in the first place. I mean, what precisely is being revolutionized other than the technical paradigm? We've been thinking a bit with that well-known work, Anatomy of an AI System, by Kate Crawford and Vladan Joler, which maps the sort of economic and labor and human costs of producing a device like Alexa. And it's all for the sake of convenience, and what minor conveniences. I'm always astounded when people are obsessed with the idea that they don't have to turn off the light switch, because they can just use Alexa. There has never been anything less worth reinventing the entire technical paradigm of global society for than to save you the energy required to turn off the light switch. And so much work has gone into making that seem revolutionary. I feel a bit of an idiot saying this, because it's so obvious, you know, do we want automated cars or do we want public transport? But on the other hand, it just remains as true as ever: what has been revolutionized other than the technical system itself, which enables the further sale of devices, and, you know,

Halcyon Lawrence (00:46:12) - And the food of a widening of the gap of access. Absolutely. But I would hear all students would make these pitches all the time and every time they made a pitch, it was going to revolutionize. It didn't matter if it was a smoothie app. It didn't, it was always revolutionary because that is the language of the industry. It's how you get funding. It's how you get the support. It has to be revolutionary. And I think the media, I am teaching a class close science and its public audience this semester. And we're looking at the role of the media in picking up these stories and running with it and not critiquing them sufficiently enough that the headlines see very often that it's revolutionary. And then there isn't enough investigation, I think in these articles about what's problematic and what's being reinscribe in these devices. So I, I I'm, I'm concerned about is one of the things that I wanted my master's students to, to have an appreciation for is how do we talk about these, these innovations in ways that I think all in, in ways that allow the readers to really understand what's at stake? I don't think that's happening in many of these articles that we read.

Joel Stern (00:47:38) - I think we were interested in knowing a little bit more about the forthcoming piece of writing, 'Siri Disciplines'. You've said a little bit already about the disciplinary functioning of these devices, but I wondered if you wanted to share a little bit more about the argument that you make there. And then we also had a question about your course, 'Siri's Progeny', and perhaps those two answers are strongly connected in some way, because, well, I've got two little kids, and so discipline and progeny are interrelated questions. But yeah, if you could expand on that a little, that would be fantastic.

Halcyon Lawrence (00:48:33) - So I'm really excited about this piece. It's called 'Siri Disciplines', and it's part of an edited collection that is coming out later this year. The chapters, and the titles of each of these chapters, act as a series of provocations. When the editors came together to imagine this collection, it was imagined as a challenge to what is going on in Silicon Valley, asking the next generation of technologists to consider, to be able to read, the humanistic concerns in the design of these technologies: in many ways, everything that we've spoken about. And now I think that article is probably trying to do too much, when I think about all of the arguments that I'm trying to make. I make the argument that speech technologies are in no way revolutionary, that they are reinscribing bias. I make the argument that speech devices are disciplinary; I talk about the inability to negotiate with these devices; I talk about the very imperialistic nature of these devices, which in itself is what, for me, makes them disciplinary. I also make an argument about what's at stake, and at the time I had conceived that piece, a couple of years ago, I was only thinking that what was at stake was identity.

Halcyon Lawrence (00:50:19) - My research has led me to sort of understand and see far more, especially as I see these devices being ruled out as civilians devices. Um, I am increasingly concerned about, um, what's, what's the cost, but I think the, I think the, um, the, um, the conclusion that I come to in that work really goes back to my dissertation research. And I make the argument that there are conditions under which accented speech can be used and deployed in speech technologies that are not problematic. And that I make up call to technologists to start thinking about the use that accented speech is not unintelligible speech. There are certainly conditions where it may not be optimal. Let's say in the case for the emergency and people have to process speech very quickly. Maybe you don't want to deploy accented speech in, you know, in a device that kind of device, but there is value in allowing people to hear a range of accents in these devices, because one, we knew that all is going to get Sri and, um, but that we need to normalize accented speech.

Halcyon Lawrence (00:51:45) - The research shows that accented speakers are seem to be less intelligible. They have different kinds of outcomes in the courts. They have different outcomes and housing and education that all of these, uh, discriminatory practices that people experience in sort of a day-to-day life as, as accented, because we are seeing replicates in the use of these devices and the, there may be value in expanding the range of accents that are available. When you hear an announcement, a P a system, for example, let me get, and still normalize the use of accents, because you all have been looking at the news. You've been seeing, for example, in the U S just, um, you know, people being tooled to, if, if they want to speak Spanish, go back, you know, to go back to Mexico and just really horrific, um, steep months about that, that sort of understand that that linguists and acts and bias are alive and well.

Halcyon Lawrence (00:53:00) - So those are the, those are the things that I, the arguments that I make in that series, progeny actually was a, it was a cause of co-taught because it actually isn't connected to what I'm teaching now. And I have been sort of thinking about ways in which I can bring that I recently moved to Towson university in Maryland. That core series progeny was a co-taught core design course with a colleague of mine, Dr. Lawrence. And it was interesting. It was cross-disciplinary. And we also brought together two different levels of classes, their composition class. And there was a theory, a technical communication class, and we smashed them together. And we wanted our students one, because we were dealing with composition and writing. We wanted our students to sort of be exposed to the domain of speech and sound as a domain of composition. Um, with traditionally they've been taught just about writing, um, in the very, in the very narrow sense that we wanted to broaden that idea of composition for them.

Halcyon Lawrence (00:54:10) - We wanted them to develop Listening, literacies and sound literacies. Um, but we also wanted them to reimagine speech devices that were inclusive. We wanted them to reimagine to think about audiences first and what they needed as central to the design process. And so while we went off from them to build, I mean, many of these students would have gone on. And as my thoughts, um, Dr. Niece still has a couple of students who writes years later and see, you know, I've designed this thing. Um, we wanted students just to think conceptually about the way in which, uh, speech and sound design could be different. So I haven't yet found a space for that to tell us, but it's coming, I'd take that course. It was wonderful as a motto for it. I remember the time, um, um, we're not sure how he found it, but the series manager at Apple, uh, roots.

Halcyon Lawrence (00:55:19) - Um, to us to see, I've just seen your course design and, you know, I think it's provocative. It's asking the questions that we, we would like to answer as well. And yeah, so yeah, I, I do think there is value in, um, targeting this challenge at, uh, at the stage of education and by the time it hits industry, I think we've tested in battles that we've lost. That sounds really fatalistic, but I can say that as an advocate, so yes,

Sean Dockray (00:55:55) - Yes. I don't know if it's an inappropriate question right at the end, and I just want to thank you again so much for your time; it's been a really rewarding experience for me to listen to you talking about all these things. You can take this or answer this question how you will, but I was thinking about this idea of emergency that you brought up a few minutes ago, and thinking how crucial it is that accented speech is recognized, of course, in emergency contexts, and especially in these situations where we depend on the devices and we depend on these systems, and just thinking about how, maybe in triage or call centers, there's more automation going hand in hand with a reduction of labor forces and things like this. Which brings me to disability.

Sean Dockray (00:56:48) - And, um, just thinking about communities, um, of people who might sort of benefit or depend on, on these devices and, uh, yeah, the importance and recognition for those, uh, communities and getting it right. Um, and having them be sort of able to be recognized and have their voice be heard. I guess I just wanted to hear you talk a tiny bit about, um, both the way that disability is both like, uh, an actual community of men. I mean, multiple communities of users, but at the same time kind of functions as a, as almost like a rhetorical figure for these companies. Like, um, like in a way the student that you mentioned about who is, you know, treating sign language as if it's just something you do with your fingers and not a whole ongoing evolving practice, you know, that involves just, or more broadly in face and, you know, um, but at any rate that, that, that it becomes like almost a way to justify the technological development, but at the same time, it, it, it functions a disability functions as a real practical, um, an important, um, community of users for whom yeah. It's essential that these technologies work.

Halcyon Lawrence (00:58:07) - Absolutely. I will say, and I know you're interested in other people that you can talk to whose work might be of interest, I don't know if you've come across the work of Meryl Alper, M E R Y L A L P E R. She has a book called Giving Voice, and I think Meryl would definitely be able to speak more comprehensively to this and to the question that you've raised, so I recommend her work and recommend chatting with her. I think one of the things that I appreciate about Meryl's work is, I mean, all of the conversations that we have been having sort of think about communities broadly; I think once you start talking about disability, you're talking about very niche communities, and that requires, to my mind, thinking about Meryl's work, a different kind of engagement.

Halcyon Lawrence (00:59:18) - That means that you have to sit with these community youth. You have to be able to really understand what the challenge is, what the problem is, but you also have to ask the questions of the community. What do you want and what is necessary for you? And Meryl's would specifically look at, um, speech devices, uh, for children with, with speech impediments, speech disabilities, dyslexia, and so on. So I think work, I think she would do a far better, um, off a far better response, but even as you were asking the question show, and I also think of very recently, um, I had the privilege of sitting on a dissertation committee for a doctor, um, Alex Ahmed, A H M E D. And what I love about Alex's work is, um, so she does she designed and built.

Halcyon Lawrence (01:00:26) - Sort of a beta app for voice training for the trans community. So that we, one of the, one of the challenges of transitioning in that community is often the voice. It is sort of what is considered to be the giveaway. Um, it's also speaks to identity, you know, sort of having a voice that represents, um, how people see themselves and how they feel they want to be represented. And so Alex, as a trans researcher sits with this community over a period

Sean Dockray (01:01:03) - Of months and months, and months

Halcyon Lawrence (01:01:05) - And code designs and cool develops this app for the community. Um, and so this idea of it's not even, it's not even ethnographic work, it is participatory design, um, and there is something that is so humane and so respectful about the way in which she goes about doing this work that shows up in the design. Um, for example, one of the things that strikes me when I look at how, when I looked at her, um, app was just the flexibility. Um, because one of the things you begin to recognize that even though you're designing for specific community, this still not a monolith. And so that idea of, of, of flexibilities is sort of embedded in design. So I think those are the two people that I would, I'm not suggesting that the trans community, the disabled community at all. I'm just saying that I think there's a research method there that we, that we need to consider that even as I describe it, you begin to realize it's time consuming. It requires a certain kind of orientation to receive that we typically do and often see in STEM. So I think sort of looking at the work of people who do this participatory research would be interesting, but I also suggest looking at Merrill Alper's work on giving voice. I think there was a two, um, researches that you can, you can check out and I can, I can email you. Um, and if you'd like an introduction, I would be very happy to facilitate that

Sean Dockray (01:02:39) - Those are great leads. Thank you very much.

Halcyon Lawrence (01:02:41) - No problem. And it allows me to cop out, not because I haven't thought about it; I just don't think I'm the best person to answer those questions.

Sean Dockray (01:02:50) - Thank you so much.

+ 128
- 0
content/transcript/mattern.md Voir le fichier

@@ -0,0 +1,128 @@
---
title: "Shannon Mattern"
status: "Auto-transcribed by reduct.video with minor edits by James Parker"
---

James Parker (00:00:00) - Could you begin by introducing yourself and your work just in general, before we get on to the specific questions about Machine Listening and so on?

Shannon Mattern (00:00:10) - So I think I'm probably, I would like to think, a pretty principled, undisciplined person, intentionally undisciplined. I came from a non-academic family: my dad had a hardware store, my mom was a special education teacher. I majored in chemistry and switched to English, and then I thought I was going to go work in advertising, but instead I was encouraged, in my last semester of undergrad, to go to grad school, which is something I hadn't even really considered, and got into a PhD program in media studies, but took a lot of classes in architectural and urban history and theory and landscape theory. And then I became interested in information studies, which, not really library information studies, not training to be a librarian, but more the theoretical and aesthetic dimensions of that field, and then did a postdoc in art history, which again is not something I targeted; the art history department and the postdoctoral program kind of chose me, and it was really exciting to be exposed to that field for a few years.

Shannon Mattern (00:01:07) - And then I moved to The New School, into a department of media studies, like 16 or so years ago now, and taught a lot with designers, a wide variety of designers, at Parsons School of Design. So I have either co-taught or had students in my classes from architecture, urban design, design and technology, communication design, still a wide variety, and my work has been inspired by all those different collaborations. My specific interest in sound probably comes from the fact that I was, I dunno, 'trained' sounds a bit too overblown, but I took a lot of music lessons. I actually think I was a pretty good flutist; I was thinking of going to conservatory if I hadn't gotten onto a traditional kind of college pathway. And I also play the piano and violin, but didn't pursue those professionally. But then I had the good fortune of being paired with a colleague when I started at The New School in 2004: I shared an office with a musician slash composer, and just everyday conversations really made me realize that a lot of my work about architecture and media was very ocular-centric. A lot of the architectural criticism and historical scholarship really focused on the building almost as if it was objet d'art, an object of art: something to be seen, not something to be heard, or something to be walked through or experienced. It was almost as if it was being read as a painting. And my conversations with my colleague, Barry Selman, really helped me to realize all of the inaudible components that weren't being addressed in contemporary criticism and scholarship. So that really planted a seed, or laid kind of a ground note, I don't know what sonic metaphor you want to use here, and that has been a thread, or a refrain, that has echoed throughout a lot of my work over the past decade and a half or so.

James Parker (00:02:58) - Great, thank you. I mean, what an amazing, eclectic past and trajectory. I kind of want to dive into all of that biography, but maybe let's leave that and see what comes out. You were saying before that you wrote this before coronavirus, that you were already thinking about Machine Listening, and then suddenly the pandemic context gave you a way of thinking those different things together. Could you tell us a little bit about why thinking about the pandemic, or pandemic Listening, ends up being so closely concerned with Machine Listening? Or maybe also, what even is Machine Listening, or whatever way you want to get into that question.

Shannon Mattern (00:03:54) - Sure. So I will say that a lot of my research comes out of prompts and invitations to contribute. So last summer I was contacted by - Joel, you might know some of these folks - Burrata and Leslie Hewitt, two artists who teach at the Cooper Union, and they were organizing an interdisciplinary studies seminar that they invited me to contribute to. And they told me - I'm just looking at the email here - they had three themes, and they wanted me to choose to be a part of either the expansion, the counterpoint, or the dreamwork theme. And I just couldn't decide between any of them, so I decided I was going to write something that was on all three. And because I had in the past thought a lot about listening to infrastructure, about listening as a methodology to help us understand logistical systems and infrastructures, I wanted to find an application that would allow me to weave together those three themes for the lecture series. So it's really productive for me to have a framework or a term to bounce off of; a lot of my work doesn't just erupt from my brain. It usually comes from somebody putting a constraint out there, and then I figure out where my interests, and what I can do well, would bounce up against this prompt that I've been given. So that's where the article came from. I hatched the idea last summer, before coronavirus or anything was a glimmer in anybody's eye. I finished the piece and presented it at the Cooper Union in early December, I think it was. And then, I write for Places Journal, which is an open access venue - most of my writing takes place through Places - and it's a couple months' review process, really intensive editing with the team there. And by the time it was being prepared for publication in early spring, the pandemic was upon us. And I realized that a lot of the ideas I was thinking about back in December had new resonance in this new context. So I took a lot of what I had already produced or put together in December and wove some more pandemic or quarantine-related themes throughout that existing piece.

James Parker (00:06:02) - So if the pandemic didn't supply the immediate context or the immediate inspiration for the essay, then was it Machine Listening? And what do you understand Machine Listening to be? I mean, one of the things that interests us is how to name the problem that we're concerned with. And in the research that we've done, it seems that Machine Listening has a couple of different sorts of origins. It comes out of a scientific discourse, where it's one of the terms used by scientists to talk about the application of AI and machine learning techniques to audio, but it's not the only one. And then there seems to be a sort of parallel or related stream into thinking about Machine Listening via computer music, and there's a number of composers who work with computers who talk about Machine Listening. And so we've been trying to think about, you know, what is this object? And I was just so struck by your essay, that it names this object 'Machine Listening' and that it seems to have so many concerns similar to the ones that we do. And I was just wondering, how do you think about Machine Listening? How does it come to you as a problem? Why should we be concerned about Machine Listening, in the context of the pandemic or otherwise?

Shannon Mattern (00:07:21) - I think maybe the origins of this piece were very similar to what I mentioned earlier, in terms of the birth of my interest in sound in space, because I was realizing, just by talking to a musician colleague, that this was something that was not echoing in a lot of the scholarship and criticism I was reading. All throughout 2019 and the prior years I was hearing so much about machine vision - concerns about automation, about facial recognition, about the inherent biases and injustices that are kind of rooted in, programmed into, this technology. And I thought, there's parallel stuff happening in the sonic world that isn't really being addressed as much. So just hearing the absence, the relative silence on this, compared to the proliferation of research and criticism and, I think, growing general public understanding of the political problems of machine vision, I wanted to contribute, to help people realize that other sensory modalities are being automated as well, and that we need to maybe transport or translate some of those critiques applied to machine vision over, to see how they work in the Machine Listening realm as well.

Sean Dockray (00:08:34) - Why do you think there was a lag between computer vision and Machine Listening as an object of interest?

Shannon Mattern (00:08:43) - Well, I think it's part of this general critique - the fact that a lot of the founding figures in sound studies, when they had to justify the existence or the need for such a field, were critiquing this long Western Enlightenment tradition that privileges ocularcentrism, the fact that we have this epistemological bias towards sight as the paradigmatic or quintessential mode of knowing. So I think that's one larger, macro-scale framing for why relatively less attention is paid. There's also the fact that we don't necessarily have a cultural literacy, or even the linguistic elasticity, to talk about sound the way that we talk about vision. And I think it probably is not as well known, given how much press has been devoted to facial recognition especially, and the fact that many cities have actually officially outlawed it - this is in major newspapers - so the lay public, or kind of general readership, knows to some degree that machine vision in all of its various manifestations exists. There's much less press given to the sonic components. So there are these larger historical, epistemological, disciplinary framings, and then I think in the general news agenda there's a dearth of attention to these. So those are just two factors in why there might be less on this story.

Sean Dockray (00:10:06) - You mentioned your previous research on infrastructural listening, or listening to infrastructure, as one of the earlier threads that brought you to write this most recent piece. I was just wondering if you could maybe walk through what you mean by that, by listening to infrastructure, and then how that brought you to...

Shannon Mattern (00:10:29) - Sure. So I think I started, again, with those early conversations with my colleagues 15-plus years ago, and I was writing a lot about libraries. My dissertation - I did the fieldwork for it in 2001, I think - was at the Seattle Public Library. It was essentially an ethnography of the coming-into-being of a building. So I looked at how the plans evolved, how you had this kind of mixing of cultures, this European avant-garde architect coming to Seattle, a city that was establishing itself on the world map thanks to the presence of Microsoft, et cetera, and how a new kind of stylistic vocabulary came into being through those negotiations. And I didn't talk much about sound in my dissertation. And then when I turned it into a book, I realized this was a real gap in my initial fieldwork.

Shannon Mattern (00:11:17) - And I realized that a library is a building typology that is defined in large part by its specific sonic conditions. You know, the historical myth is the shushing environment, which is not really so much the case anymore, with more noise-making activities happening there. So I wrote a couple of articles and incorporated quite a few library- and archive-related themes, looking at either sonic collections - sonic archival collections - or ways of listening to archival materials, or the archival resonances and acoustic conditions of information architectures themselves. So that's one way. And then I started to get a lot more interested in infrastructures when that word became a lot more popular in the academy, especially in media studies, which is the field that I guess I most identify with, and was thinking about what we can learn there. There was so much talk of making the invisible visible.

Shannon Mattern (00:12:10) - That was honestly annoying me, it was so prevalent. And I wanted to know, again, what we could learn about infrastructure by listening to it. So I wrote a couple of pieces about artistic works and methodologies - work in structural engineering, for instance, where people are putting contact microphones on bridges and dams, et cetera, to essentially hear structural weaknesses that are not visible or not detectable in other ways. And then more recently I wrote a piece about listening to logistics. A lot of that work in media studies on infrastructure has now expanded or morphed into an interest in supply chains and logistics. So I wrote a chapter a couple of years ago that still isn't out yet - I think it's probably going to come out in a book in 2021, because academic publishing moves painfully slowly - but that was about what we can learn about supply chains by listening both up close, to machinery, docks, the historical vocalizations of stevedores, et cetera, and at the macro scale: what would distant listening be to an entire global supply chain, for instance. So that's a larger-scale manifestation. I guess I started at the architectural scale, by listening to a reading room or listening to an archival collection, then scaled it up to infrastructure, and then even further, to think about what we can learn by listening to supply chains and logistical systems. And then this urban auscultation piece was the most recent piece, and that I think crosses some of those scales.

Joel Stern (00:13:34) - Shannon, I wonder if you could say something about the kind of distinction between human and non-human auditors in relation to infrastructural listening, and whether you see that as a hard distinction between human listening and Machine Listening, or whether it's always somehow blended and interdependent - human and non-human auditors, as you put it in the essay, in relation to infrastructural listening. You talk about how those two forms of listening are often interdependent or blended. And I suppose when we were thinking about how to define Machine Listening, it's often listening in the absence of a human auditor, or it has to be defined against the human somehow. So I'm just interested in thinking with that binary.

Shannon Mattern (00:14:35) - Sure. So this is kind of not very funny, but it's a joke with my editors that everything always comes back to epistemology for me: how the ways we know the world are manifested in the way we design materially, everything from the scale of an object, to a piece of furniture, to an architecture and an infrastructure. So for me the biggest and most interesting distinction between human and non-human listening is the different epistemologies, ways of knowing, modes of sensation, and how they're operationalized in the world, which we'd like to think of as being pretty distinct. This is part of the critique of hyper-automation: we think that there is so much nuance and poetry that is lost when you rely on machine vision or Machine Listening, but there's actually a lot of coded, regimented stuff that happens with the way human ears listen as well.

Shannon Mattern (00:15:22) - So I think maybe looking at the different ontologies and methodologies that are inherent in those, and then the cultural specificities as well - I mean, this is something that anthropologists and archaeologists have to offer - and realizing that there's not one naturalized mode of listening, not one epistemology or ontology that is constructed through the practice of listening. So just seeing all the differences that can be illuminated - and here I am with 'illuminated'; what's the sonic equivalent of that? - rendered audible through those disciplines, and how we might find parallels with the way we build algorithms and machines to listen for us. So it draws attention to, and helps us to better understand, both - and again, it's not a binary, but if we want to simplify for the sake of this conversation - it can shed light here.

Shannon Mattern (00:16:11) - I keep going on with the ocularcentrism, so I'm just going to do it: it illuminates what it means to listen as a human and what it means to listen as a machine, when you can have them be counterpoints to one another. And we also recognize that they can be in a productive relationship. It's not that all automation is necessarily bad. This, again, is something I think we can learn from the critical discourse about machine vision: rather than entirely doing away with algorithmic governance or with facial recognition, there are ways that, for example, heat mapping or some type of machine vision can be useful and helpful for public health officials or ecologists, for instance, to be able to identify areas where you can then pinpoint more on-the-ground, thick-data kinds of methodologies.

Shannon Mattern (00:17:01) - I think a similar thing could be applied in Machine Listening, in that doing that distant listening - listening at scales beyond the capacity of human ears, even collective human ears - can help us then to determine how we can better deploy, or more effectively employ, humanistic methodologies to better understand particular phenomena. So that's just one example of ways we can think about the different affordances of human and non-human ears being used in tandem, rather than looking at the machinic as an erasure of, or a threat to, the more poetic, humanistic ways of going about things.

James Parker (00:17:41) - Could you maybe give a couple of concrete examples of Machine Listening that should be valorized? I can think of a couple from the essay, but I don't know if there's anything you're particularly interested in discussing or find particularly powerful.

Shannon Mattern (00:18:00) - Well, a couple that I didn't mention in the essay were ecological applications, where you're listening at the macro scale for acoustic ecology. For instance, you could see how development in a particular area - real estate development or construction - is perhaps driving out particular species, so the other remaining species are shifting the pitches of their bird calls, or the kind of symphony of non-human animal voices is changing in relation to human activity on the periphery. Those are the kinds of larger things that a machine listening 24 hours a day, over the course of months or years - which a human couldn't sit on a corner and do - could pick up. It could provide an interesting or helpful mode of sampling, or the capacity to choose a particularly rich field site for a human researcher or a team of researchers

Shannon Mattern (00:18:55) - to then go in. Another area that I was just reading about tonight - there was an article on e-flux about aquatic listening and underwater soundscapes - is thinking about how a lot of the underwater extractive work that's happening, and naval research, for instance, is changing acoustic ecologies, aquatic acoustic ecologies. That could be a way where specifically, strategically deployed machine listening sensors are helping humans to better target their activity. So those are just a couple of examples. And then also I mentioned the structural engineering: monitoring the security of things like dams and bridges and other types of really important physical, logistical architectures.

Shannon Mattern (00:19:38) - For the sake of maximizing public security, you probably want to have multiple ways of monitoring the structural soundness of these things: having cameras trained on them, having periodic inspections by trained inspectors, and potentially having round-the-clock listening to make sure the machines are working as they should, that the bridge is vibrating in a consistent way. And then if it's not, you know to deploy a team to take a closer look at it, or a closer listen.

James Parker (00:20:10) - Those are great examples - quite, I don't want to say optimistic, but listening to you talk about Machine Listening that way, I'm just struck by the fact that the predominant forms of Machine Listening at the moment are not like that. And I always worry a little bit - in my own thinking it's often been disability discourses and things; it's so obvious the role that a voice assistant, for example, might be able to play, or machine transcription and so on. You can't have a politics of Machine Listening that just chucks it all out. But at the same time, I feel that those kinds of applications do a certain amount of work politically to smooth the entry of forms of Machine Listening that are so overtly tied to capital - the whole methodology is bound up so deeply with, you know, platform capitalism or surveillance capitalism or whatever you might like to call it - that I always feel a bit nervous about foregrounding the benevolent applications of Machine Listening. Because I just feel like the juggernaut is coming, and the scale of the challenge is so significant. Maybe I'm just commentating on my own psychological processes now, but, you know, I'm just really worried about the beast that's coming. And I'd be interested to hear your reflections on the politics of Machine Listening more generally - you list many nefarious applications in your essay - and how to think about those together with the more benign ones.

Shannon Mattern (00:22:14) - Absolutely. I mean, part of it maybe is the fact that I'm so immersed in all of the critical and, in some cases, alarmist literature - and actually, I don't even know that it's fair to call it alarmist, because these things are already here, they're already happening, and we need to be aware of them and of the really frightening and nefarious applications - that sometimes I want to think, rather than throwing the baby out with the bathwater, about recognizing there are potentially pro-social - not benevolent, because there's always a potentially exploitable undercurrent there, but more positive, for lack of a better term - applications. But you're right, there are myriad exploitative, extractive - pick your negative adjective - applications of these types of technologies: for surveillance purposes, for policing, or, given your work on borders and migration, thinking about how to determine the fit of particular asylum seekers, for instance, to assess the veracity of a claim for migration or asylum, which is something that I know several folks are writing about and that Lawrence Abu Hamdan's artwork is addressing.

Shannon Mattern (00:23:30) - So there are lots of contexts - aggression detection, gunfire detection, we could go on, but those are the ones coming immediately to mind right now. But yes, there are myriad cases that are not just speculative applications, that are actually in practice right now, today, and are doing harm - even things that feel benevolent or kind of innocuous, like voice assistants. This is something I think some of the other folks you're going to be including in your curriculum are talking about: the default training set for a voice assistant involves so many kinds of acoustic assumptions, which naturalize or normalize certain voices and render anyone with an accent, for instance - often marginalized communities, communities of color - inaudible to the machine, which makes perfectly functional bodies feel aberrant. And there's a whole politics of disability woven into a lot of assistive technologies as well. So these, again, are things that are framed as benevolent applications of the technology that have, perhaps in some cases unintended, or maybe Trojan horse, more nefarious dimensions or applications on the flip side.

Sean Dockray (00:24:51) - It's quite a knuckleball of a question. I think it was also a trap, because I was like, can you please tell us the good things about these technologies? And now: let me tell you why those good things are actually bad, how they actually dictate to society. It also made me think of what Wendy Chun wrote in Control and Freedom, which is that in a certain way these alarmist narratives also kind of normalize the existence of these things, the presence of these things in society. So even if we might feel good about being quite critical and alarmist and everything, at the same time that almost acclimates us to the presence of these things around us, almost as much as the utopian narratives do. And she says that we should be looking at the actual functioning of how these things operate in society, which I do think your work, Shannon, is.

Sean Dockray (00:25:55) - Although you are quite aware of the ideological role of a lot of these things, I think your work is often deeply materialist: it's looking at how these things are actually implemented and how they actually work in the world. This isn't a question, I guess - I was just following on, thinking about that exchange and that tension between the alarmist and the utopian, where we get into this kind of stalemate, this kind of loggerheads, and what she offers as one way out is to look at the actual functioning.

Shannon Mattern (00:26:39) - I'm glad you think I do that. I mean, I don't know that it's even something I intentionally start projects saying - like, 'I must have an explainer part of this article' - but I think that might be my intention, or maybe my nature. Because I'm interested in ways that ideals are operationalized - which means the methodologies, kind of the theoretical methodologies, and then how they're actually deployed with available technologies, and then again what epistemologies they embody - those questions are always in the back of my head for everything I'm working on, and that leads me to want to understand how, concretely, something is operating. And I think this is part of the whole - this is not a new concept - the whole movement towards infrastructure literacy that Lisa Parks and others talk about: in order to understand how satellites embody a particular geopolitics, or how our cell phones work, all these things that are naturalized and invisibilized because of their ubiquitous, seamless presence, we really need to understand the physics in many cases, and the mechanics and the electrical engineering, and the regulations and economic policies through which they operate, because there are value systems and ideologies and modes of governmentality that impact our everyday lives and politics, and reinforce modes of inequality too. They're built into all of those seemingly bureaucratic, technical things. There's much more at stake there than just the wonkiness.

Joel Stern (00:28:12) - We were chatting with Vladan Joler a few days ago about his work with Kate Crawford, Anatomy of an AI System, and one of the things he was saying is that one of the difficulties of producing a material critique of a voice assistant is the intangibility of the voice, and of sound and listening - that it's sort of hard to represent structurally, in material terms. How have you found that side of things, and what are the difficulties of writing a kind of material critique of sound and listening?

Shannon Mattern (00:28:56) - Sure. So this is something some of the earliest scholarship in sound studies dealt with - I mean, people have been writing about sound for a long time, but I'd say the early two-thousands is when the field was kind of burgeoning. So, for example, Emily Thompson's book The Soundscape of Modernity: she is essentially rewriting urban and architectural history, asking how we can rewrite this history without necessarily having access to vast archives full of urban recordings. So it's a matter of finding ways of listening to media, to records, in other modalities. How could you listen to a photograph? This is something Tina Campt has done in her book Listening to Images, which is a really valuable contribution to critical race studies as well. So what are the different politics that can be revealed by trying to discern or extract - 'extract' is such a rapacious verb - by trying to pull out or productively interpret

Shannon Mattern (00:29:56) - sound from a tactile or a visual medium, for instance. So I drew a lot of inspiration from Jonathan Sterne's work, Emily Thompson's work, Tina Campt in more recent years. And then for my book, which came out in 2007, I had two chapters about sound. One is about the wired city, the city of telecommunications, and the other is about the city of the voice - how oral culture, and the acoustic demands for it, informed architecture and urban planning for thousands of years. So there, again, we have some writing about it - Vitruvius and other architectural historians, even before they were called that, were writing about acoustics - and we have material traces of these cities that can give us some clues as well, but we don't have audio recordings in many cases to rely on.

Shannon Mattern (00:30:45) - So how can we use, in some cases, speculative methodologies? How can we read literature or look at artworks or photographs to hear what is represented visually in those records? So this is an interesting mode of triangulation; it requires triangulating methods and, in some cases, speculative methods. There are folks in archaeology who practice archaeoacoustics, where they think about how a space was made, what its material composition was, the dimensions of that space, and what types of sonic activities might have happened there, from which you can then draw inferences about modes of public engagement, the type of governance, the type of religious activities that were happening. Again, some more hardcore positivist archaeologists are critical of this mode because it does require some poetic license and speculation, but I think there's something really beautiful about this triangulation of methodologies.

Shannon Mattern (00:31:45) - So these are some of the challenges, methodologically, in addressing sound, something that seems like an ephemeral and immaterial medium. But also, I think, the potential - and this is something that Sean's work kind of addresses as well - of alternative modalities of publication. So again, over the past 20, 30 years or so, maybe even longer, scholars and artists - and this is where artistic work becomes very useful, thinking about sound artists' work - have used interactive publications that allow you to actually incorporate virtual reality or sound recordings, which can create different modes of affective argumentation that a traditional textual publication can't. So these are some other ways of making an argument about sound that newer technologies make possible, and that weren't, perhaps, available in centuries or decades past.

Sean Dockray (00:32:38) - With that answer - which was really interesting, to see how scholarship can study sounds we sometimes have no recordings of - for me that opened the door to thinking also about the silences: what would the silences be of some of these other times and places, and could they be approached in a similar way? And also thinking about historicizing silence as a project, and how what silence means shifts over time. I was thinking about that at the beginning of your essay, where you talk about the new sounds and silences of the pandemic. And I think in that case the silences refer specifically to the fact that there's a lot less activity on the street and, you know, it's just quieter.

Sean Dockray (00:33:29) - But I was wondering if you could expand on that a little bit, because I feel like the new silences of a regime of Machine Listening mean a different thing than the silences of, for example, this burgeoning industrial city where there's tons of construction happening and they have to invent the decibel - like the discussion of loudness that you have, which seems very particular to a particular urban context, where quiet and silence might mean one thing. I wonder what quiet and silence might mean now, and ultimately whether silence even makes sense to a machine - or, if it does, then what silence means to a machine listener.

Shannon Mattern (00:34:13) - That's a great question. I'm thinking about - and I forget the name of the researcher - someone who for a large portion of his life has been tracking, like, the hundred square feet of silence or something. He's looking at the fact that global supply chains, air travel, are essentially infiltrating even natural soundscapes, so there's really no silent corner of the world anymore - which presumes that the lack of human sonic interference, that condition, equals silence. When it doesn't: there's a rich, potentially loud ecology there that a machine or a human ear, if it were present, would hear; there's just not the machinic sound, the sound of industrial infiltration. So noise itself - and there's been a lot of theorization about noise as being this highly culturally specific, subjective

Shannon Mattern (00:35:04) - definition that is rooted in race and class and gender to some degree - I think what constitutes silence to us is similarly culturally determined. Just following that thread a little bit further, to go back to what I was saying earlier about methodologies and the available historical records and resources we have to piece together historical soundscapes, or even contemporary soundscapes: there's been a lot of research in archival studies and history about archival silences, or gaps or absences. So what voices - and this could be literal voices or metaphorical voices, in terms of subjectivities - are not represented in the archive? And these are cases where sometimes you use speculative methods. So Saidiya Hartman proposes the method of critical fabulation, where you use some kind of narrativization or speculation to imagine, based on what the gaps are, what the contours of the boundary of that gap would be.

Shannon Mattern (00:36:01) - For example, slave narratives: voices that are so marginalized that they were not regarded as worthy of, or necessary, to be preserved for posterity. So how are some of those silences then either kept open, to mark those historical absences, or how might they be filled in through speculative or triangulating methods? So that's going, again, more along the cultural, methodological thread. And then in terms of the way I was talking about it in the article, I was thinking more of just the - I don't want to say the word 'objective', but - the decibel level of the city has gone down dramatically. And this is, again, naturalizing and normalizing the decibel as this unproblematic way of measuring some kind of quantity of sound. But these are some of the silences I was talking about.

Shannon Mattern (00:36:50) - In fact, there were many fewer cars in the city, people weren't going to work, cities were noticeably quieter. There were some articles about how seismic researchers were able to do some things over the past few months that they haven't been able to in a long time, because the world just quieted down to some degree. But I don't know that we can say there's ever a zero point of silence, because even before human ears or mechanical ears were present to listen for certain things, there were others - I've kind of talked myself in a circle here, I'm not sure where I'm going - but I don't know that there is such a thing. Actually, I'd be curious to hear how you would respond to your own question. But I don't know that, outside of a kind of acoustic - what's it called? Say that again?

James Parker (00:37:42) - Anechoic Chamber

Shannon Mattern (00:37:43) - Right - outside of that, or outer space, I'm not sure that it's possible, in the realm of organic life, for silence to be a condition. I don't know.

James Parker (00:37:57) - We're fast getting into tree-falling-in-the-woods territory.

Sean Dockray (00:38:03) - There's one very particular place we can look at in the article, for me, and that would be - well, in the sense of silence as that which doesn't exist in the archive. If we take that as one definition of silence, then total, distributed listening and accumulation of data presents almost no gaps, if we take it to its furthest kind of conclusion. But you mentioned at one point that there might be the opportunity to choose not to listen to certain things, right? That somehow, in developing these systems of Machine Listening, we should, as a society or whatever, determine that there are certain things we want not to listen to, that should be inscrutable - I think that's the word you use. So I was thinking about that inscrutability as a form of silence, but a silence that is not a natural thing; it's a political thing that we have to strive for. So I guess I was thinking that maybe that would be one mode of thinking about silence in this condition. Yeah.

Shannon Mattern (00:39:13) - I agree. And this is, again, drawing some inspiration from work that's happening in the visual, machine-looking, machine vision realm, where some cities are deciding just not to record, not to put up cameras, or to produce artistic modes of resistance - what we used to call culture jamming - or to engage practices of refusal. I mean, there are different kinds of political communities that use these different words - resistance, refusal, non-use - and all of them have different political valences, but these are ways to produce silences. Maybe you could say that they're a principled choice to just not participate: to either not render your face visible to the machine, to not render your voice or other sonic emissions

Shannon Mattern (00:40:06) - audible to a machine, or to just not have the machine altogether. And there might be some other alternatives within that span, but these are ways that we're producing sonic absences or silences in the record, I guess we could say.

Joel Stern (00:40:23) - One of the things that Lawrence Abu Hamdan, who has done a lot of work thinking about the politics of silence and silencing, said in a recent lecture for Unsound was that, with Machine Listening and the sorts of forensic listening that we have now, the distinction between background and foreground has really been dissolved, to the degree that what we used to think of as background noise is now just as easily assimilated and analyzable and operational. So perhaps our understanding of noise and silence is also tied up in these notions of foreground and background, which have a lot to do with human audition, and they start to become much blurrier with non-human audition.

James Parker (00:41:18) - Could I just follow up on that? Because I was going to say, if we think about inscrutability and foreground/background - a patent I was looking at recently is for a form of sonic branding. They give the example of soft drink cans, where the patent was for a method:

James Parker (00:41:42) - basically, you would design the ring pull so that it would make a sound that was particular to that brand of soft drink. But that sound was not designed for human hearing, so there's no necessity that a person be able to distinguish between a Coke and a Pepsi or whatever, but the sound would be legible, interpretable and comprehensible to the machines that were listening in the home. And so that immediately makes me think of this idea of operational listening, which kind of comes out of Trevor Paglen thinking about the operative image after Farocki. And that sounds no longer need to be intended for human ears, right? Where's the foreground and where's the background in the pulling of that ring pull? That's flipped: for me, it's the sound of forthcoming refreshment, and for the machine it's crucial brand data that can be linked to whatever it is that that terrifying regime is linking it to. So just in terms of thinking about silence, inscrutability, foreground, background - it seems like that's relevant.

Shannon Mattern (00:43:09) - Absolutely. I mean, this reminds me of another piece I wrote a couple of years ago called 'Things That Beep'; it was about the history of sonic branding. It was just a short piece, but I wrote it for an online venue so that I could include audio files and video of historical advertisements. So I'm looking at things like the sonic branding of espresso machines and vacuum cleaners and car doors - which, what's her name, Karin Bijsterveld has written about - and the chip bags, and then up to the beeping and the sonic branding of our machines, and how certain auras or personas are attached to particular platforms based on the beeps and chimes that they make. So, yes, thinking beyond that, to how this type of branding still applies if you're producing sounds for listening machines - that's a really interesting direction to take, to sort of extend that research.

Joel Stern (00:44:05) - I'm just sort of tempted to bring us back to a slightly earlier point in the conversation. We've been talking - James, Sean, and I - about the need sometimes to ask sort of dumb questions to get, in a way, a broader sort of answer. And I guess I was just thinking about the panacousticon, and this experience of living in a society in which you feel that you're constantly being overheard, and your sonic worlds are sort of captured

Joel Stern (00:44:44) - and scrutinized and instrumentalized. I just wonder if you could say something about how you understand the social experience of living in a panacoustic sort of context, in terms of human relations and behavior. And I guess it comes back to what we were talking about before - certain dystopian and utopian horizons of Machine Listening. If we think about the panacoustic context in social terms, and in terms of its social impact, do you have a way of summarizing your sense and feeling as to what that entails?

Shannon Mattern (00:45:29) - I wish I had thought about that more, but thinking about it specifically in terms of the pandemic: I remember, shortly before I published this piece, I wanted to see if anybody had yet patented a kind of panacoustic technology that would allow for diagnostics - listening for a particular kind of cough or any type of sonic indicator of illness that could be used for COVID-19. And sure enough, some people had put up on arXiv some examples of diagnostic machine listening technologies. So this is something where I can imagine a lot of stifled sounds, the use of technologies to garble or mask the voice. Just as we see the rise of both techniques and technologies to thwart recognition by visual technologies, I would imagine similar things would be happening. Or we could have more strategic silences: people just not talking in public spaces where there is the potential for eavesdropping.

Shannon Mattern (00:46:37) - So again, there's that potential for society using silence and refusal strategically, or using these techniques and technologies for masking. So those are a couple of examples, and those could be at multiple scales of design: they could be gadgets, you know, face masks that have listening- and speech-altering capacities, through to the long history of using architectural acoustics to do these types of things, to the incorporation of ambient technologies within an architectural space or urban spaces. So there could be multiple scales of deployment of these types of potential masking technologies. Again, I'm a bit hesitant - I haven't thought so much about that myself. I'd be curious to hear if any of you have thoughts about these potential short- or long-term social applications of the panacousticon.

James Parker (00:47:26) - Could I just ask, as a follow-up, where you think a more systemic critique - rather than a kind of responsive, reactive, masking sort of response to audio surveillance - might begin? You said before that you've read a lot of stuff on machine vision and so on, and some of that's beginning to get traction. I guess one of the things we're interested in is how we could help to produce a systemic critique of Machine Listening, and I wonder if you have any reflections on how, strategically, you might go about that?

Shannon Mattern (00:48:05) - Sure. Well, there is the potential, again, of regulation and refusal. This is what some people are calling for when they call for a systemic approach to, or resistance to, machine vision or surveillance: rather than just asking individual people to produce individualized consumer responses, they're instead calling for regulating and breaking up big tech, et cetera, holding big tech accountable for thinking about the potentially nefarious applications of their technology. But then there are also the larger root issues that some of these things are getting at - the policing applications, the fact that we're using machine vision for asylum, for instance. These are larger things that go way beyond the use of the voice as a diagnostic. We have to think again about larger patterns, like climate change and forced migration patterns, or the breakdown of criminal justice and social justice infrastructures in our society. These go way beyond the band-aid application of machine vision and Machine Listening as stop-gap measures to fix broken systems. So this would be going even beyond just critiquing or fixing the system of Machine Listening itself, to fixing the larger entangled systems to which Machine Listening is an insufficient and partial proposed solution.

James Parker (00:49:26) - I mean, it sounds like that's a critique of techno-solutionism, which, in the context of the pandemic, has just been so clear - the drive towards techno-solutionist responses, in relation to contact tracing apps, for example. You mentioned COVID diagnostics before. We've been looking at a number of these organizations. One, for example, called Voca AI, put out a call in March for people to provide voice samples for the purposes of producing a diagnostic tool, and they had 30,000 samples generated within a couple of days. So it's not just that the companies are pushing techno-solutionist responses; there's a sort of broad cultural buy-in - people wanted to do their bit, right? So they provided their samples, and the legal terms of that provision were extremely murky. And this is a company that, most of the time, provides voice assistants to call centers. So it's not at all clear what the possible impact is - I'm not trying to impugn them, it's just to say that it wasn't clear at all.

James Parker (00:50:39) - But there's a desire to contribute to these kinds of technical and political responses to an extremely unsettling situation. And there's the question of the role of big tech moving into the health space more generally in relation to the pandemic. I was really interested, actually, in the way that, in the piece, you talk about how the city is often metaphorized as a body. And so if we think about listening to the city, you talk about the stethoscope, and then immediately it becomes a kind of diagnostic relationship - listening as a form of diagnosis. It just seems like there's something interesting going on with listening as diagnosis, and the way that's specifically unfolding in the pandemic context, and the way that machines are increasingly moving towards diagnostic or kind of forensic applications.

Shannon Mattern (00:51:36) - And the diagnostic is often the first step towards predictive applications, right? And then there are, again, plenty of nefarious applications there. But just this idea that people feel compelled, or even as if it's a civic duty, to contribute a voice sample to these applications - I think there are a couple of different phenomena converging here. One of them is the fact that - several people have written about this - the use of technologies in crisis situations tends to normalize them, and it allows for their continued application well beyond their initial use in a crisis. So this is the case where you have convenient mission creep, or you have something that is actually institutionalized for the long term that was originally deployed under an ostensibly delimited application for a specific crisis context.

Shannon Mattern (00:52:29) - And then you also have the magical thinking, the allure of 'isn't it amazing what machines can do'. I mean, you had mentioned Trevor Paglen's work, and you also mentioned Vladan's work with Kate Crawford. So the project they did together at - was it the Prada Foundation? - where they were using, I think it was ImageNet, a data set, to look at facial recognition: lots of people were contributing, I think at the exhibition and online, contributing their faces to say, isn't it interesting or funny to see how wrong, or how right, the machine is about me. But there was also a discussion among a lot of African-American folks - or sorry, black, not only African-American, but the black community on Twitter - of 'I don't want to contribute my face to this.'

Shannon Mattern (00:53:17) - It might be a fun diversion or a novelty, to folks who aren't already typologized and surveilled, to see how the machine sees you. But if you're a part of a marginalized community that is always already typecast by this type of technology, you might not want to voluntarily contribute your face to the data set too. So these are some of the compulsions: there's the novelty and entertainment value of contributing, just to see how the machine looks and how right or wrong it is, plus also the crisis context that gives people a sense that this is almost a civic duty, to give up your data.

Joel Stern (00:53:57) - No, that's exactly right. And I think that's why 30,000 people would offer up their coughing samples to an AI in a couple of days - also because the allure of being able to get a diagnosis instantly over the phone, rather than having to risk going to a hospital or a doctor, is such a powerful one. So the kind of imagined payoff is very seductive, and it's very tempting not to think in a broader, infrastructural way about the problematics of participating in such schemes.

James Parker (00:54:40) - That book by Bernard Harcourt where he talks about expository power - you know, it does seem like the power to, not exactly force, but to promote exposure of oneself, whether auditory or otherwise, is an increasingly important front politically.

content/transcript/pelly.md

---
title: "Liz Pelly"
status: "Auto-transcribed by reduct.video with minor edits by James Parker"
---

James Parker (00:00:01) - Welcome Liz! Perhaps you could begin by telling us a bit about who you are and what you do?

Liz Pelly (00:00:23) - Sure. My name is Liz Pelly. Thank you for having me in this program. I am a freelance writer. I write mostly about music, and I'm involved in other music and media projects currently, and have been involved in different music and media projects over the years. But right now most of my time is spent writing on a freelance basis, doing some teaching, and more recently working on a newsletter project, as a lot of other writers have been recently as well. For the past four or five years I have been writing about music streaming and the music industry, and various issues related to how music streaming impacts music communities, from different angles.

James Parker (00:01:23) - Great. And how did you end up working on - what was it that drew you to music streaming? I mean, am I right in saying that music streaming has been the kind of focus of your work, or is that just the way it appears to people who read The Baffler and so on?

Liz Pelly (00:01:42) - Yeah, the focus of my writing for the past few years has been mostly music streaming. I've been writing about music since I was a teenager, for the local alt-weekly in the area where I grew up; I did college radio, a random music blog, and have been involved in various independent music media projects over the years, in addition to freelancing and working on staff at different places. I've also been involved in various community art spaces - organizing venues and booking and putting on shows and events and stuff like that. And I play music also. And I think it was around 2016, I was basically just reflecting on some things I wanted to write about. It was when I was deeply involved in this arts collective that was taking a lot of my time, and balancing that with freelancing, and it just occurred to me that it would be interesting to do an article about major label influence on Spotify.

Liz Pelly (00:03:03) - I think because of being involved in independent music communities, both as a writer and also a participant - a lot of the work of being involved in what has historically been called an independent music community is inherently tied to sort of interrogating the centers of power in music in one way or another, or thinking about what it's independent from, or, you know, what is it an alternative to? And sometimes those are complicated questions. Sometimes what presents itself as independent and alternative music has not been very much of an independent outlet, or alternative, at all. Anyway, so in 2016 I was thinking about streaming services as these new centers of power in the music industry. And you would hear that what ended up on Spotify playlists was really influenced by major record labels.

Liz Pelly (00:04:05) - Kind of like major labels had these back doors into Spotify playlists. I wasn't even really a big Spotify user; I just thought it was interesting. So I was clicking around Spotify one day, looking at playlists, and decided to reach out to some people I knew who either worked in the music industry or knew people who worked in the music industry, and through some emailing ended up being able to do an interview, anonymously, with someone who worked at a major label, who basically explained everything to me about the relationship between major labels and Spotify. I mean, at the time it felt like everything. Now I know it was only a pretty small perspective, one person's experience - but it was this person's experience, and it was very illuminating at the time. So I wrote this article

Liz Pelly (00:04:58) - about major label influence on Spotify playlists. And, you know, I basically wrote that story, and then after writing that story had ideas for like five more stories. And The Baffler at some point reached out about doing an article about Spotify, and I sent them these five story ideas that I had as follow-ups to the previous one I had done. And they said, this is all interesting - what if you wrote an article that incorporated all of this? So then I ended up taking these, I don't know if it was four or five, story ideas and kind of combining them and writing this piece, 'The Problem with Muzak', which was the first piece I wrote for The Baffler. And yeah, each piece revealed that there were kind of more stories to tell and more angles to come at the issue from. And that led to doing a column - which has been recurring, more like a series - that I've been working on since then; that was in 2018. So there have been like five or six installments since. I don't know if that answers your question, basically.

James Parker (00:06:21) - I mean, I kind of want to go back to the beginning and ask you to say a little bit about what you found about the relationship between Spotify and independent labels... sorry, major labels, I mean. Yeah.

Liz Pelly (00:06:35) - Yeah. So a lot of what was covered in that piece is, I think, kind of more common information now (at the time, there were already a lot of people who worked in the music industry who knew all of these things): that in order for streaming services to launch, they had to have all of the major label material licensed, and in order to get those licenses from the major labels they had to sign pretty secretive contracts that involved all sorts of promotional agreements. Which accounts for a lot of the reason why major label material is so heavily promoted in the playlists and the banner ads; they all have free advertising space, essentially. And those contracts are very secretive, so it's not officially known what the terms are. And to date that is something that artists and artist advocates are always demanding more transparency on, but that probably won't ever happen. And yeah, it remains, I think, an important piece of the conversation about the relationship between the major labels and the streaming services.

Joel Stern (00:08:02) - I was really amazed, in the socialized streaming article from a couple of months ago, at how you describe the origins of Spotify, kind of drawing music from the personal collections of its employees that had been downloaded from Pirate Bay and other file sharing services at the outset. It kind of made me think of them as sort of oligarchs, you know, in the way that at this sort of moment of transition and structural change, the people who can sort of steal and then in some ways mobilize that property very quickly into a new kind of legitimate structure have a massive head start. Could you say anything more about the origins of Spotify as a sort of illegitimate platform that somehow becomes legitimate at a certain moment, for particular reasons?

Liz Pelly (00:09:11) - Yeah, so that particular anecdote was something I learned through reading the book Spotify Teardown, which came out a couple of years ago and was authored by a group of academics and researchers based in Sweden. And I think that book's section on the history of Spotify is really illuminating. It was also through reading that book that I learned a bit more about the backgrounds of Daniel Ek and Martin Lorentzon, who were the co-founders of Spotify, and learned, you know, that when they patented the name Spotify, they knew that they wanted to create a platform for distributing media, but it wasn't even really clear what type of media they would be distributing. They had backgrounds in advertising,

Liz Pelly (00:10:01) - So I definitely would refer people to that text for more information on that. But I think it is important to keep in mind that music is not something that was ever central to the goals of this company, which is now a 50-plus billion dollar publicly traded corporation that has such outsize influence on so many musicians' ability to make a living, and so much influence on how people relate to music and how music circulates. It was coming from individuals whose backgrounds and goals were in ad tech.

James Parker (00:11:01) - Is it now the biggest streaming platform? I mean, my memory of Spotify is from around 2010 or so: the great jukebox in the sky, you know, it's coming, it's here, it's in Europe, and it wasn't yet in Australia. And I don't remember any of the others, and it was all about, we need the venture capital people, this is the way of getting past piracy, and so on and so on. So Spotify was, in my memory, that kind of early player. But I don't know now whether it's the biggest player or has been the most influential, because on one level your work is about Spotify, but on another level it's about streaming per se, and, I suppose, not just music streaming but the sort of platform economy, and we can get to that. But how do you situate Spotify in relation to this broader network of platforms, both historically and politically, I suppose?

Liz Pelly (00:12:06) - Yeah, it's interesting. Spotify is still the biggest streaming service, and I think that's one of the reasons why, for me personally, it feels worth following the thread of covering Spotify as an entity. Even though, like you mentioned, in the writing that I've done and the stuff that I'm working on, when you're writing about Spotify you're not just writing about Spotify. A lot of these issues are really issues of streaming more broadly, issues of the music industry, issues of how music is valued in society. There are issues of labor, of surveillance; these are much broader conversations. And yeah, it's interesting, one of the questions that you had sent me ahead of this was kind of about, why focus on Spotify? Or why did I start writing about Spotify?

Liz Pelly (00:13:11) - And I think, for me, the reason why I started writing about Spotify is a little bit different from the reason why I'm continuing to want to write about it now. Like I outlined before, starting to write about it was just one story idea leading to another and thinking it was an interesting thread to follow. And now, when I think about it, I think of it more in terms of my role as a journalist and what role I can play in the broader conversation that is happening regarding music streaming. There are a lot of different types of people doing research and writing on these issues, like academics and artists and activists and people who are involved in shaping policy. And I think that part of what I can contribute, as a journalist and someone who writes relatively accessible and relatively short (in the grand scheme of things) articles, is trying to connect the dots between those voices and that other work, by bringing in perspectives from academics, interviewing artists, not just popular artists but artists of all scopes and practices.

Liz Pelly (00:14:20) - You know, being in conversation with activists who are working around both holding streaming services accountable and imagining alternative visions in total, and sort of keeping all of that context and then bringing it all together in ways that are accessible to a general readership. And from my perspective, because so many people use Spotify, and because it's something that people are always interested in reading about, I think there are ways to frame the narrative as being about Spotify, but really about a lot of other things too, if that makes sense.

James Parker (00:15:05) - Totally. I mean, the most recent piece makes that argument so explicitly. I'm sort of jumping ahead, but you're making an argument about the socialization of streaming as a kind of Trojan horse for the possibility of socialism, and I think that works really, really well. Like, if we can't get these platforms in check, we need to be imagining the whole beast, not just the technological part. But I don't know if this is a good time to get into the socialized streaming thing, because it's one of many political angles that you take in your work.

James Parker (00:15:57) - And I wondered if a nice place to start, to get into some of the detail of your writing, would be these amazing experiments you run. Because, to go back to what you were saying about the accessibility of your work, I think that's really true. Your writing is fantastically accessible and really among the most sophisticated I've read on Spotify. And one of the things that is really compelling in your writing is these experiments that you've done on yourself and your own listening and the Spotify platform; it almost feels like you're attempting to work out what's in the black box from the listener's perspective, or something. I just wondered, could you tell us about a couple of those experiments and what you found, and how they made you think differently about Spotify, or what politics they revealed, or what have you?

Liz Pelly (00:16:51) - Yeah, for sure. I think someone who wrote about one of these articles once referred to them as micro-studies, and I always thought that was interesting. And I like to reiterate that these listening experiments that I've done are so small scale (it's just me with one account, listening) that it's not a very robust research project, like someone could do if they had the backing of an academic team or research assistants or something like that. So they should be thought of as raising questions more than providing answers, in a sense. But the first listening experiment I did was in 2018, around gender bias on the most popular Spotify playlists.

Liz Pelly (00:17:50) - And it was pretty simple. Basically, I started a brand new Spotify account and spent a month just on that account, listening to only the most popular playlists on Spotify, like Today's Top Hits and New Music Friday, Rock This, Hot Country, and a couple of other ones. And I would play through those playlists; I can't remember if it was whenever they were updated or if I just played through them every few days or something like that. And then every week I would download a spreadsheet of what was on the playlists, and I would fact-check the gender identity of the artists. So I basically had this look at the gender breakdown of all these most popular playlists. And then every week I also looked at my algorithmic recommendations on Discover Weekly.

Liz Pelly (00:19:06) - And I also listened to those algorithmic recommendations too. The details of this micro-study of sorts are more specifically articulated in the article, which was called 'Discover Weekly,' but with 'weakly' spelled w-e-a-k. So at the end of the study, basically, I found that the playlists were extremely male dominated, basically as male dominated as the mainstream music industry. And also that the average percentage of women artists on the editorially curated playlists that I listened to was almost exactly the same as the gender breakdown of the algorithmic recommendations. So I believe it was, on the editorial playlists,

Liz Pelly (00:20:05) - I estimated there was an average of 13.8% women artists, and then on the algorithmic recommendations it was about 12%. So that was interesting. And I think, for me, I wanted to basically provide an example of something that I think is now pretty obvious, which is that algorithmic recommendation reinforces bias. Or that algorithmic recommendation upholds social norms, creates echo chambers, and reproduces the same types of bias. So that was in 2018. And then the other listening experiment that I did was in 2019, I think. I was looking into Spotify mood playlists, and at mood data surveillance and how it affected the types of advertisements that were shown and the types of playlists that were recommended, and also tying that to some investigation into some deals that Spotify had made with some advertising and marketing companies about selling people's mood data. So that was, yeah, a couple of years ago too.
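[A minimal sketch of the kind of weekly tally described above, assuming hypothetical CSV exports with "artist" and "gender" columns; the file layout, column names, and function are illustrative assumptions, not Liz's actual method or Spotify's export format.]

```python
# Illustrative only: aggregate gender representation across weekly playlist
# exports. Assumes hypothetical files week1.csv, week2.csv, ... each with
# "artist" and "gender" columns; these names are assumptions for the sketch.
import csv
from collections import Counter
from glob import glob

def playlist_gender_breakdown(paths):
    counts = Counter()
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row["gender"].strip().lower()] += 1
    total = sum(counts.values())
    if total == 0:
        return {}
    return {gender: round(100 * n / total, 1) for gender, n in counts.items()}

if __name__ == "__main__":
    # e.g. {'men': ..., 'women': 13.8, ...} for a month of editorial playlists
    print(playlist_gender_breakdown(sorted(glob("week*.csv"))))
```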

James Parker (00:21:53) - Could you say a little bit more about the experiment, though? Because this whole mood thing is so fascinating. Spotify sort of pioneered naming playlists by mood, specifically in order to have a new category by which to market themselves to advertisers, as far as I understand it. So: we, Spotify, have this thing called mood data, which is basically that you clicked on a playlist titled 'get up and go' or 'sad boy' or something, and now we know something about you. Which is a very weird thing to claim to know; what does it mean to click on a playlist called 'sad boy'? I don't know what that's meant to mean, but they think it's incredibly valuable, and this is their point of distinction, and then all of the other streaming platforms followed suit. But they've got this supposedly incredibly rich database of moods, which is really just users deciding to click on the names of playlists they've made up. And then you did this experiment. What was the experiment, and what did you find?

Liz Pelly (00:23:05) - Yeah, I do always like to provide that disclaimer when talking about this article and this experiment in particular, because I think a lot of their claims about what they are capable of doing with your mood data are completely blown out of proportion in order to sell their advertising space, essentially. So it's important to keep in mind that a lot of it is just over-hyping their capabilities. But I was interested because so much of the language that they use in their advertising materials, their Spotify for Brands materials (the stuff that is Spotify going to brands, trying to sell ad space), talked about Spotify as a space of, sort of, mood boosts.

Liz Pelly (00:24:05) - They talk about it as a mood-enhancing space, and how it's this app that people go to when they're happy. And they talk about how people think of it as a place for, yeah, feeling good, essentially, which is kind of in line with the interests of advertisers, I guess. So I was looking at that, and I also happened to be doing this research for a conference where the theme was music and death. So I was mostly listening to the 'coping with loss' playlist and then looking at what was recommended to me after I listened to it. And in my experience, I noticed that a lot of the playlists I was recommended after listening to that playlist were more upbeat. They were, I think, something like 'warm fuzzy feelings' or, like,

Liz Pelly (00:25:08) - I think I was also recommended Mother's Day and Father's Day playlists. And yeah, I thought that was very interesting at the time, and I remember noticing the emotional tenor of the advertisements that I was getting. But again, like I mentioned, that experiment in particular was one that opened questions more than it necessarily answered them, and it would require a lot more research to, one, see if the idea amounted to anything, but also, I think, further exploring it could shine light on the extent to which their claim of being a mood-boosting space is just them hyping themselves up in order to sell advertising space.

James Parker (00:26:10) - It's really interesting, the different ways in which they're working in the two examples you've given. Because in the gender one, the argument, presumably, if they bothered to make it, is: we're simply reflecting your preferences back at you, world. You have your preferences, they are biased, and well, we do our best, because we've invented this new genre called 'women's music' and you can click on that playlist too. And you make that point about gender becoming a kind of genre, and that that's not exactly progressive. So in that model the algorithm is merely reflective, but in the mood one it's about the production of a certain kind of headspace. You could just have a cynical reading of it, that they'll say exactly what they want in order to serve their best interests. But it's a totally different model of understanding what the algorithm is meant to be doing: one is reflective, one is kind of productive, more about engineering listening as opposed to reflecting listening back at you.

Liz Pelly (00:27:21) - Yeah. And I guess, just to respond to the idea of it being merely reflective: I think it's important to remember that nothing about discovery on these platforms is neutral in any way. Even the idea that algorithmic recommendation just reflects what you're listening to; like, it was reflecting back what a user would get in algorithmic recommendations if that user was listening to the most popular playlists on Spotify, which are also the ones that are served to you when you open the app and you don't have much of a listening history. And they're full of not only major label content but also playlist-friendly music. There are all different types of values embedded in the type of music that ends up on the most popular playlists.

Liz Pelly (00:28:25) - And also there have been a lot of conversations about the big, most popular playlists on streaming services over time becoming filled with music made by what some people call fake artists: music that comes not from musicians out in the world, involved in artist communities, trying to make livings as artists in society, but from people who work at companies that make music for TV and advertising. And then those companies are seeing if they can milk some extra money out of these tracks by throwing them up on streaming services. But they're really just background tracks that were made by advertising and film licensing companies, music licensing companies. So the fact that that type of music ends up on these really popular playlists, and major label music is on these playlists, and artists that are employing the right artist-services companies, who have the right connections with the right person at the streaming service, get their music on these playlists,

Liz Pelly (00:29:46) - Yeah, all of those things contribute to which music ends up on algorithmic playlists anyway, because so much of what and how music ends up on these discovery tools has to do with,

Liz Pelly (00:30:07) - You know, how many user-generated playlists they're on and how many streams they have to begin with, and different ways in which they're circulating through the platform. And the chances of songs being on a lot of user-generated playlists, or being streamed a lot to begin with, are greater if they're on those curated, really popular playlists. So,

Liz Pelly (00:30:36) - Yeah, I think one of the driving influences in continuing to write about streaming, or write about Spotify, for me, has also been debunking a lot of the ... of streaming services, and Spotify in particular. Because I think one narrative that they have continued to double down on over the years is the idea that streaming services are these neutral spaces, and the music that is popular is popular because a lot of people listen to it. And that is something they've been saying for years. But even as recently as last week, there was this really big global day of action organized by the Union of Musicians and Allied Workers, with all of these demands of Spotify in particular. It was a day of action around their Justice at Spotify campaign; this is a big musicians' union that launched a campaign asking for more transparency and higher pay and an end to payola, a bunch of things, and they're also asking for proper crediting on tracks. And Spotify responded a few days later with this website they made, sort of breaking down the economics of streaming and explaining how payments work, but it included no information that wasn't already known beforehand.

Liz Pelly (00:32:04) - And in his string of tweets about this, Daniel continued hammering home this narrative. He laid all this out, and then his last tweet was like, 'but fans ultimately decide what thrives in the streaming era.' And that was certainly the statement that caught my eye, because I almost can't believe this is still something that these companies are trying to,

Liz Pelly (00:32:35) - That's a point they're still making, when it's become so clear to artists and music listeners and people in music communities how untrue it is. Especially since, in recent months, they have also rolled out new forms of payola, essentially. Like, there's this thing that is of particular interest, I think, to the Machine Listening project: there's something on Spotify now called discovery mode, where record labels can agree to a lower royalty rate in exchange for being boosted algorithmically on the platform, which is essentially a new form of payola. And there's, yeah, other,

James Parker (00:33:20) - Could you say a little bit about what payola is, for people who don't already know?

Liz Pelly (00:33:24) - Yeah. So, historically, payola was the practice of major record labels paying cash to have a song played on mainstream radio. And there has been government intervention in the United States around payola (I imagine probably in other countries too), so it's not legal. But the same laws that were created around payola on the radio don't apply to streaming services or digital platforms. So even though in their terms of service they say it's against their terms and conditions to pay for placements on certain playlists, it is okay, apparently, when labels are essentially compensating Spotify for it.

James Parker (00:34:25) - I mean, it's doubly crazy, because even if you debunk the idea that the algorithms are neutrally reflecting back what people already prefer (which is weird, because how do they know what people prefer? It's because they listened to the music which was fed to them, so it's circular; it doesn't really work anyway), and obviously there's this new discovery mode you talked about and so on, it's also a very specific way of thinking about the value of music to say that the more streams you get, the more money you make. There are many different ways of saying 'fans decide.'

James Parker (00:35:07) - And I think that kind of argument covers over the fact that, on some level, the biggest move was to make payment dependent on clicks, or streams. And I'd be interested to know what potential you see for moving away from that model, because as long as all of the argument takes place within that channel, or down that road, you've already lost a very significant part of the battle. Especially for experimental musicians who, as everybody knows, you might listen to rarely, but not because you value it any less, or difficult music, you know. When I use streaming platforms, I basically use them to listen to music to work to, to be honest. And that's not because I value it more; if anything it's because I value it less, in a certain way. So yeah, we've already lost the battle if we play it on the territory of pay-per-stream.

Liz Pelly (00:36:24) - Yeah, totally. I mean, this is something that Holly Herndon has articulated really well, and

James Parker (00:36:32) - I should acknowledge that that's what I was thinking of. Yeah.

Liz Pelly (00:36:36) - I mean, I just remember talking to her about it on her podcast, and the way she articulated it was really powerful: this idea that just because a piece of music is something that you'll want to listen to on repeat does not mean that it inherently has more value than something you might want to listen to once. Or, you know, the idea that per-stream valuation will never work for artists that, for example, release 20-minute ambient tracks or very long soundscapes the way it will for artists that make a two-minute-and-thirty-second pop song. So that in and of itself is another reason, I think, why it's really important to be conceptualizing new ways of distributing and compensating digital music outside of streaming services.
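[A back-of-the-envelope illustration of the per-stream point above; the flat rate and the one-hour framing are assumptions for illustration only, not actual payout terms of any service.]

```python
# Illustrative only: under a flat per-stream rate, an hour of continuous
# listening pays a short pop song several times what it pays a long ambient
# piece. The rate below is an assumption, not a real Spotify figure.
ASSUMED_PER_STREAM_RATE = 0.004  # dollars per stream, purely illustrative

def payout_per_listening_hour(track_minutes, rate=ASSUMED_PER_STREAM_RATE):
    streams_per_hour = 60 / track_minutes  # full plays that fit in one hour
    return streams_per_hour * rate

pop = payout_per_listening_hour(2.5)     # ~2:30 pop song  -> 24 streams/hour
ambient = payout_per_listening_hour(20)  # 20-minute track ->  3 streams/hour
print(f"pop: ${pop:.3f}/hour  ambient: ${ambient:.3f}/hour")  # an 8x gap
```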

James Parker (00:37:42) - Is the union making any demands along those lines?

Liz Pelly (00:37:46) - Well, when I think about change in the music industry and change in music streaming, and what would be really useful for a healthier music culture, I think it really is a combination of both holding the services accountable and imagining alternatives. And I think the stuff that the UMAW campaign and other groups like the Music Workers Alliance (those are based here in New York, and I know there are other international musician organizing efforts that have popped up over the past few years) are doing, all of that work to hold services accountable and improve the way things work now, is an important piece of the puzzle. But I think it's somewhat separate from the work of imagining bigger-picture alternatives, which is the stuff I was writing about in the article about socializing streaming, talking about the American Music Library project and ideas around a government-funded, taxpayer-funded streaming service alternative.

Liz Pelly (00:39:06) - I think both are important.

Joel Stern (00:39:12) - Yeah, I really enjoyed that article; I was just reading it over the last couple of days. And it actually made me think of the way the National Film and Sound Archive in Australia used to work in the seventies and eighties. I've spoken to a lot of experimental filmmaker friends who have told me that they would make a film on 16 millimeter, on celluloid, and then the archive would pay them the equivalent of the cost of striking two prints. One print would go into the archive and one print would go into the filmmakers' co-op, where it could be continually hired by film societies, and the one in the archive would then circulate for educational purposes. But that system of the national archive purchasing the equivalent value of two 16-millimeter prints was what allowed the filmmakers to have the confidence to keep making films, knowing

Joel Stern (00:40:13) - That the value of them was understood, and that there was a kind of sustainable economic model for it. And that's why in the seventies and eighties the experimental film scene in Australia was a really vibrant, amazingly productive and prolific one. But then the model switched: the filmmakers moved into video, celluloid was phased out by video art in a certain sense, and the modes of streaming and digitization and things like that changed, and that model collapsed. It returned to a much more precarious and, not exploitative, but a model where it was much harder to establish the value of the work, or to have a sustainable economic system to keep it going.

Joel Stern (00:41:11) - So yeah, sometimes when reading your work it's hard not to feel nostalgic for that period, and also for the music scene that I grew up in, which revolved around record stores and really great locally produced magazines and a kind of community that valued music in really specific and important ways. It's sort of saddening to see music itself be kind of hollowed out and leveraged for this sort of data brokering project.

Liz Pelly (00:42:07) - Yeah, I think it is very sad also, and that is part of why I think it's so important to imagine alternatives, and not just imagine them, but also do the work of trying to figure out how we make them happen. But what you said about the archive is so interesting, because in the socialized streaming article I was looking at the idea of a government-funded, taxpayer-funded streaming system, and also at these smaller projects that have popped up across the United States and Canada, where public libraries are creating these small, local, library-run streaming programs where they'll purchase a five-year license for local musicians' music to have on a sharing platform. And what you just outlined makes me think: what if something like this library idea was being done by the Library of Congress or something like that, where they're paying a flat fee for the ability to license artists' music for X amount of years to have in their own archive?

Liz Pelly (00:43:16) - I think that would also be really interesting, because of what we were just talking about: especially for artists whose work doesn't lend itself to any kind of per-stream payment, even the idea of a government-funded streaming service that still relies on a per-stream royalty would only go so far. So if you could tie something like that to some sort of archive, where a flat licensing fee could be used to compensate artists for having their work in an archive or library for educational purposes, that would be such an incredible piece of the puzzle in thinking about public funding and music. I do wonder, though, whether these ideas are more likely to be tried out in countries other than the United States first.

James Parker (00:44:12) - It's funny, though, that Spotify is a Swedish company, originally anyway, when you think of the way in which Scandinavian countries are always held up as a kind of paragon on that front. But I mean, I just wanted to,

Liz Pelly (00:44:27) - Spotify Teardown, also, the book starts with a pretty interesting chapter examining Spotify as a Swedish company and trying to put it into that context, looking at the political context of it. It's really interesting.

James Parker (00:44:45) - I just wanted to drill down a little bit on this: let's say we're imagining socialized streaming. One of the benefits is on the labor end of things, right? We can imagine artists getting paid for their work and not being paid necessarily per click, going with the licensing model or something like this. But it's also really important to notice that a socialized streaming platform could decouple streaming from surveillance.

James Parker (00:45:13) - I mean, that's a really important point that you make in the piece. We have been bred into a system where streaming, whether it's Netflix or wherever, just is synonymous with surveillance: of course the data is captured, and okay, well, sometimes we might not want advertisers to do this or that, or maybe I'll agree to the cookies here or there, or whatever. But it is entirely possible to imagine the great jukebox in the sky with none of that: one that doesn't try to hack my mood, that doesn't try to say that I should chin up and feel better after somebody I know has died or something, because it's the platform for a positive music listening experience or whatever the hell, to monetize your ear holes.

James Parker (00:46:08) - Yeah. I mean, I don't know if I've already made the argument, but I just wanted to say: it's not just about labor, which is really important, but also about the capture of listening itself. And that's related to this more broadly: Spotify is sort of moving into podcasting, and if you go on Spotify now it's not obvious what it is that you're listening to. Spotify is trying to commodify all of listening. And maybe this goes back to those roots, as you were saying: they're effectively an advertising platform, just an audio advertising platform as opposed to any other form.

Liz Pelly (00:46:54) - Yeah, totally. I mean, I think in the article I raise the point that it's important to think about the various types of compromises that we make as listeners on streaming services, or that people make when they're listening on streaming services, regardless of the advertising aspirations. I think part of the reason why talking about advertising is tricky is because of the extent to which, I mean, I don't know if this has changed, but I've heard that they're not very good at advertising, and there have been some interesting conversations about digital advertising as this bubble that's about to burst. There's this really interesting book that came out last year by Tim Hwang called Subprime Attention Crisis, which was about questioning whether or not advertising, as the economic engine of the internet, is something that is about to just burst.

Liz Pelly (00:48:00) - So that gave me an interesting other perspective on talking about Spotify as an advertising platform. But regardless of the complications around whether or not they're successful at advertising targeting, it's still something they're driven by, and it's still the incentive that they're chasing, or that they're communicating as a crucial part of their business. And also, even if advertising is completely taken out of the equation, there are still ways in which this machine listening is happening, in the way that Spotify is tracking everything you listen to in order to serve you better recommendations, to strengthen its product. So even if the only product Spotify is advertising is Spotify, it still affects your experience as a listener. And I think that is actually starting to happen: the sort of audience segmentation and streaming intelligence that they claim is so useful to advertisers, I think they're actually starting to build playlists around that data, and recommend playlists around that data, anyway. Yeah, I think it's just really interesting to remember how people listen differently if they're not being watched by a corporate advertising streaming platform.

James Parker (00:49:36) - This is a really banal point, but I find that my listening is really different as a function of having to type into a search bar rather than looking through a shelf. And I know that's incredibly banal, but I forget what music exists, you know? And Spotify wants you to forget, because it wants you to click on a box that says a word, or has a picture that you like. And then I go to my record collection and leaf through it. And again, that's an extremely banal point, but

James Parker (00:50:12) - I feel like a lot of the attention goes to the algorithm, but if you think about the interface as well as the algorithm, and you think about what it means to listen via the phone versus through your sound system, it's just a totally different experience. I mean, the other experience that is in some ways technically similar to Spotify, but politically completely different, is listening to community radio streams. Usually, if I put on Spotify, after a little while I check myself and turn it off and put on WFMU or NTS or an Australian community radio station. And you can remember what it's like to have a person hosting a radio show who is passionate about music, who's playing new things and cares whether you like it or not. And it's a completely different logic, which isn't extractive. And unless you contrast the two listening experiences, it's easy to forget what the difference is.

Liz Pelly (00:51:29) - Yeah, they couldn't be more polar opposite experiences. But I definitely agree that in some ways listening to a community radio station, like you just said, can really illuminate for people how different an experience it is listening to radio versus just a playlist or an algorithmic playlist. That was actually my New Year's resolution this year: listening to community radio more. And it has been going really well. I love the radio.

James Parker (00:52:13) - Did you see that patent, that Spotify patent, that hit the news a few weeks back? I think they originally filed it in 2018, but it only, I actually can't remember what the language is for when a patent is granted; I think it was the moment it was granted, or what have you. But it takes the personalization logic one step further. The idea is that it wouldn't just follow your listening habits in the app, whether you click this mood or that mood, or whether you prefer to listen to riot grrrl or ambient music for work or what have you, but that it would also literally listen, through the smart speaker or through your phone, to your environment and so on.

James Parker (00:53:18) - So, again, this is like the intensification of the mood point. Supposedly it would listen for metadata that indicates the emotional state of a speaker, the gender of the speaker, the age of the speaker, the accent of the speaker, the physical environment in which the audio signal is input, so whether you're in an indoor space or an outdoor space. And so, thinking of machine listening, this is an example where the real intensification of surveillance seems to be on Spotify's horizon. And there was a bit of uproar when that patent got a public profile, but I feel like it was also quickly normalized, and it might just slip in; people might not even really notice or care, and the terms of service will be updated. Did you follow that, or have any views on it at all?
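[For concreteness, a hypothetical sketch of the kind of inferred 'metadata' record the patent describes; the categories are the ones listed above, but the field names and structure are illustrative assumptions, not taken from the filing.]

```python
# Hypothetical sketch only: the categories come from the patent as described
# in the conversation; the field names and types are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class InferredListenerContext:
    emotional_state: str   # e.g. "happy", "sad", inferred from speech
    gender: str            # inferred from the voice
    age: str               # inferred from the voice
    accent: str            # inferred from speech
    environment: str       # e.g. "indoors", "outdoors", from background sound
```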

Liz Pelly (00:54:22) - I'm clicking. I clicked on the link. I, you know,

James Parker (00:54:29) - It's not big news in your world, then.

Liz Pelly (00:54:34) - No, it is. I just... sorry, I'm just looking at it. Maybe it's something that I thought was known already.

James Parker (00:54:55) - I mean, that's the thing. Every time this kind of thing hits the press, it's like, did we not already know that? And whether or not Spotify was doing it, other places are definitely doing it. I mean, the Apple HomePod,

James Parker (00:55:12) - One of its features, when it marketed itself, was that it listened to your room, and not just the size: it listened to the size and shape of the room in order to better broadcast, or disperse, the sound into the space. But then it also could work out where the listeners were in the room, from listening to voices and movement and so on. And that was explicitly the big point of difference, why you should pay more for the HomePod rather than the dinky little Google, whatever it's called. So it's not like these things haven't been happening for a long time, but for some reason this one seemed to get a bit of press. Maybe it's just because,

Liz Pelly (00:55:59) - Yeah. And the fact that it is something that is known, or something that's been happening for a long time, doesn't mean that we shouldn't still be paying attention to it and demanding that these companies don't surveil users, and shouldn't be reminding listeners of what is happening. I think it's an important thing to bring up, because a lot of people associate smart speakers with in-home surveillance, but I don't know the extent to which it has become part of people's general digital media literacy that their Spotify app itself might be listening to their home environment. So I think that's pretty dangerous and something that, yeah, I think is worth the coverage that it got, which I missed, or don't remember. Oh, wait, what's this? This was from 2018, though.

James Parker (00:57:18) - The patent was filed in 2018, but it just got granted, I think, a couple of weeks ago, in any case, and there was some publicity recently when it was granted. And I suppose, yes, Spotify wanting access permission to your phone's microphone is, in some ways, just so revealing of what their actual business model is. Because obviously with smart speakers and voice user interfaces the microphone is an essential technical part of the operation of the device, but with Spotify it's hard to understand how it would really improve the service, or do anything other than massively expand the data set, and also open the possibility for collusion between Spotify and other agencies with regards to the surveillance of everyone's audio environments.

James Parker (00:58:32) - I mean, I think it should be... go on, sorry, I shouldn't have interrupted. Yeah, absolutely. I was just going to say, I think we should be a little bit clear about what the problem is. And I don't think the problem is necessarily that the microphone can be hacked or whatever; that's a thing, we know that from Snowden, and Spotify is not specifically part of that problem in this case. But I think the problem is partly the fantasy, and this is the point that you were making before, that they could know anything worthwhile from this data, because that's what they're selling. It's a fantasy that they're selling to other companies. Like, what could you possibly do with the information that my voice is supposedly gendered male, according to your model? What are you going to do with that?

James Parker (00:59:30) - They don't say race (it's not listed in the patent), but what are you going to do with your weird vocal determination of my race? You're going to sell it. You're going to sell the fact that you can do something. But I can't imagine, it's the same as with the mood things: what could you possibly actually know about me that would let you serve me any better? I just don't buy it, I'm sorry. Part of me is like, this is creepy, but actually the creepiness is not the problem. The creepiness is just the product. Because I think what the problem is, is Spotify

James Parker (01:00:08) - Selling snake oil, basically, to accumulate market share and to cement power as against, you know, artists. And this is just part of the bullshit that helps it do it.

Liz Pelly (01:00:23) - Yeah, totally. It's like new talking points for when Daniel goes on TV shows where he talks to Wall Street people and tries to get investors to keep throwing venture capital and investment into Spotify stock.

James Parker (01:00:39) - Yeah. I mean, emotion recognition in voice, we've done a little bit of research on it, and it's just so mad. It's so mad. We had a great chat with Jessica Feldman, and she talks about one app that literally tried to tie emotion to pitch, so that speaking at the musical note C is supposedly consistent with a particular emotional state. It's complete and utter nonsense, but it's nonsense that's really marketable at the moment. And I'd be tempted to say that the bubble will burst, except that all the other bubbles don't seem to have burst. So, you know, it's self-perpetuating.

Liz Pelly (01:01:27) - Yeah. I mean, it's tricky, because it's all true. Like, knowing the extent to which it's bullshit, what's the best way to contextualize the critique within that? Acknowledging that it's bullshit, but that it's also such a big part of their narrative, and trying to offer a meaningful critique. Because by critiquing the thing that is kind of bullshit, you run the risk of almost reinforcing it. So that's a tricky thing to balance in having these conversations, I think.

James Parker (01:02:12) - I'm conscious that we've been going on for a while. I do have one last question, which you can choose to answer or not. I just noticed that you've been working a little bit on this other app, or platform, called Sofar, which on one level is not directly connected to Spotify, but on another level (and you should say a little bit about what it is, if you feel like it) it seems to understand itself as being in the position that Spotify was in 10 or 12 years ago. So it's interesting to watch it, to think about how the critique of Spotify might relate to the critique of Sofar. And I just wondered if you had any dots worth joining there, if you could say a little bit about Sofar, because obviously during the pandemic, well, you should say a bit about what it is, but I think it becomes newly salient; artists have been totally decimated around the world anyway. So yeah, could you say a little bit about Sofar, maybe, and try and join some dots?

Liz Pelly (01:03:28) - Yeah. So it's this app. Well, okay, so Sofar Sounds is this company. You can go on to the Sofar Sounds website or the Sofar Sounds app and say you're in New York, and there will be listings for shows that are coming up. It'll say something like, show in Park Slope, Friday at 7:00 PM, or Midtown, Saturday, 2:00 PM. And there's not a lot of information about it; it's just a neighborhood and a time. Sometimes there'll be a general vibe, like it might be hip hop night in Midtown, 7:00 PM Thursday, or something very vague, or, you know, a women singer-songwriter night or something. And you buy tickets; there's no information about who's playing, it's just the neighborhood and the day and the time. You apply to be able to buy a ticket.

Liz Pelly (01:04:30) - They email you letting you know whether or not you're able to buy a ticket, and you buy a ticket for usually 20-ish dollars. Then the day before the show, they email you the location, and then the exact address. And then you show up to the show not knowing who's playing until you get there and you're watching the show. They sometimes are in people's houses, or sometimes they're in clothing stores, or in luxury condo buildings, or maybe it's in a bar. The location is a mystery as well, but they kind of hype it up as, like,

Liz Pelly (01:05:07) - Weird, intimate, quirky places, or whatever. And then if you have a space and you want to do a show, you can also sign up to be a show space. Artists get paid a hundred dollars to play one of the shows, and you just play two or three songs, and it's very stripped down. But over time it's kind of revealed itself to be an Uber of house shows, in a sense, because it's this app that is pairing the musicians and the location and the concertgoer. And when you go to one of them, there are usually around 50 people, who are, like, young professionals, and it's BYOB, and there's usually a string of Christmas lights behind the person playing and a little sign that says Sofar Sounds. And when you go in they'll give you a slip of paper with the social media handles of the people who are performing. It's usually singer-songwriter music, or sometimes spoken word; usually very mellow, kind of coffee shop music. And yeah, it's a venture-capital-backed company. As for connections with Spotify, I don't know. I hope I did a good job characterizing it.

James Parker (01:06:42) - I mean, my instinct is, it sounds like the app-ification of the gig. If Spotify wants to capture listening itself, well, you could easily see it expanding, buying up Sofar and moving out into the world of small gigs and things. It's so weird the way the gig economy cycles back on itself, and now literal gigs are part of the gig economy.

Liz Pelly (01:07:11) - Absolutely. I mean, I wrote this article comparing Spotify and Uber a few years ago, talking about the sharing economy, and I drew on this interview that I did with Astra Taylor in that piece, where we talked about how the gig economy commodified and capitalized on the idea of the artist in order to sell the idea of being a free agent who works like an artist and has, like, independence. And now you have apps like Spotify and Sofar Sounds taking the gig economy logic and selling it back to musicians through the lens of VC-funded apps. But Sofar Sounds I happened upon because I accidentally ended up at one once, and then I realized it was this thing, and the more I talked to people about it, I realized that a lot of musicians I knew had actually played one once or twice. And for this article I wrote, I ended up going to a bunch of them over the course of the year, and did some research on the founder. It turns out the guy who started it is a former Coca-Cola executive, and he used to work at the Walt Disney Company also. This guy, Rafe Offer.

James Parker (01:08:31) - Well, that sounds a lot like the house shows that I've been to before. I always think Coca-Cola and Disney.

Liz Pelly (01:08:39) - Yeah. And really interestingly, he did, I think, experiential marketing there. So it really does make sense that this would be something he would get into, because I did some research on what that even means, and he was doing marketing and advertising very tied to physical spaces. And it makes sense that people who are trying to attract venture capital investment and advertising money would try to do something like this, which involves getting people in a room, in a physical space, and then figuring out how to market to them. Because there are examples of these shows that are branded and corporate sponsored and stuff like that. But you asked if there's a connection between Sofar Sounds and Spotify. The original founder is not the CEO anymore. The current CEO is someone who used to work at Spotify, and he was one of the co-founders of this company called the Echo Nest, which is the company that Spotify purchased (I think it was 2013 or '14, I'm not exactly sure of the year). They power the algorithmic recommendations on Spotify, essentially; they were really involved in bringing playlists and recommendation to Spotify.

Liz Pelly (01:10:06) - So to me it was also very interesting that there was that connection between data-driven recommendation, and the shift towards recommendation being driven by mood and activity on Spotify, and this company. That that connection was there, I thought, was really interesting.

James Parker (01:10:33) - Well, it brings us both full circle really to the, you know, the way you got into working on Spotify in the beginning, you know, the relationship between, you know, what, what space is there for independent labels, independent musicians, not wanting to work outside of the, the major label space or the kind of the, um, the corporate corporatization of music. I mean, it just seems to me like, so anathema, like, like really hilarious almost to the, like, like to the point of like, how could this possibly work. Um, but that the experience with Spotify, you know, makes it clear that no, no, no. I mean, labels that formerly understood themselves as independent, you know, now on Spotify and they've sort of been forced onto Spotify and there are, there are a lot of artists to have all the politics of independence and nevertheless find themselves on Spotify or not on Spotify, something similar and, you know, the relationship between band camp, whatever.

James Parker (01:11:34) - It just seems so crazy to me that the house show is something that could be run by a former Coca-Cola exec. Um, yeah, I mean, it's a sort of wild experiment to see to what extent that is even possible. Because independent, grassroots music subcultures produce so much of value — the social relations produced in those cultures are so meaningful, at least in my personal experience — and you can see how, to these companies, that is just something infinitely extractable. I mean, there's so much capital in those social relationships that exist in the independent music scene. Um, but it will, I suppose, be on artists and listeners in those music subcultures to also try to find ways to resist that kind of total capture and commodification of the scenes. It's like with the pandemic — the shock doctrine sort of thing, the argument that Naomi Klein made about the way in which tech is using the pandemic to get into schools and so on. The timing is excellent for something like Sofar.

James Parker (01:13:28) - Like, there's going to be a period where shows are going to start to open up, and of course the big venues will push for licenses and whatever, but people are going to want to put on shows, and if they can get guaranteed income — I mean, I find the timing worrying.

Liz Pelly (01:13:51) - Yeah. Well, I think it is worrying. Um, but I also think that, you know, there are still spaces in music and culture where artists are fully aware this is all bullshit and are just trying to carve out — sorry, there's an ambulance about to go by my building, it will be really loud for a second. I don't know if you could hear it or not.

James Parker (01:14:25) - Zoom's machine filtering of the ambulance — I can't hear anything anymore. So that's...

Liz Pelly (01:14:33) - Okay. Well, um, yeah, from my perspective it is alarming, but I also think that during the pandemic I have seen so many examples of artists organizing. At least in New York there's been a really big concerted effort amongst this group called the Music Workers Alliance to make demands of local elected officials for

Liz Pelly (01:15:09) - works programs and funding for music, which is something that never happens in the United States. I went to a protest that they organized outside of Andrew Cuomo's office in Midtown Manhattan a few weeks ago, calling for a statewide program that would create jobs for musicians and artists. From what I understand, that program hasn't gone through, but there's some funding for music and the arts that came about as a result of the pressure that musicians put on and the organizing that had happened. So, yeah — I don't know, it doesn't exactly address what you were asking about.

James Parker (01:16:09) - No, no — there's reasons to be hopeful. I mean, it's the socialization point: music can become a method for politics, as it always has done, as it should be. But it seems to me like a no-brainer that Sofar Sounds, at least, should be resisted.

Liz Pelly (01:16:29) - Yeah, yeah, totally. I feel confident that artists won't just throw their hands up and be like, 'Oh well, when the pandemic's over we should all play Sofar Sounds shows.' I don't think that's necessarily going to happen — although, you know, there are some artists who probably will. Thinking about Spotify and Sofar Sounds and companies like them in total — because there are so many musicians who ultimately, in the short term, will rely on them for compensation — that's kind of why I think that when imagining a healthier music scene, or music community, or music world, it's important to both do the work of imagining alternatives — putting pressure on elected officials for public funding for the arts, thinking about cooperative alternatives and socialized alternatives — and also holding these companies accountable to pay artists more, be more transparent, and create systems that are more fair to artists. Because there are people who rely on them for short-term income, whether or not that's fair income, you know?

James Parker (01:17:53) - What are you writing on next — on Spotify, I mean? Or are you wrapping that up?

Liz Pelly (01:18:00) - Oh, um, so I have a bunch of stuff that I'm working on. Right now I'm working on an article that's kind of about the stuff I just mentioned — the musicians in New York who have been advocating for a WPA-style program — and thinking about that. So I guess some of the stuff I'm working on right now is more in the direction of the bigger-picture vision of imagining alternatives and different ways of funding and valuing music outside of the corporate models. But I also do have a couple of Spotify things as well.

James Parker (01:18:48) - I look forward to reading them. Are they going to be out in The Baffler as well?

Liz Pelly (01:18:52) - Yeah, yeah. I know the Spotify stuff will be in The Baffler. Um, yeah, I can tell you about it more, like, not on the record or whatever.

James Parker (01:19:02) - Yeah, yeah — no, let's wrap it up. That was fantastic. We've taken up so much of your time already. It was a real pleasure. Thank you so much, Liz.

Liz Pelly (01:19:10) - Thank you. Thanks for having me.

+ 172
- 0
content/transcript/reid.md View file

@@ -0,0 +1,172 @@
---
title: "Kathy Reid"
status: "Auto-transcribed by reduct.video with minor edits by James Parker"
---

Kathy Reid (00:00:00) - Uh, hi, I'm Kathy Reid. Um, so I have my background in open source systems — I came to voice from open source rather than from, say, computational linguistics or other disciplines. I worked for a university for a lot of my professional life, where I was taking emerging technologies like video conferencing and bedding them down, operationalizing them into normal sorts of roles — taking them from that edge, that emerging space, into an operational space. From there, I went into developer relations at Mycroft AI, and for me that was a natural intersection, because my first undergrad was in languages — I majored in Indonesian — and it was this beautiful intersection of being able to take my technical skills and some language skills into a company. And one of the things that I did when I was at Mycroft was set up their Mycroft Translate platform.

Kathy Reid (00:01:00) - One of the challenges that I think we see time and time again with voice assistants is that they're only assisting certain people, certain cohorts of people. And what we wanted to do at Mycroft was try and open up voice assistants a little bit more to the world, and that's what the Translate platform allowed people to do. It was a crowdsourcing platform that let people translate voice skill commands into their language, and it's one of the necessary things that's required for voice assistants to be able to support many different languages. So while we might have voice models for some of the major languages, we don't necessarily have those command translations for many others. So it was a very interesting time. From there I went to Mozilla, where I'm in their voice team, and I've worked on various projects with a lot of their different voice technologies — like Common Voice, which again is a crowdsourcing platform for recording voices and utterances in different languages.

Kathy Reid (00:02:07) - It has one of the most language-diverse voice datasets in the world. I work a little bit with DeepSpeech, which is the speech-to-text offering from Mozilla, and occasionally with their TTS system as well. So that's a little bit of the through-line, and now I'm embarking on a PhD. And what's the PhD on — is it related? Funnily enough, it is. I've just started the PhD with the 3Ai Institute, within the College of Engineering and Computer Science at ANU — and I should know this; I'm going to get marks deducted for that, I'm sure. At this stage — because we all know how PhDs evolve and how they change over time — what I'm really interested in is how voice assistants go to scale, what the trajectories are through which they go to scale, and what opportunities we have to influence or change the course of that evolution, primarily for the benefit of more people. So that's where I'm at at the moment: I want to see how they go to scale, why they go to scale, and whether we can shift some of the axes around how they go to scale.

Sean Dockray (00:03:29) - Um, what is the sort of landscape of voice assistants? Because I'm familiar with, obviously, you know, the big ones, and Mycroft — is there more? Are there some other smaller ones?

Kathy Reid (00:03:42) - Uh, so if I cover off the major ones that you probably know: at the moment you have Amazon with Alexa, which has a significant amount of market share; you have Google with their Google Home; you will have seen Siri. In the Chinese market, Baidu is the largest — I'm going to forget the name, I think ... is the name of Baidu's voice assistant coming out of China. In the African market there's not really anything at the moment, and as far as I'm aware there's not a lot coming out of South America either, so we have that divide between the global North and the global South. The open source space I would characterize as a lot more fragmented. The voice assistants from Amazon and Google are quite mature: they have reasonable speech recognition capabilities, and they have thousands and thousands of skills, which give them, as an ecosystem, significant utility. The open source space is a lot more fragmented. Mycroft is a key player, and we've seen Mozilla release their browser-based Firefox Voice assistant recently.

Kathy Reid (00:05:03) - Uh, the other open source tools all tend to suffer from one of two problems. One is that they're a single person trying to develop something which is an ecosystem, and they haven't quite figured that out. Or you'll have a group of people who tend to take the technology, do what they need to do with it, but there's no impetus to sustain or maintain or evolve that technology. So we've seen several open source projects come and die — like Julius, PocketSphinx and the Sphinx project that originated out of CMU; development on that is now dead. One person, Daniel Povey, is primarily doing most of the Kaldi development at the moment, and Kaldi was a flagship ASR tool for several years — you know, over a decade — but those projects are now declining, because we don't have a way to sustain them. So I think the next question you had for me was: can I explain automatic speech recognition in layman's terms?

James Parker (00:06:08) - Thanks for doing it — thanks for doing our job for us. We're a victim of our own scale, aren't we: there's four of us and everyone's being too polite.

Joel Stern (00:06:17) - I mean, I think I was just taking a moment to take it all in, because it was, um, extremely helpful for me to get a sense of the scene outside of the major corporate voice assistants. So, you know, that was the pause — but yes, please do explain ASR.

Kathy Reid (00:06:40) - And please do interrupt me, because, as you can tell, I'm quite passionate about this area and, like most nerds passionate about an area, will just talk until you make me stop — 'not now, Kathy'. Um, so there are basically two different approaches to automatic speech recognition: we have the more traditional approach, and we have a newer approach that's emerging. In the traditional approach there are two different models, a language model and an acoustic model. And what happens is that when somebody says a phrase — the technical term for that phrase is an utterance — the language model tries to extract from that recording the basic building blocks of speech, and we call those phonemes. So they're the sounds that you might hear in words — 'baby' has two, you know, 'bay' and 'bee'.

Kathy Reid (00:07:37) - Um, you might think of these as syllables — it's a good point of reference; they're not quite syllables, but it's a helpful way to think of them. So the acoustic model — sorry, I've got this totally wrong, I'm going to have to start again; I've mixed up my acoustic model with my language model, I knew it was going to catch me out. I'm sure you can do some fancy editing tricks. But basically what happens is that when somebody says something — they say a phrase, the utterance — it's the job of the acoustic model to match that to the basic building blocks of speech, to phonemes. Then, once the phonemes are extracted, the language model needs to figure out which of those phonemes go into which words, and so it's the job of the language model to reconstruct words from phonemes. And this is why you need different speech recognition models for different languages: different languages have different phonemes. The phonemes in French are somewhat different to English, and the phonemes in Japanese are quite different to English. And so this is one of the reasons why voice assistants in different languages are quite a difficult problem — because languages are so different.
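To make the two-stage pipeline Kathy describes concrete, here is a toy sketch in Python. Everything in it — the phoneme set, the tiny lexicon, the probabilities — is invented for illustration; a real recogniser uses probabilistic acoustic models and beam search over a large language model rather than a dictionary lookup.

```python
# Toy sketch of the traditional two-stage ASR pipeline described above:
# an "acoustic model" maps audio frames to phonemes, and a "language
# model" turns phoneme sequences back into likely words. All values here
# are invented for illustration -- this is not a real recogniser.

from itertools import groupby

# Pretend acoustic-model output: one phoneme guess per audio frame.
frame_phonemes = ["b", "b", "ey", "ey", "b", "iy", "iy"]

# Collapse repeated frames into a phoneme sequence: b ey b iy.
phoneme_seq = [p for p, _ in groupby(frame_phonemes)]

# A miniature pronunciation lexicon plus unigram word probabilities,
# standing in for the language model.
lexicon = {
    ("b", "ey", "b", "iy"): "baby",
    ("b", "ey"): "bay",
    ("b", "iy"): "bee",
}
word_prob = {"baby": 0.6, "bay": 0.3, "bee": 0.1}

def decode(phonemes):
    """Greedy left-to-right decode: prefer the longest, most probable word."""
    words, i = [], 0
    while i < len(phonemes):
        candidates = [
            (len(pron), word_prob[word], word)
            for pron, word in lexicon.items()
            if tuple(phonemes[i:i + len(pron)]) == pron
        ]
        if not candidates:
            i += 1  # no word matches here; skip this phoneme
            continue
        length, _, word = max(candidates)
        words.append(word)
        i += length
    return " ".join(words)

print(decode(phoneme_seq))  # -> "baby"
```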

Joel Stern (00:08:57) - So the acoustic model is common across different languages, but a separate language model is then applied?

Kathy Reid (00:09:04) - No, that's not true — and sorry for being so direct. You need a different acoustic model per language, because phonemes are pronounced differently in different languages. So we might have a 'k' phoneme in English, but in Indonesian the 'k' sound is a lot softer, and in Arabic that sound is different again — it's a much more guttural sound — which is why you need a different acoustic model per language.

James Parker (00:09:33) - And presumably that immediately gets complicated by regional dialect and so on. So you sort of have an infinitely expanding number of acoustic and language models, the more widespread you want the technology to become.

Kathy Reid (00:09:54) - Absolutely. So imagine that you speak English but you have a heavy Welsh accent: your acoustic model is going to be incredibly different from somebody who speaks with a very proper British accent. On top of that, you also have slang.

Kathy Reid (00:10:14) - So not only does the acoustic model have to match the speaker; the language model has to recognize idiosyncrasies and idioms in the language. So — I know a couple of us on this call are Australian — imagine the sentence: 'there's been a bingle up at Broadie and the traffic's chockers back to the servo and I'm going to be late for bevvies at Tomo's.' Now, if you're Australian — or you're not, but you've been here a few years —

James Parker (00:10:49) - I've been here 10 years. I've still got no idea what you said.

Joel Stern (00:10:53) - I got it,

Kathy Reid (00:10:55) - Points for Joel — he's going to bevvies at Tomo's. But so you see the issue that we have with slang: it's not just the acoustic model, which needs to be cognizant of accents; it's also the language model, and the idioms of a language.

James Parker (00:11:12) - And I'm wondering about age — I mean, presumably it gets increasingly complex again. My son was speaking to my phone the other day, saying something like 'show me Paw Patrol', and I was amazed that it could understand him. And I couldn't help but think there must have been some specific modelling done for four-year-olds, and that, politically, it's kind of important for organizations like Google, Amazon and so on to be able to get that young market early, right? Like, if I was going to invest in modelling, I'd probably go there, because if you can get people to adapt to voice user interfaces before they can type, you're onto a win.

James Parker (00:12:07) - And — I mean, I said 'poor patrol', but, you know, Paw Patrol uses voice interfaces in the show, right? The dogs literally say — God, I can't even think what they do — 'helicopter, go' or whatever, and it does it. And so I can't help but think that there's a kind of process of habituation going on very explicitly across cultural and technical domains, and that must be something that the industry is conscious of.

Kathy Reid (00:12:39) - Uh, so yes and no — I agree and I disagree on that point. I think you're absolutely right in that the industry is trying to make the voice user interface a default, or make the voice user interface a widely used interface mechanism, particularly in a time of COVID, where touching surfaces is actually a dangerous form of interaction. I don't know how many of us have been to an ATM and typed in our PIN since COVID — we're not using cash anymore because it's tactile, physical. So I think, yes, absolutely, the voice industry is trying to get people used to voice, in just the same way that when the iPhone came out in 2007 there was a process of habituation where we had to acclimate to using a touch screen on a mobile phone. I'm of the generation —

Kathy Reid (00:13:32) - — and as you can tell — from before iPhones, and so I had to go through a process of going from my Nokia 3210 to something that had a touch interface that I wasn't used to. So you're seeing industry absolutely trying to make voice the default way and to get people very used to using voice. By the same token, this isn't anything new, right? Socially and culturally we have a history of expecting — we have an imaginary of being able to speak with machines. Go back to Star Trek in the 1960s: 'computer, do this'; or Jean-Luc Picard, 'Tea, Earl Grey, hot'; or my personal favourite, Kate Mulgrew, 'coffee, black'. So we have this long cultural history of expecting computers to listen to us and to do our bidding. We see it with KITT in Knight Rider; we saw it in Time Trax with SELMA. So we have a very, very long cultural history of expecting to be able to speak with computers, but it's only now that the technology has been able to deliver to that imaginary, if that makes sense.

Sean Dockray (00:14:49) - Yeah, it definitely makes sense. Do you think, though — as James was hypothesizing just then — that there's an attention to children specifically? I'm sort of curious, because I was thinking about how having children not have to ask their parents to enable some piece of technology to do something — they don't have to ask for permission, they can go directly to it — also, well, it saves me time, so I don't have to get exasperated, right? So it kind of functions to allow the working-from-home parent to not be distracted too much. It would be quite a clever thing for voice companies to be aiming for in particular. Do you have any sense that children's language models are being developed in particular?

Kathy Reid (00:15:55) - So, I really don't know for sure — it's not an area that I work in specifically. From a technical point of view, it's absolutely plausible that voice models trained on children are being deployed. If we look at, for example, differences between women speaking and men speaking — not to cast gender as a binary, and I recognize that gender is not binary — if we look at some of those different characteristics, we tend to have different fundamental frequencies when speaking. Men tend to operate at a lower vocal range; women tend to speak at a higher fundamental frequency than men; kids tend to have a higher range again than women. And so being able to capture voice recordings or samples that have that higher range, and then being able to train speech recognition models on those samples, is absolutely something that I think voice companies are doing in order that children can be heard.
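The fundamental-frequency differences Kathy mentions are easy to check for yourself. Below is a minimal sketch using librosa's YIN pitch estimator — librosa is not a tool discussed in the interview, and "speaker.wav" is a placeholder for any mono speech recording you have to hand.

```python
# Estimate a recording's fundamental frequency (F0) with librosa's YIN
# implementation. "speaker.wav" is a placeholder path, not a file from
# the interview or the Machine Listening project.

import librosa
import numpy as np

y, sr = librosa.load("speaker.wav", sr=None, mono=True)

# Per-frame F0 between 50 Hz and 500 Hz -- a range that comfortably
# covers adult and child speech.
f0 = librosa.yin(y, fmin=50, fmax=500, sr=sr)

print(f"median F0: {np.median(f0):.0f} Hz")
# Very roughly: adult male speech often sits around 85-155 Hz, adult
# female speech around 165-255 Hz, and young children higher still --
# which is why a dataset dominated by adult male voices produces models
# that struggle with women and children.
```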

Kathy Reid (00:17:04) - I know when I was at Mycroft, one of the big issues that we had in training some of their models was that they didn't respond to and recognize women and children as well, and we found that was primarily because we didn't have samples of women's voices and children's voices in the dataset we were training from. So I think that's where that problem starts. But if we tie this to the broader ecosystem view and think critically about what it is that voice assistants are doing — what is the intent of a voice assistant, what does a voice assistant want? — if we think critically about what commercial voice assistants want, they're funnels to something else, right? So if you're speaking to Alexa, part of Alexa's job is to get you to order more stuff from Amazon so that Amazon can take a cut of that sale.

Kathy Reid (00:17:57) - If you're speaking to Google, part of the Google Assistant's job is to try to put advertisers' products and services in front of you via that interface, so that Google can get a cut from the partners who advertise on the platform. So if we think critically about the intent of a voice assistant and how that might intersect with children using that voice assistant, then absolutely, these companies will be looking to see how they can commercialize use by that cohort. And I think there's another element here too, with children using voice interfaces.

Kathy Reid (00:18:38) - If I cast my mind back more decades than I care to imagine now, and I think about when I was learning to drive a car: I learned to drive a manual, because automatics were fancy and new and shiny, and so I had to learn how to operate a clutch and where all the gears were. Now automatic cars are a lot more common, and I suspect we've seen the last generation that will ever need to learn how to drive, because in another generation's time autonomous vehicles will have increased in their maturity and our underlying infrastructure and regulations will have caught up to where the technology is, and so we won't have to learn how to drive a car anymore. And I think what we're starting to see in the user interface space, in the HCI space, is a similar evolution.

Kathy Reid (00:19:26) - So I learned to type on a manual typewriter — because that's how old I am — and then I shifted to an electric typewriter, and, you know, I can type at 130 words a minute, and that's fantastic for working in tech. But if you're trying to learn to type at the moment, and you can talk to your computer instead and have it transcribe faster than you can type, then why wouldn't you speak to your computer and have it transcribe the words instead? And so I think what we're starting to see is the inculcation of a different default way of interacting with computers

Kathy Reid (00:20:01) - rather than a keyboard and mouse. The keyboard and mouse has been the default for so long that this is really a seismic shift, and the generation that's coming through now, which is much more comfortable talking to a voice assistant than typing, is going to find voice a much more fluent experience than typing.

Sean Dockray (00:20:20) - Yeah. One interesting thing about that, I think — I mean, I realize the trend is that models are becoming smaller, that it's possible to miniaturize them and place them on devices, but even then they require constant updating, because, like you said, the language model is always evolving by the day. When the interface was only about touching, or entering text, that's something that could happen on the device. But now, as we've moved to a voice-enabled interface, you depend on external computational power — you depend on remote servers, you depend on the cloud — either to deliver the language model to you or to actually do the speech-to-text.

Sean Dockray (00:21:12) - And so that dependency on the remote — yeah. Can I ask a variation on that exact theme? Because I was also thinking, as you were speaking, that a keyboard isn't owned and corporatized in the same way that a voice user interface is. Now, obviously you've been working in the open source space, but in order to interact with Siri I'm interacting with an enormous corporation, one of the biggest in the history of the world. So it's not just an infrastructural dependency but a corporate dependency as well. And it's interesting — you said before about what voice assistants want, and one of the things that many of them seem to want is dependency on corporate infrastructure specifically. So to me, Sean's question is related to that question of corporate path dependency and interface design.

Kathy Reid (00:22:21) - Absolutely. So I think there are two threads to this discussion, no pun intended. The first thread is exploring what that dependency is — the dichotomy between a cloud-enabled voice assistant and the affordances that has, versus the affordances of something that is offline or embedded — and I'm very happy to speak to that. The second part of the question is really getting at the type of relationship a voice assistant user has: no longer with a device, but with an ecosystem, and what some of the emergent properties of that changing relationship are. So first let me cover off cloud versus embedded, because I think that's a very prescient dichotomy at the moment. Previously, the key reason we couldn't have the functionality of a voice assistant in something offline, not connected to the internet, was that those devices generally lacked the computational power to do speech recognition, intent parsing, natural language parsing, and then to generate voice. What we're now seeing, as hardware accelerates and improves — and at the intersection of that with advances in machine learning algorithms and the ability to condense machine learning models onto embedded devices — is that it is now possible to have comparable algorithms — comparable speech recognition, comparable language parsing, comparable speech generation — on commodity-class hardware. As an example, DeepSpeech, which is Mozilla's speech recognition technology, can run at real time — meaning it can keep up with what people are saying — on Raspberry Pi 4 hardware, and we hadn't seen anything near that in the last couple of years. So embedded hardware is getting better, algorithms are getting better, and we are removing some of those technical barriers to having voice assistants work in an offline capacity. That doesn't solve the problem of updates, and it doesn't solve the problem of the services and skills those voice assistants connect to — you still need to be online to access those services and skills — but it means that your speech and your utterances can stay on device.
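As a concrete illustration of the on-device scenario Kathy describes, here is a minimal sketch using Mozilla's DeepSpeech Python package to transcribe a WAV file entirely offline. The model, scorer and audio filenames are placeholders for whichever released model files you have downloaded; the released English models expect 16 kHz, 16-bit mono audio.

```python
# Minimal offline transcription with Mozilla DeepSpeech, the engine Kathy
# mentions running in real time on a Raspberry Pi 4. Filenames are
# placeholders for downloaded release files (e.g. a 0.9.x .pbmm acoustic
# model and .scorer language model).

import wave
import numpy as np
import deepspeech

model = deepspeech.Model("deepspeech-english.pbmm")        # acoustic model
model.enableExternalScorer("deepspeech-english.scorer")    # language model / scorer

with wave.open("utterance.wav", "rb") as w:
    # The released English models expect 16 kHz, 16-bit mono PCM.
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

# Inference happens entirely locally: the audio never leaves the device.
print(model.stt(audio))
```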

Kathy Reid (00:24:59) - Of course, that doesn't serve the purposes of some voice assistant companies and ecosystems, right? Because it's that voice data — the utterance data, the commands that you're giving to your voice assistant — that is actually

Kathy Reid (00:25:12) - the source of wealth, the source of revenue, for those companies; that's the data they use to train their models with. So that's another line of dependency between voice assistant users and the ecosystem that produces voice assistants. So that's a little bit about the dichotomy between the cloud and the embedded. If I look to the relationship between users and voice assistants, and how that relationship is changing compared to other interfaces, I think that's really, really interesting. One of the approaches to market, or plays to market, for voice assistant companies is not as a voice assistant but as an interface to the rest of their walled garden, the rest of their ecosystem. So, for example, you might use your voice assistant to turn the lights off or on, or to play music or stop music.

Kathy Reid (00:26:15) - What voice assistant companies want to own is not just that voice assistant experience but that entire smart home, that entire connected experience. And then if we go outside to the garage, where we might have our semi-autonomous car, you can see how the voice assistant paradigm is also extending not just inside the home but out of the home, into the car. And then it's not too much of an imaginary to imagine that when you get to work — if we ever go back to the office again — you might also have a corporate assistant that assists you in your work context. Voice companies want to own as much of that experience as possible, because the more they can own, the more services they can sell you, the more other things they can get you to buy. Voice assistants themselves have very little revenue-generation capability other than as entry points, as funnels, for other systems and services. And so that's changing the relationship between the voice user and the ecosystem they're interacting with.

Sean Dockray (00:27:31) - Yeah. So maybe following up on that, can you talk a little bit about the Mycroft ecosystem then? Because obviously Mycroft isn't just an alternative voice assistant and an alternative device — it's obviously also an entry point into some alternative ecosystem.

Kathy Reid (00:27:49) - So the ecosystem that Mycroft is the entry point for is, again, an open source ecosystem. People write skills and make them available through the Mycroft skills page in a very similar way to how people write Alexa skills or Google skills. The key difference is that this isn't monetized or incentivized at all, and I think that's both a benefit and a problem. For example, there's no financial benefit to a skill developer in writing a skill for the Mycroft platform — they get no revenue from it, and they need to maintain and sustain that skill. Whereas — well, let's take the example of Spotify. You can only access the Spotify API, which is required to create a skill, if you have a premium account with Spotify. So if you use the free account, you can't actually use Spotify with your voice assistant, because you need a premium account to be able to do that. So that's one way in which the service or skill that the voice assistant links to is trying to drive revenue from the voice assistant, but the voice assistant itself gets no revenue from that — the voice assistant ecosystem gets no revenue. What we also see is a pattern that's common to many other open source endeavors: open source is generally created to scratch an itch. Somebody creates something because it solves a single point problem for them; somebody else uses it, modifies it, gives it back to the community.

Kathy Reid (00:29:43) - And so what we see is a very different development of skills in the Mycroft ecosystem. We have skills, for example, for diagnosing wireless and network problems — that was one of the first skills that went into the Mycroft skill store — because the people who are creating Mycroft skills are also network administrators and system administrators, and they use this to diagnose their networks. So that's a very different pattern as well. I think if we think about how skills are incentivized, that cracks this problem open a little bit: at the moment there aren't very many incentives for skill developers, unless they're employed by a company that gets a revenue stream from having a voice assistant skill.
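For a sense of what 'writing a skill' involves in the ecosystem Kathy describes, here is a minimal Mycroft skill sketch along the lines of the network-diagnosis example. The skill name, intent file and dialog names are invented for illustration, not a published skill; the per-language resource files it references are exactly the layer that crowdsourced efforts like Mycroft Translate target when adding support for new languages.

```python
# A minimal, illustrative Mycroft skill (names invented, not a published skill).
# Skills subclass MycroftSkill and declare intents; the matching utterance
# templates and responses live in per-language resource files, e.g.:
#   locale/en-us/network.check.intent   ("is the network up", "check the wifi")
#   locale/en-us/network.ok.dialog      ("The network looks fine.")
# Translating those files is how a skill gains support for another language.

import subprocess

from mycroft import MycroftSkill, intent_file_handler


class NetworkCheckSkill(MycroftSkill):
    @intent_file_handler("network.check.intent")
    def handle_network_check(self, message):
        # Ping a well-known host once to see whether the network is reachable.
        reachable = subprocess.call(
            ["ping", "-c", "1", "example.com"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ) == 0
        self.speak_dialog("network.ok" if reachable else "network.down")


def create_skill():
    return NetworkCheckSkill()
```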

Kathy Reid (00:30:33) - If we think about this and compare it to the mobile app market, we don't have those sorts of market economics in play at the moment. And one of the dangers is that if we follow the mobile app store paradigm, you have developers who put a huge amount of time and effort into developing apps, and studies have shown that people tend not to buy apps — people might buy a thousand-dollar mobile phone, but they're really stingy about actually buying apps to put on the phone. So trying to generate revenue as a mobile app developer is very difficult, unless that mobile app is actually an on-ramp to a platform-as-a-service or software-as-a-service offering. And I think we're starting to see the same thing play out in the voice space — especially because Apple takes 30% of the revenue from each App Store sale, so Apple takes a cut from the app and the app developer is left struggling to find a revenue stream.

Joel Stern (00:31:42) - I mean, this takes us really neatly into the next question, I suppose, which is around the politics of open source voice assistants. Having so eloquently described the kind of incentivization in the commercial sphere for developers — what motivates the production of an open source voice assistant? Or, to put it another way, what do you think are the political imperatives at work in an endeavor like that?

Kathy Reid (00:32:18) - So I think what I might do here is talk a little bit about the challenges of an open source voice assistant, and then relate that back to some of the politics in play that open source might be able to have some influence on. Thinking about the challenges of open source voice: there are many of them. Some are common to open source endeavors across the world, like sustainability and maintainability — how do you derive revenue in order to fund the infrastructure that open source is working on? We've talked a little bit about skill stores and the lack of incentive for voice skills. The trick there, though, is that the utility of a voice assistant is really a function of how many skills it's able to do — except that all the user studies show that when people use voice assistants, they only use a very small handful of skills, like weather, or time, or setting a timer, those sorts of things.

Kathy Reid (00:33:27) - That's a very, very small number of skills. And so even if you increase the number of skills a voice assistant can do, until you get people discovering those skills and using them you're not actually increasing the utility. So it's not just a pure numbers game of how many skills can be developed; you actually have to get people using the skills and deriving some utility from them. So that's one of the challenges of open source: not just having the skills available, but how do we show people what skills are available — that discovery layer. And I think that's common to most voice assistants, not just open source ones. One of the other challenges that we have in open source — and keep in mind that open source's sort of sell to the market is about privacy, not using your data for commercial gain, not impugning your privacy — is that people don't care about privacy.

Kathy Reid (00:34:27) - People are not willing to pay to have their privacy protected, or only a small proportion of people are. We're all too willing to give up our personal information and our privacy for access to a service — you know, different social media services. So as a society we don't value privacy. And I've already mentioned that sustainability piece: the ability to derive a revenue stream from open source voice is very, very difficult, which in turn makes the sustainability and maintainability very difficult as well. There are few revenue opportunities for a voice assistant on its own, until you connect it to an ecosystem like an e-commerce platform or an advertising platform. Linking that back to the question of politics: there are huge politics in voice assistants. Firstly, I think wherever there is politics, there is power — or lack of power.

Kathy Reid (00:35:29) - And I think that plays out in voice assistants as well. So the first question that comes to mind here is: which voices are recognized? There's a very famous study done recently by Allison Koenecke — I'm going to have to double-check her name — but basically her study found that commercial voice assistants are much less likely to recognize African-American speakers. And there's been some work done on that to try and find out why, but at the core of the issue is that voice assistant language models are not trained on people who speak with African-American Vernacular English; they're trained on people who have white speech. And so there's all this power tied up in voice assistants, where you have a history of inequity, a history of marginalization, and this again is manifesting itself in a voice assistant. We also see the politics of gender playing out in voice assistants.

Kathy Reid (00:36:30) - So most of your voice assistants have a female voice. So if you think of a voice assistant as a machine that is ready to listen to your commands and do your bidding, here we have the manifestation of yet another thing in society that expects women to be subservient, to take orders and to do the bidding of others. So again it casts women in a subservient, service-oriented role. I'd love to see a voice assistant that is very authoritative, that has a man's voice instead, so that I can give a command to a voice assistant that sounds like a man, and it says, in a very authoritative tone, 'Thank you, I'll get to that immediately.' I'd love to have a voice assistant like that. So there are politics of gender that play out here as well.

Kathy Reid (00:37:22) - And we see this in conversational design as well. If you interact with a voice assistant and you're rude or abusive, the way the voice assistant handles that from a conversational design perspective has politics too. If you have a voice assistant that is gendered female, and it deals with that sort of dialogue in a way that's very subservient and very passive, what message does that send to the user? Does it normalize patterns of behavior that we're trying to de-normalize and problematize in society? So yes, I think there's a huge amount of different politics tied up in voice assistants.

James Parker (00:38:06) - And do you think that open source is particularly well suited to confronting or dealing with those problems? Or is that a problem that cuts across all the different forms of voice assistant?

Kathy Reid (00:38:23) - It's a great question, and again I'd have to answer with yes and no. From the perspective of technology, where open source is available and you can alter and modify and bend the open source code and hardware to your will — yes, it does very much lend itself to challenging some of the orthodoxies, some of the established patterns. I'd also have to answer no, because less than 10% of the open source community are women. So you have a predominantly male community building open source software and building open source voice assistants, and as humans we tend to build things like we are. And so I think the lack of diversity in open source communities — not just along gender lines, but in terms of racial diversity as well — is also a problem for open source, because we don't have that diversity to draw from, to build from. So: yes and no.

James Parker (00:39:28) - As a sort of follow-up to that: some of the things you've been saying about open source, or the political problems you've mentioned, are basically almost problems of access, or of the completion of the project of voice. So you said, well, the problem is that Africa doesn't have voice assistants in various African languages, or South America doesn't; and the problem is that African-Americans can't access their voice assistants either. And I'm immediately thinking: all of that's true, and of course, if there are going to be voice assistants, you should have access to them. But on the other hand, there's a politics, a very serious politics, to the project itself — there's a kind of colonial or expansionist dimension to always treating the problem as 'we need more of the thing that we started off with'. There's something about that that —

James Parker (00:40:35) - I mean, maybe it's just because I have a bit of an aversion to voice assistants somehow, but if the problem is always that there's not enough of it, then, it strikes me, there might be other political — maybe. I mean, maybe another way of — sorry to interrupt, James. I was just — Kathy, when you said before that the political question is who gets to be heard, who is audible to a voice assistant: one of the questions we've been exploring is who is allowed not to be heard — who has the option to evade the capture of these always-on listening devices, which are increasingly pervasive. And obviously there are a lot of people in our communities who would feel threatened and insecure having a device that captures the sonic environment and what they're saying, and who might feel they have limited control over how that information is used.

Joel Stern (00:41:54) - So I suppose, you know, this political question is both about access to these devices and their benefits, but also protection from them, in some sense. Just to offer a further extension: voice assistants don't just listen to voices. Increasingly, audio event detection, audio scene analysis and so on are all being integrated as well, sometimes by means of the Trojan horse of voice — in other words, to understand your voice, we need to better understand the sonic environment. And so they get entangled in each other, and again it just seems like the horizon is always more: we must listen to more, capture more. First it's the voice, then it's the context, then it's a kind of total ambient sensing. So if you're, let's say, an activist involved in a campaign where you feel very insecure or threatened by not just the authorities but the major industrial partners who work with them — I suppose we're thinking about what the positive and negative horizons of these technologies are, with that kind of thing in mind.

Kathy Reid (00:43:25) - Oh, wow — a huge can of worms here. So let me tackle the ambience and the sonic environment one first, and then let me talk a little bit about how I protect myself from being heard — you know, how do we treat silence as a value as well, how do we treat privacy and silenced voices as valuable as well? So if we think about learning more about the sonic and acoustic environment — that might be noise, doors slamming, what the traffic is like outside; or, if we look at more-than-human design principles, can I hear birds outside, can I hear a dog barking, what is the level of traffic outside, what are the industrial noises — we can start to get that context. Where that starts to get incredibly scary is not at a single point in time, but where we're able to gather that data to give us a picture over a temporal context.

Kathy Reid (00:44:24) - So how does that change week by week, or month by month, or year by year, or season by season? If we think about scale along that axis, and then start to think about what other data that could be combined with — there's a huge smart city and open data movement at the moment in a lot of cities, in order to better manage things like waste and smart parking and those sorts of things — how do those datasets interact, and what are the affordances of those collisions or collaborations? I don't think we've done a lot of thinking about what that might look like, or what imaginaries might come out of that, except that what we do know is that there's going to be an intent behind them. So if we look at the intent of smart parking, it's so that people can find a car park wherever they want; it's not so that we reduce our dependence on cars. If we look at things like industrial noise: is the intent there to regulate, or is the intent there to do something else altogether?

Kathy Reid (00:45:34) - So I think we need to be really careful about the collisions and the collaborations of those datasets. I don't know of anything specifically in the literature that looks at how voice assistant sound data can be combined with other forms of data to infer much more about a person than what we already know. But what we do know is that the ecosystems these voice assistants are entry points to — the portals — know a huge amount about us from our web browsing activity, from our phone activity, from our location activity. And so it's not inconceivable that voice and acoustic data would be used to augment things like geolocation data. So, for example, you don't just know the time I was at a park because you have my GPS and a timestamp; you also get to know whether the ducks were quacking in the park, whether the other dogs were feeding, whether the wind was howling. And so we start to add these different layers of context onto existing datasets as well. And that might be incredibly scary.

Joel Stern (00:46:45) - Yeah — yeah. I was just going to invite you to say what scares you about that.

Kathy Reid (00:46:53) - So, I'm scared about voice assistants in many different ways. There's the ability now for law enforcement in the United States to subpoena people's voice assistant recordings for use in legal proceedings — you don't consent to that when you get a voice assistant, which might be handed down or bundled with another product, and then suddenly you have a microphone in your house. And we're not just talking about a microphone in your lounge room or your kitchen or your bathroom. If we think about context — and I go back to Jonathan Haber's classification of contexts of spaces: public, shared, private and intimate — we're starting to see voice assistants move along that spectrum. They're not just in lounge rooms or kitchens or bathrooms, which are private spaces; we now have voice assistants on our bedside tables, in one of the most intimate places in the home.

Kathy Reid (00:47:54) - And what might it record there? How could that be used against us in ways that we haven't imagined yet? By the same token, it might protect us: it might enable us to have our truths told, and to have those truths believed, in a way that people and witnesses often aren't. So I can see a dichotomy there as well. The other thing that scares me incredibly about voice assistants — and we go back to the political angle here — is that if we start to think about acoustic models and language models, and about people who have accents, and if those accents are able to be detected, then voice assistants are another mechanism for racial profiling and over-policing of marginalized groups. Absolutely. So if you think about Portland and the protests that are going on at the moment, one of the ways police could possibly target protestors is to try and determine whether the voice assistant that you're speaking to can detect

Kathy Reid (00:49:01) - whether you speak with African-American vernacular. That's incredibly scary. Or we don't have to look that far away, right? We have an Indigenous population whose life expectancy is still 20 years less than white people's in Australia. What if we did racial profiling of Aboriginal Australians? Because it's not like that hasn't been done before. So these are some incredibly, incredibly scary things that voice assistant technology might do, and regulators are nowhere to be seen. And I think there are some major problems there. This technology is difficult to get your head around from a technical perspective; it's difficult to get your head around from an affordances, benefits and risks perspective; and it's also difficult to see what trajectories or through-lines the technology might have. Used in certain ways, it might be very beneficial to certain cohorts of the population, but it might also have a devastating impact on other cohorts. And I don't think regulators and lawmakers are really grappling with those issues yet.

James Parker (00:50:09) - Which is something that I've been grappling with recently. I also find smart assistants, or voice assistants, quite scary, and one of the reasons is the kind of completionist dynamic or impulse that seems to be implicit in them — more, we want more, more sound, more analytics — and the way that's tied to a specific political paradigm, a capitalist paradigm, of data extraction. And so for me it's not so much 'is my privacy at stake', in the sense of will somebody hear something specific, but the idea that the data collection, data extraction imperative is so strong for these companies that we're going to continue to feed the beast: the kind of data colonization of our entire auditory worlds. So I find that really scary. The trouble is that it's quite hard to specify.

James Parker (00:51:19) - And it's interesting that when you started talking about the context of policing, things immediately got speculative. One of the problems I find is that I can't point to a specific example right now of the use of machine listening in over-policing. It feels like the pushback in relation to facial recognition is strong because it's being used in that way, and it's a real challenge, because if we let it get that far, we've already lost. So I don't know how — rhetorically, or politically, or legally — to confront that problem, because it's easy to sound like a paranoid maniac: 'imagine, it could be used for this or that'. To me it seems empirically true that it's obviously going to be used that way: I can point to papers where scientists and engineers are literally developing the applications right now — you see it specifically in relation to COVID, actually, scientists saying, well, you could use it in the enforcement of social distancing in this way and that, and here's how you would do it. But there's a kind of speculative mode at the moment to thinking about and addressing the politics of voice and machine listening that I find challenging.

James Parker (00:52:34) - Um, so I, I don't know if that's a comment or a question actually.

Kathy Reid (00:52:39) - Um, I think it's a provocation — a great provocation. I think we are now with machine listening where we were with facial recognition five to ten years ago. Just because you wear a tinfoil hat doesn't mean you're wrong. These exact same arguments were being made about facial recognition and image recognition in artificial intelligence five years ago, and in the intervening time those technologies have been productized and platformatized, and now weaponized against cohorts of the population. And I think the point you make is: how can we get the attention of regulators, of consumer rights people, of lawmakers, now — before the technology is already in place and we're trying to put Pandora back in the box again? And I think you're absolutely right: we are not thinking critically about some of the speculative — and what I would call quasi-speculative, because we know that technically it's possible, which for me puts it firmly outside the boundary of the purely speculative. So the problem is: how do we get that attention now? And part of that bigger problem is that we don't have a history, in legislation or consumer rights, of thinking preemptively — thinking speculatively into the future about the principles and ethical axes we're going to need in order to harness the benefits of emerging technology without exposing particularly vulnerable populations to the damages and harms it can cause. And I think that actually requires a fundamental shift in technology regulation, because we always seem to regulate after the technology has been built; we don't seem to proactively regulate as the technology is about to hit the market.

Joel Stern (00:54:46) - Uh, I just want to come back to something you started saying at the outset of that answer, about valuing silence and valuing not being heard. It would be great to hear you expand on that a little.

Kathy Reid (00:55:07) - So I think one of the benefits of having a voice assistant not recognize your language is the same benefit that we've seen throughout history and throughout culture, where language has been a signifier or a marker of an exclusive group that perhaps doesn't welcome outsiders or people who are not part of that group as easily. So we see this with Romani people, who speak a dialect that's often not the language of the country in which they're living. We saw this in World War Two with Navajo speakers, who used the Navajo language, which was unintelligible to the opposing forces that were listening in on those conversations. So we've seen language as cryptography. One of the benefits of having a language that a voice assistant doesn't speak is that it can't hear you. You can't be recorded. The downside of that silence might be the death of your language though.

Kathy Reid (00:56:03) - So if we think about, for example, the 700 or so Aboriginal languages in Australia that are still active, voice assistant technology might actually be one way to, uh, help preserve, uh, and help those languages, those sleeping languages, come to life and be reanimated and reborn again. At the moment there are about 7,100 languages in the world, but 23 of those languages account for 50% of the world's population. And we're going to see languages decline, uh, as speakers age and, uh, and die out. Could voice assistants actually be used for language preservation? That for me is the flip side of that silence. So I think, uh, I think absolutely people do have a right to silence as they have a right to privacy, particularly in their own homes. But I think we also need to be aware that if a voice assistant can't hear you and doesn't understand your language, then that's one less mechanism we have for preserving some of those languages. And because language is a marker of culture and history, it's a way of preserving culture and history as well. So I think, uh, yes, we all have a right to silence, but silence in a voice assistant can also have other consequences as well.

Sean Dockray (00:57:28) - Thank you. Yeah, you just gave a great example for the question I was about to ask, which I will ask now. But I was thinking that it's so important to do that work of paranoid speculation, you know, that you were talking about with James, um, you know, especially imagining the kind of imminent threat of the proliferation of these voice assistants. Uh, and I wanted you to kind of do the opposite work, the opposite kind of speculation, which you kind of hinted at. You said there are all these possible, um, you know, uses, and it's such a shame that this technology is being sort of misused and abused and so likely to travel down the road of, kind of, surveillance and, you know, mass marketing and all that kind of stuff, because it has all this other potential. And I was just wondering, yeah, what do you have in mind? Like, have you seen through Mycroft, like, some, you know, really interesting possibilities? And I think that language preservation example is a fantastic example, but, um, I'm just wondering if you could even continue, uh, just sort of speculating about what might be possible.

Kathy Reid (00:58:42) - Yep. And again, a great provocation. What does, uh, what does a voice assistant with good and pure intent look like? What does a voice assistant for all of humanity look like? I think there's some incredibly beautiful provocations.

Joel Stern (00:58:57) - We've been calling it socialist Siri.

Kathy Reid (00:59:00) - I like that, too. Um, so many options, at the risk of boring you to sleep, cause I know we've gone a little bit over time, so if you're all happy to stay with me and listen to, um, the nerd rant. Um, so one of my favorite books is a book by Neal Stephenson that's called The Diamond Age: Or, A Young Lady's Illustrated Primer, and the voice assistant in that book isn't actually a voice assistant. It's a real human, and it guides the book's protagonist, Nell, and teaches her leadership skills and helps her grow, uh, from a really socio, uh, very poor socioeconomic, uh, cohort that really has no options into the leader of her community at the end of the book. And that for me is a role that I see voice assistants being able to play. Could voice assistants be mentors, be guides, be leaders?

Kathy Reid (00:59:54) - Particularly now, in an especially distanced time. We're protecting and isolating a lot of our older and more vulnerable people for very, very good reason, but it means the mentorship and the leadership and the coaching roles that a lot of these people play are not able to be played as much with younger people. So could, could we have a voice assistant as coach, as mentor, as guide, as a young lady's illustrated primer? I think that would be lovely. I also like the, the idea of the subversive Siri, um, the Siri that questions the status quo, that calls bullshit on fake news, that provides access to alternative news, alternative content, alternative paradigms, alternative ways of thinking, instead of a voice assistant that got you to buy things from the store that pays the most money to that voice assistant's ecosystem. What about a voice assistant that let you uncover, uh, the, the unknowns, the gems in your local area?

Kathy Reid (01:00:58) - You know, letting you know that the best place for burritos is actually this tiny little corner around the street, or that the best coffee in Brunswick doesn't actually have a webpage, because that's Brunswick. You know, remember the hipster map of Melbourne that was drawn, you know, about five or six years ago, where they plotted all the hipster locations of Melbourne on a map? I'm from Geelong, so coffee's about as hipster as I get, I'm sorry. Um, but imagine a voice assistant that could do that, or a voice assistant that, uh, upheld the ideological values of the community or the house or the space in which it was placed, you know. Um, imagine a voice assistant that was supportive, you know, uh, in various social situations that Siri or Google or Alexa completely back away from. You know, imagine trying to have a conversation with your voice assistant about how to navigate a polyamorous relationship. That's something that, uh, your commercial voice assistants just sidestep, but there are, there are interventions and there are niches here that I think a voice assistant could actually play.

James Parker (01:02:08) - Really interested that you didn't mention, and I'm not suggesting that it's not part of your thinking, but, uh, disability applications. Um, you know, as I said, I'm a little bit sort of, uh, um, instinctively attuned against, um, some voice assistants. And then I just keep thinking to myself, but there's one context where, you know, the word assistant just really rings true, and that's, you know, blind communities or people with severe speech impairments. I was reading about a new company that was, um, trying to develop, um, uh, voice assistants for speech, um, so people with severe sort of, um, uh, physical impairments of their voice box, you know, of the throat and so on and so on. I just, I can never go past them as, uh, as something that seems really, really valuable.

James Parker (01:03:13) - Um, but then I immediately get into, my sort of creepy radar kind of turns on, because then, you know, I've read about things like ambient assisted living in elder care, and the way in which elder care sort of seems to be a kind of a laboratory almost for the smart home and sort of extreme surveillance techniques, including Machine Listening and so on, you know, again, using ability and disability as a kind of framework for doing that. And so, you know, it's not to say that there's some sort of pure space of voice assistants in disability or elder care contexts, or that it's somehow unimaginable that there would be dark political consequences, but anyway, yeah,

Kathy Reid (01:04:04) - So taking that as a, as a provocation, you know, what are the affordances or, or what are the trajectories of voice assistants for people who have disabilities? Again, that's not an area that I know a lot about, so I don't feel particularly comfortable speaking about that area specifically, simply because I don't have a lot of the, the background; it's, it's not my specialist area. I'm going to have to sort of say, no, I don't know about that area. Okay. Sorry.

James Parker (01:04:33) - No, no, no, no worries. No worries. It's admirable to, to not go wildly speculating, like I would have a tendency to. I mean, I think we've, we've run you through the wringer. That's right. I, I think I was just going to say that, um, what you've given has been really generous and has covered a lot of ground and...

Joel Stern (01:04:59) - It's been, it's been really valuable for us. So, you know, thank you so much for that.

Sean Dockray (01:05:06) - Your last answer, on the positive speculation: so many possibilities.

Joel Stern (01:05:13) - Because I was sort of thinking about, it's like a really, I mean, one thing, um, not to ask you another question, but just to sort of add to that: one of the things that the, the festival that's hosting us in, in Poland has continually asked of us is to sort of, um, survey the, the positive and negative imaginaries, you know, so it's, it's really valuable to kind of start to collect those in these interviews. Um, and I imagine that over the course of a few interviews, there'll be some wildly different imaginaries at work for different people and what they think the future holds for this technology.

Kathy Reid (01:05:58) - And I guess my parting thought is, those imaginaries, they're not mutually exclusive. You know, the same voice assistant that can help me order my groceries could also be telling the government that I'm in a minority population and assisting in over-policing. So I think that that's something that we also need to grapple with. None of this is black and white. A lot of this is overlapping and complementary, and, and there are collisions. It's, it's not a mutually exclusive, cut and dried area.

Joel Stern (01:06:28) - That's a great note to end on. I mean, actually one of the questions that we didn't ask is, um, who else should we speak to? One of the things we do kind of want to do is develop a bit of a network of interesting people. I mean, I don't mean just for the purposes of interviewing them, but just developing a kind of a community around these kinds of political questions.

Kathy Reid (01:06:49) - I have a list prepared. Um, so what I might do, uh, cause I, I know that Sean has to go, what I might do is pop that in a list and sort of, um, let you know what their contexts are and so on. So, um, I know that Mozilla, I think, would be very interested in this, because Mozilla's huge on privacy and huge on sort of, uh, chipping away at some of those established paradigms. Um, the person who took over from me when I left Mycroft is an indigenous language person. Uh, and he will have a political perspective as well. He's based in Darwin. So his context is, again, different to mine,

James Parker (01:07:27) - Is he Indigenous?

Kathy Reid (01:07:28) - Uh, no, he's not. Um, I, I wish we had more Indigenous people in voice technology, because it means we might be building it differently. Um, but it's, it's like every other sort of participation in, um, every other field as well, uh, unfortunately. Uh, the folks at OVAL at Stanford, who are working on an open-source voice assistant there, I think they would be interesting to talk to, because I think they are trying to partner with commercial providers and they're at a very nascent, emerging, embryonic stage, and their through line is going to be influenced by who they can partner with commercially. So is there a way to intervene at that sort of early stage? And then at ANU, I think, uh, if you don't already know him, Ben Swift, um, I hear from the creative computing people that he's, uh, just had a baby boy. And Nick ... now Nick is, I think, one of the deputy deans of computer science and engineering at ANU, and the reason I think he would have, uh, an interesting perspective on this is he's a materials engineer by training, uh, but is a part-time geeky DJ, uh, so he comes into this from, um, the music background as well and as a materials engineer. So I think he would be fascinating to talk to as well.

James Parker (01:08:56) - Amazing. So, I mean, if you would send us that list, that would be fantastic. And you know, we'll just keep the conversation going, I guess,

+ 132
- 0
content/transcript/stachura.md Voir le fichier

@@ -0,0 +1,132 @@
---
title: "Thomas Stachura"
status: "Auto-transcribed by reduct.video with minor edits by James Parker"
---

James Parker (00:00:00) - Thanks so much for, um, for taking the time to speak with us, Thomas. Um, perhaps you could just begin by telling us a little bit about yourself and your work and how you come to be heading up an organization like paranoid.

Thomas Stachura (00:00:14) - So I guess, um, the concept, uh, started when I was in my brother-in-law's, um, suite, uh, apartment and my kids were there for the first time. And it was the first interaction of my kids with any smart speaker, because of course we don't own one at home. And I found myself entertained by their, you know, repeated attempts to request things out of this smart speaker device. But at the same time, I found myself, um, contemplating what it would be like to have it at home and feeling unease surrounding that. In fact, I was uneasy even in those initial questions with somebody else's smart speaker, with having my kids recorded and, you know, uh, what parts of our conversation might be captured or not. And perhaps I'm being quote unquote paranoid about it, but it, it caused me a lot of unease. And in that moment, being sort of in an inventor mindset, I said, there's gotta be a solution to this.

Thomas Stachura (00:01:20) - Why can't I have the best of both worlds, the, the fun and excitement of watching my kids play with the smart speaker, but without the anxiety of it, without the worry about whether I should even worry about the privacy and that battle, like, do I have to make a choice? Should it be tech or privacy today? Which one do I want? And so that prompted the concept, which later prompted patents. And then of course the full business model, my background, going into business, going further back. Um, I grew up, you know, programming and software development in my mother's basement. That's, that's really where I got my start. And from there, it evolved to running a business in order to, uh, give myself an opportunity to invent more software. So I'm an inventor at heart. And what I really needed to do was to find a business that would allow me to just be an inventor.

Thomas Stachura (00:02:17) - And that's remarkably hard. Now, businesses don't just put out job postings for, hey, we want an inventor with interesting ideas that he wants to explore. Typically that doesn't happen. And so out of necessity I got into everything from learning how to program to running a business, becoming a sales guy, then a project manager, all the way to executive leading a team. And now we have a hundred people at Pleasant Solutions. Uh, all of that was out of necessity to just do inventing, to just have those golden moments, uh, much like a lawyer's golden moments are to say, you know, 'you can't handle the truth', but really there's weeks and months of prep to lead up to that. Um, I wanted my golden moment where I'm watching my kid play with a smart speaker and my golden moment is aha, I've got an idea to stop the privacy issue from happening and still enjoy the tech. Fortunately, I have this hundred person company that can now execute on it and actually make it a reality. And so I felt vindicated that I built out a company that can actually take my ideas and bring them to real life.

Sean Dockray (00:03:27) - What would you say is the proportion of time that you get to spend on R and D versus sort of, um, just the commercial work that sort of funds that more interesting, inventing R and D part of the company?

Thomas Stachura (00:03:40) - I'll answer that, uh, both for myself, uh, being the inventor versus manager, and for the rest of the company as well. For myself as an inventor, I would say that I get maybe 2% of my time, uh, actually inventing. And sometimes it comes in a concentration where for two days I have to write out all of my ideas into a patent because it's now or never. So for two days I have to come up with every possible way to do this so that we can patent every possible way. Um, but usually there's at least 5% additional time where it's inspirational and not necessarily I'm inventing and seeking inspiration. I'm not, you know, watching the clouds pass by, uh, or, or, you know, watching videos that are inspirational, but rather doing things that are inherently inspirational, um, where I'm motivating a team member, I'm hearing their problems, I'm hearing what things they want to resolve. And so in general, I would say that's an additional 5%. So a total of maybe 7%. Uh, the rest of it is just literally running the business and not getting to say, you know, um, uh, 'you can't handle the truth', so to speak.

Thomas Stachura (00:05:01) - The company as a whole, um, it depends on how you define R and D, because we do a ton of R and D for clients, but that's sort of billable work. I mean, I guess if you consider the D, development, software development, almost everything we do is developing stuff. So if I were to define R and D as the high risk stuff that we don't know whether it'll actually work, or it's the first version of something that we hope will work, I would say we only get maybe 10, 15% of our time on that currently, but it's increasing. And then we get a substantial amount of time executing after that. So Paranoid, um, was conceived by me. We got a prototype with one other engineer, and then to actually bring it to market we now have seven engineers working full time, mechanical and electrical, as well as consultants. So to an extent, you could consider that R and D still, even though, by my definition, it's no longer a risk. We know we're developing different models, increasing compatibility. It's no longer the big R, it's more of the big D.

James Parker (00:06:21) - The sort of Eureka moment that you described, um, how long ago was that now?

Thomas Stachura (00:06:28) - I don't know. I'd have to look it up. I'm not the best for remembering timelines, but I would say somewhere on the order of three years ago.

James Parker (00:06:37) - So, so it was a smart speaker, because smart speakers, you know, have obviously not been around very long and sort of achieved a pretty major market penetration very quickly. But it was a smart speaker rather than, you know, um, uh, a digital voice assistant on a, you know, an iPhone or something that produced that moment. So,

Thomas Stachura (00:06:56) - Yes, yes. And it was my first interaction as a family interacting with a smart speaker. And in fact it was Google. Um, I don't remember what model number, but it was a Google one.

James Parker (00:07:08) - Um, and so, so, so you founded the company three or so years ago out of this, this sort of Eureka moment. Uh, and so was, was Paranoid the kind of the first, the first product that you started working on, and then Pleasant Solutions developed out of that? And what's the relationship between Paranoid and Pleasant Solutions? I mean, are they all sort of privacy oriented, do they have somewhat of a similar mission, or is it that they're all in audio? What's the relationship?

Thomas Stachura (00:07:38) - So actually Pleasant Solutions is 13 years old. Um, and it's been around for a while with a hundred people, and Paranoid is a couple of years old. We didn't start it until then; it's meant to commercialize the patent. And so the relationship is that, technically speaking, Paranoid is a sibling company in the Pleasant Solutions group of companies. And one interesting element of that is that the companies are very, very different in premise. Pleasant Solutions does make use of Google Analytics. We do make use of a lot of privacy-suspect tools. Uh, our motto is 'experts you wish you called the first time'. We do data analytics, we've done artificial intelligence systems, all that kind of stuff. And when we started Paranoid, I knew I needed to segment it off completely, to be a company that has a foundational allergy to data. And so it took a little bit of work, and our motto is earning lots of money by increasing privacy, not eroding it, um, as well as uniting tech and privacy: get Paranoid.

Thomas Stachura (00:08:52) - And so there was, there was a little bit of friction there at first. For example, all of my web design teams and my analytics guys and my, my marketing team, when I told them, no, our websites can have no cookies, they said, okay, well, we have cookie-free Google Analytics. I'm like, nope, we're not using any third-party analytics. Then they said, okay, well, we can make use of this platform that does heat mapping. Nope, we can't do that. And they're just, you know, panicked about, well, we've got no data, we're getting nothing. And I said, we're going to do this the old fashioned way. If we want a little bit more insight, we're going to call up a few customers, or we'll pay for customer surveys, like, by, you know, uh, focus groups or whatever; the old fashioned way. So Paranoid is actually flying blind relative to what we're used to at Pleasant Solutions, because we have no analytic data about our website. Uh, it's absolutely minimal. And in fact, we use it as a pitch at the bottom of our website: no cookies. And let me tell you, it is painful.

Thomas Stachura (00:09:55) - It is painful because, I don't know, every time we do a podcast, every time we do an interview or, or a press release goes out, um, all we can see is basically, like GeoCities of old, the, the counter going up. Like, okay, we got a lot of hits today from who knows where, but we got a lot of hits today, and so it seems like it was probably this press release. So there was quite a bit of cultural separation required to get the team to understand that, uh, it's a very different premise: treat it like it's a client that wants something completely different from what Pleasant Solutions is going to do. And that's where we came up with mottos like 'allergic to data', cause it helps our team understand just to what extent we're going to avoid data. Um, and we're going to be spending time researching not how long we can keep people's data in store, but we're going to find out what is the legal minimum, where we can just start deleting data. Do we have to retain their phone number? Can we delete that? Do we have to have a receipt number? Can we delete that? We're going to be looking, at every turn, where can we eliminate data, where we just don't know anything about our customers, as much as possible.

Joel Stern (00:11:11) - Um, Thomas, could you just say something about the, um, kind of political imperative to, to delete that data? I mean, what, what sort of drives that desire, um, for you in, in the company? Why, why is it so important?

Thomas Stachura (00:11:25) - I think there's a lot of companies out there making statements about trusting them with your data. And I feel like I don't want to have Paranoid just be another company that says, yes, you can trust us with your data. We are creating products and a corporate structure where you don't need to trust us, because the structure implicitly does not allow us to violate your privacy. So for example, the Paranoid product itself has no connection to the internet, has no software update. Well, it has software updates, but they're one-directional, and I can talk a little bit about that later on. It's quite innovative how it's one-directional, in a way I don't think has ever been done before. And so people can trust the device even if they don't trust the person manufacturing it, because they can verify themselves that that device has no wifi, no Bluetooth, no capability to connect to the internet, nor do they program it to their wifi, nothing.

Thomas Stachura (00:12:33) - And so they don't have to trust us. Same as a corporation: um, it's a little harder, but we want to create ways that they can trust us. For example, we don't have cookies, right? There's no JavaScript related to Google Analytics or any analytics that, that are external to our company. So now they don't have to trust us that we're policing Google Analytics with their privacy. Google will not know that they're on our website, unless you're using Chrome, and that's a different story, but that's a browser issue. So ultimately we want to keep eroding our own ability, our own choice, to even violate our customers' data, so that at the end of the day customers can say, you know what, I don't trust Thomas, I don't trust Paranoid, but there's nothing they can do to hurt me because of A, B, C, and D. And that makes me, I think that makes the company very different than other companies. And I think it would be very hard to replicate what we're doing from a trust level. Other companies can't go and do that. Google could not become allergic to data; Amazon, Apple, Microsoft, none of them could become allergic to data like we are.

Sean Dockray (00:13:49) - Well, I was just going to say, although I think that's, that's definitely true, these, these companies whose whole business model is built on the accumulation of data couldn't do what you do, there's that kind of gray area in the middle of so many companies that are ostensibly doing something kind of unrelated to data. Like, their core purpose is not about the accumulation of data and generating kind of advertising, but they still kind of fall into the trap of, you know, kind of what you were saying about the people, the people in Pleasant Solutions, just sort of saying, you've got to be collecting data because you need information and data in order to make decisions. Right. So, um, I think that, although it's, yeah, it's, it's both innovative, but I do think it also kind of can serve as a model for companies more generally. Like, of course, Apple and Google can't do it, but I'm just thinking of all the other smaller companies that potentially could, um, could learn something and maybe, maybe sort of adopt some of the practices. So I think that's very interesting, um,

Sean Dockray (00:14:59) - And it also is just opening my eyes to how Paranoid is not just, you know, that device that sits on a smart speaker, right? It's, um, it's also how you're designing the company. And so, I guess I'm wondering what plans you have in store, um, for further development, like what future products and future kind of, um, I don't know, even organizational innovations that you're talking about.

Thomas Stachura (00:15:27) - So there's a few things to respond to in there. Um, in terms of other companies not making data their business, and yet they end up collecting data: I would ask anybody how many companies end up doing major pivots over the course of 10, 20 years. And when you think about that, how many companies are currently deliberately deleting data because it's too big for them to store? That's not really happening. And so the data they're collecting now, even if they're completely ignoring it, 10, 20 years from now a huge portion of the companies are going to be pivoting. And is it common for companies to pivot towards data analytics and more advanced machine learning and all that? Absolutely. It's a global trend for companies to pivot from something more basic towards data analytics or machine learning. And so when I look at these overall trends: collect the data whether you need it or not, never delete the data, companies generally pivot over the course of time.

Thomas Stachura (00:16:34) - And a lot of companies are pivoting towards machine learning and accessing data in whatever way, all of those come together to say, you know what? I don't trust a company that says we're collecting the data, but trust me, we're not using it. Not to mention, of course, the poor track record of companies, even legally when they're not allowed to, or whether they promise in their contracts that they won't, they don't have a good track record of abiding by that. I would say in terms of where the company's heading, I think it's important to look at sort of our, our mandate, uh, earn lots of money by increasing privacy. That that was very controversial. Um, it speaks to what we're going to be like as an organization, radical transparency. It's not just about data allergy. It's about radical transparency. And so we actually had a COO in our company, again, talk about culture shock.

Thomas Stachura (00:17:32) - This is why other companies cannot do it very easily. When I put that mandate up, I said, well, look, I have not been a privacy advocate for the last 40 years, I'm not going to pretend to be. I'm here to earn money by increasing privacy. That's, that's my market opportunity. They actually took it down a couple days later without my knowledge; they did a little coup and they took it down out of concern for our company's wellbeing. And I had to actually have a stern discussion with them to put it back up: who authorized this? How can you do this? Like, that was the company mandate. Don't, don't water it down. And so it's not just about ways to not collect the data, but it's exploring ways of being, not for the sake of friction, but for the sake of radical transparency, relaying to the public what we're about. You know, are we going to scare away the public by saying we're here to make lots of money?

Thomas Stachura (00:18:31) - Cause you know what, if Google said that, people would just get more alarmed, because they already knew it. But now it feels like, if they're saying it, it must be 10 times more true; something's going to go down, they're going to abuse everybody's privacy, something. So with us coming out of the gates saying it, hopefully they're going to say, well, you know what, that's at least some statement I can believe in. Okay, I believe that you're here to make lots of money. I'm going to keep reading about this company. Maybe you're telling the truth about the rest of it. So it's about not only the data allergy, but about radical transparency. And what does that involve? I don't know. I, Pleasant Solutions is not a radically transparent company that way. So I don't even have a model behind Paranoid to follow, of a company that is that radically transparent. Um, maybe charities, I don't know,

James Parker (00:19:23) - Sounds like you're describing, or you're sort of, the business model is a sort of a wager effectively on the, sort of, the possibility of a privacy industry like that, as a sort of counterpoint to, you know, whatever you might call it, surveillance capitalism or platform capitalism or something, and a new market in which privacy is the, sort of, is the selling point. I mean, does that industry exist, that you know of, or are you trying to invent it, or do you have allies, you know, um, in that market space? When I was reading about the company, I just thought, it sounds like, sounds like they're trying to produce a new business model. Uh, that sort of, it's not exactly, is it parasitic on, you know, you know, mass data collection, or is it antagonistic to it? Either way it's kind of in relation to it,

Thomas Stachura (00:20:23) - The market absolutely existed. There were lots of people worried about surveillance, but they were all crazy, right? I mean, up until recently, everybody was just crazy for the last 20 years. No, I mean, that's the perception. The market existed, but it was ultra niche, ultra fringe, not mainstream accepted, and something that, you know, you should just learn to accept, trusting your government. And as a side note, I trust my government far more than I trust corporations; to each his own, I guess. But now I feel like the market is becoming mainstream. And the reason behind that is because I think there's a lot more to fear behind corporations than government. Government, at least on the surface, claims to be for the people. Corporations do not even claim that. In fact, it's illegal in many cases to favor too much towards the people rather than the shareholders and making money; you legally are obligated in the US to focus on making money.

Thomas Stachura (00:21:32) - So there are cases, I'm sure, where if you donate too much to charity and the shareholders don't like it, they can say you're not working in their best interest and they can sue you for it. So there's a very different premise there. And I think it creates a structure that is much higher risk. And up until now, I think that corporations didn't have quite as much power, whereas we're entering a time where, you know, I always wondered when global government would happen, and now I'm thinking it's happening. It's just not going to be the politicians and geographic countries. It's going to be global government because corporations are going to, piece by piece, take over infrastructure. If you think about how countries used to run the post offices, I mean, I don't know about every country, but certainly all the ones I'm familiar with, it was run by government for the people and was not intended to be a profit center.

Thomas Stachura (00:22:27) - Uh, and if it did make a profit, they would divert it to other departments. Now our most vital form of communication (and I will say that email is more vital to our communication than the postal system) is run by corporations. Our most vital communication tools: instead of the telephone, which was heavily regulated but, yes, had corporations, the internet is largely the domain of completely deregulated, uh, corporations. And so corporations control our access to information. And, and a lot of this, you start thinking about, like, well, Google controls our search. Google can sway, uh, what kind of results I get if they wanted to, and sway my voting and all that kind of stuff. At the end of the day, I feel we're actually seeing the start of a global government run by corporations. And that, wow, I just sound like I'm crazy, like those people 20 years ago, but that's an opinion; I don't dwell on it every day, but nonetheless, I think that's what's creating a bit of a market. I think the market is that people are now starting to become concerned about corporations. And I think it's more valid to be concerned about corporations than government-run infrastructure. And because of that, what was a niche market, I think, is becoming a multi-faceted market that's going to grow. And so is Paranoid targeting that? Absolutely. Paranoid is positioning itself to create many tools to protect the privacy of customers.

James Parker (00:24:05) - Okay. Could you maybe say a little bit about the product range that you have at the moment and you know, how they're doing and so on

Thomas Stachura (00:24:10) - Currently the products: I'll talk a little bit about our smart speaker product. Um, it's an add-on device that, um, you put on top of your device, and/or built into the device, and you purchase it as a separate add-on after you've bought Google or Amazon or whatever, and it blocks it from eavesdropping until you say the word 'paranoid'. So of course this sounds loopy to some, like, so, okay, you say 'paranoid, Hey Google'. Well then who's going to protect me from Paranoid listening? Well, like I said, Paranoid itself cannot transmit the data. So it's okay that it's listening, because it has no connection to anything. It's like a cassette deck in my basement: it might as well listen, it's not transferring anything anywhere.

Thomas Stachura (00:24:56) - It's protected and it blocks the microphone. There's a few ways to do that, and we have a few models. The first one is literally pressing the mute button. That is the first one that we were able to launch, by pressing the mute button, uh, the moment you say 'paranoid'. In theory, the smart speaker is going to be in mute mode every time, um, you're having a regular conversation without talking to the smart speaker first and saying, uh, 'paranoid'. In that model, as well as all of them, we've gone to the extent where it has a little dongle or a little antenna that watches for the lights and watches for the reaction of the smart speaker. If it does not show that it's listening, by turning on its lights and activating and interacting with you, the microphone will be turned back off. So if we have a false alarm, we've gone to the extent of shutting down the microphone two seconds later, because the smart speaker didn't light up like it should.

Thomas Stachura (00:25:56) - The second model is by way of jamming. We're going to be jamming the microphone, depending on the exact device, in different ways, in a way that's silent, but nonetheless the microphone is drowned out with noise. Um, and we've gone to great lengths to try to make sure that it's future-proof against machine learning listening really closely. It's not enough that saying 'Hey Google' does nothing. It's not enough to prevent it from talking back to you. It has to be something where, if the audio was stored, 10 years later nobody's going to get at it, even if they put a lot of work into it. So we're really trying to jam the audio. The third one, Paranoid Max, is, well, of course the maximum: we, we cut the microphone wires. Now, technically they're not wires, they're circuits, never mind, but we're effectively cutting the wires to the microphone. And until you say the word 'paranoid', we actually won't let electricity flow through from the microphone of the smart speaker to the CPU of the smart speaker. So those are the three models that we've got currently.
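
*[Editor's note: a minimal sketch of the wake-word gating logic described above, assuming a hypothetical hardware interface (`heard_wake_word`, `speaker_lights_on`, `mute`, `unmute`) and an illustrative two-second timeout; this is not Paranoid's actual firmware.]*

```python
import time
from enum import Enum, auto

class State(Enum):
    BLOCKED = auto()    # smart speaker's microphone muted, jammed or cut
    UNBLOCKED = auto()  # microphone temporarily enabled after "paranoid"

FALSE_ALARM_TIMEOUT = 2.0  # seconds to wait for the speaker's lights (illustrative)

def control_loop(hw):
    """hw is a hypothetical hardware interface exposing heard_wake_word(),
    speaker_lights_on(), mute() and unmute()."""
    state = State.BLOCKED
    hw.mute()
    unblocked_at = 0.0

    while True:
        if state is State.BLOCKED:
            if hw.heard_wake_word():       # local, offline detection of "paranoid"
                hw.unmute()                # let "Hey Google ..." reach the speaker
                unblocked_at = time.time()
                state = State.UNBLOCKED
        else:
            if hw.speaker_lights_on():     # interaction in progress, stay open
                unblocked_at = time.time()
            elif time.time() - unblocked_at > FALSE_ALARM_TIMEOUT:
                hw.mute()                  # false alarm or interaction finished
                state = State.BLOCKED
        time.sleep(0.05)
```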

Joel Stern (00:27:04) - I'm interested in the, um, decision to focus on microphones and, and the sort of audio capacity of smart speakers. Um, you, you, you mentioned, you know, that you're that Eureka moment, um, a few years ago, but I'm wondering, is there something specific about Machine Listening and the kind of listening practices of smart speakers that made you want to focus the product range? Um, you know, so specifically on, on that problem and whether you could say something, you, you, you've spoken a lot about, um, privacy, but whether you could say something about your kind of fears and anxieties around, um, how the audio, which is captured by these speakers might in fact be used in ways that we want to protect ourselves from what, what your sort of, um, fears or anxieties, uh, around the capture of audio and the way in which these devices listened to us.

Joel Stern (00:28:04) - Um, you've mentioned, um, privacy in general as a principle, which is very important, but is there something specific about, um, the capture of, of audio and the way in, and the listening capacities of smart speakers that made you want to focus on, um, them in particular, um, and developing products to, to jam and sort of, um, you know, circumnavigate the microphone, you know, because of course our project is called Machine Listening and, you know, we're very concerned with, um, surveillance and privacy discourse in general. Um, but we're also trying to think specifically about sound and listening and whether there are, um, problems in particular, um, that, that we can diagnose and sort of think about, um, in relation to those things.

Thomas Stachura (00:28:56) - I think what worries me about corporate data in general, and I'll get to audio data specifically in a moment is that over the years, I feel like it's been demonstrated to me that people are creative and companies are creative too. And if you give them a resource, they will find many creative ways that I have not thought of to use that data. If I didn't think that companies were extremely creative, then I would think of all the possible ways that you can use data. And I would decide whether those are appropriate risks for me. And I would probably move on with my life and not worry about privacy. However, I know there are phenomenally smart and creative people in the world. And so whatever list I can come up with, I feel it's not even 1% of what's possible. So some of the usual ones that many people think of, okay, you're going to use it for insurance because you know about my health or my behaviors.

Thomas Stachura (00:29:57) - You're going to use it to hire people as a more recent one. Um, maybe that wasn't obvious 10 years ago, people will go and say, well, what did they say on social media 10 years ago? Um, now there's politics because of the scandals. Maybe it's persuading people to impulse buy. That's an old one, but where else can it go? If I knew that answer? Um, or if I thought I had the monopoly on coming up with new ideas on using this data, then I'd be comfortable. The thing that scares me is, I don't know. I don't know how else they're going to use the data, but I know that it's going to be far beyond what I can imagine. And even for the list that I just mentioned, I would say maybe I'm okay with the impulse buying right now, but I'm also worried about how advanced it gets with more data.

Thomas Stachura (00:30:49) - We all know with machine learning, the more data you get, the better it gets, and maybe there's new techniques that are not that way. But, so, I'm worried about my children having, uh, a dossier on them for 20 years, and then just how deeply the machine learning will be able to manipulate them over time. I feel like I have formed a lot of my opinions; I, at today's stage, I'm not going to be as manipulated, though I think everything influences me. But my children, I feel, will be at a disadvantage, because they might still be at a developmental stage 10 years from now. Now they will have a lot more data on them than I had when I was a child. So there's going to be more of a dossier, and the machine learning is going to be much more powerful.

Thomas Stachura (00:31:39) - All of those things make me more scared for my children than I am for myself. Yeah. So those are the, the two things. Now audio data particularly scares me because I'm not exactly sure, even more, I know even less, about how the audio is going to be used. It seems like it's a smaller usage case. Perhaps they'll start to find my emotional patterns, perhaps they'll figure out what time of day is best to negotiate against me because of the tone of my voice throughout the day. Perhaps they'll know my hormonal patterns throughout the day. I don't know. I'm very limited in those examples, but yeah, I feel like it's very intimate knowledge. It's not stuff that I'm consciously giving in a few words of text. I am giving megabytes of information every time I speak. And there's so much data there, I don't even know what data I'm giving.

Thomas Stachura (00:32:39) - So there's two elements to that. One, I don't understand how it's going to be used because it's harder to predict. Two, it's not filtered by me. It's subconscious data that's going out. It almost feels like, if I knew the extent of how they could use my audio, uh, voice or video, it might be just as bad as starting to monitor my brainwaves. It feels just as pervasive to me as monitoring brainwaves, because I think they'll be able to get at the same data of what's going on in my brain, just based on the tone,

Joel Stern (00:33:17) - The example, um, of sort of being worried for the, um, development of, of, of children, um, as they're exposed to these technologies is, is a really important one also in thinking about audio, because I suppose very young children, um, are going to be interacting, you know, through speech with these devices probably before they're, you know, for instance, um, on social media or using the computer itself. So perhaps, um, audio data is, is, is a way to, uh, engage and capture, um, the sort of habits and behaviors of, of children at a, at an even earlier stage in their development. Can I just follow up on some of the really interesting things you said? The first one I'm thinking of is, so Paranoid's product range is sort of targeted at the sort of the space around the wake word. So the problem is presumed to be that, like, Google or whoever is listening even when you don't want them to be.

James Parker (00:34:19) - Uh, and I think, you know, on the website you have a, you know, a statistic that flashes up about, you know, how many, um, accidental, um, how many times the, these devices tend to sort of switch on. You know, I was reading a piece recently about, um, how 'OK Google' is triggered by 'cocaine noodle'. Uh, and it, anyway, it was like a whole thing about, anyway, whatever, they get mis-triggered. And, um, of course they're listening and we don't know, because we don't know that we can trust them. It's not even just triggering, it's, you know, it's like, well, you know, when they say they're not listening, are they really listening? But there's a whole, there's an expanding amount of things, as you precisely, as you pointed out, that can be done with the data once the device is on.

James Parker (00:35:06) - So, for example, there's a number of companies at the moment working on, you know, COVID voice detection, um, uh, and you could easily, you know, Amazon already had a patent a number of years ago whereby it would work out when you were coughing and start selling you cough medicine, and so on and so on. So there's all sorts of things that, um, these smart speakers hear, and it's expanding when they're on. So, so the first question is, does Paranoid have any kind of, um, intentions or ability to kind of deal with that space where, you know, when you've allowed the device to listen to you, what it's listening to and the specific ways in which it's listening are growing and expanding? And so just because I consent for it to be on doesn't necessarily mean I consent to everything that's being apprehended when it's on.

James Parker (00:35:58) - So that's the first question. And then the second question is, what about beyond smart speakers? Amazon just launched, um, the Halo today. I don't know if you've seen all of the, um, the press around this. So this is a device that's like a Fitbit, or what have you, you know, sort of a health tracker. I mean, it, it, it wants you to do sort of full body, um, uh, scans of yourself using your, your, your phone, so it's for, for fat and so on, but it also has a speaker, I mean, a microphone embedded, which, um, doesn't require wake words. They say that it's not listening all the time, it only listens intermittently, but it's specifically listening for emotion. I have no idea why people think that would be interesting or useful to themselves. I, you know, I can understand why I might want to know how much I'm walking a day, but I'm not totally sure why people, why anyone would want to get feedback on how aggressive they sounded today, or sad, or what have you. But anyway, that's already a thing. And so I'm immediately thinking, well, where's the Paranoid for a Halo? And so, yeah, I'm just wondering, on both fronts, you know, does Paranoid have any intentions to go in either of those directions, or what do you think about that? Or are they just totally different technical problems that, you know, can't be addressed, and so on.

Thomas Stachura (00:37:23) - So in addition to the disadvantages we have in building a company where we're allergic to data, and we are going for radical transparency, which can scare some people, and people will use statements against us, like, look, they said they're earning lots of money, we have a third disadvantage in that our premise is to unite tech and privacy. And what that means is that we are not just defending privacy and sacrificing the features and technology. We are trying to innovate in a space where, to the maximum extent possible, you get the full usefulness of the product. It is much easier to do a privacy product, for example, where you can just go up and press the mute button or whatever, a fast way to turn it on and off, but that eliminates the voice activation feature.

Thomas Stachura (00:38:16) - So now all of a sudden you can't do it by being lazy on the couch, and I want to be lazy on my couch. I get that. So I appreciate the tech portion. When you say, you know, why would anybody want to know your emotional map for Halo? Uh, I can see that I would want to know the emotional map for myself and say, okay, well, it looks like I'm most cheery this time of day. That's probably the best time of day for me to do X task, you know, or maybe that's the time that I should be drinking more caffeine because I get grumpy, or whatever the case is. So I can understand and value that, because I appreciate the tech. I mean, we have a whole company that's devoted just to the tech side, Pleasant Solutions, not the privacy side. So that gives us three things that we have to worry about.

Thomas Stachura (00:39:03) - And then we get confronted with: absolutely, I, I understand on the privacy side. Do they need to know your emotions when you just want to say, Hey Google, what's the weather? Do they need to know that I'm feeling grumpy or not? Do they need to hear my kids in the background during that moment? Do they have to know when my fridge is running? I don't know how that hurts me, but maybe they'll know what temperature it is in my house based on how often my fridge turns on. I don't know, that's a creative one, who would have thought. The answer is, we do have that. Actually, we have patents already in place, and that is on the product roadmap. We can't do it with the mute button, obviously, but we can do it with the Paranoid Max. And it is a potential future avenue we'll go down, where we will obfuscate people's voice. We will drown it in just enough noise so that you can hear it, or we will replay it with a robotic voice or whatever the case is, so that all the device is getting is just the words.

Thomas Stachura (00:40:02) - Check the weather or, you know, add something to my calendar, and they will get nothing else: no background, no guests talking to you in the back, none of that. Uh, and as a side note, aside from kids, we're also really trying to protect guests, you know, things that are happening in the background. There are already cases where smart speakers are recording the neighbors' conversations. So if you're in an apartment building, you don't even know if you're being recorded or not. What if there's a smart speaker literally right through a thin wall, right there, and every time they're activating it, something's leaking through? And you might be saying something very important. You know, as a CEO, I compete with a number of companies. I care about my privacy from a corporate standpoint, corporate espionage. So that's another concern that I'm always thinking about. Um, so in general, I would say, yes, it's on our product roadmap, and we have a lot of things to worry about in how not to remove the features. So if they start adding a feature related to emotions, it's going to be very difficult for us to remove the emotions out of that without disturbing that feature. So at that point, we might have to give the user a choice: do you want your emotions to go through? If so, it might turn off some features.

James Parker (00:41:19) - And what about, um, the, you know, the horizon sort of beyond smart speakers, uh, like, you know, uh, micro paranoid that goes on your wristband or, um, you know, the back of your phone or something.

Thomas Stachura (00:41:31) - I won't rule anything out, but I will say it's already proven to be quite a challenge to, uh, to do the smart speakers. Um, and there's a lot of fine engineering that has to go into that. Um, I think there's opportunities to do other things and we're definitely exploring other markets. I can say, for example, uh, some of our models we're exploring whether they can go onto smart TVs as an example. I wish I could say more, but I will say that there are some software cloud tools and there's, there's a whole bunch of markets for privacy and we're going to be pursuing a number of them

Sean Dockray (00:42:07) - Just, uh, in light of the comments about, um, this kind of idea of a new global government, um, and the diminished role of actual government, and what you had said earlier about Paranoid almost operating a bit like a charity, um, that it is a company and, you know, the mission is to make money, and I understand that, but at the same time the way it behaves is a little bit like a charity. I was just sort of thinking about the way that it kind of suggests, maybe, uh, an idea of how regulation can happen in this kind of new corporation-based global government, and that regulation, hopefully we can still pursue it through the state at some level, but this is almost sort of saying we need to pursue regulation through technological means. Um, and that Paranoid's product range at some level is this kind of performance of, uh, technological regulation. But that's more of a reflection. I dunno if that, if that resonates with you or not,

Thomas Stachura (00:43:19) - So I would say that when it comes to regulating, I don't believe that government regulation is going to really stop the corporations, because they will find a way around it. And the governments aren't, I would say, motivated enough to be proactive that way. And again, I, I sound a little crazy when I start thinking like this, I don't normally talk about global government being corporations, but I think the solution is going to end up being other corporations that have a mandate to hunt down privacy violations in other companies. I don't know if the government will offer a bounty; uh, that would definitely be motivating for a lot of companies. If the government said, well, if you can find privacy violations, for each person's privacy that was violated, a minimum of a thousand people or whatever, you know, we will give you this much money. Now, all of a sudden, we've privatized the idea of enforcing privacy.

Thomas Stachura (00:44:19) - And I think that has a better chance of fighting against privatized interests to maintain the data. Other than that, um, I can give you two stories I saw on my list; um, maybe they fit in, maybe they don't. Uh, the update system that we had was quite an interesting and challenging approach, because from a company product development standpoint, we knew we, we need to iterate. If we release software out there, we need to be able to do a software patch, but what Paranoid customer's going to want to install a software app or plug the Paranoid device into the phone? Now, all of a sudden, we've lost this claim that nothing ever leaves the device. And so we started thinking at first about an app that can play sound over, like, a modem.

Thomas Stachura (00:45:11) - To the device, but then we started thinking, well, how do we know that the app isn't receiving information too? You know, whether it accesses the microphone or not, and so on. So we actually landed on something I think is innovative, and we're not a hundred percent sure whether it will work as well as we want, but you update it by playing a song on any MP3 player. So the MP3 player, of course, you can choose whichever one you trust. We just release an MP3 file. Obviously it's not a program, so you can't program it to record information and send it home, and you play the song and it updates the device. So much like Google has Lollipop or various candy names for their operating systems, we're going to have versions that are associated with music, which has its own challenges. I wonder, what are people going to think about the song while they're updating? And right now, as it stands, they might have to hear it a few times or leave the room, uh, if they don't like the song, but we found our way to do a one-way update into the system with nothing coming out. You could update it with a cassette deck, in theory.

James Parker (00:46:22) - That's fascinating. Can I just ask then, the information that's being received by the device is not audible, is it? We've been reading about this idea of internet over audio, or data over audio, and adversarial audio, and the different ways in which signals can be sent over audio but not be apprehensible to humans. Is that how it works? So you could, in principle, encode it in Beyoncé or whatever you wanted, or is there something specific about the song?

Thomas Stachura (00:47:07) - The system will allow for any song, at least in theory. And true to our premise of transparency, and also for the health of pets, we're trying not to use ultrasonic or anything like that. Instead, we're trying to do it where, let's just put it this way: this song will not sound like an audiophile putting on his headphones and listening to an orchestra. It might sound like there's some interference; you'll be able to hear some of the data, but you're not going to hear the ones and zeros, of course.
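
A minimal sketch of the kind of audible "data over audio" being described here, offered as a generic illustration rather than as Paranoid's actual update protocol (which has not been published): bits are modulated into two audible tones using simple frequency-shift keying. The tone frequencies, bit rate, and payload below are assumptions for the example only; it uses Python and numpy.

```python
# Hypothetical illustration of audible data-over-audio (FSK), not Paranoid's real scheme.
import numpy as np

SAMPLE_RATE = 44100      # samples per second
BIT_DURATION = 0.05      # seconds per bit (slow, for clarity; real systems are faster)
FREQ_ZERO = 1000.0       # audible tone (Hz) representing a 0 bit
FREQ_ONE = 2000.0        # audible tone (Hz) representing a 1 bit


def encode(data: bytes) -> np.ndarray:
    """Turn bytes into an audible waveform, one tone per bit, MSB first."""
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    chunks = []
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            freq = FREQ_ONE if bit else FREQ_ZERO
            chunks.append(0.5 * np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks)


def decode(signal: np.ndarray) -> bytes:
    """Recover bytes by checking which reference tone dominates each bit slot."""
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    ref_zero = np.sin(2 * np.pi * FREQ_ZERO * t)
    ref_one = np.sin(2 * np.pi * FREQ_ONE * t)
    bits = []
    for start in range(0, len(signal) - samples_per_bit + 1, samples_per_bit):
        chunk = signal[start:start + samples_per_bit]
        # Correlate the slot against both reference tones; the stronger one wins.
        bits.append(int(abs(np.dot(chunk, ref_one)) > abs(np.dot(chunk, ref_zero))))
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)


if __name__ == "__main__":
    payload = b"firmware v1.1"           # made-up payload
    audio = encode(payload)
    assert decode(audio) == payload      # round-trips on a clean, noise-free signal
```

A real acoustic update channel would additionally need framing, error correction, and synchronization so the data survives speaker-to-microphone playback in a noisy room; this sketch only demonstrates the basic round trip on a clean signal.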

James Parker (00:47:38) - That's absolutely amazing. And what was the other example you were going to give?

Thomas Stachura (00:47:40) - What I found very revealing is when we did our first commercial for the Paranoid website. I don't know if you've seen the commercial on the video site. We built a video studio just for that, with alien sci-fi effects and everything going on; we went as far as we could, and we got enough attention that people started making mocking videos about us. So the video is two minutes and, I think, very entertaining, but I was entertained watching a ten-minute rant on YouTube about how bad our commercial was, and the title was "the stupidest product ever." And not just because at this stage any publicity is good publicity, which is not always true; there's an element of that in this case, but it gave me a really good realization. The reason why he was saying it was the stupidest product ever was not because he was saying smart speakers can be trusted,

Thomas Stachura (00:48:42) - that it's not a concern, that regulation will take care of it, that government will protect us. No, he was saying it was the dumbest product ever because you shouldn't even own a smart speaker. Just don't do it, no matter what; don't have one. And yet at the same time, he was competing against billions of dollars of marketing that are going to make smart speakers more useful. So we have this whole camp of people saying smart speakers are not that useful, don't get them. But you know what, with billions of dollars and a lot of creativity, I bet you at some point you're not going to be able to buy a house without smart speakers installed, because they're going to be useful and builders will find them useful. And eventually there's no other way to open and close your garage, there is no button, because it works better just voice-activated, or whatever the case may be. And so we'll end up with a situation where on one side billions of dollars are going into telling you, don't worry about your privacy, you're in good hands with government and/or corporations. And then you'll have extremists saying don't trust technology, absolutely, under no circumstances, it is stupid to trust technology. And they call the middle ground stupid. They call the middle ground stupid because it's not worth it to budge an inch. And where I had a great realization is,

Thomas Stachura (00:50:06) - as an inventor, I often look for gaps in the middle. There's a huge gap, and I think the majority of people are in the middle, but there's a huge gap in the way the subject matter is being discussed. There are people who are very much advocating privacy, and there are very much people saying, who cares. And we saw that in our Facebook advertising and all that kind of stuff. We saw that when we did the video ad: half the comments were very aggressively pro and half the comments were very aggressively against, and there wasn't a lot of recognition that most people, I think, are in the middle. Everybody acted like there is no middle. And so that's where the market is for us. And as a side note, we openly disclose this in our cookie policy: where other companies collect the data, like Facebook or whatever, once it's collected, once it's there, we will absolutely make use of it.

Thomas Stachura (00:51:06) - So when we advertise on Facebook, because we will advertise Paranoid on Facebook, we will actually look at their analytics and look at their stats. We just refuse to collect that ourselves, because once it's there, they've already got it, you're toast, somebody's going to be using it. So I would say that our initial launch was quieter than we liked. And then all of a sudden, out of nowhere, at the start of April, we started getting a manyfold increase in our sales. And I don't think it was the mocking commercial, which came out in April; he thought it was an April Fools' joke, so I know he found out about it on April Fools'. Maybe everybody else needed to see that it existed for a while, like when they research, oh, this came out in February. But for whatever reason, sales went quite a bit higher in April, and we've been enjoying it since. And see, there's an example: if I had my analytics, if this was Pleasant Solutions, I could tell you exactly where all those people came from. No, we're going to have to find out the old-fashioned way, where did people find out about us? We have to ask them on the store order form, how did you find out about us? So the sales have definitely been increasing, enough that I know now we're committing seven engineers to it, and we're committing to additional product lines, and we're expanding it. We're going full throttle.

James Parker (00:52:32) - I mean, it sounds like, for you, it's a good thing if Amazon Echo keeps growing. It's a bit hard to find the statistics; we've been told that smart speakers are the fastest-growing consumer product since the smartphone. I'm not totally sure whether that's true, but from your perspective, because you want to unite privacy and tech, that's a good thing, right?

Thomas Stachura (00:52:57) - Well, in terms of being the fastest growing, I would say I'm not surprised if it's more than phones or anything else, because never before has so much been given away for free. Of the over one hundred million smart speakers out there, a huge portion of them are free. And so, of course, adoption: I've never seen so much of a rush to give a customer something for free, because they know the value of the data. They know this is the next Google search. You're going to be searching and talking to your smart speaker more than you type on your desktop computer. Why get up when you can just ask? So I'm not surprised if it's the fastest growing, but you have to factor in how many people are getting a gift for free from the company, not to mention a lot of these are being re-gifted and whatever.

Thomas Stachura (00:53:47) - So many of those are not actually being turned on. I know so many people who have gotten a smart speaker and they say, well, it's turned off in the basement, to which I respond, you should tell whoever gave it to you to also give you Paranoid for free; then you can turn it on and feel secure about it. So is it advantageous when everybody increases their use of technology? Absolutely. Our market depends on companies not being trusted, which I think I'm safe to say is going to be a good market for a number of years. And it depends on technology collecting data, which I think I can also safely say is going to be a market growing leaps and bounds. And I think it's almost a given for me that we're going to be growing leaps and bounds along with that. Do I hope that adoption comes about more rapidly with smart speakers? I don't care that much, because if it's not that, it's something else, and whatever it is, we're going to chase down the ways to make it more private. And so I think there's already a big enough market for me to chew on. I don't care if it goes from 150 to 300 million smart speakers; my market's big enough. I have to focus on just refining my message, getting the sales, getting the word out there, and building trust.

Thomas Stachura (00:55:10) - Whether it's sending it to privacy advocates and letting them dismantle it and then speak on whether it's communicating to servers or not, or whatever the case is, that's my focus. So I'm indifferent, but of course the market's going to be there no matter what I do.
