

---
title: "Mark Andrejevic"
status: "Auto-transcribed by reduct.video with minor edits by James Parker"
---
James Parker (00:00:00) - So, I mean, uh, it would be great if you could just kick things off by telling us a bit about yourself, um, and your work, you know, how you get to the questions that you're currently thinking about.
Mark Andrejevic (00:00:12) - Thanks. Thanks so much. It's great to talk to you. And, um, you know, the longer background story is I started my academic career nominally in television studies, um, because I wrote a book called Reality TV: The Work of Being Watched, and, you know, what I was really interested in in talking about reality TV was the way in which I thought it modeled the online economy. And that book really came out of the question, uh, that had to do with the discussions around interactive media at that point. So I'd been taking a look at interactive media art, you know, talking about, like, late nineties.
Mark Andrejevic (00:00:50) - So, um, you know, hypertext novels, participatory artworks, the promise of, um, reconfiguring the relationship between the reader and the text. Uh, and I remember thinking, as is often the case, I think, in the aesthetic sphere the kind of empowering and interesting and original characteristics of the medium are being explored first: what's going to happen when capital takes it up? And, you know, I was thinking, like: where's the space where I can see that happening? And the answer was reality TV, which was just getting big at that time. And, you know, in its iteration that took off in the two thousands there were some successful formats: The Real World and Road Rules, the kind of MTV formats in the US, and Big Brother followed immediately on that. Um, and I looked at that and I thought: that's it, that's one of the ways in which the culture shows us what happens when capital takes up the logic of interactivity.
Mark Andrejevic (00:01:47) - It's about offloading, uh, portions of the labor of production onto the audience in the name of their self-empowerment and self-expression, and equating willing submission to being watched all the time with those forms of self-expression and self-empowerment and self-knowledge. And reality TV staged that so interestingly. And as a media studies scholar, I was really interested in the ways in which the media kind of reflect back to us our moment, and reality TV became, um... I don't think of it as a genre, I think of it as a kind of mode of production. It became the mode of production. It reflected back to us what was happening with mass customization via the interactive capability to collect huge amounts of information, and to use that to rationalize production, marketing, and distribution processes. Um, so I got my first job in television studies, but I think people understood I wasn't in television studies. When I sent that book out, you know, they circulated the manuscript to TV studies readers, and, you know, the first set of comments I got back from the first publisher I sent it to was: you're not really engaging with television. And it was true.
Mark Andrejevic (00:03:01) - So, fortunately, I found a publisher who was okay with that, um, because it was really a book about the political economy of interactivity. Uh, and then my subsequent work has really, you know, moved away from reality television and looked at those questions of the political economy of interactivity: the ways in which, um, the economy that we're building is reliant on mobilizing that interactive capability of digital media, uh, for the purposes of information collection, and the forms of economic rationalization and risk management and control, uh, that flow from that. Um, I remember, you know, early on making that argument; it was 2001, I was in, uh, um, a job search. And, you know, I was trying to make the argument that this is the basis on which the online economy is being built. Uh, and at that time I didn't have a very receptive audience, but times have changed since then. You know, now it's become kind of a commonplace that that's what the interactive economy is built on. So my subsequent work has really focused on the relationship between digital forms of monitoring and data capture and different spheres of social practice, uh, and how, uh, forms of, uh, power and control are being amassed by those who have access to the means of collecting, processing, and mobilizing the data.
Sean Dockray (00:04:32) - When do you think people started catching up with, with that observation that you made? Like, what is it that happened in culture that made it apparent to people?
Joel Stern (00:04:41) - Is your argument that it was when the reality TV star became president?
Mark Andrejevic (00:04:47) - So that's, um, uh... you know, the big shift looked to me, yes, right around that moment. Um, you know, I think there'd been a lot of coverage of, um, you know, how much information is being collected about you and what that means for privacy.
Mark Andrejevic (00:05:05) - So the privacy stuff started to kick in, you know, not too long after that, I would say, you know, mid two thousands, mid-noughties. Um, but the moment where things really started to galvanize would be, I think, the Snowden revelations, which made the connection between commercial data collection and, um, state use of that information and, you know, made it quite palpable. So that Snowden moment was a big episode. And then I think probably the other big episode is the Cambridge Analytica moment and, you know, Brexit and the 2016 election. You could really just see the whole tone switch. And all of a sudden these companies that had been, you know, floating on some kind of halo that differentiated them from other big corporate industries... you know, I remember talking to undergraduate students, and, you know, the way they thought about, for example, the finance industry, you know, it was so different from how they thought about the tech sector.
Mark Andrejevic (00:06:03) - You know, the tech sector was where you'd go if you wanted to express yourself and invent, and you could be young and do all kinds of cool creative things. And, you know, what had been the big, you know, kind of instrumental economic sector to go into in the 1980s, which was finance, that was the staid, boring, you know, um, bad capital, and tech was kind of good, cool capital. Um, but that really got reconfigured. I think, you know, uh, all of a sudden, you know, the ability of Google to say things like "do no evil", it just didn't work anymore. They couldn't get away with that. And now we live in this moment where, um, I don't know, there's something interesting going on with the way, um, quote unquote, legacy media frame digital media, and, you know, in a way they get to blame digital media for a lot of the pathologies that they participate in, because, you know, the tech sector now becomes the villain. Uh, you know, I think it's not the only villain, and those who hold it up as the villain are participating in, uh, what they imagine they're critiquing. Uh, but yeah, I would say that shift happened around then. And of course Shoshana Zuboff's Surveillance Capitalism, right, hit right at that moment, that sweet spot, and just gave a vocabulary to, um, the kind of critical moment in the popular culture and in the punditry realm. And that just took off.
James Parker (00:07:32) - You've got this new book, um, Automated Media, which is obviously very concerned with, or responsive to, you know, a similar kind of paradigm or, you know, contemporary situation to the one Zuboff is concerned with. But you frame it quite differently. I mean, of course you're concerned with capital. Um, but you tend to talk about, um, what you call the biases of automation, which sounds like... is it a kind of political logic, or, uh, something of that sort? That seems like a really key and foundational concept in the book. Could you say a little bit about what the bias of automation is, how it's different from other forms of bias, algorithmic bias, or other forms of bias in tech? What sort of object is it?
Mark Andrejevic (00:08:20) - You know, when surveillance capitalism, when that book got so much attention, I remember kind of kicking myself and going, like, damn, why didn't I use that term? You know, 'cause I'd been writing about this stuff for 19 years or so beforehand. And then, you know, I thought: would it have occurred to me? And I realized it probably wouldn't have occurred to me, because I see surveillance capitalism as, um, a redundancy, a pleonasm, right? You know, capitalism and surveillance go hand in hand. So that seemed to me to be a given. Um, and, uh, the idea that there was some special subset of capitalism that was surveillance capitalism, distinct from other forms of capitalism, just wouldn't quite occur to me. Because, you know, historically all the work that I'd been doing had been looking at the continuity, uh, between very early forms of, you know, enclosure and wage labor, and the forms of monitoring that that enabled, that actually made, um, you know, industrial capitalist production possible. The idea that this was a kind of distinct rupture, um, just wasn't in the historical story as I saw it.
Mark Andrejevic (00:09:31) - So too bad for me, because it would've been a useful term to mobilize. But so the book Automated Media, a little bit of a backstory: it started as a book, that I'm still kind of working on, that was called Drone Media. Um, because I really wanted to write a book about how automation, under the social conditions in which it was being developed, operates as a form of violence.
Mark Andrejevic (00:10:00) - And I liked, um, the figure of the drone as an avatar for automation, uh, and a lot of the logics that I then described as biases, uh, and incorporated into that book, Automated Media, came out of spending a fair amount of time looking at how drones, uh, reconfigure the terrain of conflict: the forms of asymmetry that they rely on, the forms of kind of always-on monitoring, every space as a potential site of conflict at any particular moment, uh, the way the front disappeared with drone conflict. Those informed what I call the biases of automation that I described in the book. Um, and what I mean with that term bias, that comes out of North American, specifically Canadian, uh, you know, media studies research. Harold Innis, uh, writes a book called The Bias of Communication.
Mark Andrejevic (00:11:01) - And he's really interested in the long arcs of empire. The bias of communication for him refers to the tendencies that different media technologies, uh, reinforce or reproduce within particular social formations. So, uh, you know, it can sound a little bit technologically determinist, in the sense that it's the technology that carries the bias, but the way I read his work is that it's the technology within a particular social context: those two are connected. Uh, and, you know, for example, he thinks about, um, media technologies that, uh, lend themselves to, um, rapid transmission across space being used to control, um, imperial programs that reach across space. Uh, whereas those that are durable through time lend themselves to a different concept of, uh, control of information through time: you know, durable stone tablets versus portable papyrus, those types of things. But what interested me is the notion that in a particular context, a media technology, the very choice to use it,
Mark Andrejevic (00:12:11) - It could carry logics of power within it. That was the insight that I thought was interesting, because I tried to figure out: if you're writing about digital media technology, and you want to critique the power relations that it reproduces and in which it's embedded, at what level do you do that? Uh, and you know, one of the difficulties in writing about digital media is things move so fast. Uh, and so my goal was to see, you know, what are some of the tendencies or patterns that we can abstract from the fast movement, um, that will allow us to maybe anticipate logics that are going to emerge over time, uh, and provide some form of critical purchase, so that we exist not in a purely reactive relationship to the technological developments, but in a more knowing and anticipatory relationship to it. We can see where these things are headed. And the goal of extracting biases was to suggest that these particular tendencies, that flow from the choice to use a particular technology, media technology in this case, allow us to generate some understanding of where we're headed as we take these technologies in hand. Um, and so that's what I tried to do in the book: isolate some of those tendencies. Um, yeah.
James Parker (00:13:27) - And what are some of those tendencies? You're not talking, for example, about, you know, racial bias. I mean, you're not just talking about bias in that sense, you're talking about a different kind of tendency.
Mark Andrejevic (00:13:38) - Yeah. So bias in this sense refers really to kind of tendential logics that flow from the choice to use a particular technology. I mean, probably the best... I don't know, if you think about the apparatus of the market, for example, you know, what would be one of the biases of, of market exchanges? They're biased towards the quantification of value, you know, understanding value in purely, um, quantifiable terms. But you're right, when you're talking about bias and digital media, the ready correlation that people make is to the ways in which automated systems are biased along protected categories, um, you know, race, gender, skin color, et cetera. Um, and I think that work is super important. And I think there's also an interesting connection, or question to ask, about the relation between the type of biases that I'm interested in, uh, those logics, and those biases. 'Cause I think there may be a way to make the connection that under particular social circumstances those are connected. But the biases that I'm thinking about, uh, and I'll name a few of them, have to do with, again, the kind of social imperatives that are carried through the technologies that are implemented. And one of them that I talk about is framelessness. Um, and what I mean by framelessness is the ambition basically to reproduce the world in datafied form. Um, and the term framelessness flows...
Mark Andrejevic (00:15:03) - It flows really from a particular example that I came across a while ago. It was an ad for, uh, a lifelogging cam that you wear on yourself all the time, and it records your life. And the advertising promotional material for that device said, uh, you know: did you know that the average human only remembers (I can't remember, it was something like 0.9%) of what they encounter every day? That's about to be fixed by the lifelogger: now you can remember everything. And that ambition to record everything, to be able to remember everything, seemed interesting and telling from the perspective of automated media, because it reproduces the ambition of, uh, digital media technologies mobilized for commercial purposes or policing purposes or state surveillance, uh, to, quote (and this is a quote from the former chief technical officer of the CIA) "collect everything and hold onto it forever."
Mark Andrejevic (00:16:03) - Um, because the idea there is that if humans are trying to make sense out of information, you have to engage in what they call "search and winnow" surveillance or data collection: you collect what you can, and then you get rid of what's not relevant. But if you have automated systems that are processing the data you collect, um, what you want is as much data as possible, because the logic is the machines can go through this at speeds that are superhuman, and they can detect correlations that are, uh, undetectable to humans. And any piece of information you leave out may form part of a pattern that would be undetectable if you discard it. So the winnowing process is, um, reversed. The way the CIA chief technical officer described it was: we want to connect the dots.
Mark Andrejevic (00:17:00) - Um, and sometimes the pattern of the dots emerges only when you get later dots; if you throw away some of the earlier dots, you won't get to see the pattern. So that's why, potentially, you'll have to hold onto the data forever. Um, and to see the full pattern, you need as much data as you can get. So that's why you need all of it. Um, and so I use that notion of the frame to think about, uh, you know, the way in which a particular picture or a narrative is framed. It basically understands that pictures and narratives are always, as our subjectivities are, selective. You know, the subjectivity of me who remembers my day: um, if I remembered everything, in a sense I lose myself. Part of what constitutes our identity are those things that we remember and the gaps that we don't. Um, the idea that you could fill all of that in, and that that would be perfecting yourself, I think is tantamount to saying, well, that would also be obliterating yourself.
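A minimal sketch of the "connect the dots" logic described above, with an invented event stream and thresholds (no real system is depicted): a (person, place) pattern only becomes detectable once later dots arrive, so winnowing the stream early destroys exactly what full retention finds.

```python
from collections import Counter

# Toy event stream: (person, place, timestamp). The third alice/cafe visit
# arrives late; only with it does the pattern cross the detection threshold.
events = [
    ("alice", "cafe", 1), ("bob", "park", 1),
    ("alice", "cafe", 5), ("carol", "gym", 6),
    ("alice", "cafe", 9),
]

def winnow(stream, horizon):
    """'Search and winnow': discard events older than the horizon."""
    return [e for e in stream if e[2] >= horizon]

def detect(stream, threshold=3):
    """'Collect everything': flag any (person, place) pair seen enough times."""
    counts = Counter((person, place) for person, place, _ in stream)
    return [pair for pair, n in counts.items() if n >= threshold]

print(detect(events))             # [('alice', 'cafe')]: full retention sees it
print(detect(winnow(events, 4)))  # []: the early dots are gone, so is the pattern
```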
Mark Andrejevic (00:18:01) - Um, so I try to put a lot of weight on that figure of the frame, because it talks about what gets left out. Um, and of course it's a visual metaphor, right, framing the picture. But, you know, I try to extend it to thinking about it in informational terms: frames as defining where the information collection would stop, where the use would stop, um, where the monitoring would be restricted or limited. So, um, if you think about, I don't know, the average marketing researcher, I imagine asking them: what type of data that you could collect about people and what they do would not be relevant to, you know, your marketing initiative? And I imagine the answer, right, would be: it's all potentially relevant. There is no frame that would say, you just stop here. And the other thing that interests me about that concept of the frame is the way in which it gets reproduced in forms of representation. Because one thing that I would argue about representation is that it's always framed in a certain way; a representation has to leave something out. But the generation of, um, forms of information capture and forms of information representation that don't leave anything out is something that's familiar in our moment.
Mark Andrejevic (00:19:21) - So 360-degree cameras, virtual reality... I was really interested (I don't know what's happening to this technology now) in a data collection technology that, I think, was trying to imagine what a frameless form of representation would be: essentially virtual reality that's infinite, that reproduces actual reality, right? Um, that you get in there and there isn't any frame; the only frame would be provided, uh,
Mark Andrejevic (00:19:54) - you know, by where you look, or something. But what's the information capture, um, medium to collect all of that? Uh, and I don't think there's any particular answer, 'cause I think these are all impossible goals. But I think they're also tendencies that we can discern. So the fact that the goal is impossible doesn't mean that the tendency to move in that direction is not emerging. And, uh, this was the smart dust, um, moment, which came out of the, um, second Iraq war. Uh, they were trying to figure out: how can you monitor urban spaces in ways that get around the problem that urban warfare poses, which is, you know, walls and hidden spaces and things that can't be seen through? And one of the technologies that they came up with was smart dust: sensorized particles that could be distributed through the air, uh, and that would relay their position to one another. And the idea was that the air itself would be permeated with these particles, so the air would become the medium of information capture. So you could see internal spaces: if there was a person behind the wall, the air would have to flow around them, and that information would be captured by the dust particles. And so you could... Isn't that sound? Yes, exactly.
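A purely notional sketch of the inference the smart dust fantasy relies on (Andrejevic stresses it doesn't actually work): if the air is saturated with position-reporting motes, an occupant shows up as a void in the particle field, even behind a wall. Every number below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Air "saturated" with motes: True wherever a dust particle reports in.
field = rng.random((20, 20)) < 0.5

# A body displaces the air, and the motes with it: a 4x4 silent region.
field[8:12, 8:12] = False

def densest_void(field, window=4):
    """Scan for the window with the lowest mote density: the inferred body."""
    h, w = field.shape
    best_density, best_pos = 1.0, None
    for i in range(h - window + 1):
        for j in range(w - window + 1):
            density = field[i:i + window, j:j + window].mean()
            if density < best_density:
                best_density, best_pos = density, (i, j)
    return best_pos, best_density

# Should locate the occupant's 4x4 void, with density 0.0, around (8, 8).
print(densest_void(field))
```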
James Parker (00:21:11) - Yeah, I was thinking, as you were speaking, about how, you know, you said you started off with the frame as a visual metaphor, and then your sort of tendency is expansive, towards logics of automation in general, biases of automation in general. And, you know, we framed this project in terms of, you know, we're sort of advocating or trying to think through a politics of Machine Listening. Um, you know, what would it mean to take that as an object? And I've been struck from the beginning about the way in which that's sort of important, because, you know, facial recognition, computer vision, um, search, uh, sort of the smart city more generally, you know, these things get a lot of press, but a lot of the audio technologies, which are actually very pervasive, and there's many more companies than people seem to realize, sort of get missed.
James Parker (00:21:57) - And so there's something about, you know, signal boosting or consciousness raising or something, to say, you know, Machine Listening is a thing too. But I'm really conscious that there's a risk that comes with that of, you know, embedding and privileging, I don't know if it's just anthropocentric or, you know, uh, sense-specific logics, in a way, when the data is kind of agnostic to sensory input on some level. Although perhaps, you know, there are certain biases, you know, or affordances of Listening or audio technologies specifically that, you know, might perform some logics. I'd be interested to know: what do you think about the politics, say, of sense agnosticism, or of framing things around specific sensory modalities?
Mark Andrejevic (00:22:44) - Well, I, you know, I think the specificity of the sensory modality is crucial to understanding the affordances of these monitoring processes. There is kind of a, you know, a synesthetic quality to some of this. You know, on the one hand, there's a real temptation in the data monitoring approach to just, as you say, collapse it all into data. Um, but that does bulldoze some of the, uh, kind of specific technological affordances. And some of the synesthetic stuff is really interesting, right? This is stuff that you know much more about than I do, but, you know, those Listening processes that rely on ultra high speed cameras, um, that can capture vibrations, right, and so can transpose auditory information into visual information and then back into auditory information again. Um, and what that, to me, points to is the specificity of the modalities of those affordances.
Mark Andrejevic (00:23:46) - In other words, you can't see what people are saying unless you can capture the traces of the sound waves. And so, I don't know, maybe a better approach than, you know, undifferentiated convergence is synesthesia: you know, the, um, the ability to find ways... how can you use one medium to capture the information from another medium, uh, and then how can you retranslate it, or translate it back? But I think the point that you make about the relation between sound and particulate capture, um, I mean, that's so interesting, right? Because, you know, the challenge then posed... basically what the smart dust is really trying to do is to take something very similar to sound, which was, you know, the disturbance of the particles, uh, but make it travel.
Mark Andrejevic (00:24:40) - You know, piggyback it on... because these things are supposed to relay. It doesn't work, right, but it's the notional idea: they're supposed to relay this information electromagnetically, in data form, so that you could hear through spaces that a more direct propagation of sound waves, you know, wouldn't be able to reach. But what if you could make sound that would, you know, travel through all the byways that you need to get it to go through to get back to you? But I think probably one of the things that you're addressing, and I think it's super important, is that the default vocabulary for monitoring is so visual. And that's one of the challenges that one faces when we look at a realm in which visual monitoring practices are just one subset of all the forms of information and data collection that participate in this kind of interactive form of data capture I've been talking about.
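A toy rendering of the vibration-to-sound transposition mentioned above, loosely in the spirit of the MIT "visual microphone" work (Davis et al., 2014). Real systems track sub-pixel phase changes in high-speed video; this sketch, on synthetic frames, just collapses each frame to a brightness sample and reads the result as audio, purely to show the modality translation.

```python
import numpy as np

fps = 2000                    # high-speed "camera" frame rate
t = np.arange(fps) / fps      # one second of footage

# Synthetic scene: a 32x32 patch whose brightness trembles with a 440 Hz
# tone, standing in for an object vibrating in response to nearby sound.
rng = np.random.default_rng(1)
tone = 0.01 * np.sin(2 * np.pi * 440 * t)
frames = 0.5 + tone[:, None, None] + 0.001 * rng.standard_normal((fps, 32, 32))

# "Listening" with the camera: each frame becomes one audio sample.
signal = frames.mean(axis=(1, 2))
signal -= signal.mean()

# Recover the dominant frequency; it should come back as ~440 Hz.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
print(f"recovered tone: {freqs[spectrum.argmax()]:.0f} Hz")
```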
James Parker (00:25:38) - I mean, one of the things that occurs to me with sound, and voice specifically, is this idea of, you know, the disappearance of the interface. And, you know, that's quite a specific audio phenomenon. You described it to us once recently as touchlessness: the tendency of the interface to sort of disappear, and the way in which COVID and the pandemic context and ideas of hygiene are sort of accelerating that discourse. And it seems like touchlessness and voice are quite a specific subset that have their own sort of politics and biases baked in.
Mark Andrejevic (00:26:19) - I mean, the stuff on touchlessness came out of, you know, spending some time looking at the facial recognition technology companies and then watching them pivot in response to the pandemic moment. And, you know, they saw this immediately as an opportunity, and the opportunity was a whole range of kind of transactional informational interfaces that rely on some form of physical contact, in a moment when physical contact was stigmatized because of its relationship to contagion. Um, and so I was at an NEC online conference; they're a big developer of, among other things, uh, facial recognition technology. And they're coming up, in partnership with other, um, commercial providers, with a range of, you know, touchlessness solutions. And, you know, this is, I guess, again thinking about the affordances of the different sensory technologies, but they saw, uh, facial recognition as providing solutions for touchlessness.
Mark Andrejevic (00:27:16) - So everything from a security room where, you know, you may have had to do a fingerprint or a card swipe, replacing that with facial recognition, to access to mass transit, um, to access to stadiums, uh, to, you know, clearance for entrance to buildings. All of those could be transposed into the facial recognition register. Shopping was one of the big ones: you know, you don't have to touch those dirty screens that others have touched. Um, uh, you know, I think there's a deeper tendency here that's probably worth noting. It's something I spent some time in Automated Media kind of going on a screed about, um, but it's the kind of anti-social fantasy of digital media technologies, and the pandemic moment really brought it out. And, you know, that gets phrased in, you know, I don't know, certain kind of popular discourses around, you know, like, does Facebook depress you. But it's not really at that level that I'm thinking about it; it's more on the level of what it means to see oneself as a kind of hermetic sphere, and others as potential threats to a hermetic system.
Mark Andrejevic (00:28:26) - Um, it's been really interesting to see all of the discourses around the dirtiness of the other, right? Like, you know, here are the particles that come out of the mouth, here are the things that they leave, here's what happens when they flush the toilet. It's, like, some really, um, you know, nasty embodied thing: just, wherever we go, we leave these organic particles that are potentially contaminants. And it reminds me a little bit of, you know, futurist Ray Kurzweil's kind of scorn for the flesh. You know, like: yeah, the thing about proteins is that, you know, they're really temperature sensitive and, you know, very fragile; carbon polymers are much better; so if we could, you know, upload ourselves into carbon polymers... You know, like: the flesh is dirty and weak and mortal and, you know, yuck. And that really fits with a particular tech aesthetic, right? You know, like the clean Machine. But I think of those early days, you know, um, going back to when I started reading about digital media, of all of the fantasies that were socially hermetic. And they were very privileged, right? You know, you can live in your mountain cottage and never leave, and anytime, like, for work, you just zip on your corporate virtual workspace. This was a thing, the corporate virtual workspace. And, you know, you'd find yourself in a haptic space...
Mark Andrejevic (00:29:39) - ...with virtual reality. And, you know, not that different from what we're doing now, but, you know, still a few dimensions, uh, extra. But the fantasy was one of stasis, and the fantasy was one of the, um, diminution of the social. And it was very hermetic, right? Bill Gates's early fantasies were very similar. You know, if you read, um, The Road Ahead, not only would you be able to say, I want to live on this tropical island, and, you know, I don't have to go anywhere because I can go there virtually, but all the media that I consume... I thought this was such a telling moment: he said, you know, like, when you're watching your favorite movie, you'll be able to customize it so you are the star. And it was interesting to me that that was the first thing that came into his head.
Mark Andrejevic (00:30:20) - Right? Like: everywhere I look, I see only me. Um, the extension of that into informational space is hyper-customization and hyper-targeting, right? You know, um, yes, everywhere you go, you will see only you: the movies you want to watch, the news you want to hear. Um, this is the fantasy of digital customization. It's a kind of, um, complete backgrounding of the social processes that make those decisions. Instead, the promise is to reflect back to yourself yourself in perfected form. To me, those two things seem to fit, right? This kind of desire for a pure, um, hermetic individual, um, understood as kind of your own fully defined entity unto yourself, if only enough data can be collected about you; and this kind of reaction, that's really brought out by the pandemic, um, to, you know, the threat of others. Um, you know, just walking down the street, that weird feeling that you get in the pandemic moment when, you know, somebody comes too close, somebody makes a little cough, right? You know, it's that kind of, to me, fully terrifying moment of, um, you know, the literalization of the threat of the other.
Joel Stern (00:31:40) - I was just going to say, uh, a sort of social exposure. But it's also... I was thinking, as you were talking about, um, the particles that sort of come off other people, about the synthesized voice of Siri or Alexa as a kind of voice, you know, without breath, without contagion, as a kind of, uh, un-bodied voice. And on the other end of that, the sort of, um, COVID diagnostic tools that are sort of coming up through voice analysis and recognition, where you sort of cough into, you know, the telephone or a microphone and get diagnosed. So this sort of sets up, in a way, um, these sort of voice interfaces as ways of mediating this kind of fear of contagion, or, you know, producing a voice that is, um, dangerous or infectious.
Mark Andrejevic (00:32:36) - Yeah, that's a great, great connection. Sorry, I dropped out there for a little bit. Um, but yeah, that's really nice: voicelessness and breathlessness. Um, yeah, the breath is such a threat. You know, for somebody who's a little bit obsessive compulsive, like, it's really weird to watch these... you know, as a kid, you know, I really had that, and it's really weird to see it: it's like the neurotic fantasies of my childhood become the contemporary moment. It's terrifying.
Sean Dockray (00:33:15) - Um, I wanted to ask a question about, um, paranoia, um, just 'cause, you know, there's this kind of relationship between, like, sort of imagining everything else is connected, and somehow that you're subject to that kind of vast conspiracy around you but not actually part of it. You know, that kind of follows from what you were just saying about this kind of crisis of the self, and the hermetic self, and everything. And I guess one thing I had wanted to ask was just about the role of, like, paranoia, and the mobilization of paranoia, both in, like, critiques and also even in the marketing of a lot of the new devices. So that was one direction that I was hoping to go down. And then another direction, um, it was just in the touchlessness, um, and maybe even related to that previous point: it's just this, like, ultimate disappearance of the, um, of the computer at this moment in history. You know, we sort of imagined that the computer appeared on the scene and it's, like, more and more a part of our life, but actually it seems like the computer is a historical artifact which was kind of around from, like, the eighties to the two thousands, and then it's gone, like that. The computer as a figure is, uh, you know, in the workplace a little bit, like it was in the sixties, but otherwise it's something that's disappearing, right? And we're back to, like, the kind of domestic space again. You know, everything is different, everything is kind of haunted by the computer, but... So I guess that was another area I was sorta wondering about, just in relation to what you were talking about, about cleanliness and the aesthetics of that.
Mark Andrejevic (00:34:56) - The questions are both so great. The paranoia one is one that I think about a lot. You know, I started listening to that podcast about, um, QAnon, uh, and it fascinates me, the conspiracy theory stuff. The formulation that you made, to me, is really, uh, you know... I think about it as the kind of suppression and misrecognition of interdependence in the social, that's lined up with the offloading of the social work of constructing individuality onto automated systems, right? So, you know, if we can see from the start that, you know, whatever our conception is of the individual, it's constituted by the social relations in which we exist, then the work of a kind of, um, emphasized individualism is to misrecognize that role, uh, and the role of automated systems that individualize us for us is to participate in that fetishization.
Mark Andrejevic (00:35:52) - So that we can really start to imagine: yes, I am that constituted, um, individual. One of the moments in the book, that's kind of one of the defining starting moments for me, is when someone, trying to imagine that he can reconstruct his father in his entirety by getting all of the data traces that his father, now deceased, left behind, and then reconstructing him as an AI that he can converse with, claims that it would be just like conversing with his father. And somebody presses him on that and says: but would it be really like, um, you know, being with your father? And he said: that bot would be more like my father than my father was to me. That's a really interesting formulation, because it suggests what the fantasy of totalization via datafication is: that you can actually be specified to be more like yourself than you are.
Mark Andrejevic (00:36:45) - Um, the book has a kind of psychoanalytic, um, framing throughout it, right? But you can see why that's really appealing from a psychoanalytic perspective as a pathological observation, because what it attempts to do is to take a subject that's constituted, from a psychoanalytic perspective, through lack, through the split, and say: no, we can make you whole, and we can make you whole through filling in the lack with all of our data. Um, but how that, I think, connects to paranoia is precisely what you've been saying: once you lose the coordinates of, um, the social, and you kind of forward this fantasy of pure autonomy, um, it's actually the breakdown of the relationship between the individual and the social that makes paranoia seem possible. 'Cause the social is part of the constitution of subjectivity.
Mark Andrejevic (00:37:39) - But if you can't recognize that, um, it looks like an external force, um, from which one is disconnected and disembedded. And the fact that it has its logics starts to, you know, foster the forms of paranoia that you're talking about: there's something happening here, it's systemic, I'm on the outside. Um, but that outside does two things. One, it makes me feel excluded; but two, it gives me that kind of outside position: aha, I can see now the patterns. Um, and that kind of megalomaniacal fantasy of the conspiracy theorist: I see something that nobody else sees, because I'm on the outside and all of you are on the inside.
James Parker (00:38:19) - I'm, right now, trying to write something about this company, Clearspeed, um, who are very clear that the snake oil, I mean, sorry, technology they're selling is, uh, not voice stress analysis. But what it does is, it's a, um, risk assessment tool that, uh... I mean, it's been very widely sold, as far as I can tell: um, something like, yeah, 13 languages, 12 countries, 23 industries. And the technology offers to vet for fraud, security, and safety risks based on a two to ten minute long phone call, with greater than 94% accuracy. And now they're selling, uh, COVID-specific ones: uh, you know, you can buy the product now for, um, dealing with COVID, um, financial fraud. So, you know, the moment the pandemic comes along, they suddenly think: well, think of how many money grubbers there are out there who are desperately going to be exploiting welfare systems and, you know, loan schemes, um, and whatever. And now we can sell our anti-fraud vocal, uh, technologies. And they're very clear that it's not stress analysis, because, I mean, clearly that stuff got a bad rap, you know? Um, and they talk about macro biomarkers and things like this, um, not micro biomarkers.
Mark Andrejevic (00:39:50) - Can you buy that tech and turn it back on them? Because they're going to be their own undoing.
James Parker (00:39:55) - Right. I mean, there's... no, you have to buy it. It's like, uh, you know, that classic pyramid scheme thing, where you have to buy in in order to be able to see how, or whether, the technology works. It's fully black box, right? So once you've already bought in, it's sort of too late, uh, you know, because you've sort of committed, you know, financially and sort of spiritually to the thing. And so it's completely unclear what science, if any, it's based on. There's nothing there; it's just: we'll tell you about it if you buy the product.
Sean Dockray (00:40:33) - I'm sort of, like, imagining this penal colony conclusion to it, right, where in their demonstration, kind of talking about the product, the product gets turned back on them: it reveals them to be a fraud, and that's what casts them out, right?
James Parker (00:40:51) - They've made, uh, millions of dollars and, um, prevented hundreds or thousands of people from accessing social services that they desperately need.
Sean Dockray (00:41:02) - Well, one thing that you do bring up, which is kind of interesting... we were talking about agnosticism, like, data agnosticism, you know: whether it's image capture or voice capture, you know, it kind of all ends up in the same place, represented in the same kind of format. Uh, but there's also a similar, like, tool agnosticism or something, where you develop some product that, you know, um, will tell if, uh, you know, you're nervous or not, and then, uh, you know, COVID comes along and, like: oh, my nervousness tool is also really good for, like, diagnosing if you've got, you know, this disease that's incredibly difficult to test for. So the fluidity of these solutions kind of goes alongside the other thing. Yeah, the proportion of various tech industries, and other ones, who've seen the pandemic moment as an economic opportunity is fascinating, but especially the tech sector, because, you know, they've got that...
Mark Andrejevic (00:42:08) - ...I don't know, you know, tech solutionism capability built into them, or that promise built in, but also because of all the forms of mediation that are associated with managing the pandemic. But with the voice stuff, you know, there was a moment that I noticed, and it may extend to the present, where I was really interested in the role that body monitoring systems were taking in promising to bypass the vagaries of discourse. The idea that, at a time (and this speaks to the question about paranoia that Sean brought up), you know, at a time when, uh, forms of, you know, social reconfiguration and the breakdown of social trust in various ways pushed in the direction of the savvy subject who knows that discourse is manipulative and, um, politicians are lying and everybody could be lying, and so on, there's the promise of some modicum of authenticity to be got at through various ways of reading the body. And this was in part my interest in, you know, reality television, because it's got that kind of...
Mark Andrejevic (00:43:22) - ...promise of authenticity through forms of contrivance, right? You know: put people in the lab, experiment with them, and then extract some, uh, authentic emotion from them. And what's authentic is not the situation but the reaction. But reality TV really quickly incorporated some of this technology. Uh, and so there was, uh, an MTV dating show, I can't remember what it's called now. Um, I don't know if you ever came across this, but it might be interesting historically for the work that you're doing. You'd be set up on a blind date, uh, and you'd be doing an interview... it was actually, I think, there were, like, three candidates that you would interview, uh, and then at the end you'd decide who you wanted to go out on the date with. And the interview would be conducted miked, because, you know, it's a TV show, but the mic was connected to a voice stress analyzer in the van.
Mark Andrejevic (00:44:17) - And there would be, like, the buddies of whoever was doing the interviews, and they'd switch the gender roles: you know, it'd be a woman interviewing three guys, or a guy interviewing three women. And then back in the van they'd analyze the results, you know, and they'd go, like: oh, she's lying now. You know, when asked a particular question, you know, like: hey, so do you think I'm cute? Oh yeah, you're beautiful. In the back of the van they'd be like: no, that's a lie. Um, and then before deciding who to go on a date with, the interviewer would go consult with the, you know, the backstage crew. But it had all the, you know, the kind of cinematic vocabulary of the surveillance apparatus: you know, the people in the panel van with the machines, you know, watching something of the conversation, um, you know, watching the voice stress.
Mark Andrejevic (00:45:04) - Um, and then there was a really sadistic one, I wish I could remember what it was called, uh, where you'd be asked a series of questions, and if you answered truthfully all the way to the end, as measured by some combination of biometric devices (it was, I think, voice stress, but also, um, you know, pulse rate and skin galvanometer, all this stuff that they use, right), um... but they'd ask you these kind of horrible questions in front of, you know... they'd dig up things about your past and then ask you about things that were potentially very embarrassing. But if you told the truth all the way through, you'd win the million dollar prize. You know, like: have you ever had an affair? There you are with your family in front of you. Um, you know, if you tell the truth, you'll get the million dollars, but if you lie, you'll be found out. Or at least that's what they led you to believe. But it was, uh, you know, it was kind of a bodily torture show, an interrogation show. Anyway, but that notion that the body tells the truth, uh, even though the speech lies...
James Parker (00:46:16) - So the body tells the truth, the body tells the truth, and only the Machine can know the truth of the body. Because, you know, there's an intuition that we can sort of tell from somebody's tone of voice or whatever, but it's sort of not verifiable or something. So, you know, obviously you can trace that history back to the stethoscope, uh, and the sort of, you know, that, um, mediate auscultation and so on. But the specific paradigm now seems to be that we sort of intuit that the body probably tells the truth, that we might maybe think sometimes, a little bit, that we can discern it, but the Machine therefore must definitely be able to discern the truth of the body. And that's simply an act of faith, a faith that immediately becomes kind of profoundly political as these technologies get taken up and sold in every imaginable context.
Mark Andrejevic (00:47:16) - Yeah, it's true. I've been thinking for a while it'd be interesting to do a genealogy of the oracle as a way of thinking about the decisions that are made by, um, these automated systems: a kind of non-negotiable decision that's nonetheless actionable, you know, we'll act on it. Um, and as you say, there's a kind of faith-based commitment, um, based on... I suppose they have all kinds of literature, right: you know, we've done this research, we show this. Um, but it's something... I don't know, I'd have to go back and look through this, but it's going to come to rest at some point on self-report, right? Um, I suppose. Uh, so there becomes a kind of circular element in it. But there was, you know, there was a proliferation of these shows, I can't remember what they're called now. The Tim Roth show, where he takes Paul Ekman's, um, micro-expression stuff.
Mark Andrejevic (00:48:18) - And, um, here's the guy who can read the micro-expressions. So he becomes the Machine character. Um, he's been trained for years in this, and so somebody speaks and he can see the twitch in their throat or, you know, the micro-expression in their face. Um, but he becomes the machinic figure. And then there was the Australian actor who was in one of them, The Mentalist, and there was a third show, I can't remember. But they're all about people that were kind of, like, highly trained in body reading and were therefore able to kind of see through the deceptions of discourse. But they were kind of miraculous, machinic figures, right? Impossible figures.
Sean Dockray (00:48:54) - One thing I was thinking, also, to connect this, that there's a truth to the body that's not necessarily, you know, perceptible through language, or what you were saying about bypassing the vagaries of discourse: it's this thing that we've come across around what James has labeled wakewordlessness.
James Parker (00:49:15) - Riffing on framelessness. Yeah. Wakewordlessness.
Sean Dockray (00:49:20) - Which is, uh... up until now, the wake word has been this kind of, like, you know, moment of oral consent, where you sort of speak and you say, like: I'm prepared, I'm ready, I expect, and I know, that I'm going to be listened to for some period of time. And, you know, this started in this one, um, patent from Apple, wasn't it, the Apple Watch: that it would, uh, begin recording, um, once it had registered some vigorous oscillations, you know, to register whether you're washing your hands enough. Uh, so it's some COVID kind of monitoring, you know, it's wrapped up in all of that stuff. But in a sense, it's the movements of the body that become the wake word. And you can just sort of imagine that the wake word is a kind of necessary thing for people to accept that these devices are going to be with them, but it's temporary, right? That there's this general chipping away and erosion of the wake word, uh, to different forms of consent than the kind of ... one. And so I guess, you know, some of what you're talking about, I think, could connect to different possibilities, for, like: well, your body consented; even if you didn't say anything, your body sort of consented to us to begin recording; or the moment consented. You know, that consent sort of might take all these different forms.
James Parker (00:50:44) - I'll just read out precisely what the vice president of technology at Apple says about this thing. So it says: the new feature would automatically detect when a wearer has started washing their hands. "Our approach here is using machine learning models to determine motion which appears to be hand-washing, and then using audio to confirm the sound of running water or squishing soap in your hands. During this, you'll get a little coaching to do a good job." So I think there's also biofeedback, or haptic feedback or something, through the watch, or a sort of aural thing. So, yeah, just as a kind of specification of what Sean was saying there.
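A speculative sketch of the two-stage gate that quote describes: a motion model proposes "this looks like hand-washing", and only then is audio consulted to confirm running water or soap sounds. The classifiers below are stand-in stubs with invented thresholds; nothing here reflects Apple's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SensorWindow:
    accel: list[float]   # wrist accelerometer samples
    audio: list[float]   # mic samples, consulted only after the motion trigger

def looks_like_handwashing(accel: list[float]) -> bool:
    """Stub motion model: fires on sustained vigorous oscillation."""
    energy = sum(a * a for a in accel) / max(len(accel), 1)
    return energy > 0.5                    # hypothetical threshold

def sounds_like_water_or_soap(audio: list[float]) -> bool:
    """Stub audio model: confirms the motion hypothesis."""
    energy = sum(a * a for a in audio) / max(len(audio), 1)
    return energy > 0.1                    # hypothetical threshold

def maybe_start_coaching(window: SensorWindow) -> bool:
    # The body's movement acts as the wake word: the microphone only
    # enters the picture once the motion model has already fired.
    if looks_like_handwashing(window.accel):
        return sounds_like_water_or_soap(window.audio)
    return False

print(maybe_start_coaching(SensorWindow(accel=[1.2, -1.1, 0.9], audio=[0.4, 0.5])))
```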
Mark Andrejevic (00:51:24) - Yeah, that's fascinating. I mean, I think you're right, the tendency is to... this has intrigued me about another book project that I'm working on, called The Fate of Interactivity. Uh, and I'm interested in... I guess one of the things that prompted it was thinking about the ways in which interactivity at first invited forms of active participation, and then it reached such a fever pitch of, um, requirements to interact that interactivity had to be automated. Uh, and the automation of interactivity raises some interesting questions about the so-called promise of interactivity, which was kind of a participatory one. But this idea that, you know, it's maybe asking too much to ask us to interact as much as these systems and devices require. And so they developed strategies for figuring out how to automate interactivity, and this kind of reading the body and inferring: well, we know that you're signaling something by this, so we're not gonna rely on the conscious register.
Mark Andrejevic (00:52:35) - Um, we'll just, you know, use your implied bodily, uh, actions as a form of interaction. Um, and that's you telling us, you know, to prompt you to wash your hands correctly. But all of these ways in which interactivity gets automated kind of maybe belie some of the rhetoric that underpinned the promise of interactivity, which was, you know, a kind of conscious, participatory, um, I don't know, collective form of interaction. But yeah, I think you're right. The tendency... just thinking back to those Google patents that fascinate me, about the smart speaker and the smart home: there's very little,
Mark Andrejevic (00:53:17) - if any, reference to the system being off. It's just on all the time. And it's on all the time, you know, for diagnostic purposes, for example: in order to be able to engage in early forms of diagnosis, it's almost got to be on all the time, because it has to capture the full pattern of activity. And because it's diagnosis before you know that you have any symptoms, you don't have the ability to say what time counts for symptom monitoring; it's got to be able to infer that itself. But it's not...
James Parker (00:53:56) - ...just the, you know, the duration of on-ness. It's sort of the quality of on-ness, right? So when the mic is on, even if you've said the wake word, now Siri, or whoever, or whatever, can listen to the body in a way that you weren't expecting. And it's an always expanding sort of litany of ways in which it can listen to and about and around the body, right? So it's sort of, you know, the sort of constantly reiterating and expanding, um, paralinguistic analytics, or context awareness, or what have you. So what it means for a mic to be on is always necessarily sort of beyond what you anticipate and could sort of possibly know, or have consented to. It's not just that the wake word is disappearing; it's that the frame of what's being listened to within the frame of the wake word is always changing and expanding. Yeah.
Mark Andrejevic (00:55:04) - Yes. The awareness, or the wokeness, of the device is expanding in ways that, um, uh, yeah, continue to develop, and, for many purposes, the goal is not necessarily to inform you of, uh, how it's Listening. And, you know, that connects with maybe this concept of operationalism, which I've been kind of grouping in as one of the tendencies of automation. And that one's a pretty vexed concept, because I receive a fair amount of pushback against it. But, um, you know, what I'm trying to get at with that is, uh, the distinction between what you say and what you mean. Which is, I mean, a distinction that's made in certain psychoanalytic contexts to mean something a little bit different from how I'm using it now. But, you know, Google isn't interested in, you know, what you mean. When it scans your email or listens to what you're saying in the house, it's interested in whatever it can glean from what you say, which could be anything from recognizable words, to voice tone, to verbal, um, hesitations, uh, to even the change in configuration of the sound of particular words: um, what that says about you in a way that's actionable.
Mark Andrejevic (00:56:36) - And so the meaning... it fits really well with a certain data science paradigm. So the guy Alex Pentland, who's one of the big data scientists at MIT, his big shtick is: don't listen to what people say, look at what they do. And it's not a nice thing to say about people, because, I mean, he's not interested in communicating with people, right? He's interested in analyzing and predicting their behavior. Uh, and for his purposes, what they say... he's not interested. I mean, I'm sure at a cocktail party he is, right? But what are his, uh, you know, what are his primary interests?
  70. James Parker (00:57:20) - You maybe say a little bit about why you would describe this phenomenon you're talking about in terms of operational prism, um, where's the operation, could you sort of rewind a bit?
71. Mark Andrejevic (00:57:32) - Well, yeah, so the backstory for that term is, you know, it's borrowed from Trevor Paglen, who borrowed it from the filmmaker Harun Farocki. In Farocki's, um, Eye/Machine series, he develops his approach to what he calls operative images, which are images that are created to be part of an operation. And this is the series of films where he goes and looks at machines and the images that they generate to provide some type of human interface for what they're doing. So machines like, I can't, you know, things like rivet detectors, or, you know, metal utensil strength indicators. At one point they had these screens where you could kind of see the scanning results
72. Mark Andrejevic (00:58:18) - of the machine. And what interested him was, you know, these weren't meant, as he described it, to be representational, although for the humans looking at them, clearly they did have some representative function. Really, they just showed some operation that was taking place in the machine. And Trevor Paglen goes back a little bit later and tries to produce, you know, an update to the Eye/Machine series. And what he claims he finds is that machines don't generate those images anymore, because they're doing things that don't have any really useful visual interface for humans: the operations are too complex, or too multidimensional, or there's no really meaningful visualization. And, you know, he says these images have kind of disappeared into the operation. And that notion of an operational image is an interesting one, an operative one, because it's really not an image anymore.
73. Mark Andrejevic (00:59:19) - It's the image that obliterates itself. You know, the example that I always think about is the machine vision system that he trains to recognize objects, and he trains it on the classic image of representation, which is, you know, Magritte's 'Ceci n'est pas une pipe,' except he uses the apple one, the very similar 'Ceci n'est pas une pomme.' He trains the vision machine on the image that says 'this is not an apple,' and the machine says: this is an apple. But what he's getting at is the reflexive space of representation. It's kind of raising the question, you know, the space that's missing from the machine is the reflexive operation (operation may not be the word I want to use here), the reflexive recognition of the representational character of the image, which is to say the recognition of the non-identity between image and, you know, indexical referent. For the machine, the image is, for all practical purposes, the thing. I shouldn't say that's what it acts on; that's what it acts upon as an input, right, without any idea of some kind of space between them. If that makes sense. So what I mean by that operational approach is the kind of lack of that reflexive moment around the shortcomings of representation. Sorry.
74. James Parker (01:00:54) - So when you say reflexive, do you mean something approaching cognition or thought? Or are you trying to bypass that problematic, you know, which sweeps up so much conversation, you know, is the AI really an intelligence, whatever. But it seems like when we're talking about representation and reflexivity, we're sort of unavoidably getting at that question. So: is the vision a vision? And would operational vision be vision without a reflexive understanding? When you say reflexivity, is that where that's going?
75. Mark Andrejevic (01:01:33) - Yes. I mean, yes, it is, right. It's raising, I think, the really big question, which is why it's a really vexed set of claims. But it's that big question of, again, because of the kind of psychoanalytic framework, I tend to frame it in terms of desire. But it has to do with, um, yeah, notions of what it might mean to think of the machine's conception of the meaning of the image. There is, you know, imagining that there's a gap constitutive of representation, where the representation is always non-identical with what it represents. And how does one locate that space of representation? I suppose if you take that apple image, right, you could train the machine (I don't see why this would be technically impossible) to recognize the representation of the apple versus an apple. So you could probably train Trevor Paglen's vision machine to say 'that's a picture of an apple'; you'd have to develop whatever the capacity is for it to recognize, I guess, two-dimensional versus three-dimensional, or something like this. But could you train it to recognize, you know, the difference between a three-dimensional wax apple and an apple? And, you know, how would you
76. Mark Andrejevic (01:03:14) - recognize that difference? How would a human do it? I'd want to take a bite out of it, or, you know, I'm sure the machine could insert a probe, right. But in the end, what you're really getting at is: why do you want to know what an apple is? Why does it matter? What is an apple to you? And why does it matter to represent an apple? And when you're doing it, who are you doing it for? Those are the questions, to me, that get to that constitution of, you know, the relationship between desire and subjectivity.
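A sketch of the point about Paglen's apple, using a stock pretrained classifier from torchvision purely for illustration. The image path is hypothetical; fed a painting of an apple, a model like this simply emits its nearest label (for ImageNet, something like 'Granny Smith'). There is no step in the computation where 'this is a representation of an apple, not an apple' could register.

```python
import torch
from PIL import Image
from torchvision import models

# Stock ImageNet-trained ResNet-50 with its bundled preprocessing.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# Hypothetical file: a photograph of Magritte's painting of an apple.
img = Image.open("ceci_nest_pas_une_pomme.jpg").convert("RGB")
batch = weights.transforms()(img).unsqueeze(0)

with torch.no_grad():
    idx = model(batch).argmax(dim=1).item()

# The output is just an index into a label list: a label, not a judgment
# about whether the input is a thing or a picture of a thing.
print(weights.meta["categories"][idx])
```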
77. James Parker (01:03:45) - So it's operational because the machine has no ability or interest or capacity to even inquire about that. It's simply a matter of receiving the data and then producing an output, which is the declaration: apple. Yes. Yeah. So how would this work? You've talked about operational listening. Could you walk through the steps of what operational listening would mean?
78. Mark Andrejevic (01:04:12) - Oh yeah. I guess by operational listening... I mean, I've got to reconstruct what was at stake; I spent so much time thinking about operationalism. I think, to get at that question of what operationalism is, is really to imagine the difference between a form of sense datum that results in an automated outcome, versus a form of sense datum that's the object of particular types of, I dunno, reflection or contestation or deliberation. But operational listening, I guess what I mean by that, is thinking about how captured information enters into a chain of automated responses. And, you know, one of the things that you pointed to was this formulation I use of the cascading logic of automation.
79. Mark Andrejevic (01:05:09) - And what I mean by that is, um, once you start developing systems for automated data collection, you generate so much data that you need to process it in an automated way. And once you start processing in automated ways, then you move almost automatically towards some type of automated response. And so, you know, you can imagine forms of listening that get implicated in this type of cascading logic of automation, the ways in which they get incorporated into particular types of responses. So think, I guess I was thinking even of trivial applications, like the car that listens to you and decides whether you're sounding stressed, and then, you know, modulates the controls accordingly, or feeds you music that's meant to de-stress you. But that is responding in automated ways to the automated data that it collects about you.
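The cascading logic can be rendered as a toy pipeline. This is a sketch under stated assumptions (all of the names and the threshold are invented), not a real in-car system; what matters is the shape: automated collection feeds automated processing, which feeds an automated response, with no deliberative step anywhere in the chain.

```python
def collect(mic):
    """Stage 1: automated data collection. The mic is simply always read."""
    return mic.read_frame()

def process(frame, stress_model):
    """Stage 2: too much data for human review, so judgment is automated:
    say, a stress score in [0, 1] inferred from voice tone."""
    return stress_model.score(frame)

def respond(score, car, threshold=0.7):
    """Stage 3: the automated judgment triggers an automated intervention."""
    if score > threshold:
        car.play_calming_music()
        car.soften_controls()

def cascade(mic, stress_model, car):
    # No exit condition and no consent step: each stage exists because
    # the previous one generated more than humans could handle.
    while True:
        respond(process(collect(mic), stress_model), car)
```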
80. Mark Andrejevic (01:06:13) - And, you know, I think one of the elements of operationalism, or I don't know if it's an element, but one of the connections that I keep thinking about when I look at operationalism, is its relation to preemption. And I guess another way to get at the character of operationalism is to juxtapose it to representation, right? That's the kind of opposition I'm interested in: an image that represents, you know, an image that says something or shows something, or a sound that indicates, that carries meaning, versus a sound that does something, or an image that does something. And to give you a somewhat extended example: one of the areas that I work in is surveillance studies, kind of broadly construed, and the dominant paradigm in surveillance studies for quite a while
81. Mark Andrejevic (01:07:18) - has been, I think, a representational paradigm, one in which the standard Foucauldian panopticon model functions. And it's a very representational model, in the sense that it relies on the internalization of symbolic meanings. So the standard operation of panoptic logic is, you know, that you could be being watched at any time, so you internalize the imperatives of the watchers, presumably because there's some potential sanction if you don't. So that form of surveillance relies on representation, as is indicated by, you know, those domed things in the ceilings of malls. You can just buy those domes and stick them up, right, and they can serve as symbols of surveillance. So you get those signs, 'smile, you're on camera,' right? The symbolic reminder that you could be being watched. And the panopticon itself, right: Bentham conceived of it as also symbolic; it was a spectacle. All the surrounding folks were supposed to come and look at the powerful apparatus. And in that sense it operated as a kind of symbol of the power of
82. Mark Andrejevic (01:08:29) - the surveillance operation. But what's interesting to me is the way in which certain forms of surveillance no longer rely on that symbolic logic. This surveillance doesn't care whether you know you're being watched, or whether you internalize, in a disciplinary sense, the imperatives of the watchers, right? Because the paradigm is actually: maybe we don't even want you to think that you're being watched all the time. We don't want you to change your behavior in the way that the disciplinary model of surveillance would, because we need to know what your 'actual' behavior is, actual in quotes, right, your non-disciplined behavior, in order to find out the truth about what your future action is. And then we control you, not by getting you to internalize things. So that was a very Bentham, utilitarian idea, right? How can you do it with the least harm?
83. Mark Andrejevic (01:09:22) - Right? You know, if people will actually internalize this, then they'll all behave, and then maybe you won't even have to punish them after the fact. Least harm, most good: utilitarian logics. Now the logic is quite different, right? We watch you, and you don't have to internalize anything. You don't need to have any relation to the symbolic. But we know what you will do in advance, and we will stop you before you do it. So if there was a kind of hands-off logic to disciplinary surveillance, there's a very much hands-on logic to what might be called, I don't know, post-panoptic surveillance. And I can give you a very concrete example. Peter Thiel has funded this company called Athena that's meant to address the problem of school shootings in the US, because, you know, on a legislative level they're not able to address it.
84. Mark Andrejevic (01:10:13) - And it uses these cameras, equipped with machine learning, to be able to detect suspicious behavior and to act preemptively in response to it. So if somebody is not behaving right, the school doors get locked down, the authorities get called. But the idea is: stop the action that's going to happen, before it can happen. And in the marketing literature, one of the things it claims to be able to do is to detect, after somebody has started throwing a punch but before it lands, that a punch is being thrown. And that space is a really interesting space, I think, because it's a space of automated intervention, right? There's nothing that humans can do in that space. Once they're notified by a machine, it's all over. But it creates a space for machinic intervention.
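A hedged sketch of the preemptive structure just described, not Athena's actual system; the names, labels, and threshold are all assumptions. The classifier operates in the interval after an action has begun but before it lands, and the response is machinic because no human can act inside that window.

```python
def preemption_loop(camera, classifier, building, authorities):
    for frame in camera.frames():
        prediction = classifier.predict(frame)  # e.g. 'punch_in_progress'
        # The system acts on a forecast of harm, not on a completed event:
        # deterrence would address a subject; preemption just intervenes.
        if prediction.label == "punch_in_progress" and prediction.score > 0.9:
            building.lock_doors()               # intervention precedes event
            authorities.notify(prediction)
```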
85. Mark Andrejevic (01:11:04) - And I hope that makes sense as a way to think about the difference between representation and operation. In one case, representation functions in a symbolic way to instill a kind of behavioral response on the part of a conscious subject. In the second, you don't need the conscious subject or the symbolic representation, so you don't need either side of the representational relationship. All you need is an input that's collected, that enters into a machinic operation, that then intervenes before something can happen. And it's the difference between what you might call deterrence and preemption. Deterrence is a logic of stasis, right? You know: I know I'm being watched, I'd better not do this. Nothing happens. The crime doesn't happen. The intervention doesn't happen. Whereas preemption is a logic of ongoing, constant, active intervention.
86. Mark Andrejevic (01:12:05) - Stop this here, stop that there, arrest this person, shoot that person. Right? So one is really, in a sense, asymmetric and hyperactive, and the other is a little bit more symmetrical and static. And you can see the drone logic in there, right? It was the figure of the drone that got me thinking about that: the difference between kind of cold war stasis and drone hot war. With the drones, we can go in, we can preempt, you know, find out who the potential risks are using data profiling, take them out before they do anything. Because the claim, in this logic, is that the opponent is not amenable to the symbolic or representational intervention in the way that the major powers of the cold war were, right, in the kind of active version.
87. Sean Dockray (01:12:55) - Could you talk a little bit about the kind of recursive dimension, or maybe, in other terms, the feedback, of automated intervention? To prevent something from happening, something else happens, and that reshapes the situation. So that the one thing doesn't happen, the system chooses another path, but then that creates a new ground on which, you know, future actions might happen. I guess I was just thinking of that as a kind of recursive element.
88. Mark Andrejevic (01:13:35) - And this is where I think the drone stuff comes in, right, because the somewhat terrifying logic is a rampant acceleration as a feedback effect of what's being described, right? So in the absence of what's made possible through the process of representation, what's substituted for representation is preemption. But preemption has to be, I think, increasingly generalized, right? The presupposition of preemption is a kind of ongoing process of, I don't know, social breakdown, which requires ongoing preemption, and, in the end, total preemption. The drone logic, of course, right? If you go in and you preempt (this is the critique of drone warfare), you're actually engaged in a process of asymmetrical warfare that doesn't even look like warfare. There's no battlefield.
89. Mark Andrejevic (01:14:37) - There are just people who've been, you know, statistically, or for various profiling reasons, considered to be potential future actors, who are taken out. And then the result, of course, is ongoing resistance, which in turn feeds ongoing preemption. And the same might be said once you automate this process of preemption: you kind of presuppose the breakdown of the symbolic, which then seems to feed into: okay, if there is no symbolic, then in a sense all things go, which means that whoever's in the position of authority has to engage in escalating forms of monitoring and surveillance. Ben Anderson, who writes about drones, has a really nice phrase. He says, you know, if risk is everywhere, then surveillance has to be throughout life, without limit. That gets to framelessness, as we discussed earlier. But if you attach the element of preemption to that, it also gets to a kind of preemption throughout life, without limit. And there's a weird death drive in there, right? You know, as your antennae get more sensitive to risk through various sensor apparatuses, the fantasy is that risk can be managed more effectively this way. But higher sensitivity detects more and more risks, which demands more and more preemption. And eventually you come to the conclusion that, you know, life is risk. And how do you preempt that?
90. James Parker (01:16:06) - Elimination. So, I've got a kind of dumb question in mind, which is basically: what's the problem with automation? Or what's the problem with surveillance capitalism, or tech, or whatever synonym for the technological present we want to use? In other words, 'what's the problem' is the dumb question. The reason I'm asking is that a lot of the literature I've been reading recently, you know, Shoshana Zuboff's book, the data colonialism book, most of the things you read, they end up with the problem being autonomy, even if they don't think they're being liberals. The problem is basically that, you know, whether it's preemption, or these recursive interventions in your life where you're in a cybernetic loop with the machinic systems or what have you, or just simply advertising shaping how you're going to vote or what you're going to buy.
91. James Parker (01:17:26) - And so on. They always seem to end up with autonomy as the problem. And I've just got a feeling, a suspicion, politically speaking, that that isn't enough, that we're not going to overcome it that way. That's not enough to, you know, break up Facebook, or to break up the systems that are in place here. And then, at the same time, I think, well, the other main alternative critique is effectively of capital. So you basically say, well, Jeff Bezos has somehow increased his personal wealth by $76 billion since the start of the pandemic. And, you know, I think Apple just reached a $2 trillion valuation. We have insane levels of inequality and corporate power, and it's effectively extractive technologies, and their present paradigm, that have allowed that to happen. But that critique doesn't really seem to get at the epistemic, you know, the real paradigm shifts which do seem to have taken place in what it means to know, and what it means for technical systems to know and to act in the world; forms of power that extend beyond just capital accumulation and corporate power and so on. So I just don't find either of those two critiques that satisfactory. I mean, obviously you could combine them, but I'm just thinking in terms of: how do we frame the problem in such a way that it's going to get some political purchase? And I feel like neither the autonomy critique nor the critique of capital, on their own, are really going to do it.
92. Mark Andrejevic (01:19:30) - Huh. Yeah, it's a big, big problem, and a big question. I mean, I don't know, maybe there's a way to think about those two together; I suppose there is a configuration in which they're really two sides of the same position, if one critiques capital also from that position of autonomy. I think the work that I've been doing, you know, the direction that it points in, is really a concern about the fate of the political, and the fate of sociality, and, probably wrapped up in that, the fate of collectivity. Now, whether or not that's going to have political purchase is always a wager; that's the way politics is. But yeah, I agree with you about the autonomy critique. I mean, again, to me it comes to rest all too heavily on the notion of a kind of hermetic subject, which, as we've discussed earlier, is incoherent, I think, politically, and leads to political incoherence.
93. Mark Andrejevic (01:20:40) - And, you know, the moment that we're in: when I look at the US these days, this is one of the places where I worry about political incoherence, right? The institutions and practices that made possible a kind of, you know, a sense of political action and activity have dissolved, to the point that politics has become, I don't know, something that's inseparable from conspiracy. And as soon as you start to meddle in it, you find yourself caught up in those logics. So what does that mean in concrete terms? I don't know. But one of the things that I've mentioned before that I'm really concerned about, about the deployment of automation in contemporary contexts... maybe I could line up what some elements of the critique of automation would be. Certainly one of them, to me, would be the backgrounding and misrecognition of the social.
94. Mark Andrejevic (01:21:40) - So very often the decision-making processes that come out of automated systems are displacing or replacing processes that, through participation in them, reinforced the sense of the political and the sense of the social. So, just to give again a kind of silly concrete example: there was that guy at MIT whose stuff I like to try out every now and then, Cesar Hidalgo. He's not there anymore, but he was trying to imagine a solution to the political problem of information, which, in various forms of social theory, is nothing new. The idea is that in a democratic society it's just hard to be informed enough about the issues to participate meaningfully, and, you know, there are ongoing debates over what that means. But his solution, unsurprisingly in the current context, was: get a bot that figures out for you what your political commitments and concerns are.
95. Mark Andrejevic (01:22:41) - That bot has the capacity to go through all the available information (well, probably not all, but, you know, whatever it decides is the credible, meaningful information), and then decide for you who you should vote for, and then maybe eventually just vote for you. And that, to me, is misreading what the problem is, right? The problem has to do with what it means to engage politically and to exercise a process of collective judgment. And the idea that a machine could do that for you better than you could yourself completely misses the point. It's the wrong solution to whatever it perceives to be the problem. I don't know; some of the other things that I've thought about when it comes to automation, and this really does get to power imbalances, seem worth
96. Mark Andrejevic (01:23:31) - thinking about, and I'm not really sure what to do with this problem. To the extent that automated systems can generate knowledge that's actionable but not explicable, that complicates a particular paradigm, one which imagines that explanations should be subject to forms of transparency that make them comprehensible. But let's just take it as an assumption, for the moment, that it's actually true in some cases that these systems can generate this type of actionable knowledge. Then you have an asymmetry over who has access to that knowledge, who is able to ask the questions, and who has the apparatus to turn the system towards asking those questions. And that knowledge, if you're going to call it knowledge, in quotes, looks fundamentally non-shareable, in ways that other forms of knowledge are supposedly, or are meant to be, or are understood to be, shareable.
97. Mark Andrejevic (01:24:27) - And what does it mean to have that form of knowledge monopolized by particular groups? I mean, I think in that sense the concern about who has ownership and control over the apparatuses for collecting the data, and querying it, and generating insights from it, seems to me to be a huge political question. I don't know what the answer is, but I think that for that type of knowledge to be concentrated in the hands of a few very powerful, basically unaccountable, commercial organizations is incompatible with, you know, forms of democratic civic life. That might mean that those forms of civic life are on the verge of extinction. But, you know, I'm beholden to them. So, in practical political terms, I think that means challenging the ownership and control over those resources, and also challenging the version of the economy that we've created that runs on them.
98. Mark Andrejevic (01:25:34) - I don't know if that gets to, you know, how you generate... I mean, in a sense, the thing that I keep bumping up against is that the processes I'm talking about need to be contested by the practices that they threaten, obviously, but the resources to contest them are increasingly undermined by their ongoing exercise. I'm not sure of the way out of that. The book ends with... you know, when you write a critical book, you've got that moment at the end, right: 'but there's hope.' Because why would you write a critical book if there were no hope, right? Why would you spend all your time doing that? And it kind of ends with that gesture: there's always hope, right, because there's always history, until there's not. But it was hard for me to conceive of it as anything other than a kind of rebuilding on ruins. You know, at this moment, going through that process of ruin looks kind of inevitable. So, yeah, I don't know.
99. James Parker (01:26:38) - I was going to ask, in that case, the question that I keep coming back to, which is: is the politics of automation, or should the politics of automation, effectively be abolitionist? Of course, in the context of machine listening there are, as you've said, applications and use cases that are valuable and hard to take issue with. But they just seem to be so dwarfed by a seemingly unstoppable logic, which will systematically use those use cases as its wedge, for starters, and just overwhelm them in any case, that I keep finding myself pushing up against a kind of abolitionism. I just can't imagine a world of listening machines, beginning from anywhere close to where we currently are, that isn't a dystopia. And so it seems to me that it might just be best to try to say: let's smash the whole thing. But, I mean, maybe that's just, um...
100. Mark Andrejevic (01:28:02) - Um, yeah. I mean, I guess the framework that I've been thinking about is one in which it's, you know, the relationship between the technology and the social context. And there's no way to meaningfully change the deployment... I shouldn't put it that strongly: it's going to be difficult to meaningfully change the deployment of the technology without significantly transforming the social context.
101. Mark Andrejevic (01:28:32) - So there's got to be a change in the way, you know, power is allocated and controlled and reproduced, if we're going to imagine some kind of change in the way in which these automated technologies service power. And, failing that, it's just hard to see how these technologies don't continue to consolidate and concentrate existing forms of power. Which... I mean, I guess I'm not against trying; it would be interesting to try a series of experiments, if it were possible within the existing social context. One of the things I worry about a lot, because I'm mostly in media studies, is the role that the media play in all of this, in the degradation of the social, the tendencies that I'm thinking about. And here, when I talk about media, I mean it a little more narrowly, in terms of, you know, what we think of as mass media and social media, communications media.
102. Mark Andrejevic (01:29:39) - And, you know, I think one of the real pathologies of the contemporary media system is the way in which automation exacerbates the hyper-commercialization with which it's continuous. So whenever I see, I don't know, the Fairfax press ripping on social media, I think, you know, it's great for them to have a certain kind of safe scapegoat, but commercial media is continuous with social media; they feed on the same logics, and they feed back into each other. But what if you imagined a different model for the media? You know, a kind of public service social media platform, one that didn't rely on the forms of data collection and hyper-targeting, one that didn't algorithmically privilege, you know, engagement and provocation over accuracy and deliberation. I'd be game for trying something like that.
103. Mark Andrejevic (01:30:41) - You know, that would require a kind of wholesale reinvention of the economic model that we use to support our communication systems. But that seems not inconceivable, even within the current political arrangements. I mean, very difficult, nearly impossible, but maybe not inconceivable. It does seem possible, if you reach a point where political dysfunction has galvanized people to respond in ways that require recovering the ability to function politically in meaningful ways, that you could take those types of actions. You know, abolition... I mean, I agree with you. I'm really tempted sometimes just to think that automation is violence, and that therefore, you know, you arrive at an abolitionist stance. But I do have a commitment to the idea that automation can function differently in different social contexts. And abolition just looks impossible as a political stance. Though it looks less impossible than a fundamental transformation of social relations.
104. James Parker (01:32:00) - I think those are...
105. Mark Andrejevic (01:32:01) - Two rocks.
106. Sean Dockray (01:32:04) - There's a Fred Moten quote that, applied to automation, would be something like: it's not about the abolition of automation so much as the abolition of a society that makes automation possible, right? So I think the abolitionist stance is a very sensible one in that sense. But typically the boundaries, the parameters, the horizon of abolition is set on particular problems. Like: this device is listening to us, let's abolish listening devices. And yeah, I really agree that if those are your horizons, they're going to fall hopelessly short. So I'm quite committed to abolition, in a certain sense, as a...
  107. Mark Andrejevic (01:32:52) - I believe. So that makes automation possible.
108. Sean Dockray (01:32:56) - I think that's a perfectly reasonable horizon. But in the interim, you know, I always feel sort of yanked in two directions: on one side, to maintain a certain horizon, one that, honestly, I don't have a huge amount of hope of seeing in my lifetime; and then, alternatively, you know, what do we do in the interim? And those kinds of things would, I guess, take some very non-abolitionist forms, like political engagement, and, you know, arguing for certain regulatory powers, all this kind of stuff. So I often find it really hard to engage in this kind of conversation, just because I feel so bifurcated, you know, from the get-go. But, yeah, I would also agree with what you were saying, Mark, about: if this kind of profit-driven, exploitative society ever does come to the point of abolition, it's not going to be a pretty process. It will take us through some pretty dark and difficult places, collectively, as a world, and some more than others. And, you know, I think, in that sense, some of the work we can do now is building structures to recognize those moments when things are falling apart, to activate a new and better set of relations between each other, a better world.
109. Sean Dockray (01:34:50) - You know, I feel like the people who are most prepared to capitalize on things falling apart are precisely the wrong people. They're the ones who are all jumping into the pandemic with lots and lots of answers. But yeah, there was one thing I wanted to raise, a real huge change of subject: it goes back to the paranoia question. One thing I wanted to talk about was that, throughout the entire project, whenever we're talking to people, whenever I'm thinking about machine listening, I just feel really paranoid when I start having conversations about it. Something about participating in a conversation around machine listening instantly makes me feel like a paranoid subject. Like I'm fantasizing about the overwhelming power of this thing I don't really know and understand, and I'm connecting so many dots between invisible players. I think James can describe it more articulately than that, but I guess I just want to return to the paranoia discussion a little bit, particularly in terms of: how do we even think and talk about machine listening without becoming paranoid lunatics? Or, if we're going to take up that subject position, then what do we do with it? How do we talk about machine listening, I guess?
110. Mark Andrejevic (01:36:14) - Huh. Um, I mean, my experience of talking about these forms of monitoring is that, from relatively early on, I was considered a kind of, you know, paranoiac. Who was it, Hofstadter? Was it 'the paranoid style in American politics'? Early on, a friend at the University of Iowa said, you know, you practice the paranoid style in academic discourse. And, you know, that feeling... I wrote stuff early on that was, to my mind, really dystopian, that really extrapolated from the logics that I was seeing, and for which I felt kind of disciplined in academic contexts: like, well, you're dystopian; you know, there are those tech people who are dystopian or utopian, but it's more complicated than that, and you're really on the dystopian side. When I look back at those things, what I predicted was actually relatively tame compared to what happened.
111. Mark Andrejevic (01:37:16) - So, you know, those moments when I thought, man, should I put this in? This sounds really crazy. And I'd go, okay, I'll roll that one back. And then ten years later it's like: I should have put that in, because, you know, that's what happened. At least if I look back on that relatively short period of time, it's really hard to overestimate the dystopian potential of these technologies. And when you're told that you're overestimating it, very often the people who tell you that are incorrect. So what does that mean for paranoia? The danger is... and this is the one that really freaked me out. Relatively early on, when I was talking about this stuff, I was doing some lectures in Europe, in Slovenia. And the students, after listening to me very nicely (it was a compact series of lectures, like four hours), brought me this series of movies. I can't remember what they're called now; they were pretty big then. And they said: we think you'd be interested in this. And they were like the equivalent of what QAnon is now, this conspiracy theory mashup that just pulled together everything: Illuminati, Rothschilds, you know, 9/11, currency, everything. And so I started to watch it, and then I realized: wait, this is just conspiracy theory mash. And I asked them, why did you give me this? And they said, well, it just sounded like what you were saying. And the line between critique, as I understood it, and conspiracy was completely non-existent for them. I thought I was saying something completely different from what this movie was saying. They thought it was the same stuff. And that inability to distinguish between conspiracy theory and critique looks to me like the impasse of our moment. You know, because I can start to see why they thought that. I imagined that what distinguished what I was doing from conspiracy theory was that my stuff was, you know, evidence-based and therefore refutable, whereas conspiracy theories aren't, you know, refutable in that way.
112. Mark Andrejevic (01:39:34) - But that inability is something that seems to me to be symptomatic of the breakdown of, you know, the kind of institutions for thinking, and evidence-giving, and verification, that we relied on, from a social perspective, to adjudicate between those two things, between conspiracy theory and critique. When those institutions break down, when they're no longer functional, you can't have recourse to something like: well, mine's true and yours is crazy. It just doesn't mean anything. What means something is the institutions, and the practices, and the shared dispositions, that allow you to make those kinds of claims. And when those are gone, the two become indistinguishable. And so I guess what I'm trying to say is that I think that feeling of paranoia you have is less a function of your own subjective disposition than of the conditions under which you're trying to make that argument.
113. James Parker (01:40:40) - Just to give a concrete example of that. I mean, I don't know if it's really a problem of scale, you know, when you say 'conditions' (I can't remember the exact turn of phrase). But you find your way onto the website of a company that does machine listening, and you'd never heard of them before. And it turns out that you can quickly find that they got X million dollars in venture capital funding only six months ago. And then you go on the partners section of their website, and it turns out that there are fifteen other partners, several of which are connected to universities. And you think: I read an article by one of those researchers, who seemed a relatively benign academic, but it turns out... and then one of the companies that they're related to is in fact funded by the Israeli military. And it's not very hard to spend your evenings going down these black holes, these true and real networks of funding, research, you know, capital flows, government contracts to deliver COVID voice detection tools.
114. James Parker (01:42:04) - I mean, that's what happened yesterday, basically. I found an article saying the Mumbai government was going to be able to deliver a COVID voice detection app soon. And I was like: what? How? And then it turned out that it was this US company, and, you know, then I followed the threads, right? That experience is really similar, structurally, to the kind of, almost the gamer experience of QAnon fandom. I went on the internet, I found a link to a thing, and when I got to the thing I was like, oh my God, this thing's crazy. And then I get linked to another thing, and, you know, you're drawing connections. And to me, that's research. But you only have to change a couple of the nodes in the network that I was mapping for myself, and suddenly the question you're following is whether Bill Gates is going to inject nanobots in a vaccine. So it's almost like the structure is partly entailed by, you know, internet and hypertext reality or something, at scale, or by the kind of attempt to draw links between flows of funding and research. It seems like there's...
115. James Parker (01:43:33) - Yeah. Part of my paranoid experience in researching machine listening is an experience of web surfing which I imagine to be really, really similar to what QAnon folks are doing every night.
116. Mark Andrejevic (01:43:48) - Yeah, you point out something which I think is really interesting about the internet, right? One part is the structural dimension, you know, the web structure that you invoke, the kind of chain of connections. But the other is, I don't know if this is right, but: the world is a really messed-up place in terms of power connections, and, you know, folks who we think of as being in one position actually have a whole range of subterranean connections. All of that is true and real. Prior to the internet, it was much harder to extract and find, so we had this kind of potentially useful symbolic fiction, where we could imagine that things were better, and people were better, and structures were better than they were. And, you know, the truth is, they're not. And how do you deal with the reality of that truth, once you see those kinds of relationships between power, money, I don't know, research, implementation, once you trace all of those? If, to some extent, there was a functional, symbolically efficient fiction that allowed us to imagine that we're better than we are, and to behave in ways based on that, the internet breaks that down, and means that you actually have to confront something different. You know, the conspiracy theory response is that it's kind of intractable and ungovernable. And it does look intractable and ungovernable, right?
117. Mark Andrejevic (01:45:32) - Because, you know, you call it a rabbit hole, and how can you possibly cognitively map that set of relations? It's too much. And I suppose that's the concession of the conspiracy theorist, right? If it's too much, then in a sense you can just choose the story that works for you. And that way you manage it; you give yourself a kind of filter through which to make sense of all of these things. But I think that's one of the aspects of the, I don't know, social transformations that seem to be really connected to the media environment: precisely that many of these connections are true, and that many of the disillusioning realities of the world are foregrounded in ways that used to be papered over. Yeah. So, I mean, I'm really committed to the difference between critique and conspiracy theory, but I don't quite know what to do with the fact that the chain of connections that you're talking about is so large as to be probably not fully mappable, or governable.
118. James Parker (01:46:42) - I mean, we have to let you go, but we were speaking to Vladan Joler recently about the Anatomy of an AI System project, where he literally attempted to map, you know, an AI system (this is the one with Kate Crawford), and it looks really similar to the conspiracy theory maps that go around on the internet. And he said, you know, one of the problems is that these systems are pretty much not mappable anymore. Well, of course, they were never mappable, but the technical difficulty of doing it, even for people as well resourced and as smart as those two, it's just not possible. So the mapping exercise then assumes a slightly different kind of role. The mapping is a kind of symbolic thing; it's meant to point you in some directions, but it's not true. I mean, well, there's no such thing as truth anyway. I should probably wrap up now, because I think I'm, I'm barking down, uh... I can't even speak, is the point.
119. Mark Andrejevic (01:47:54) - I don't think we should worry too much about the fact that it's formally, structurally similar to the conspiracy theory stuff, since that's the whole point. Conspiracy theories adopt and appropriate forms of representation that look legitimate, like maps, or like proper institutions; you know, they adopt forms of respectability and all that. So I don't think we should run away from it because it's structurally similar. That's sort of, yeah, that's the point.