Dan McQuillan

Auto-transcribed by reduct.video with minor edits by James Parker

James Parker (00:00:01) - All right, thanks so much for joining us. Then, just to start off, I wondered if maybe you could introduce yourself, however feels right to you, and we'll just sort of go from there.

Dan McQuillan (00:00:13) - Yeah, sure. I'm not quite sure what to say. I mean, I'm a lecturer in the university. Uh, I think my official job title is Lecturer in Creative and Social Computing, but that's just completely made up. And, um, my distant history, which is somewhat relevant, I suppose: I have a science background. I did a PhD in experimental particle physics, but I didn't pursue that into academia; I took some different paths. So immediately after that I worked with people with learning disabilities, people with mental health problems. And then, um, I started to try to combine that kind of thing, you know, with the tech. So I did some projects with refugees and asylum seekers, in the days when Unicode was young, uh, to try and get translated versions of vital survival information for refugees and asylum seekers; that was done with those communities. And, um, you know, I've tried to pursue a line since then, I suppose, of combining politics, or a kind of activism of some kind, with, um, you know, sort of close observation of where things might be about to go in terms of technology and its frameworks and so forth, trying to mingle those things in practical projects.

Dan McQuillan (00:01:19) - And then the more I moved into academia, the more, um, I focused also on trying to articulate that and sort of critique it. And I did a stint in, uh, human rights work, for Amnesty, for a while. Yeah, I dunno. Pretty mixed bag.

James Parker (00:01:34) - Yeah, it says on your, um, on your bio here, you worked in the NHS, too, and on a citizen science project in Kosovo. It's an absolutely amazing biography. And, uh, you know, I don't think I've ever read a biography from somebody whose, you know, forthcoming book is on AI that doesn't mention AI. It strikes me as really, um, so interesting to frame the knowledge that's required for you to produce a critique of AI as not stemming from a background in AI particularly. That seems like a very, very deliberate choice, uh, to me. I wonder, you know, could you say a little bit about that, or what it is that you take from the work that you've done that, uh, carries over to what you're doing now?

Dan McQuillan (00:02:42) - Yeah, you know, until you guys asked that I'd never thought of it, and you have pointed out a very glaring fact about my bio. But anyway, uh, yeah, I mean, like I said, my background is in science originally, you know; that's where I started. And I think I developed a kind of, you know, a self-generated sort of auto-critique of science by being involved in it. You know what I mean? I was a bit like, hmm, I'm not quite sure about some of this. And, uh, I don't know, I mean, I've always had a pretty strong sense of injustice, I guess, and of where it seems to me to occur. And yet at the same time I've had this incredible technical scientific education. And I can see a lot of the time how those things operate in very separated spheres, even though they have massive impacts on each other.

Dan McQuillan (00:03:30) - So I guess, you know, I'm just interested in areas where, you know, it's just where I am, where I'm positioned: I'm hypersensitized to potential social consequences, uh, particularly for people who are already in less powerful positions, or people who are marginalized. It's somehow always been important to me, for various reasons I can sometimes think of, but I'm not sure. But that's my basic interest: I'm hypersensitized towards the social impacts and political consequences of technologies in general. And on the other hand, I can sort of see inside them, you know; I can understand exactly what's going on. So I feel at least like I can, um, both debunk some of the mythology and explore some of the actual consequences, and sometimes think of, you know, how to turn them into something less toxic at particular moments.

Dan McQuillan (00:04:22) - Um, so yeah, I mean, you made me think back about it in a way I never have. I think I came to AI through social media, weirdly, because I was very closely following and teaching around social media after the 2011 uprisings, particularly in the Middle East. And, um, I just started to sink through the surface of the discourse, which was more about, you know, the contesting narratives and the way the platforms assisted those, and into, you know, what was going on under the hood. And I think that was the pathway to AI for me.

James Parker (00:05:05) - Um, maybe now is a good point to ask you what you mean by AI. I mean, um, I don't know, I kind of want to go there, but I also kind of want to ask you what specifically your problem was with experimental particle physics, from a political perspective. I mean, do you have something to say about that, uh, before we get to the AI question? Which obviously we do want to talk about, but I don't want to just sort of clear all of that out of the way, you know.

Dan McQuillan (00:05:32) - I mean, I've probably got too much to say about things in general, so I try and keep the answers fairly short, and you keep prompting me, right, if you want to go further. I mean, you know, I have a critique of science, I would say, probably prior to my critique of AI, or rather somewhere standing behind my critique of AI, because I also think that, you know, scientism, let's say, is, um, one of the strands of ideology that also powers AI's authority in the world. So, um, I guess, you know, I was maybe prepped for AI because I thought a lot about, um, you know, this sort of, how would you say, empirically-based assertion of knowledge through, um, the ability to have insights that were invisible to the ordinary worker or citizen in some way.

Dan McQuillan (00:06:32) - I mean, you know, my background in particle physics: I was motivated by at least two important things. One was really just wanting to get to the bottom of things, and the other thing is I didn't want to get a job. So I did a PhD in that. And, uh, yeah, it's weird. Like, you know, particle physics at that time as well, I was super interested in the fact that it was so bureaucratic and kind of what I would now understand as corporate; I didn't have the words for it then. And, uh, it was just a really amusing experience, you know, to see all these kinds of hierarchies of power and maneuverings and things like that. And at the same time, I was also really freaked out by the way that nobody seemed to be particularly interested, from my point of view, in the most interesting thing, which was the complications that quantum mechanics had surfaced about things like, um, objective reality and causality; what seemed to be, you know, really interesting, sort of mysterious elements that, again, had some kind of, uh, possibly important messages about our epistemologies and relationships, in the sense that it's basically saying, you know, in a way, everything is related and codependent, in a very profound way.

Dan McQuillan (00:07:43) - And there's no interest in that in, you know, practical experimental particle physics. You know, it's very, um, motivated by, uh, perhaps something similar to AI, which is that it's important to get the next result, you know, it's important to get a result that works. What this might mean philosophically, let alone sociologically, is not the primary interest. And that was my primary interest. So yeah, I had a problem with it, and, uh, it had a bit of a problem with me as well. So it was probably best that we parted company at that point.

James Parker (00:08:18) - Were you reading critics of science, like, uh, at the time? Like, I mean, the account that you've given is close to a kind of, you know, social studies of science critique, um, the institutional dimensions as well as the epistemological dimensions. You know, were you reading Latour and Woolgar or any of that? Um, cause obviously that was controversial back then, in the science wars.

Dan McQuillan (00:08:46) - I hadn't a clue. I'd never heard of that stuff. And, uh, I wouldn't even read it now, to be honest, but we could go into that one, or maybe not. But, uh, no, no, I mean, I did my PhD at Imperial, which is very, you know, I mean, the clue's in the name. Um, it was a very hardcore science and technology place, but they had this one little humanities library in the middle, the Huxley library, I think it's called. So I used to go and hide in there, and, you know, I'd read like the entire works of George Orwell and various other sorts of books; basically I'd do anything to get away from it, just to broaden my perspective, I suppose. And I mean, I had an existence that at the same time was, uh, working on my PhD in particle physics, and at the same time getting more involved in grassroots community politics, such as, um, you know, the campaign against the poll tax. I don't know if you've ever heard of that, but that was under Margaret Thatcher. So, you know, I had other, uh, influences on my thinking.

Sean Dockray (00:09:56) - I'm just thinking of one significant difference, it seems, between particle physics and AI, I mean, among many. But one is that, with particle physics, I don't, I'm totally ignorant about

Sean Dockray (00:10:08) - what problems particle physics sets for itself and all this. But, um, I feel like it doesn't aspire or claim to solve such immediately kind of social problems in the same way that AI does. Like, so the claims that it makes for itself, or the way it wants to intervene in kind of our everyday lives, is, um, a little bit different.

Dan McQuillan (00:10:34) - First off, I don't want to make it sound like I'm anti-science or something like that. You know, I'm not; I'm profoundly pro-science, but possibly in a very different way to how a lot of orthodox scientists would consider the discipline of science, um, hence the citizen science experiments and so forth. But I would say, and I don't make a direct parallel at all, I mean, this is back in my personal sort of journey, but I think there are some similarities that would, uh, you know, echo for me. You know, I didn't like the cathedral-like assumption of, um, epistemological superiority that you get even in physics, actually, even compared to the other sciences. I mean, it's really real, you know; like, you ask physicists what they think of, you know, biology or chemistry or anything like that, and it's just sort of applied physics at a low level. You know, it's, um, in a way that goes beyond any actual rational, uh, justification. And of course, I mean, that's part of what drove me to write the thing about machinic Neoplatonism, because the Neoplatonic, uh, urge, if you like, was something that I felt gave some expression to what I felt was present in a lot of the science that I'd encountered. So maybe that's more where the parallel is.

James Parker (00:12:03) - So, um, maybe this is a good point to move on to what you mean by AI, because, um, you know, at different points in your writing, it seems like you could be suggesting a couple of different things. I'm wondering, like, is AI a field of science and technology? Is it a science? Like, you say at one point in one essay of yours that AI purchases its legitimacy from science, right? So in that account, AI itself is not a science. Or is AI, you know, sort of, uh, purely nothing but a marketing term? You know, what's the object, AI, um, that you're sort of studying and critiquing?

Dan McQuillan (00:12:58) - Yeah, well, you know, that's why conversations with you guys are so important, right? Cause you guys are keeping the receipts, you know; it's important to be held to account about the sort of shifting nature of your thinking. But actually, I don't know, I mean, I've got an easy cop-out, right, which is that it's quite a slippery thing, you know? Um, but I have some account, I suppose, of my own position on it, which is that I became interested in those things that were being described as operational AI right then, when I was really starting to think about it a few years ago, which is basically also very similar to what's called AI now, AI in operation, I mean. It was important to me, uh, first off, well, of course, AI is all the things that you said, right?

Dan McQuillan (00:13:45) - AI is hype, you know. AI is also a discipline with its own, uh, you know, well-researched intellectual history, a lot of which, and this is maybe one of the interesting things, is not actually the same as the thing that's called AI now, right? You know, I'm a little bit wary of, sort of, um, you know, uh, what's the historical boundary between different geological eras, the one I'm thinking of where there's the meteorite layer, do you know what I mean? The K-T boundary, I think it is. So, you know, you've got to watch that, because that's maybe a little bit too abstracted, but there is a shift. You know, a lot of AI history was rule-based, you know, heuristic, um, kind of trying to emulate how people think in a conceptual sense.

Dan McQuillan (00:14:39) - Whereas the AI I became interested in, because it was the stuff that was emerging since 2012 particularly, was what, as you know, they call connectionist, which, if it is imitating anything about the human brain, which it isn't really, has some origins in the idea of neural connections. That's why they call them neural networks. So it's a different way of, um, trying to recapitulate thinking or reason or something like that. Uh, but anyway, I mean, I really sort of back-learned that stuff. I was just interested in, like, what is this stuff that's being used right now and developed right now, and in which there is so much excitement, to the extent that, like Sean was saying, people seem to feel themselves in a position of, um, saying that they can solve all these incredibly diverse and generalized questions, you know, from business to poverty to climate change and all these things.

Dan McQuillan (00:15:31) - And I'd already encountered it a bit in terms of the technical frameworks that underlay social media, which is maybe something I was a bit more interested in at that point. And so I just wanted to engage with the actual, right, that's where I'm getting to. I mean, when I'm talking about AI, I am talking about a specific set of technologies, which themselves are constantly shifting. You know, at the time that I have been writing the book, to get it in there, it's been deep learning, uh, primarily, and, you know, um, reinforcement learning. Um, now I would say the action, these large convolutional neural networks, the action is largely shifting to transformer models. You know, it's a constantly moving feast, but it has some commonalities: these are optimization technologies; they are, um, you know, heavily based on, um, vast data collection for training.

Dan McQuillan (00:16:23) - Uh, so they're all forms of machine learning, obviously. And, uh, you know, they use these optimizations as their sort of core driving, um, logic, if you like. And so I'm interested in that, but, you know, my perception would be, and this is where I realize I totally coincide with a lot of things that are very well established in their own field of science and technology studies, which is not something I've ever been part of, um, that to understand these technologies is never separate from anything else that's going on around them. And I'm mainly interested in what they are doing and in what they are claiming to do, and then what actual effects those claims have. So I'm just really interested in what difference it makes in the world. I am therefore interested in maybe some of the hype, inasmuch as it has an impact on that.

Dan McQuillan (00:17:09) - I may be interested in some of the science fiction portrayals, in the sense that I realize that informs the ambitions of the next layer, but I'm not interested in science fiction AI, because, um, yeah, that's not what's actually going on, and most of the time it's a bit of a diversion. But I am very interested in, um, how the technologies I've talked about concretely operate, what their actual operations, computational data operations, are, inasmuch as they have a resonant effect, you know, with other areas of experience, particularly the social and the political. Um, so, you know, I guess, uh, years ago I would have read some Foucault, and, you know, ideas around sort of assemblages and things like that, and Deleuze and Guattari. I was quite into that years ago, so those ideas would have stayed with me. And I guess I would see AI as a kind of assemblage, in a way.

Sean Dockray (00:18:09) - Can I actually just ask a quick question, just a follow-up to this? Cause we've been looking at, James has been reading, a whole bunch of patents lately, and I was just thinking about, like, where do you position the patents in between the actual functioning of AI and that science fiction kind of future AI? Because those seem to function just a tiny bit differently, in that they're written by the companies that are kind of implementing and trying to pursue some of those imaginaries in the technology. Um, so yeah, how do you think, like, what is the purpose of those patents, or how do you think they function?

Dan McQuillan (00:18:49) - Yeah, they are interesting, and I haven't made a study of them, but I have stumbled on a few. And one that I use in my book is one from Airbnb. They had a system, its name temporarily escapes me, but it was essentially to use machine learning to identify undesirable, uh, potential residents. And, um, the language they use in there is amazing, because it's written by the tech team, I guess, or some parts of it, you know, and their way of articulating what they're trying to do is, um, so unreflective; they're unaware of what they're saying, you know. It has about the same level of sort of intellectual sophistication as Putin's denunciation of the Ukrainians as sort of drug-addled Nazis. I mean, it's, you know, we're trying to identify any future residents of Airbnb properties who might be drug-addled Nazis or porn stars, or stuff like this. And what I find interesting about that is, on the one hand, it's, um, a revelation into what the people making these things are

Dan McQuillan (00:20:00) - thinking, and what they think their masters want to hear, you know, what they think the organization wants, unmediated by PR gloss or, um, intellectual defensiveness in some way. Um, but I guess how I would see them, in a way, is as a kind of battle plan. You know, these things are kind of battle plans, and maybe they're battle plans in the same sense as the sort of, um, possibly apocryphal Napoleon quote, you know: battle plans don't necessarily survive first contact with the enemy. But they do state a series of intentions, and most of the time those intentions seem to be to declare war on somebody, in some sense.

James Parker (00:20:40) - Then, you've written about, um, the relationship between AI and fascism quite a few times now, and I think the word fascism is still in the title of your forthcoming book. Um, since we're talking about Putin and Napoleon and battle plans and so on, um, would you like to say something about that relationship, how you understand, um, you know, AI's relation to fascism and sort of similar forces?

Dan McQuillan (00:21:09) - Sure, yeah. I mean, I think, uh, it's quite legit to be concerned about fascism in the current moment, and not just because of events, you know, in Ukraine. I mean, in actual fact, it's very heartening to me that... when I started writing about, um, potential overlaps with fascistic political tendencies, you know, it was a few years ago, and I think it was only really Trump's election in 2016 that sort of catalyzed a level of awareness where I now don't feel like an outsider, you know, when talking about these things. Um, and to the extent where I've had to slightly refine my own sort of position, in a way, which is clearly not saying that AI is fascist in some deterministic sense. I'm not saying technologies are anything in a deterministic sense. But on the other hand, I definitely am trying to raise a warning flag about the synergies between what the AI we know actually does, how it operates, and what composes fascism as a political force, and the fact that fascism is actually rising at the moment and is very dynamic: in many, many different areas, many, many different locations and domains, types of fascistic political tendency are emerging, right?

Dan McQuillan (00:22:38) - So it seems important to say, well, look, you know, there's this, uh, diffuse but extremely concerning political current, and then there's this rising, becoming pervasive, mode of technological ordering of society, and, you know what, these two things have a very unfortunate, uh, potential mutually reinforcing feedback loop. Um, so the way I position it these days is much more about anti-fascism, you know. What I'm interested in is having an anti-fascist approach to AI, to head off the tendencies within AI that, um, you know, could be mutually supported by the way that machine learning operates, the way deep learning operates in particular. I mean, deep learning could be kind of a gift to a fascistic political solutionism, you know, uh, whether it's by an explicitly fascist regime or just tendencies that way, because it's so good at identifying outgroups, and it does so in a way that utterly ignores social structures. You know, it completely erases, uh, causation and sort of structural understanding; it identifies out-groups with great facility, in my reading of it.

Dan McQuillan (00:24:00) - And that's what I tried to write about in the book. You know, one of the sort of asymptotic tendencies of the AI that we've got is the state of exception. That's kind of what it does, in a small way. You know, we were talking a minute ago, um, about Airbnb and their, um, weird but intense approach to trying to exclude undesirable people, as they would see it, from their platform. You know, that's, uh, um, a sort of mini version of the state of exception in some sense: this, um, invisible exclusion of, uh, an outgroup who, in a particular context, will be, um, rendered as having no rights at all. Now, in Airbnb, basically nobody's going to die of it, probably, but that same operational facility is something that I would identify as actually the core of the real AI, the AI we've actually got at the moment.

Dan McQuillan (00:25:01) - Statistical systems of statistical optimization based on large-scale learning from data. And we do see those operating in, you know, potentially necropolitical ways, and I can talk about what that means to me, uh, but we do see that already operating in those ways. Therefore it seems very urgent, it seems really urgent, to have an anti-fascist approach to artificial intelligence. But also because my reading of anti-fascism is those two things, right? One is seeing what is potentially coming down the road and acting early to intervene, but also doing that to create a different kind of space, right? Rather than the state of exception, um, you know, a space of inclusion. I mean, the idea of anti-fascism to me is about creating a space for actually emancipatory activity. So, you know, I want to switch the question to: what do we need to do? What do we need to do to create, um, you know, positive ways of being together in society, positive ways of allocating resources and ordering the world? But we have to create the space for that, and any space for that, in my reading at the moment, is under intense threat from so many directions it's almost hard to, um, enumerate. And since my attention is on the frameworks of, uh, technological infrastructure, my concern is how they might play into that process.

James Parker (00:26:36) - Yeah. I mean, there's a few different things that come to mind for me. So the first one, it's just a bit crude, which is why I was hesitating: it's, like, you know, the anti-capitalism, anti-fascism, you know, binary. Um, it's just so interesting. I haven't seen other people write about AI as consistently in the language of anti-fascism, whereas I have seen a lot of people writing in the language of anti-capitalism, and there was all the critique of Shoshana Zuboff, you know, that hers is fundamentally a kind of recuperative, kind of capitalist project, and so on and so on. So I'm just thinking a little bit about the capitalism fascism relationship, but maybe that's a whole rabbit hole. But then I'm also thinking about, you know, um, I'd love to talk about, you know, the normative project. And it seems like the normative project for you, about, you know, recuperating AI or whatever, that's the part of your work that comes out of your experience in activism and, uh, you know, solidarity. You say in one essay, an anti-fascist AI is a project based on solidarity, mutual aid and collective care.

James Parker (00:27:51) - And, you know, you propose, um, people's councils and things as, uh, you know, a possible way of building an anti-fascist AI. So, I mean, those are two, very well, they're related but different directions that the conversation could go in, and I don't know if you want to take either one of them.

Dan McQuillan (00:28:14) - I’ll take them both.

James Parker (00:28:16) - Okay.

Dan McQuillan (00:28:18) - Uh, because I do think they are connected, and not the same, as you say. Um, I mean, I don't think that capitalism is fascist, you know, and so it's important to distinguish. But, um, I'm not claiming to come at this... in fact, I'm very suspicious of sort of grand narrative theories that claim to explain everything. But I do personally feel that fascistic tendencies, let's say, never went away, have always been present as a sort of dark potentiality at the borders of, you know, our current social order, even though our main problem, if you like, on a day-to-day basis has been, you know, the neoliberal version of capitalism, which is, um, so murderously harmful in its own right. You know, we don't need to seek other enemies, in some sense. But I think it's worth understanding... you know, I mean, one of my historical engagements with activism was anti-fascism, because, again, it was something that I, I don't know, found to be important and was kind of sensitized to, uh, you know, even though at that time, back in the eighties and nineties, it really was, uh, so much smaller.

Dan McQuillan (00:29:30) - You know, it really was a marginal, um, political concern for those people who were into it. And now it's literally everywhere, and in so many different formations which overlap with AI; I'll come back to that. But I think it's important to understand, you know, to think about fascism as an actual political movement, you know. It has a few components, and this is why I'm trying to connect it to the capitalism thing, or rather to the contestation of AI's role in a sort of neoliberal capitalism. You know, one is that, um,

Dan McQuillan (00:30:03) - fascism as an ideology, I think you can see many of the elements of it coming to the fore at the moment. And I like Griffin's... I mean, I'm not an uncritical fan, but I like this guy Roger Griffin's formulation, uh, palingenetic ultranationalism. Palingenetic just means it's kind of, you know, looking back to a golden past, basically, and ultranationalism is ethno-nationalism, really. So you'd say that the sort of ideological content of an actual fascist movement would have these elements of, um, you know, a vigilant call to action to reclaim some kind of lost golden age that's been polluted in some way. And I mean, this seems to me to be very prevalent in a lot of political discourses, whether it's, uh, you know, Brexit or Trump or Bolsonaro or Orbán or anything; you know, these really are very prevalent at the moment.

Dan McQuillan (00:31:08) - You know, they cut across things that we, on the other side, would be concerned about addressing, like racism, patriarchy; these movements embrace those things as foundational principles to be advanced. So there's that fascist ideology. But, um, I think people who study historical fascism would say, you know, the time when fascism becomes, um, let's say attractive as a solution to a large number of people... I mean, I think there's always going to be, like, a minority of sociopaths who find the idea attractive for their own reasons, but when it becomes attractive as a potential larger, um, political project is, you know, at a time of crisis, or a time of overlapping crises. Um, and again, so here we are: we've had the financial crash, we've got the climate crisis, and we've got, you know, multiple other crises in between. At the time of speaking,

Dan McQuillan (00:32:06) - we're talking about, you know, a major land war in Europe, and just this morning, you know, talk of nuclear status, you know, Russian forces on alert. I mean, we are in what seems to be a period of multiple overlapping crises, and that should be of concern. I mean, the stability of neoliberalism was toxic, but crises are their own problem, because they make it such that the offering of the status quo is seen to be not solving our problems, however we define "our", and that's also part of the issue, and therefore we might reach for something more extreme. So you've got that: you've got the ideology, as it is, what it claims to do, um, which does speak at a very archetypal level to people, I think. And then you've got the actual crises in the world.

Dan McQuillan (00:32:57) - And then you've got the sort of volitional aspect of it, where people in power in some way, you know, some kind of elites, at some point decide that things really are coming apart and they can't basically blag things through the status quo anymore. And some element of them, you know, forms an alliance with that otherwise anti-systemic fascist political movement, which in its own view of itself would be revolutionary. You know, um, you get that alliance forming, where essentially then the state is captured for this fascist project. Now that's fascism, right? And, to your question: I think capitalism is the overarching framework which causes most of the harm in the world that I'm aware of, um, but I see indications of all of those things about fascism that I've mentioned all over the place. I don't know about you; I do. And that makes me deeply concerned. And I think it's worth naming the anti-fascist project as early as possible, because I think it's important for people to understand that whatever they're doing, right, whether it's, uh, you know, music production or, you know, sociological research or, um, childcare, all of these things to me can be done in an anti-fascist way, and now more than ever really need to be. Um, come on, I've rambled on so much about fascism. What was your second point again?

James Parker (00:34:43) - Oh, it's about, um, you know, people's councils and solidarity and mutual aid and care as well, just because it seems so important, uh, in your work. It's expressly not a merely critical project; it's got a very strong normative, uh, angle, and insofar as it does, it seems to be very oriented towards grassroots, um, critical work in relation to AI. So I guess I'd just love to know a bit about where that comes from and how you imagine that playing out in relation to AI.

Dan McQuillan (00:35:24) - Yeah. I mean, I wouldn't claim to have a fully worked-out vision of how it is going to play out, and that's what's interesting about these kinds of conversations; I'd like to hear any thoughts you guys might have about that. I mean, I can set out my stall to some extent, which is that, um, I guess one of the common political elements in, you know, my, um, I don't know how to describe it, political philosophy or something like that, or ethical epistemological framework or whatever, would be, um, a skepticism about representationalism in general. And, um, you know, again, this is one of the areas that for me coincides with, um, say, social political concerns and technical concerns. You know, I think, um, AI, actually, connectionist AI, deep learning, statistical optimization, you know, is actually a representational project, um, in its own way.

Dan McQuillan (00:36:23) - But it's one that, you know, foregrounds an absolute opacity in its own representations, which makes it fascinating. You know, it's quite different to many other forms of modeling, which at least attempt to be comprehensible; that's their purpose. This one is operational modeling, which, um, has no care for the idea that anyone might... it's not set up to be comprehensible to anyone other than itself, if you like, the machine operating with this model. So machine learning, or deep learning, let's say, has an interesting aspect in its own right, you know, that echoes up those layers of what it is to model something, what it is to represent something. Um, but at the same time, it also seems to, you know, interestingly for me, pressure, um, existing frameworks of, um, I wouldn't say social order, but at least, you know, um, even stability seems like a very poor word for it, but let's say just the status quo.

Dan McQuillan (00:37:30) - It seems to pressure it, you know, exactly at fracture points where what I would see as its own representational shortcomings are at their most inadequate, uh, or at their most, um, harmful. So I'm talking about things like, uh, we could get into this probably, James, a bit, but, you know, the failure of law and regulation to deal with, um, the kind of, uh, problems that AI has already led to; um, maybe, more broadly, you know, the idea that, uh, social interest is best represented through, um, bureaucratic institutional structures of the kind that we're familiar with. And so, for me, there's a sort of resonance between, um, the problematic nature of deep learning representations and the problematic nature of, um, our status quo as a system that instantiates certain kinds of institutional arrangement as, um, representationally fair and, uh, sufficient, or even efficient, whereas in my perception they have always been responsible for quite a lot of unnecessary and callous harm. And now these things are feeding off each other; again, they're self-reinforcing, in feedback, so that what I would have always felt was the, um, you know, unconsciously harmful nature of most bureaucratic operations is now being enhanced by, amplified by, is being put on steroids by, the similar tendencies within these computational technologies.

Dan McQuillan (00:39:12) - So the positive project would be something that is exactly other to that, you know. So I tried to, again, identify within the entrails of AI, you know, what it is that it's doing that resonates with these undesirable, uh, modes of social relation, and then flip that, you know. And I guess, um, that's not claiming to be an answer; it's just trying to propose a starting point, you know. Um, because I'm not a futurist or anything like that, I don't even believe in that, uh, but I do have absolutely clear areas of allegiance towards things like, as you say, grassroots politics and autonomy, you know, and, um, uh, the rights of ordinary people to

Dan McQuillan (00:40:00) - determine the conditions of their own, you know, their own reproduction, really, in the sense of social and self-reproduction. I do have a commitment to that. And I do also have some exposure to political history that has experimented with modes of doing that in the past, um, particularly in times of crisis, you know: workers' councils, people's councils, um, general assemblies. You know, it's all very much of a kind, and they seem to emerge quite spontaneously, uh, you know, in the face of a lot of the same kinds of political, social, uh, horrors that we're currently experiencing. And, uh, so I wanted to look at those and think, well, what is it about those kinds of formulations that acts as an antidote, potentially, and, this is like a grand claim, but as a kind of inversion of the toxicity that I'm positioning myself against, that mutually reinforcing toxicity, you know, of institutional power and computational power. And so I try to read into, or read out of, um, structures like workers' and people's councils, you know, exactly what it might be about the qualities of those things that might make them, um, constructive of, you know, both a different society and a different subjectivity.

James Parker (00:41:20) - I really want to dig into the details, but I'm conscious also that, um, I mean, one of the reasons that we reached out to you, apart from the fact that your work is just fantastic on AI in general, is this sort of amazing piece that you wrote quite a few years ago now, in 2018, or it was published in 2018, on voice diagnostics and the kind of turn to automation in voice diagnostics. We've seen this... like, I mean, I noticed one of the companies that you talked about, Sonde Health. I don't know if you're aware, but they recently, uh, purported to be able to do COVID, uh, voice diagnostics, and they'd been sort of trying to roll that out, and that's been sort of a big thing, a big sub-current, sort of, uh, you know, um, throughout the pandemic. But already in 2018 you were writing about voice diagnostics and automation. And this is a really, um, amazing and, sort of, I hope this doesn't come off wrong, but sort of a strange essay, you know. It's written in stanzas, uh, you know, it's got a kind of a poetic sensibility.

James Parker (00:42:34) - So yeah, I'd sort of love, on one level, to turn the conversation to voice and voice diagnostics, which is obviously related to your experience in, um, both, um, physical and mental health work that you've done. But I kind of wonder if it's possible to have that conversation and then maybe, like, think through how thinking, you know, through grassroots politics, or something like a people's council, I know I'm asking you to do this on the fly, might refract through that. Like, just trying to think about what it would mean to kind of constitute a community, or institute a community, that was working against, or somehow within and against, automated voice diagnostic technologies.

Dan McQuillan (00:43:24) - Yeah. I could name that thing for you straight away.

James Parker (00:43:28) - Go for it.

Dan McQuillan (00:43:29) - Yeah, Hackney Patients Council. I was, uh, I was a founder member, but I mean, it wasn't my idea. I was working as a mental health advocate, um, for people who were detained under the Mental Health Act in Hackney psychiatric hospital, which was one of those, um, crazy bits of architecture, you know; like, I think it was an old, uh, some kind of disease hospital of the Victorian kind, with, like, these high towers, and, uh, it looked like everyone's sort of nightmarish idea of, you know, a Gothic psychiatric institution. And I was working there as a mental health advocate, volunteering. It wasn't me but the guy who actually organized this, and some of the, um, more activist survivors, as they'd probably call themselves, you know, and we constituted a Hackney Patients Council. And so, yeah, I mean, that was an actual instantiation, and also a kind of, um, very early learning experience for me, uh, you know, of the potential of this mode of operation. And it was there to contest, as we can maybe segue, the same kinds of things that I was trying to contest by writing that essay in relation to AI and machine listening. So, um, yeah, we can talk about that. Maybe we can talk about the essay, and then we can talk about the way that might inform, as you say, some of the more, um, proactive and generalized

Dan McQuillan (00:45:00) - you know, modes of action to interrupt that kind of tyranny across the board. I don't know, I was gonna say, like, I mean, uh, you know, yeah, I'm glad you liked the essay, you know, uh, or the piece of writing, whatever it is. I mean, they insisted on describing it as a poem when they published it in openDemocracy, which, I didn't think of it that way, but I'm quite happy, you know, in the sense that it just happens to be the way I think. You know, uh, my best sort of intellectual buddy, uh, is a guy called Cliff Ashcroft, who is a poet, you know? And I find that way of thinking... I guess I don't even have the terms to articulate why, but I feel the sort of, uh, non-dualistic and fully immersed experiential nature of poetic expression, and its sort of condensation of realities that are difficult to, uh, sort of, um, delineate or articulate, is important. Anyway, I mean, the reason it's written like that is because that's how I write speaker notes. That's how I write for myself to speak.

James Parker (00:46:09) - I did wonder, cause I've seen you post those on your blog often. And I mean, yeah, like, uh, could you say a little bit more about why you write that way? I mean, I know this is a sort of a digression, but I feel like on some level it's not, because it's to do with methods of thinking and working and acting against, uh, you know, a form of, uh, knowledge production that would have absolutely no space for the poetic, or no interest in aphorism, you know. Uh, so it does seem relevant to me.

Dan McQuillan (00:46:43) - Yeah, it is. It is all those things. I suspect you can articulate it better than I can. I mean, you know, uh, do you remember... I haven't thought about this for years, but you remember this guy who used to write, cause he hasn't been around for a hundred years, under the name Novalis? You know, Novalis. Ah, yeah, he's amazing. He wrote, uh, amazing poetry, and a kind of, um, cosmology, I suppose, and at the same time I think he was working as a civil engineer, something like that. I'll have to go back and look at him myself, cause he was a source of inspiration years ago. But, you know, it's, I don't know. I mean, the ultimate mobilizer for me isn't the technical knowledge.

Dan McQuillan (00:47:27) - I'm a bit of a nerd, obviously, uh, but what I actually feel is things like pain and anger and love and those kinds of things. I mean, those are my mobilizing urges, and I find that I don't want to dry those out and lay them in the sun in some neat way. I can respect people who write stuff that's so utterly clear and transparent and logically structured that, you know, you instantly understand it, if you have been lucky enough to have the educational background to get you through the door in the first place. You know, I do respect that stuff, but that's not my way. My way is, um, trying to articulate, um, how these things, uh, you know, move me, in a way, touch me, and to try to move things back, you know?

Dan McQuillan (00:48:18) - So I guess I would speak in that way because I don't just want people to hear what I'm saying, to some extent, you know, and I have no delusions that I achieve this in any way, but I would love people to sometimes feel what I'm feeling when I look at these, um, technical systems and modes of operation, you know, these computational calculations and the calculative rationality that sits on top of them, and what it means to people, what it means in the end to, you know, people who have no say in these things, you know? Uh, so I guess that's why I cleave to that. I suspect, as I say, that you guys could probably articulate what that is doing as well as, if not better than, I can, but that's what I do. And that's how... yeah.

James Parker (00:49:00) - Do you remember what first made you angry about voice diagnostics and mental health, you know, in Hackney? Like, cause I get into a rage constantly with these kinds of things, um, you know, just like: oh yeah, we're just, uh, doing a voice diagnostic tool; all you need to do is, like, surveil yourself constantly, um, for the rest of your life, and we'll tell you when you're depressed; and yeah, clearly that's got biomarkers, and we'll just sort of roll this out and everything will be fine. And it makes me furious. And I just wondered if there's a, um, if there's an originary moment for you that sort of sparked you down this particular rabbit hole, or if there's a story that you have.

Dan McQuillan (00:49:51) - Do you know, I can't remember which app or, or sort of technical claim I came across first. I mean, I was already...

Dan McQuillan (00:50:01) - It's the same as everything else is for me, anyway: I was sort of pre-sensitized to it, I suppose, through my own, um, you know, rather, um, mixed set of life experiences. You know, I was sensitized to that because I had already, you know, worked as a mental health advocate. Okay, so this is one thing: in the same way that I try to call attention to the potential of some of these technical frameworks, it's because that is, you know, that is the condensation of the worst things that can happen. And I think if you want to see the exercise of arbitrary authority, you know, over the, um, absolute conditions of people's lives, you know, arranged into an interpersonal relationship, you only have to look at psychiatry. You know, and I'm not trying to condemn all psychiatrists; I've met some really good, committed psychiatrists.

Dan McQuillan (00:51:02) - I mean, a radical minority, you have to say. But, uh, you know, um... so probably I'm going to get into trouble for this, uh, but I don't really like psychiatry as a discipline. I don't respect it in that way. Um, and, uh, but I have plenty of encounters with it. Not personally: I've never had a psychiatrist, I've never been sectioned, I've never had a mental health problem that has risen to the level where I've been diagnosed with anything like that. Um, but I have, as a mental health advocate working with people who have those, uh, challenges, encountered psychiatry, and it is absolute authority. Because under the Mental Health Act in the UK, you know, it has absolute authority over your liberty, uh, in an absolute physical sense: you can be detained, for years perhaps, in some circumstances. But also because it has this maybe more, um, invasive aspect, because it claims authority over your thoughts, in some sense; you know, it claims authority over your, um, your own sense of your own existence.

Dan McQuillan (00:52:08) - And I'm not saying that, you know, mental health problems are just a misunderstanding by the psychiatric establishment, far from it. You know, I have known many, many people, lived with people, with profound mental health problems, including schizophrenia and things like that. So I don't underestimate mental ill health, uh, as a destabilization of personhood. Um, but what I do know is that people who have experienced mental health problems, particularly in the periods between experiencing intense, uh, you know, uh, disruption, are quite capable of being reflective about their own conditions and of seeking, um, appropriate support for that. And, um, there are experiments in mental health, you know, from the sixties onwards particularly, as you probably know, that tried to provide that, and those things exist again in a tiny sliver of services right now. But the dominant, um, approach to mental health has so much in common,

Dan McQuillan (00:53:07) - and in fact massively overlaps, with the sort of neoliberal approach to society, that, um, yeah, those things seem to sort of cleave together on a sort of psychoanalytical level, almost, um, you know, almost become one another. So, yeah, I just was very sensitized to the idea of arbitrary authority. I was already sensitized to the idea that somebody would use what are, to me, extremely crude, um, mechanistic, reductive, uh, indicators of someone's state of being, and elevate those, again, you know, it comes back to the question of authority, elevate those to a state of authority where they would feel able to, you know, diagnose someone, to label someone, in a way that has real, um, concrete life consequences, uh, in which they may have radically reduced or zero say for themselves. Um, or even those around them; they have no say in that as well.

Dan McQuillan (00:54:06) - So I don't know if that's at all articulating it, but I was ready to see it. You know, I was ready to see this kind of, um, authoritarian streak wherever it emerges. And lo and behold, I stumbled on one of those apps, you know, as a result of starting to look into machine learning. Because machine learning has this kind of... it's like a machine gun of solutionism, you know; it's spraying the landscape with silver bullets, claiming to solve all sorts of problems. And really interesting, and this is something that maybe none of us have explored quite enough, is how quickly it seems to become entangled or associated with the most profound problems, right? It's no surprise, I guess, that it's the things that really do seem to stump us socially or politically or economically, you know, absolutely unsolvable problems like, you know, mental health problems or the issues of child protection or child welfare. You know, you might think a rational, um, uh, how would you say, a rational rollout of a new technology might cautiously start with things that are reasonably well determined, you know, reasonably deterministic, and see how well it works there, and, you know, sort of gradually roll it out to more, um, you know, more problematic sort of wicked problems. But in actual fact they go straight there.

Dan McQuillan (00:55:26) - They go straight for the jugular, straight for the most problematic. And, you know, again, you guys research these technologies, you know more about them than I do, but I would say, like most of AI to me, it's like: yeah, some of it is quite clever, a lot of it can be a lot of fun, and I think these are great creative technologies, but inside, what they're doing is just a trick. There's not that much to it. You know, it's really kind of crude. Okay, it's computationally sophisticated, but conceptually quite crude, actually. And the idea of taking something so, um, mechanically reductive, and essentially quite simplistic, and then claiming to be able to sort of wave that as a magic wand over something as profound as, uh, serious mental health problems, you know, in a way that would, um, alter people's lives, seems to me, um, you know, like a sort of a military coup of the mind, you know. And I just, yeah, I just wanted to say something about it.

James Parker (00:56:29) - Can I just, um, give, uh, an extremely clear example of that phenomenon, that crudeness, um, the coup of the mind, as you put it. Because one of the companies that you mentioned in the piece, that I hadn't come across before, is called HealthRhythms. And this is a company that was co-founded by one of the people who produced the DSM-5. And, you know, the politics of the production of the DSM-5 is something that's been written about; it's extremely controversial and so on. And so HealthRhythms, uh, has a feature in one of its apps that will wake up, and this is a quote, by the way, when an audio stream is detected, to measure the amount of human voice versus other noise; it then calculates a log of sociability that can indicate the social withdrawal often seen in depression and anxiety.

James Parker (00:57:28) - So this is effectively the microphone on your smartphone apparently working out how much you're hanging out with other people, based on the audio input; it must be listening constantly, uh, otherwise I don't... yeah. And it's producing this new metric of sociability from that sound, and then they're using that as a proxy for depression and anxiety. I mean, I don't know, I sort of find it... my mind boggles that somebody could think that that was a good idea. And then to think that that is, you know, being rolled out under conditions of austerity, as you point out in the article, as an alternative to other methods, as a more efficient, cheaper, um, uh, more opaque, uh, form of diagnosis to which real consequences attach. I mean, as you say, it's just incredibly crude, and it's amazing that people seem to get away with this level of crudeness so consistently. They put it on the website. I mean, it's a bizarre day job, to spend all of my time surfing through the websites of all these people who are just saying what they're doing

James Parker (00:59:01) - Out loud. They get venture capital for it. And as you say, it’s so crude. It’s something to do with the kind of mythology and the, yeah, I mean, I suppose not to return to the sci-fi thing, but the faith in AI that belies the crudeness.
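To make the crudeness being discussed here concrete, the following is a minimal sketch, in Python, of how a ‘voice versus other noise’ sociability proxy of the general kind James quotes above could be built. It is purely illustrative: every function name and threshold below is invented for the sketch, and it bears no relation to HealthRhythms’ actual code; it assumes audio arrives as normalized sample arrays.

```python
# Illustrative sketch only: a crude "sociability" proxy of the kind described
# above. NOT HealthRhythms' actual method; all names and thresholds are
# arbitrary assumptions for illustration.
import numpy as np

def voiced_fraction(audio: np.ndarray, sr: int = 16000,
                    frame_ms: int = 30) -> float:
    """Fraction of frames whose energy sits mostly in the rough speech band
    (300-3000 Hz): a very blunt stand-in for 'human voice versus other noise'.
    Assumes samples normalized to [-1, 1]."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    voiced = 0
    for i in range(n_frames):
        frame = audio[i * frame_len:(i + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        band = spectrum[(freqs >= 300) & (freqs <= 3000)].sum()
        total = spectrum.sum() + 1e-12
        # Call a frame "voiced" if most of its energy is in the speech band
        # and it is not near-silent; both thresholds are arbitrary guesses.
        if band / total > 0.6 and total > 1e-4:
            voiced += 1
    return voiced / max(n_frames, 1)

def sociability_log(daily_clips: list) -> float:
    """Average voiced fraction across a day's clips: the 'log of sociability'."""
    return float(np.mean([voiced_fraction(clip) for clip in daily_clips]))

def flag_withdrawal(history: list, today: float) -> bool:
    """Flag 'social withdrawal' if today falls well below the personal
    baseline. The leap from this number to 'depression and anxiety' is
    exactly the crude inference being criticized in the conversation."""
    baseline = float(np.mean(history))
    return today < 0.5 * baseline
```

The point of the sketch is how little is inside such a pipeline: a band-energy heuristic, an average, and a threshold, onto which a diagnostic claim is then projected.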

Dan McQuillan (00:59:21) - But I think we should. Yeah, absolutely. And I’m sorry for your troubles, as they say, if you spend too much time reading that stuff. Yeah. I mean, I was thinking as you were talking, you know, about how much we should take this stuff seriously. There’s an ongoing industrial dispute in the UK around universities and so on and so forth, and one of the things I would say about my colleagues, who are really great and are often very articulate and very politicized in important ways, much more than I am, but...

Dan McQuillan (01:00:00) - As a broader body, I would say academics are, and I don’t know what you would think about this statement, perhaps a bit too prone to rationality, in the sense of believing that people generally operate according to rational principles, perhaps believing that people would be interested in listening to rational explanations of things. And that’s not my perception of how the world works at all. So the reason I think we should take these deranged experiments in a sort of psychiatric tyranny seriously is because it’s not just that space, which in itself has always been some kind of zone of exception. The labeling of somebody as insane or mad or whatever term is used has always been an excuse for removing them from the body politic and, you know, stripping them of their civic existence to some extent, and subjecting them to whatever happens to be the, you know, remedy of the day.

Dan McQuillan (01:00:57) - But that’s really just the canary in the coal mine, you know; as broader crises rupture social structures, those ideas of enforced ordering are just going to be much more widely applied. And again, that’s my concern with AI, what it provides: as I said before, it’s very generalizable, just in the wrong way. It’s a very generalizable technology that can apply similarly unfathomable or unfounded forms of ordering across all sorts of areas of social interaction. Yeah, go ahead.

Sean Dockray (01:01:39) - I was just thinking, I mean, as you’re speaking: even if it can’t apply generally, even if it doesn’t work, you know, your description is just amazing, this machine gun of silver bullets. That gun is constantly spraying, you know, so there’s this general belief that it will work, even if it just isn’t generalizable. And there is, just to refer to one of the chapters in your book, there is collateral damage. Every time one of those silver bullets strikes some situation, there are new problems that are created, which in turn create, you know, more and more.

Dan McQuillan (01:02:18) - Yeah.

Dan McQuillan (01:02:19) - Yeah, absolutely, I a hundred percent agree. No, I mean, when I say it’s generalizable, I really mean exactly what you were saying: it’s not generalizable because it works. It’s generalizable in that it doesn’t work, but it does work in another sense, right? It’s a technology of control, whatever else is going on in society in general. And I do think that, unfortunately, one of the features of the social systems that are prevalent in most places is the desire of those who believe themselves to be in power to stay in power, in a very simple sense, for all sorts of complicated reasons. And as things fragment, the more they will reach for whatever is to hand, you know, in Heidegger’s sense. And this is very much to hand for them, and that’s why, I guess, we’re paying attention to this stuff.

Dan McQuillan (01:03:17) - You guys have your particular focus as well, but I guess why we’re paying attention to this stuff is because it’s to hand and it is being reached for. In actual fact, as I was thinking about coming to talk to you, a couple of things came together in my head, because I do want to try to come back to what James was asking about, specifically machine listening, as this is what you guys focus on. And I was just thinking about the phrase ‘listening like a state’. You see the James C. Scott text used a lot now; I came across it a few years ago, this idea of seeing like a state, and I used it in, I think, one or two papers a few years ago. I haven’t written a paper for a few years, I think.

Dan McQuillan (01:04:03) - And, you know, it seemed to me to articulate something important about what AI is doing. It is seeing like a state, and it’s also listening like a state. The problem with the generalizability is that it’s just, I don’t know if there’s a verb here, but it’s like synaesthetising like a state: it’s operating across all spectrums and sort of bringing it all together, but into a rather singular worldview, like a sort of, you know, a world drone. And it’s very unfortunate.

James Parker (01:04:38) - So let’s say you’re in Hackney. I can’t remember the exact name of the institution that you had. Can we...

Dan McQuillan (01:04:47) - The Patients Council?

James Parker (01:04:49) - Yeah. And you encounter this app that wants to...

James Parker (01:05:00) - By means of some neural network or something, determine your sociability, and then use that as a proxy for depression and anxiety. What is the Patients Council’s kind of response to that going to be? How would that work? I don’t mean exactly, but, I mean, I’m thinking, first off, they’re going to say no. So a kind of refusal, a politics of refusal, is just going to be a huge part of a people’s council: the ability to say no to AI, to say, no, we want other forms of care. That’s going to be a huge part of it. But you’re also quite committed, it seems, to some kind of détourning of AI or machine learning in some instances, in that you want to hold open the possibility that an organization like this might be able to craft something, you know? Because you say you don’t think deterministically. So could you work through some of those possibilities, or imagine some of those possibilities?

Dan McQuillan (01:06:11) - Yes. Well, I don’t know, I’ll try. I think, yeah, that’s an interesting thought experiment, imagining back to those times. And I absolutely agree with you, and it’s well named, you know. A politics of refusal, I don’t know if I’ve ever used that phrase, but definitely that is something I entirely identify with as an important starting point. And to me, I guess, anti-fascism is itself a particular format of a politics of refusal. But I’ve also been inspired by reading about Latin American social movements, who I understand from a pretty great distance, you know, who explicitly identified themselves as having a politics of refusal, partly because they’d been under military dictatorships and horrible neoliberal experiments for so long. So if it was Hackney Patients Council, absolutely.

Dan McQuillan (01:07:02) - It would, you know. I imagine that if it was that group of people, there, at that time, and this technology came along, it would be seen as of an ilk with the ECT machine, which was still extant at that time; ECT, shock therapy, was still being used in Hackney psychiatric hospital. And one of the early members of the council had form from a couple of years earlier, where I think two of them broke into the ECT unit and chucked the machine out of the window. So, you know, that might link to a conversation about Luddism, which I notice is something else that we have mutually mentioned as a sort of potential response. So for something like an app, I mean, it’s kind of interesting.

Dan McQuillan (01:07:47) - I did think about chucking an app out of the window, but you’d just be chucking your own phone out of the window, which is maybe not quite what we’re trying to achieve. I think the Patients Council was there first off for mutual support and care. I can still feel it now, you know. When I used to go to these, and it wasn’t just the Patients Council, we used to have poetry readings, funnily enough, there’s a thing called Survivors’ Poetry. And those evenings, when people would just get up and say whatever it is they wanted to say, mostly about their lives and their experiences, what I found utterly amazing about them, way more than any leftist group or anything else that I might’ve been passing through, was this utter sense of mutual acceptance and care. Everybody in there was weird in some way, you know; even in between maybe having a sort of episode, most people would be what you would sort of identify as quirky personalities in some kind of way.

Dan McQuillan (01:08:46) - And I include myself in that, you know. But it was just such a brilliant environment, because people came together on the assumption that they were there for each other and that being weird was okay; that wasn’t at all a problem. And what we were there to do was to try to understand each other and try to understand the situations that people were in. So the first thing a patients council would be doing on encountering an app like that would be to understand, with each other, what it is that is so alarming about a development like that. People wouldn’t need to think too long about it, obviously. Then to care for each other, in the sense of trying to figure out what the consequences might be in the first instance, and what activities of care and support that might need. And then to get to that bit about the politics of refusal. And, you know, obviously the most basic unit of this, and this is where I think it’s quite important with the politics of AI in general, is to understand that refusal is not sufficiently empowered by good critique. Good critique is important, but

Dan McQuillan (01:10:02) - You need collective action. These things are, in their own ways, a force in the world, and you need to have some kind of counterforce. And what’s available to most people, who are not, obviously, the owners of capital infrastructure or whatever else, is themselves and each other, you know, the collectivity. So I think the Patients Council was an attempt to come together as a collective body, as people with wider reach into that community of survivors and their families and other community organizations, to mobilize. So they would, I think, have said: okay, this is being introduced, or possibly forced on people. You must have this app as a condition of being released back into the community; that would be absolutely typical, I imagine. And if you don’t use it properly three times, if we don’t get three data points from you every hour, we’re going to come and detain you again

Dan McQuillan (01:11:04) - Under the Mental Health Act. And there would be collective resistance to that. And that comes down to essentially what community politics comes down to, which is a combination of collective action, of public narrative, of networking strength in some sense, to try to make it as difficult as possible for that app to be introduced. And I think people with mental health problems, even at that time, were already quite aware of the idea of the internet as a sort of potential substrate of support. There were all these communities that seemed to be on the internet really early on; people with mental health problems were among them, as were, in my memory, the Nazis, unfortunately: various different communities who realized the importance of immediate, remote and urgent communication.

Dan McQuillan (01:12:25) - So I think people with mental health problems would be very at home with the idea of being able to take advantage of a technical infrastructure that allowed them to find help and support where they needed it, on their terms, and as part of a collectively understood and collectively agreed process. Now, with apps and things, and I’m trying to read your challenge, if you like, backwards into some of the questions that apply more broadly as well, and trying to use this as a kind of instance to stand in for a larger point: it definitely wouldn’t be, okay, this app as it is, is bad, but if we put it under community control, it would be good. Because the app is the tiny tip of the iceberg of so many other necessary things that need to be there in order to make that app work like it does.

Dan McQuillan (01:13:27) - And those necessary things are, as you were saying, you know, absolutely Stasi-like data collection, but also the data centers, the infrastructures of data aggregation, the energy-hungry infrastructures of cloud computing. I mean, there are so many: the patents that we were talking about earlier on, the legal frameworks into which this would have been inserted; if it’s made into something compulsory for people coming out of mental hospital, it would have been worked into mental health legislation, or applied in some place. So you’ve got, again, that’s where you’ve got this kind of assemblage happening. You’ve got this intersecting, overlapping tower block of sort of segmented ideas about what it is to have a mental health problem and what the response should be, all condensing on a tiny screen. And so the idea that there may be something that could be helpful to people, and the idea that it has any resemblance to this app whatsoever, I think are two different questions.

James Parker (01:14:39) - It seems to me that that’s quite related to a point you’ve made in, I can’t remember which piece, about the need to produce a kind of, I think you call it a counterculture of AI, or a counterculture of machine learning. So it’s sort of like, yeah, the people’s council is one node of a much broader political project, obviously. I mean, I don’t know how wedded you are to the specific histories of the counterculture and so on, and obviously they’re embroiled in intriguing ways with the history of AI, so we don’t need to get into that, but yeah.

Dan McQuillan (01:15:22) - No, no. Yeah. And Silicon Valley in all its entrepreneurial ways. Yeah. I mean,

James Parker (01:15:31) - It just made me think of that term that you use. That’s all, really.

Dan McQuillan (01:15:36) - Yeah. But I think it’s relevant maybe. And, you know, obviously my obsession, not obsession, but my hobby horse at the moment would be advocating for a consciously antifascist element in cultures in general, whatever those cultures are. That’s a specifically necessary form of counterculture at the moment, but I think it is important, and that’s something I would find it difficult to mix with, I think. I listened to a really useful podcast, actually, quite a cool podcast called TWIML, I don’t know if you know it, This Week in Machine Learning. And it’s, I mean, it’s hit and miss from the point of view of, if I might say, people like us, in the sense that every other episode is just a kind of industry-focused mutual admiration session about something being applied pragmatically. But, I’ve forgotten the host’s name, but he does explore.

Dan McQuillan (01:16:44) - It’s a great way to get in touch with what the industry is saying to itself, in general. And also he does ask broader questions, not really socio-political ones per se, but, by reflecting on, you know, where are we going with this and what’s the bigger question this is raising, it does give you some useful insight. But, you know, as for the culture: I really struggle to imagine something positive emerging from anything like the current culture of AI, as I would call it, the culture of AI engineering or, you know, the deep learning space. Pretty much the opposite. I know there’s lots of good people in there, and I know that there’s a hugely increased interest in things like ethics and social consequences and even politics at the fringe of that culture, but politics, yeah.

Dan McQuillan (01:17:42) - Even politics, right, but they waffle on about it. I mean, the culture is extremely, you know, it’s carbon monoxide. But the flip side is the more important thing, right? What is the alternative; no, not the alternative, what is the counterculture to that? Counterculture to me is not wedded to any specific sixties instantiation of it; it’s a generalized concept. And I believe that there has to be a culture. So, you know, I guess I’ve been in and out of feeling, oh, maybe hacker culture, that looks interesting, and then I’m like, oh, shit, look at what’s going on in there. And I had my time when I would have found an affinity with the early days of Anonymous, you know, but that was, in my own defense, when they were trying to be active during the so-called Arab Spring, and most of those people who I would have felt some affinity for were fairly shortly arrested by the FBI. So, you know, open source, again... I guess I understand my empirical allegiances, and they haven’t really shifted in a long time.

Dan McQuillan (01:19:00) - So I guess I’m just stuck with them. I am often interested in spaces where there might be experimentation, exploration, invention within technical frameworks that might speak to something of that, that might have some kind of punk element to them in some way, as potential allies or starting points for a kind of culture within technically capable spaces. And I’m kind of still looking. So I guess I’m trying to also advance that in some way: I’m trying to create a crossing point between people who are in those spaces, but who might have a sufficient unease or a sufficient ambition in a very different direction, to find a space of overlap with those people who are utterly unexposed to those technical spaces, but absolutely concerned with the things on which they’re going to impact. And that’s where the kind of idea of people’s councils comes in.

Dan McQuillan (01:20:00) - It’s like a workers’ council, which has people within a space of production addressing the conditions of what they’re producing and the conditions under which they’re doing it, but explicitly the crossover of that. So that’s why these kinds of seventies and eighties ideas are resurfacing now, things like social production, which in this country, the UK, were often inspired by, I think, the Lucas Plan. Have you ever heard of that one? Yeah. So the Lucas Plan was the plan that workers inside Lucas Aerospace, which is an arms company, came up with for how to transform both, literally, the machine tools they had, but also, more importantly, their own skills, to create early versions of sustainable technology: wind generators and battery-powered road-rail vehicles and really crazy stuff, in a good way. And they did that in the 1970s. And that cooked for a while under the heading of social production, and then it was stamped out in the early days of Thatcherite neoliberalism. So I’m on the lookout for aspects of technical activity that have a sort of cognizant culture of refusal, but also of recapture.

Sean Dockray (01:21:33) - I was thinking a little bit about how, yeah, open source software, you know, there’s a whole culture around it and everything, and obviously it’s complicated, but one of the most complicated things about it is that the majority of the code base of the FAANGs is open source software, right? And just thinking about that level of recuperation. And if we fast forward to now, I’ve actually been looking at the kind of, not schism, but all the kind of debate around web three

Sean Dockray (01:22:07) - Now, in the way that it’s kind of a marketing thing, it’s a set of protocols, it’s like a culture, you know; it’s complex, all this stuff that’s happening with it. And it’s kind of defining itself in opposition to the platform monopoly behemoths of web two, and kind of reproducing certain aspects of it. But also, you know, there are good people in it, and they’re kind of trying to develop a culture that is unwinding a lot of the centralization, surveillance and financialization. And I feel like observing that, it’s not a schism, but observing that tension and activity that’s playing out around web three is one way to think about everything that you were just saying, Dan. That’s what I was trying to do: I’m just trying to listen and absorb it and then think about it. There is this thing that’s happening right now, which I am kind of ambivalent about, but that seems to be performing some of what you’re kind of reaching towards.

Dan McQuillan (01:23:20) - Yeah. Yeah. I mean, I find it fascinating. If I had time to read anything at the moment; I’m accumulating, you know, a pretty solid reading list to really immerse myself in web three: blockchain, smart contracts, DAOs and all that. I mean, even the name, right? Decentralized autonomous organization, for God’s sake. That’s naming the things that I would personally align myself with, you know; what I would call decentralized autonomous organizations would be, like, workers’ councils and people’s councils. But I don’t know. I mean, I don’t claim any insight into that stuff, especially since I haven’t actually done the reading yet. But I would say that it’s a bit like, you know, if it’s a revolution, it’s in the same way that fascist politics is a revolution: it is genuinely revolutionary, it’s just not the right kind of revolution. And I would also acknowledge what you’re saying, that there are some really good people involved in there, so if something could be broken out of it, I’d be very happy about that. But it doesn’t seem like it to me at the moment. It seems like a sort of blockchain-powered version of the coming storm, as they say. It’s sort of, um,

Dan McQuillan (01:24:48) - You know, it’s a cryptocurrency QAnon. And, yeah, I feel very, very wary of it; I mean, it really raises the hairs on the back of my hand. But that doesn’t mean that there isn’t something that could be grappled with, in the way we’re talking about. So it may be that it’s not about détourning AI, because AI itself, if you go to the core of it, has very little to offer, I think, to a liberatory agenda. Whereas something whose functional foundation is actually decentralized in some sense, at least in principle, has something to offer. So yeah, we’ll have to see.

Sean Dockray (01:25:31) - Yeah. That’s one of the biggest obstacles for me when I think about AI: if you just sort of bracket it off and just try to think of use cases, it’s just challenging to think of anything good it can be used for, though I guess it’s occasionally possible. But then, when you don’t bracket it off and you actually look in really well, it sort of depends on supercomputers, on this amount of hardware that’s only accessible to giant corporations with huge amounts of capital and infrastructure. I just don’t understand how to think positively about, um,

Dan McQuillan (01:26:13) - Let’s invent a climate-killer technology; oh yeah, we’ve done it. Yeah, no, totally, it’s hard. But then, you know, I mean, okay, my anti-pitch to you, if you like, about web three would be that my understanding of at least, um, blockchain technologies,

Dan McQuillan (01:26:34) - Let’s say, would be that one of the problems that they’re trying to solve is one of trust, as they say. They posit that as a problem: they’re trying to solve the problem of trust. And I don’t think that’s a problem to be solved, certainly not in that way. I’ve no interest in doing business with people who I have no other way of trusting. I actually want to trust people. So I want to come together, and scale, in a way that allows me to actually trust the people I’m working with. I mean, I have to. In my computing department, we have a solidarity group of those few people who we can absolutely rely on to support each other, when our conditions at the moment are extremely threatened and threatening. That’s the starting point for trust. And, you know, obviously, means of scaling anything

Dan McQuillan (01:27:24) - Good are welcome. That’s why I’m still interested in the possibilities of technical frameworks, and, as I say, at the moment maybe more the cybernetic possibilities of technical frameworks. I am interested in scaling the positive. And so that’s maybe where we’ve yet to see, with these emerging frameworks, where they’re going. But we’ve got, I suppose, yeah, we’ve got to be part of it, right? But whether we’re part of it and in it, or part of it and trying to roadblock it, is the question.

Joel Stern (01:27:57) - Dan, I’ve been very quiet, but I’ve just been listening and enjoying the conversation, and thanks so much. We won’t keep you much longer, but I just wanted to say I was very struck by the way you were talking about the need for a space for experimentation, and, you know, the paths forward that are not clear at the outset but can be arrived at through certain kinds of experimental means, scaling up trust, for instance. And I was struck by the chapter outline which you gave us: the concluding chapter had this great quote about, you know, in place of AI as we know it, ‘the recursive horizontality of a new apparatus’. So I just thought maybe you could say something about that apparatus, because you sort of leave it open: it may be technical, it may be non-technical, maybe something else altogether. But is that the kind of experimental space that needs to be filled? Or is it something more concrete than that?

Dan McQuillan (01:29:15) - That is a space in which it would be great to see any experimentation. I think it’s maybe: where does the potential computation of a recurrent neural network

Dan McQuillan (01:29:32) - Meet the particular experimentation of Rojava? You know, that’s the kind of open question for me, and I guess where I got to with it. I mean, I started out with very naive ideas, thinking, oh, we need AI for good, or AI for the people, or something like that. And I guess where I got to at the end was, a bit like Sean was saying, having dug down to the bottom of AI, you just find a kind of wasteland of possibility, to some extent. So I guess I got almost to a point where I’m thinking, maybe we’ve already got more computation than we need. I’m not completely against computation, obviously; and, I mean, maybe it’s not obvious, but I’m not. I find that stuff fascinating, a lot of fun and potentially useful, and obviously it does have some uses.

Dan McQuillan (01:30:24) - But I guess I try to imagine a social framework so utterly other than what we’ve currently got, but one that has historical precedent, and contemporary precedent to some extent, like the one I just mentioned, within which modest forms of computation and the other related technologies can be implemented. The construction of the apparatus, for me, is the thing; I kind of nicked the term from Karen Barad to some extent, you know: the way that that sort of symmetry-breaking between object and subject, that symmetry-breaking between the thing and the social, is a constructed moment, something that is produced by machinery, by an apparatus in its wider sense, like an assemblage.

Dan McQuillan (01:31:18) - And I found that way of talking about an apparatus more amenable. So, you know, I’m very excited about the possibility that there’s something in the internet, or there’s something in, I don’t know, maybe blockchain, or there’s something in, who knows, connectionist technologies or statistical optimization, though I kind of doubt it; that there’s something in there that could be taken, but in a way that can only be seen by people who have already posited the right problems for themselves and have the right challenge. And you see that kind of stuff. I mean, one of the weird things about our times, and it’s chilling, and I feel very concerned about it because I’ve got kids, you know. Years ago I had friends who said, oh, I’m never going to have kids, because we were all very political and critical and all that kind of stuff.

Dan McQuillan (01:32:09) - ‘I’m never going to have kids, look at the world.’ And I was like, yeah, come on, no; the young always manage to make a change, and things are weird, there’s always hope. It’s hard to feel hope at the moment. But one of the things that you do see is things coming to such a critical point that people will just have to act; the climate is the biggest one, but they will just have to act, because they don’t have any alternative for survival. And in a way it’s happening in Ukraine right now: look at that kind of innovation, if you like, that’s happening on the streets there at this moment, because people just don’t have a choice. And I think it’s only from that sort of space, that situation, that situatedness, that people would be able to see the other affordances in these kinds of things that would make them, to my mind, socially progressive technology.

Dan McQuillan (01:33:01) - And I guess what I’m interested in at the moment is just trying to prototype those spaces where possible, so that we don’t have to wait until, I don’t know if we can say this on this audio, the shit hits the fan before we get some idea about what we might do with it. You know, where there have been positive social experiments that have been sustained to some extent, it’s because there have been years of preparation beforehand. And I think that’s the stage we’re at with this stuff. We have to put in the groundwork right now to try to figure out what might be the socially constructive uses of technology in the face of social and ecological crisis. We have to do that right now, because, you know, some kinds of breaks are coming, and we need to be ready.

James Parker (01:33:54) - Well, that’s the note to end on.

Dan McQuillan (01:33:57) - I keep providing endnotes.

James Parker (01:33:58) - Yeah. Actually, you’ve got an amazing turn of phrase: ‘the machine gun of silver bullets’. Did you make that up on the spot?

Dan McQuillan (01:34:08) - Yeah.

James Parker (01:34:09) - I mean, that’s an article.

Dan McQuillan (01:34:16) - Caffeine, I’m afraid to say.

James Parker (01:34:17) - But we’ve had you for too long. Thank you.