---
title: "Stefan Maier"
status: "Auto-transcribed by reduct.video"
---
James Parker (00:00:00) - Well, Stefan, would you maybe just begin by introducing yourself, however makes sense for you.
Stefan Maier (00:00:06) - Yeah, sure. So I'm Stefan Maier. I'm a composer, primarily of electronic music. I was born and raised in Vancouver, Canada, which is where I am right now. I was in Berlin for a little bit working as a composer, and then I studied in the States, and now I teach at Simon Fraser University here in Vancouver. I teach electronic music. I work between the experimental electronic music world, contemporary classical music, and then multimedia installation. What unifies all of it is an examination of both emergent and historical sound technologies, and in that I try to highlight a kind of material instability or unruliness. I'm really interested in mapping the flows of chaotic sonic matter, and in that mapping I'm always trying to uncover the alternate modes of both authorship and listening practice possible within a specific technologically mediated situation.
Stefan Maier (00:01:15) - Yeah, I guess for me one of the primary departure points for my work is this idea of the prepared instrument. I'm trained as a classical composer primarily, and now all of my work is trying to investigate the specificity of the technical apparatus itself, and then trying to coax out a certain kind of logic, a certain kind of strangeness, by attending to that specificity. The reason why I bring up prepared instruments is just because that's been an ongoing fixation of mine. So, you know, John Cage puts bolts and various objects into piano strings and defamiliarizes this historically wrought instrument, such that it creates this kind of sound which, you know, he was aspiring to make sound like a gamelan orchestra.
Stefan Maier (00:02:08) - And there's a long history of preparing instruments, of tinkering with instruments and dealing with the technical possibilities of an instrument to bring out a, let's say, repressed character of that instrument, or transforming it into something else. And I guess for me that's a primary point of departure regardless of what I'm doing. And like I say, that manifests itself in a variety of different contexts. I have a classical music practice, I write for ensembles quite regularly, and then I do a lot of improvised electronics, and then I make increasingly larger-scale multimedia installations.
Joel Stern (00:02:50) - Stefan, could you just expand a little bit on the idea of Machine Listening as a kind of prepared instrument, Machine Listening as something that can be instrumentalized by a composer?
Stefan Maier (00:03:05) - Yeah, sure, happily. I guess, in parallel to the idea that I was talking about with the piano, with these Machine Listening tools I'm really interested in determining the technical possibility space that's given to me, and in trying to place these tools in contexts where that kind of latent unruliness can come out. So oftentimes I'm using these technical objects as black boxes, which I then place into specific contexts that are really outside of what they were designed for. Using speech synthesizers, but not speaking to them in the way they were designed for, but rather having them converse with, yeah, I don't know, a soundscape even; or, in the case of what I speak about in the dossier, this idea of just letting the speech synthesizer generate its own glosses.
Stefan Maier (00:04:06) - I mean, sometimes I'm also actually intervening with the code. So I work with technologists who, well, purposely mistrain a neural network. That's what I did in a recent project, the Deviant Chain project, where we mistrained a speech synthesizer such that it generated its own language, based on its kind of incomplete training. And for me, I really see that as being parallel to this logic of taking a ready-made, a cello, a piano or something, and then trying to remove it from its context, such that it starts to do something else. But again, always specific to the technical possibilities that are afforded. I'm not really interested in, like, fanciful alteration; I'm interested in actually investigating what's going on under the hood, if that makes sense.
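To make the idea of "incomplete training" a little more concrete: the sketch below is a minimal illustration of the general technique Maier describes, not the actual Deviant Chain code, and the corpus, model size, and step count are all illustrative assumptions. A tiny character-level language model is deliberately stopped long before it converges and then sampled freely, so its output echoes the statistics of the training text without being any known language.

```python
# Minimal sketch (PyTorch): deliberately under-train a character-level model,
# then let it run freely. All sizes and the toy corpus are placeholders.
import torch
import torch.nn as nn

corpus = (
    "the prepared instrument defamiliarizes the historically wrought object "
    "and lets a repressed character of the technology come to the surface "
) * 20
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus])

class TinyCharModel(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = TinyCharModel(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# "Mistraining": stop after a handful of gradient steps, long before the model
# has actually learned the corpus.
for step in range(30):
    i = torch.randint(0, len(data) - 65, (1,)).item()
    x = data[i:i + 64].unsqueeze(0)
    y = data[i + 1:i + 65].unsqueeze(0)
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Free-running generation: the under-trained model emits a glossolalia that
# echoes the statistics of the corpus but is no known language.
idx, h, out_chars = torch.tensor([[stoi["t"]]]), None, []
for _ in range(200):
    logits, h = model(idx, h)
    probs = torch.softmax(logits[:, -1, :], dim=-1)
    idx = torch.multinomial(probs, 1)
    out_chars.append(itos[idx.item()])
print("".join(out_chars))
```

Training the same model for many more steps would drift the output back toward recognizable words; stopping early is what produces the "own language" effect described above.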
Sean Dockray (00:04:58) - Thinking about under the hood and speech synthesizers: your curatorial essay begins with a discussion of Wavenet. I was just wondering if you could talk a little bit about what it is about Wavenet that interests you, and how Wavenet acts as this entry point into your particular interest in Machine Listening.
Stefan Maier (00:05:19) - Wavenet was kind of the catalyst that got me interested in all this stuff in the first place. I wasn't particularly interested in even artificial intelligence up until I read the essay that Google released when Wavenet first came out. I mean, on one hand you have a tool which is one of the most streamlined speech synthesizers ever made. As I note in the essay, it easily passes the Turing test, it's extremely mutable, its training has no parallel, and it's a technical achievement insofar as it's now employed in Google Assistant and all these things. But I was really interested that even in this kind of streamlined application of Machine Listening, you also have this possibility of, I guess what I think about in terms of, really, digital abjection or something like this, where, when the speech synthesizer is left on its own, or it's allowed to just speak freely without a human interlocutor, it generates these strange glosses which correspond to no known human language.
Stefan Maier (00:06:41) - So I was really interested in this duality, I guess: here we have Google, this technical giant, generating this tool which is kind of incredible if you're trying to make computer-human interactions as seamless as possible from a normative perspective, but then at the same time, with the same code, you have this strangeness which can come up to the surface. And so I was really interested in parsing out how to reconcile that ambivalence, that paradox, I guess: the technology's oscillation between its technical constitution, which is capable of both realism and abjection, and then what we've projected onto it as being the voice of Google Assistant. And so, yeah, I guess for me, in the dossier, that's a really central paradox, the oscillation between a rational objectivity and then a projected subject position which is never really fully congruous, if that makes sense. So Wavenet really seemed like a good starting point for speaking about a lot of the artists that I was interested in curating, especially somebody like George Lewis, whose work I think really deals with a lot of these ideas, at least implicitly.
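For readers curious where the "babble" comes from mechanically, here is a stand-in sketch, not DeepMind's WaveNet implementation, of the shape of autoregressive sampling: the model draws each audio sample from p(x_t | x_<t, h), and when the conditioning h (text or linguistic features) is dropped, the same loop still runs and, in a real speech model, emits speech-like but non-linguistic output. The `toy_model` below is purely an assumption to make the sketch executable.

```python
# Illustrative sketch of unconditioned autoregressive sampling (not WaveNet code).
import numpy as np

def sample_waveform(model, n_samples, conditioning=None, n_classes=256):
    """Draw quantised audio samples one at a time.

    `model(history, conditioning)` is assumed to return a probability
    distribution over the next sample value, i.e. p(x_t | x_<t, h).
    """
    history = []
    for _ in range(n_samples):
        probs = model(history, conditioning)             # p(x_t | x_<t, h)
        x_t = int(np.random.choice(n_classes, p=probs))  # stochastic sampling
        history.append(x_t)
    return np.array(history)

def toy_model(history, conditioning):
    # Stand-in for a trained network: a weakly history-dependent distribution,
    # included only so the sketch runs end to end.
    logits = np.zeros(256)
    if history:
        logits[history[-1]] += 2.0       # crude short-term dependence
    if conditioning is not None:
        logits[:128] += conditioning     # conditioning would bias the output
    probs = np.exp(logits)
    return probs / probs.sum()

# Unconditioned generation: nothing anchors the samples to a target utterance,
# which is why a real speech model produces speech-textured "glossolalia"
# rather than words.
babble = sample_waveform(toy_model, n_samples=16000, conditioning=None)
print(babble[:10])
```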
James Parker (00:08:12) - I'd love for you to draw out that connection between Machine Listening and music, or composition, a little bit further, because it really comes through in the dossier that a lot of artists, musicians, have been working with Machine Listening for a very long time. And as far as I can tell, and at a relatively early stage in the project, it's really in a musical context that the phrase Machine Listening starts to be taken up and used regularly, as a result, I think, of Robert Rowe's book Interactive Music Systems in the nineties. But, you know, we also shouldn't get too bogged down in the word or the phrase, because obviously somebody like George Lewis is working much earlier than that, and you discuss other artists too. Could you say a little bit about the history of Machine Listening techniques in music, and the relationship between those two fields, to the extent that they are different fields at all? And, you know, also about the methods, because what you were describing, in terms of determining the technical possibility space, is not, on my rudimentary understanding of George Lewis,
James Parker (00:09:20) - exactly what he's trying to do with the Machine Listening systems he's working with. So yeah, that's a very open-ended question, but I just wonder. Yeah.
Stefan Maier (00:09:28) - Yeah, it is. And it's kind of difficult to speak to in some ways, because, like,
Stefan Maier (00:09:33) - I'm thinking about Machine Listening both in the context of applied artificial intelligence, but then also in a broader way. And, you know, with George Lewis, I do think that he anticipates a lot of the things that I'm interested in that apply specifically to the AI stuff. I mean, he hasn't really worked with neural networks at all; that was technology that wasn't really at his disposal at the time. He's using Machine Listening in a broader sense. I'll speak a little bit to the history of the uses of specific tools, but I guess you're right that with Rowe it's more about interactivity; that's really the crux of the matter. And I think you're right: I mean, after I sent that email I looked through some of my resources, and it seems like that really is one of the earliest examples of the term.
Stefan Maier (00:10:29) - So it's a fairly recent phenomenon, but it's also a very old one; there are other things in the history of electronic music which do anticipate a lot of these ideas, which go back even to the sixties. I think about somebody like Iannis Xenakis, the Greek composer, and his use of these highly formalized mathematical systems that are influenced by ideas of human listening, and then allowing those algorithmic systems to basically generate these extremely incomprehensible, jarring compositions. And you can also speak about somebody like David Tudor, who was doing something similar, or Subotnick. But in terms of really the first person using Machine Listening software, I would say that it is George Lewis, where it's a live interactive system, where you have a Machine which is actively responding to input coming in and then changing its behavior based on that.
Stefan Maier (00:11:35) - I mean, after Rainbow Family, which is where the first software was developed, he created a system called Voyager, which I believe does start to use, at least in its most current iteration, some Machine Listening underlying it. So I would say that he's kind of a major figure; he anticipates a lot of the things that we're seeing now with, you know, Holly Herndon, and Emptyset, and Jennifer Walshe, and all these people. So I guess my answer is that George Lewis really is this kind of central figure. It's hard to generalize about what's actually going on under the hood, because the technology has changed so much, and frankly I'm not a computer scientist, or even a music technologist, so I can't really speak to that history with any real depth, I guess.
James Parker (00:12:27) - You know, placing Machine Listening into a history of sort of proto-Machine Listening is really an important thing to do, at the same time as retaining some kind of specificity. I mean, could you maybe talk through some of the different artists' works that you gather in the dossier, and some of the different things they're doing with Machine Listening or proto-Machine Listening? You've already mentioned Rainbow Family by George Lewis, but is there another good entry point into the dossier for you?
Stefan Maier (00:12:56) - Yeah, sure. I think Jennifer Walshe's entry, Ultrachunk, which she made with the technologist and artist Memo Akten, who's kind of a deep learning guru. Jenny's contribution, in a way, I mean, she and Florian Hecker are the only artists in the dossier who are working specifically with deep learning Machine Listening software, and her work is really fascinating in my mind, especially in light of this conversation about functional utility and then abjection. She improvised for many, many months and catalogued these improvisations, and then a Machine listener was basically trained on all of her improvisations. So it was this kind of musical subconscious or something like this, which she then improvises with in real time.
Stefan Maier (00:13:57) - I mean, it's also an extremely bizarre piece of music. It's just so strange. When she sent me the recording I was completely floored, because it was just so compositionally bizarre. And she speaks very freely about the premiere at Somerset House in London, where she was just blown away by how she identified certain elements of her compositional language in what was coming out, but at the same time felt that there were certain issues of timing, and also of, let's say, improvisatory syntax, which were presented in this totally abstracted and kind of warped, melted way. It became this kind of imperfect mirror, where you think you're projecting this self-portrait, right, this very intimate thing where she's improvising with herself every day, almost a daily practice (I mean, I know my own improvisatory practice is almost a meditative practice), and then having this very personal thing
Stefan Maier (00:15:01) - exploded and transformed into something which is deeply uncanny and unsettling, both for the performer and also for the audience member. I thought that was really striking: this intimate gesture of offering something to the black box and then having something totally, you know, abject come out of it. Then on the other hand you have somebody like Florian Hecker, who's working with these very specific Machine Listening algorithms which are designed to imitate the way the human ear processes timbre, the quality of the sound. It's a totally cutting-edge computational model of how the ear processes this, I guess, parameter, which is very nebulous. If you read any literature on psychoacoustics, the science of how humans process sound and how the human auditory system actually parses audio, timbre is a very nebulous category that's oftentimes described as a wastebasket category.
Stefan Maier (00:16:10) - Anyway, so it's this algorithm which was designed to unpack timbre, at least the way it works cognitively, and Florian basically had a previously written composition resynthesized by this Machine Listening algorithm, and what ends up coming out is, again, this very warped, strange, distorted thing which bears little comparison to the original. So it's a very detached, let's say, process, where there is a rational, scientific model of listening which then also distorts Florian's already very formalistic compositional practice. So I feel like those are two radically different approaches that come back to this abject, unruly output of these specific tools. And then on the other hand you have Machine Listening being dealt with in a broader, more poetic way, I think, with both Ben Vida and C. Spencer Yeh, where Ben basically had a text-to-speech synthesizer reading some of his concrete poetry.
Stefan Maier (00:17:29) - And oftentimes a lot of text-to-speech synthesis is now employing deep learning, so as to have a more realistic model of the prosody and also the pronunciation of certain phonemes. He really worked with the idiosyncrasy facilitated by the specific text-to-speech synthesizer, and that produced all these interesting rhythms; it's in dialogue with much of his previous work using a similar process of translation. Spencer, on the other hand, was really interested in using a text-to-speech synthesizer too, but to different ends. He took three different models of, let's say, three different dialects of Cantonese, I believe (yeah, it is Cantonese), which were also models based in Machine Listening, and then he fed the same text to the three different speakers, such that the text-to-speech synthesizer became kind of confused and created all these strange sounds. And then Spencer internalized those sounds and started to imitate them as kind of a third or fourth voice, which is in contrapuntal dialogue with those things.
Stefan Maier (00:18:50) - So you have this kind of feedback network, let's say, between the technical distortion and then Spencer imitating it. For me, a really important person who, you know, couldn't contribute to the dialogue of the dossier, but rather looms above it in many different ways, is the American composer Maryanne Amacher, who throughout her entire career was very much interested in the idea of computationally assisted listening. She was kind of devoted to people like J.G. Ballard and Stanisław Lem, and she was interested in different futures where humans would be able to use different programs such that they would be able to hear as a different animal. So you'd have a program which can make you hear as a whale, or can make you hear Beethoven underwater, or hear
Stefan Maier (00:19:48) - yeah, the same Beethoven symphony under the atmosphere of, like, the Cambrian period or something like this. That's something that she wrote about a lot. And then in her unrealized media opera Intelligent Life, she imagines an institute for computationally assisted listening, where you can actually export the way that any individual human listens and turn that into a program, something that somebody else could then listen through. So I could listen as you, I could listen as, whatever, Sam Cooke, I could listen as an early human, whatever. I mean, Amacher's writings are all about this, and she had this massive project that was supposed to be a kind of television miniseries, a very campy miniseries, where
Stefan Maier (00:20:41) - yeah, this institute is grappling with the epistemological issues of experiencing sound as an other. Amy Cimini, the American musicologist, writes a little bit about the context that this work was coming out of. One thing about Maryanne is that basically everybody who participated in the dossier is at least deeply influenced by her, or, in the case of George, knew her. Florian Hecker's work is really, I think, a continuation in many ways of Amacher's project of literally trying to use technology to listen in a different way. I mean, I could speak about the others as well, if you like. Oh yeah, Terre Thaemlitz. Terre presented the liner notes from a record that he did, I guess, in the nineties. Terre has never really worked with Machine Listening software specifically, but, yeah.
Stefan Maier (00:21:51) - I mean, it's a record that came out on the glitch label Mille Plateaux in the nineties, and Terre's really fascinated by this idea of specific technologies having an understanding of a normative human underlying them, and then using technology to distort that image. So, basically, in a couple of Terre's records there are a number of different source materials, which are oftentimes very charged in terms of the gender politics underlying the material that's employed, which are then destroyed, or made unruly, through these different technical operations. In Thaemlitz's writing there's almost this idea of the technology being a form of drag for the original source material, where it becomes something else, in terms of a non-normative gender position, through this kind of technical processing. Yeah, I think that pretty much sums it up.
Joel Stern (00:22:57) - No, that was great, Stefan. I mean, thank you so much for going through all of the contributions. It's such a rich dossier, and, you know, the contributions are so creative. I mean, it was occurring to me as you were describing them one after the other that so many of them hinge on this dissonance between the human and the machine, or the original and the reproduction, and on the sort of human qualities as a kind of excess. You know, I guess the Machine and the human both have a certain sort of excess that gets reproduced in the work,
Joel Stern (00:23:40) - the element that's kind of incommensurable, you know, to both. It does seem like one of the artistic projects around Machine Listening is to continually point to that incommensurable difference, whereas the project of, let's say, the big tech companies producing Machine Listening is possibly to kind of obliterate that difference, or at least to make the human subject somehow indistinguishable, let's say, from the voice assistant. And one piece of feedback that we got from Unsound on our initial essay was that it was, you know, quite pessimistic and critical, focusing on a certain sort of techno-politics that's always already captured by capital, and in a way it leaves us needing to work against Machine Listening.
Joel Stern (00:24:37) - You know, so the title of the first Zoom session we proposed was "Against the Coming World of Listening Machines". And I'm just wondering how you feel about those utopian and dystopian horizons. I think one of the great things about the works that you described is that the imagination is put to work in really positive ways; these technologies are not taken as intrinsically repressive, but rather as platforms for some kind of, if not emancipatory, at least unruly, as you've put it, expression. So I wonder if you could say something about your sense of the politics of Machine Listening, across this spectrum from the utopian to the dystopian horizons of it.
Stefan Maier (00:25:35) - Yeah. I think that for me a really important distinction to be made when speaking about any technology is the distinction between the utility of the technology and then its technical, operational logic, the operational logic which is underlying it. That's something where I'm very much influenced by the thought of Gilbert Simondon, especially the way that he distinguishes between how a technology is used within a certain cultural context and what the machine is actually doing. And Simondon, you know, speaks about the alienation of subjugated peoples from the machines, in terms of not really being able to understand how that technology can actually behave, or can be used, in ways that run contrary to its utility. I mean, I was being a little bit provocative, let's say, in the dossier, by not speaking very much to the insidious uses that you just spoke of, in terms of, yeah,
Stefan Maier (00:26:44) - the vested interests of tech companies and then also governments. I mean, there's a lot of vital work, which I know you guys are in dialogue with, that speaks to that. I suppose what I was interested in doing was just drawing attention to the fact that, especially in the history of electronic music more generally, there's been a long history of people on the outskirts of experimental music who have been using technologies that were designed to do a specific thing, especially technologies designed by the military-industrial complex, as in the case of the vocoder, and then discovering something else. And there being this, as you said (I like this term), excessiveness, both of technical activity, of rationality, and also of the design of a specific tool.
Stefan Maier (00:27:40) - One can even think of, you know, the dawn of house and techno, and of course acid house; it really comes from taking a tool that was supposed to be used for dad rock bands to jam to, and then discovering that there's an entirely new world there, both in terms of aesthetics and in terms of the sociality that emerged around this specific kind of music. So I guess,
Stefan Maier (00:28:09) - to go to the question of the duality, let's say, between the techno-pessimists and the technophiles, I'm totally agnostic, you know. All I'm interested in is trying to attend to the specificity of the technical object that we're working with, and trying to then understand what's actually going on, even if it's a black box, as in the case of a Machine Listening algorithm that's driven by deep learning. I mean, it's literally an incomprehensible space, an n-dimensional space of statistical inference, where many of the people who are training these things don't even really understand what's going on under the hood. For me, I guess what's interesting is trying to find the specific tools that will then be able to offer, yeah, there are Machine Listening softwares that are so trained and so functional that the only thing they can do, outside of the thing they're designed for, in the case of Wavenet, is make this incomprehensible babble.
Stefan Maier (00:29:13) - For me, that is a source of at least some sort of emancipatory potential. I don't know, but at least there are cracks, you know. And for me, a really important theorist in thinking about all these things is Benjamin Bratton and his thought around synthetic sensation, and how there's an accidental, let's say, the way that I interpret it, almost machinic alterity which is present in certain emergent superstructures in digital technology, and which might actually project us elsewhere than the Silicon Valley ideologues who are, you know, extracting the hell out of all of us. And for me that's at least cause for some sort of positivity. But nevertheless, I would say that it's always specific to the technology that we're speaking of; it depends how it was trained.
Stefan Maier (00:30:08) - Like, you know, in my own work, oftentimes I'm working with unsupervised learning, so the data sets aren't labeled beforehand, such that the Machine listener is inferring a deep statistical knowledge about whatever it's dealing with. In the case of Deviant Chain, this work of mine, we trained this corpus on readings of, like, Negarestani, and then also, like, Luddite texts, a total smorgasbord of different theoretical positions around philosophies of technology and philosophies of synthetic sensation. And then from that corpus the Machine Listening software made these, if you're familiar with terms like feature extraction and stuff like this, these features that the machine is hearing, which are totally incomprehensible to us
Stefan Maier (00:31:01) - but which are nevertheless these kind of high-dimensional parameters which it sees as being the most crucial information about Stefan reading these neo-Luddite texts. It doesn't understand any content, you know; it's just inferring this deep statistical structure that has very little to do with, well, I can't really speak to what it actually has to do with the meaning of the text, but I know that the output is extremely strange, because these features aren't correlated to any categories that we have intuitively. And so when those things are unleashed, a kind of strangeness unfolds. And, like I say, this is something that's very much influenced, I think, by Bratton's conception of machine sensation and machine thought, indeed machine rationality, as being in some ways very different
Stefan Maier (00:31:57) - than our own ideas of rationality. So for me, that definitely has political ramifications, for sure. I think it's interesting maybe to bring up the idea of inhumanism versus posthumanism. I'm very much interested in this idea of taking a rational system, or a rational technical system, and not seeing how I can interface it with my body or something like this, in this kind of posthuman context, but rather pushing the technology as far as it will go in terms of what it actually does, and then seeing what happens. So there's this humanist self-portrait which is projected onto the technical activity, but then the technical activity goes elsewhere. And I would see that as being parallel to this kind of neo-Copernican sensibility, that maybe rational activity might be facilitating something where it projects us elsewhere than we thought, if that makes sense. And for me that's an intrinsically political thing, insofar as it's questioning the rigidity, the givenness, of the human that we started with. I mean, will that be unleashed, will the possibility of that be unleashed, in terms of the way that these technologies are being developed? Probably not by Facebook, that's for sure. But, like I say, this is one of the reasons why I brought up Wavenet as well: even with this kind of hegemonic force behind it, there's still some sort of line of flight which is possible.
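As a rough illustration of the unsupervised feature extraction described above, the sketch below trains a small autoencoder on spectrogram frames of a synthetic signal; the bottleneck activations are the learned "features", useful to the reconstruction objective but corresponding to no intuitive human category. The synthetic audio and all of the sizes are placeholder assumptions, not the Deviant Chain corpus or software.

```python
# Minimal sketch of unsupervised feature extraction from audio (PyTorch).
import numpy as np
import torch
import torch.nn as nn

# Placeholder "recording": a few seconds of shifting tones standing in for speech.
sr, seconds = 16000, 4
t = np.linspace(0, seconds, sr * seconds, endpoint=False)
audio = np.sin(2 * np.pi * (220 + 40 * np.sin(2 * np.pi * 0.5 * t)) * t)

# Frame the signal and take magnitude spectra: one 257-bin vector per frame.
frame, hop = 512, 256
frames = np.stack([audio[i:i + frame] for i in range(0, len(audio) - frame, hop)])
spectra = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)).astype(np.float32)
spectra /= spectra.max()
x = torch.from_numpy(spectra)

# Unsupervised autoencoder: no labels anywhere; the 8-dimensional bottleneck
# is the learned "feature" for each frame.
model = nn.Sequential(
    nn.Linear(spectra.shape[1], 64), nn.ReLU(),
    nn.Linear(64, 8),                       # latent features
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, spectra.shape[1]),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    recon = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The encoder half yields per-frame features: statistically meaningful to the
# reconstruction objective, but mapped onto no intuitive human category.
with torch.no_grad():
    features = model[:3](x)
print(features.shape)  # (num_frames, 8)
```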
Joel Stern (00:33:37) - This is a stupid thing to say, but I was just gonna say that, you know, Skynet didn't know what they were producing either. So...