Add 'content/transcript/JSER.md'

james, 3 years ago
commit 0d287def13
1 changed file with 247 additions and 0 deletions: content/transcript/JSER.md
---
title: "Jonathan Sterne and Elena Razlogova"
status: "Auto-transcribed by reduct.video with minor edits by James Parker"
---

Elena Razlogova (00:00:00) - So I'll start as the, um, junior scholar here. My work has been originally in the history of American radio, and my first book is on radio and its audiences between the twenties and the end of the forties, the old-time radio period. Um, and then after that, I got interested in freeform radio and its forays into open access and open source experimentation in the nineties and two thousands. And I discovered that even though, in the discourse, freeform radio is considered the opposite of machine listening and the machine making of music, it's actually quite intertwined with the history of algorithmic music. And, um, so that's what my work is focusing on. And the Shazam and LANDR projects are kind of an outcome, I think, of my work on the Montreal music scene and the algorithmic experiments there, because LANDR is based in Montreal. And, um, my work on Shazam was also part of that. So in both cases: one is on recognizing songs algorithmically, and in LANDR's case, on mastering music algorithmically. So that's how I came to the project.

James Parker (00:01:21) - Fantastic. Thanks so much, Elena. Although you didn't actually say your name, your full name, to get that on the record. Uh, thanks so much. Um, and Jonathan?

Jonathan Sterne (00:01:32) - My name is Jonathan Sterne. Um, I guess I'm the not-junior scholar here. Uh, yeah, so I've been writing about sound in one way or another for over two decades. Um, my first book was on the origins of sound reproduction technologies in the late 19th and early 20th centuries, and specifically thinking about them not as, like, transformative agents, but as cultural artifacts, basically, that reflected existing practices, politics and ideas about sound, um, because that's what I felt was needed at the time. I mean, the book was also conceived in the nineties, when we had sort of one of the earlier waves of computer-based technophilia that we're sort of now currently trying to recover from. Um, my second book was on the MP3 format, and it's a bit of a conceit, cause it's a hundred-year history of what was then a 19-year-old format, uh, with the idea that, to understand MP3s, there's a whole, at that time not very well attended to, history of the relationship between theories of how humans hear and how sound works, and the design of sound technologies and media aesthetics.

Jonathan Sterne (00:03:03) - Um, and so that's basically MP3: The Meaning of a Format in, like, two sentences. Um, and, uh, I finished that project in Silicon Valley, and, uh, I sort of looked up and I was surrounded by all these businesses. Uh, the thing that got me into the MP3 was, like, every sound technology has this sort of model of human beings in it. And then I looked around and, like, there were all these other people trying to design new sound technologies. Uh, and so over the last decade I've been sort of studying, um, audio signal processing and the politics of audio signal processing, and the machine learning stuff is really an outgrowth of that, as well as the work on, um, sound technology. And I'm just going to plug: I have a new book coming out that's not about this. It's called Diminished Faculties: A Political Phenomenology of Impairment, uh, which is really much more about impairment, disability and sound. And that's another major thread of that first book, The Audible Past, uh, because, uh, ideas about deafness and not hearing, um, and sort of even damaged ears, were sort of built into sound technologies as well. So that's sort of the other thread of my research that maybe we won't get into as much today.

James Parker (00:04:28) - And another project on time shifting?

Jonathan Sterne (00:04:29) - Oh yeah, the time stretching book with Mara (Mills), that's ongoing. Uh, yeah, it's a little bit on pause just because Mara is finishing her first book. Um, yes. Uh, so that book, that's, like, got a nice COVID hook, because all these professors suddenly freaked out when they learned that their undergrads were, like, speeding up their recorded lectures. Um, but the book is about basically speeding up and slowing down sound, and also a bit of pitch shifting.

Jonathan Sterne (00:05:00) - The way I explain it to non-experts is: imagine that film studies has existed for several decades, but that nobody's ever written a history of slow motion or high-speed playback, and that's sort of what we're doing. So we start with blind listeners, uh, in the 1930s. And, uh, by the end of it, we've, like, written about avant-garde composers, broadcasters, aficionados of funk, Auto-Tune, um, ornithologists, and all sorts of other people that are interested in speeding up and slowing down sound, and advertisers and students and broadcasters. So there you go.

James Parker (00:05:37) - Oh, that all sounds amazing. And similarly, Elena, um, so you've sort of joined forces on this project on LANDR, and written a couple of papers now. I mean, um, maybe the thing to do is to briefly introduce LANDR, and, yeah, just sort of go from there, see where the conversation takes us back into various tributaries in your previous work and so on. Um, I mean, did somebody want to have a go at saying what LANDR is, or how you arrived at this project, or how you ended up working together on it?

Elena Razlogova (00:06:11) - Well, um, LANDR is a company based in Montreal. Um, it actually started as MixGenius, uh, and tried to, um, mix music algorithmically, but that didn't work. And then in the end, it, um, began to advertise itself as a company that uses machine learning to master music. Um, and, uh, we both came to this project separately, I think, and then we met. I met some of the people from LANDR through my work, um, in the music scene. And, uh, Jonathan was interested in the question of whether you can algorithmically master music. Um, so we got together and we interviewed, um, one of the founders of the company, who is no longer part of the company, who was flamboyant, um, almost a PR guy, I think, who, um, explained to us, uh, the public version of what LANDR is. And then we went from there. Jonathan, maybe you can continue.

Jonathan Sterne (00:07:23) - Yeah, I was actually on a panel with MixGenius a few years before it became LANDR. And I don't know if it was, like, in the basement of that church in Mile End, or whether it was somewhere else. Yeah,

Jonathan Sterne (00:07:37) - I don't remember if it was there or at Sala Rossa. It was, like, for one of the many Montreal festivals, when they were, like, sure that they were going to automate the mixing of music. Um, and then I sort of stumbled into it a few years later as a natural outgrowth of all this interest in signal processing, because it was one of the first cases where I'd seen a company claim that they had successfully used AI to automate signal processing. Um, and just for those listeners who don't know what signal processing is: basically, it's the cooking part of audio. So you have audio that's either recorded or synthesized, uh, we'll leave synthesizers out of it for now because it's easier to understand. So, like, my voice goes into this microphone, it gets turned into electricity, gets turned into data, eventually comes out of your, um, your phones or your speakers, or however you're listening to me, and signal processing is everything that happens to that signal between the time it enters the mic and comes out the speakers.

Jonathan Sterne (00:08:41) - Uh, so for instance, uh, it might be compressed, which means, uh, that the distance between the variations in my voice, between louder and quieter, might change. The frequency balance might change. Maybe somebody would add artificial echo, not on Zoom, but if this were, you know, some kind of glossy production. Um, so signal processing is like color balance in video or something else that's a pretty subtle process, but without it, nothing looks or sounds right. Um, it sort of defines the sound and look of media. LANDR was the first company that I'd come across that actually claimed to use AI for it successfully. Elena and I, like, have known each other since forever. Um, and we're both in the same sort of Montreal milieu, and, like, at some of the same dinner parties, and hang out sometimes. And so, I don't know how we did it, but we sort of both discovered we were interested in it. And then it became, like, a field trip, and then it became a couple of papers.
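The compression Sterne describes, narrowing the distance between louder and quieter, can be sketched in a few lines. This is only an illustrative toy, not anything LANDR or iZotope actually ships: samples above a threshold are scaled down by a ratio, as in a basic dynamic-range compressor, and the threshold and ratio values here are invented for the example.

```python
# Toy dynamic-range compressor: levels above `threshold` are reduced
# by `ratio`, shrinking the gap between loud and quiet samples.
# Real compressors add attack/release smoothing and make-up gain.

def compress(samples, threshold=0.5, ratio=4.0):
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            # Only the excess over the threshold is divided by the ratio.
            level = threshold + (level - threshold) / ratio
        out.append(level if x >= 0 else -level)
    return out

# A quiet sample passes through untouched; a loud peak is pulled down,
# so the dynamic range of the pair shrinks.
print(compress([0.2, 1.0]))  # [0.2, 0.625]
```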

Elena Razlogova (00:09:41) - Um, it was supposed to be just one paper originally. Yeah.

James Parker (00:09:45) - Is it going to be more papers? Well,

Jonathan Sterne (00:09:47) - No, I think, I think two is enough on LANDR, I don't know. But it's not necessarily our last collaboration.

James Parker (00:09:58) - Is it that if you're in Montreal, you know, you sort of heard about LANDR? Because I hadn't heard about LANDR, uh, myself. I mean, it seems that they,

James Parker (00:10:11) - You know, apart from this kind of supposedly revolutionary technology, they also, um, grounded themselves in the Montreal music scene. Were they something that people knew about, um, that made them, well,

Elena Razlogova (00:10:27) - They employed a lot of musicians. They employed a lot of people. A lot of musicians in Montreal are either unemployed or live on grants, and it's becoming less possible, but it has been possible for a few years. They employed a lot of the musicians and radio DJs whom I knew. So I found out about them from people I knew through CKUT radio, um, which is, um, McGill's radio station. And then they also sponsored events at the POP Montreal music festival, which made them even more involved in the community. And, um, people... they used these events, as we explain in the article, to promote, um, the company. So, on one hand, to make it seem more human and more, um, underground, um, and, um, alluring to, um, small-time musicians, um, and people who may be wanting to create their own music in their garage and then use, um, LANDR for mastering, for example.

Sean Dockray (00:11:33) - May I ask a simple, not-simple question, which is: um, what is mastering, and what makes it a more attractive, um, thing to do than, uh, mixing, which is how they started out?

Elena Razlogova (00:11:53) - Oh yes. Um, the technical explanation is much more your thing. Yeah,

Jonathan Sterne (00:12:01) - It is much more my thing. Uh, um, okay. So mixing is about setting the relative levels of sounds in a recording. Uh, so, um, when most people make recordings today, not everything's recorded at once, unless you're talking about putting a mic in front of an orchestra or something. Uh, it's, you know, a bunch of musicians go to a studio, and then, either at the same time or in sequence, they get recorded onto different tracks, so that you can, like, after the fact, balance, say, the levels of the drums and the guitar and the synthesizer and the vocals. That's mixing. Uh, there are a lot of reasons why you can't automate mixing, and in general, I think this has something to do with sound and big data, because you also see it in things like music information retrieval, um, where just looking at the sound doesn't give you the information you need to make culturally meaningful decisions.

Jonathan Sterne (00:13:05) - So for instance, if, um, the bass player in the band is the most famous member of the band, like, let's say they're the lead singer, they're the best looking, they're the one that's promoted by the label, their track might be louder in the mix than if the bass player is, like, your usual not-famous person in the band and not the most important face. Um, so there's, like, all these extra-musical decisions that get made in mixing. Plus, um, in an actual mix, uh, you know, if you're dealing with multiple musicians, as opposed to, like, a single artist working at home, you're also negotiating, um, competing visions for how the music's supposed to go. Now, obviously, if you're, like, an individual electronic music producer or sound artist, that's a different proposition, where it's one person, but then you're almost certainly recording things serially, as opposed to in a single shot, unless it's, like, "live" or "improvised" or something. I'm doing that in scare quotes.
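Mechanically, the mixing Sterne is talking about is just a per-sample weighted sum of the separately recorded tracks; the part that resists automation is choosing the gains, which depends on the extra-musical judgments he lists. A minimal sketch, with track data and gain values invented for illustration:

```python
# Mixing as a weighted sum: each track gets a gain, then the tracks
# are summed sample by sample into one signal.

def mix(tracks, gains):
    length = min(len(t) for t in tracks)
    return [
        sum(g * t[i] for g, t in zip(gains, tracks))
        for i in range(length)
    ]

vocals = [0.5, 0.5, 0.5]
bass = [0.2, 0.2, 0.2]
# "Promoting" the famous bass player is nothing but a bigger gain on
# their track; the arithmetic is trivial, the judgment is not.
star_mix = mix([vocals, bass], gains=[0.8, 1.5])
```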

Jonathan Sterne (00:14:11) - Um, so mastering is after the mix is done and you have just a stereo recording. Although, I mean, if you're doing mastering for video or gaming or something, it might be 5.1 or 7.1, depending on the platform and how it's going to be distributed. And mastering is, like, the final polishing. So, to use a publishing metaphor, mastering is like typesetting and page proofs and things like that. It's the moment where the music will sound like what it sounds like when it comes out of the speakers, and a mastering engineer's job is basically to make the mix translate across as many contexts as possible. So it sounds as good coming out of the crappy speaker of, like, a mobile phone as it does coming out of a big sound system. Um, and you might master specifically for, like: I know this is going to primarily circulate on social media, or I know this is an ad and it's going to mostly come out of TV speakers. So it really depends on the, um,

Jonathan Sterne (00:15:12) - Uh, you know, so a mastering engineer applies sort of contextual knowledge. Um, but there's another reason that mastering could be automated more than mixing, which is that most musicians don't know what it is. Musicians either record themselves or someone records them, and they have the interaction with the person who records them, even if that's, like, them interacting with themselves. But mastering sessions are often unattended: you send the material to the mastering engineer, and they send it back to you. Um, so socially speaking, it's easier to automate, because you're removing a person that already isn't someone the musician usually interacts with. Um, so that's the... I mean, it's not a totally technical explanation, but that's sort of what mastering is, or was. Um, and, um, you know, it also has a nice name.

Jonathan Sterne (00:16:12) - It sounds finished. Like, I've always said the highest degree should be master, not doctor. Um, although, I mean, master also has some pretty awful, uh, historical connotations. I mean, you know, it does multiple duties: it's part of the history of chattel slavery, but also S&M. Like, it's got multiple, uh, connotations. So, um, mastering has this allure to it too. There's this wizard-like dimension to how mastering engineers are understood in the industry.

James Parker (00:16:50) - I love the section, um, in one of the essays where you sort of say that one of the main functions of LANDR is to sort of give the musician confidence that the work has been mastered, you know, that it's just a sort of external signifier, and that in some ways that's enough, regardless of any kind of, you know, change to the audio.

Jonathan Sterne (00:17:17) - It's true for mastering engineers too, though. There are lots of interviews with mastering engineers where they'll say: I didn't touch it, I didn't touch the audio, it was perfect, it was good, I didn't do anything. I just called it.

James Parker (00:17:29) - Sometimes. Mastering is simply saying it's all it's already finished.

Jonathan Sterne (00:17:34) - Yeah. Yeah. And it's partly.

Elena Razlogova (00:17:37) - Yeah. And then, um, another point that we make in the article is that LANDR doesn't do that like that wetland, or never leaves the recording alone. And also, um, the difference, I think the difference between the, um, the live engineer and the algorithm is, um, psychological too, because one of, um, Montreal musicians are describing a local, uh, mastering engineer is a psychoanalyst. I think, like somebody who is helping you through with your insecurities and of course, an app it's a really difficult for an app to be, to embody that.

James Parker (00:18:14) - Although that's what most apps are, aren't they? They're psychiatrists, psychoanalysts, or, you know, let's say social media platforms are, anyway. They're ways to kind of, uh, yeah, calibrate your psychology against everyone else. Yeah. Oh God. Well, look. Um, so they try and fail, as MixGenius, at mixing, because there's something about music as music, as opposed to as data, that sort of makes that impossible, maybe. Or maybe even people are just better at hearing mixing or something. Um, you know, they can tell that it's a bad mix, so they won't accept, um, the kind of automated version, or something like that. So, but then they rebrand, and then they say that they do... you know, some of the kind of promotional claims, the marketing claims on the website, are kind of extraordinary: revolutionary, um, AI mastering, LANDR is the place to create, master and sell your music, the creative platform for musicians.

James Parker (00:19:15) - They even say that your music was sound just like it was produced by Timbaland, which is a weird thing to say, because if you don't make He's a producer, not a masterer anyway, but like, so, okay. So that there there's a marketing claim going on and, you know, what's the move. W what, what are you, what do you do with that? Um, is it, you know, pull the curtain back and reveal what's really going on. What do you think in, what, how do you confront an accompany, like land? Uh, what, what, what is it that interested you, um, as researchers, or, or kind of, or politically speaking in the context of the, the Montreal music scene or, you know, or global, you know, or Silicon valley or, you know, however it is, you want to think about it.

Jonathan Sterne (00:20:07) - Well, we really looked at it from a bunch of different angles. So the first, structuring... like, you know, in classic sort of Derridean fashion, the structuring absence of the whole thing is corporate secrecy. Like, they're not going to tell us how it actually works. Um, although we did a little fishing around the edges, and, uh, we have a pretty good guess of how it worked at the time of the article. Um, but we really did, uh, a sort of, I don't know what you'd call it, like, a multimodal study of it. We did everything we could with it. So I worked with it, we interviewed people, we, you know, went around town. Elena, you want to say a bit about, uh, sort of how it connected with your work on the music scene and their promotions?

Elena Razlogova (00:20:55) - We, yeah, so we figured out what Lander people did in the community and how they emerged a lot of things that they did were pretty cool. Like they organized workshops on how to survive as a, um, as a beginner musician, they invited artists to speak on panels. They worked spitting in, uh, festivals. Um, MUTEK is another Alec electronic music festival is, is another one that, where they were connected to. Um, unfortunately the other thing we found out is that they tended to, um, especially the people they employed as, um, to work with the software. Um, and here we interviewed one person who wishes to remain anonymous. So we can't really, um, like get into that specifically, but, um, it, they invited people to help them find you in the software. Um, and they paid some of these people. And what happens is that the software, uh, turned out what say, say it was bad at electronic music initially.

Elena Razlogova (00:21:56) - And then if few months later it became better because it got a lot of criticism. Then they paid somebody to, um, do a bunch of tests, uh, tracks, and then it became better. And then they could advertise, um, the software is improving through AI, through machine learning, but they would tend to employ these people and then kind of leave them, um, threw them out. So people would leave their job, their regular jobs to get a high paid position at Lander and then leave. And this kind of happened gradually. They began as this grassroots, um, grassroots, small company, kind of an upstart and revolutionary, and gradually grew into this monster that a lot of people were bitter. A lot of musicians in the community were bitter about interactions, um, with, uh, with a company, um, and skeptical about it claimed its claims. Um, and in the meantime, the company Lander became more of a multinational entity with offices in LA and Berlin and connections to Hollywood. One of their biggest businesses now is, uh, doing sound for, um, ads and TV shows.

Elena Razlogova (00:23:10) - So, yeah, so this is kind of my angle in other, another important thing that I think we both are interested in is this conceit on eventually, um, the, the data for machine learning will eventually be complete. This was part of the, um, of the point that Justin Evans made to us during the interview that, um, machine learning is based on the learning of big data. And once we have the big data in the cloud, um, we will be all set. Um, and that I think is a concede that cannot be achieved. Um, and that was one of the points that we try to make, make in this article. Some things will always be left out for political reasons or for, uh, technical reasons, um, than others

James Parker (00:23:52) - In the sense that, that the data set that they're using to train the machine learning on will not include some genre of music from wherever it might be, or some subculture or some scene or something. So sort of premise on the, kind of the hoovering up of all music into the cloud for, for that, that kind of, that claim to work.

Elena Razlogova (00:24:17) - Exactly. So they would say, well, eventually all the genres will be available to us, but in fact, not only they will not be available, but also the company only focuses on genres that that can, can, that it can monetize. Of course. So hip hop and electronic music are the ones that they focused on initially. And, um, some genres will never be interesting enough because they're too niche or too, um, experimental the program can not work with experimental music at all.

Joel Stern (00:24:48) - Yeah, because experimental music, to be properly experimental, also has to subvert its own conventions. It can't, like it or not, kind of operate within sort of the constrictions of genre, you know, as a notion. Exactly. I mean, hearing people talk about genre as the kind of logical end point of understanding, like, if we get all the genres, we've got all the music, it's sort of quite, um, a reductive sort of starting point anyway. But, um, sorry, I was going to say something about it, cause I've been using iZotope, you know, for a few months. Um, and I was actually sort of feeling more and more guilty about how I've been using it when I was reading your papers, and more and more duped, in a certain way. Because with iZotope, um, you know, with Neutron and Nectar and Ozone and these, you know, apps that they have, when you, um, get them to do the automated mastering or automated mixing, your music plays for a few seconds and this ear appears on the screen and it says "listening, listening, listening, listening", and then it sort of transforms your audio and processes it. Um, and, yeah, I'm not sure if you were going to say something a little bit about sort of genre there, Jonathan, but, um, I was sort of trying to move the conversation onto the role of this sort of fantasy that the application is listening. Um, you know, the way that that sort of listening is symbolized and represented and signified, and, you know, what that's actually doing, um, and how much of it is just sort of playing on the desire of the user to kind of believe that something magical is happening there.

Jonathan Sterne (00:26:55) - Or even just finally, someone's listening to my audio, someone's listening to my music. Finally, I have one audience member it's the computer, but so I'll just say one quick thing on genre, which was, we have to remember that genre itself is non-organic category. It is also an industrial product of an attempt to segment markets and produce predictable music sales and like popular music scholars. Who've looked at the music industry, like keep negative, documented this David Brackett. My colleague has this wonderful, fairly new book on the history of genre. That sort of goes all the way back, um, in the recording industry. Um, but John was old enough that people really live it, right? Like technically the guitar sound in certain metal and certain punk genres are exactly the same. And yet you'll never like people who are devotees of those genres won't necessarily accept the music from the other genre.

Jonathan Sterne (00:27:51) - You could say the same thing about certain synthesizer tones that are used in funk and ambient music, for instance, right? It's like the exact same setting on the exact same piece of gear. But like these two things don't translate. There's no word as of now, there's no way in machine learning to like figure out the difference between those two things. Um, the Isaiah took them. We briefly mentioned them in one of the articles, uh, because I got to see a presentation of areas at Nam, which is like the big corporate, um, trade show where companies basically present to other companies. And, uh, you know, there's, there's just a lot of bullshit about machine learning in general. I mean, this is something that's, well-documented in the broader literature, Kate Crawford, um, Mary Gray and Siddarth Suri, Tarleton Gillespie, all these people have documented a great lengthy amount of human labor behind machine learning and the sort of discursive expansiveness around automation, claiming things that are claiming things are automated when they're actually not.

Jonathan Sterne (00:28:58) - And, um, or where they're claiming no humans are involved when in fact there are, and, uh, Lander and isotope are no different. Um, my favorite anecdote from that isotope presentation was we saw some, we saw a progress bar, um, where, I mean, it was basically the same thing as like the spinning pizza on an apple system or something like, it's just, it's doing its thing. It's busy. And he's like, well, now it's doing some machine learning, right? And it's the same thing with the isotope plugins. The thing that bewilders me is why it only analyzes 30 seconds of the track. I, if you're doing anything RD or whatever, you might have huge dynamic and temporal shifts, like why can't it analyze all five minutes or all 20 minutes are all one hours better, uh, solution. Well, one, one reason is it's not actually doing machine learning. It's probably doing, um,

Jonathan Sterne (00:29:57) - It's probably doing music information retrieval and like recognition based on categories that may have been formed at the company from machine learning. But like, it's not doing machine learning on your computer and then it's just picking presets and modifying them, maybe modifying them a little bit. The one thing I like about isotope is what it or not, well isotope, yeah, in general, all their automated plugins, what results is not the mastered or finished mix or the finished track, but suggestions. And that's that to me is not a trivial difference because suggestions, it's not that different from a, it's not, it's, it's more honest and it's more like presets in the sense of presets are also a suggestion. You get a synthesizer that can make like, again, air quotes, any sound imaginable, you pick a preset and you're like, well, I want a little more echo or a little less echo or some other part of it.

Jonathan Sterne (00:30:55) - And you start messing around with it. If you want, if, if the interface is like understandable and a isotope lays out all of its processes for you, Lander conceals it and mystifies it. Um, and so it's really about, um, it's really about preserving the social relationship of absence around mastering and co-opting it. Um, whereas isotope, at least there's an opportunity for learning and seeing how the decisions are made. Um, which I also think, like we say a lot of paranoid things about, and I mean that in the, not in the, the pathological sense, but, um, in the hermeneutics sense, say a lot of paranoid things about Machine Listening and it's certainly deserves it, uh, especially when we talk about natural language processing and like voice printing and stuff like that. But it's also true that musicians have been using automation basically since forever in one way or another, if you include frats and reads and things like that and do creative things with the, I have no problem with people interacting with automated machinery to make creative decisions.

Jonathan Sterne (00:32:04) - Like I, I have no problem with that whatsoever. Um, and so I like with isotope, I think the AI stuff is like mostly marketing. Like yeah, maybe they use machine learning at corporate. They won't take my calls so I can kind of get in closer to that. Um, uh, but, uh, um, you know, what they're doing on my computer is not machine learning, so that's just like bullshitty marketing, but it's no different than like the Woodside panels you see on software plugins. Like there's no reason for that wood panel, except it's like, it makes you feel a certain way about the sound. Like it's just a picture it's like a badly drawn picture of wood on your screen. So, uh, I, so that's the, that's the difference there? I

James Parker (00:32:50) - Think, I think I'll ask a follow on that. I don't know. I sort of don't know where it's going, but let's see what happens. So when you're speaking, Jonathan, you know, you said bullshit, honesty, concealing. Um, I don't have a problem with, you know, these kinds of things and there's a kind of like political, you know, I don't know, it's not anger exactly, but it's like, you know, you sound piss a bit pissed off. Right. But in the article, that's not, that's not the tone of the article and, um, the article sort of the articles, sorry, read, like, you know, you're partly pushing back on that kind of tone in, in relation to, well, AI discourse in general, and you're saying, well, look, actually, you know, you know, I mean, there are some, there's some interesting anecdotes of like artists who are doing interesting things and you said, oh, you know, I have no problem with automation.

James Parker (00:33:52) - And I just wondered if you could maybe elaborate a little bit, I mean, I'm not asking you to say why you write one way and speak another way, um, or maybe you don't at all, but just to elaborate on how you, you both think about, you know, the politics of somewhere of an organization like LANDRA, uh, um, you know, what sounds like does sound like you, you have a bone to pick with them now. Um, and I just wonder if you could, you could say, yeah, say a little bit more about what that bone is and how it relates to the moments when you're a bit more sympathetic to the project.

Jonathan Sterne (00:34:28) - Uh, well, I've been talking a lot. Elena?

Elena Razlogova (00:34:31) - I wanted to make a distinction between being angry at, um, machine learning and being angry at LANDR. I'm angry at LANDR because I think they misrepresent some things about what they're doing, and they profit from it. Um, but I also like the idea of automation, and I agree completely with Jonathan that automation is something that musicians, or photographers... well, we should talk about ocularcentrism later, but.

Elena Razlogova (00:35:00) - Um, basically, artists have used automation for decades and decades, centuries ago. So there's nothing especially wrong with that. And Holly Herndon and her AI baby project is, like, the example that I usually give when I teach. Like, it's an artist collaborating with a machine. And, um, it's an interesting experiment, and there should be more of that experimentation. But LANDR is doing something different, because they are, um, wrapping their technology in mystique, um, and then trying to, uh, profit from it. They don't really create art, and you can even argue that they, um, level the field that makes certain art possible, because they create a standard for sound that is a little bit flatter than existed before, um, like, in terms of loudness and other technical aspects. Um, and, I think, another point that we're making in the articles is that, um, the idea of what a finished sound is, is also different now, because of automatic mastering. Because some tracks that would have been considered finished before, um, now, uh, in the minds of musicians, will seem like they're imperfect unless they're run through some kind of, uh, automation process.

Jonathan Sterne (00:36:28) - Yeah, yeah. It's adding an expectation, I think. And I could be wrong, because it's been a few months since I'd done this, but when you upload something to SoundCloud, it says, like, do you want us to master this for you now? And, like, what the hell does that even mean? Right? It probably means just pumping up the levels. And it's true, when you do upload to any platform, it does things. It expects certain levels for the audio, and it does things to them, and different platforms ask for different things and set different standards. Okay. So the bullshit part of it is the venture capital, Silicon Valley lying about what you do in order to make it seem cooler, more advanced, more automatic, more brilliant than anything else. AI is a buzzword. It's the 'i' prefix of the 2020s.

Jonathan Sterne (00:37:19) - Um, and I guarantee, in the tech industry... like, I remember in the mid-2010s, when people who were doing machine learning, like, actual computer scientists, had their work rebranded as artificial intelligence by corporations like Microsoft, like Google, um, like Facebook, right? Because AI sounds more powerful than machine learning. Machine learning is also a metaphor. Um, so, I mean, it's, um, it's marketing BS; in some cases it's outright lying about the level of automation. Uh, and, uh, in the case of LANDR, it's also working on the classic Silicon Valley disruption model of: let's take an industry, in this case mastering engineers, let's try to automate their jobs, and then let's try to profit from it. And it's important to note, and this is again a point we make in one of the articles, um, as the venture capital has come in, as it does, like, they've gone through another round of funding and stuff.

Jonathan Sterne (00:38:27) - They're behaving more and more like a corporation, less and less like a part of the Montreal music scene, or any other music scene. And they're also expanding out, right? So all this platformization they're doing, they're just searching for profitability. I don't know if they're making a profit or not; they're making income. Um, but, uh, but yeah, this is all about the venture capitalists getting return on investment, or the founders getting return on, like, IP that they can sell off. Um, now, I mean, all companies are in business to make a profit, but it's different when your business, where your profit, is actually dependent on the quality of your service, like a mastering engineer, right? And you could even argue, to a certain extent, iZotope is bound by that, because they've sort of chosen a market niche to exist in. Um, so that's the problem part of it.

Jonathan Sterne (00:39:21) - I mean, along with, uh, Holly Herndon, we could go back to someone like George Lewis, who was doing it long before, you know, a computer could really, um, uh, do something like machine learning at a rate that would be useful for a musician, right? Anyway, uh, you know, he's been improvising with machines for decades, and arguing about it with other sort of philosophers of improvisation. Um, so, I mean, I always ask: machine learning for whom, and for what purpose? Right? So, the idea, and this is, uh, you know, Shoshana Zuboff uses the term inevitabilism, right, that there's this inevitable march towards.

Jonathan Sterne (00:40:01) - This automation is just not true. It's a corporate ambition. It's just another version, it's a business version, of manifest destiny. So, um, yeah, I guess I'll just leave it there. So I have real political concerns about dishonesty, corporate concentration, extraction of profit at the expense of people's quality of life. And I have a real belief. I mean, the whole reason I'm interested in this at all is that I, in some very embarrassingly romantic way for an academic, believe in music, and believe in, like, people talking with one another. I mean, I guess at a certain level I'm a humanist, right? Like, I like humans. Um, and, like, you know, even that as a proposition is debatable at this point in history. Uh, but I want to believe in that anyway. So there's that. And yeah, part of it is just the difference between speaking and writing. Um, I really think now is a time in history for academics to speak plainly, if what they're dealing with is political and can be easily comprehended. And I think words like bullshit are really important for us to utter when we're talking about corporations that lie and conceal what they do.

Sean Dockray (00:41:23) - I felt like that quote, um, near the beginning of the article, where the person from LANDR is talking about the short runway, uh, that they have after getting a round of venture capital, you know, to produce some data points... you can almost feel the pressure that that puts on to people who are, you know, living in the Montreal music scene. And you can feel the squeeze that, uh, that kind of, um, venture capital, and the pressure and everything, that that generates. Um, one thing, um, when you were talking about how LANDR just produces suggestions: it occurred to me that actually, when you choose a suggestion, you may actually be training, um, the model for the future. Uh, that, you know, the choice that you make, potentially, you know, you get roped into a sort of, um, labor. Um, and Jonathan, you mentioned, uh, the way that, you know, automation... um, I mean, uh, mastering engineers are sort of being put out of work by, um, automation at some level.

Sean Dockray (00:42:31) - And I just actually wanted to shift to something that came up in, um, Elena's, uh, article, and also again in this one. Uh, but it's just about the way that, um, you know, aside from the dishonesty of a lot of these companies, um, the point about primitive accumulation, and the way that they, um, uh, capitalize on, kind of, uh, open source. That they take open source, um, software or communities, and they kind of buy them up, or make them, um, like, non-functional, in order to, um, you know, take them as property. Sorry, I'm saying it in a very inelegant way, but, um, this point that you make is, I think, just super important. And it's the thing that kind of makes me almost most angry, even more so than the dishonesty, is, um, that we actually build these things, um, you know, in our own kind of communities that are not for profit, uh, and then the capacity to be doing these things gets taken away, uh, and then sold off for profit. Um, and I was just hoping that you could talk to us a little bit more about, um... what I'm thinking of in particular, um, is the Echo Nest and WFMU and the Free Music Archive. Um, but if you would just talk a little bit about the way that these social practices are already built into Machine Listening, even before Machine Listening comes up as a, as a thing.

Elena Razlogova (00:44:14) - Yeah, I think the period, maybe, um, the last decade of the 20th century and the first decade of the 21st century, is a little bit misunderstood, because it's considered as this birth of, uh, corporate AI, um, in music. Um, especially, like, Shazam started, um, at, at that point, and then by the end of the decade you have, um, iPhones and apps and everything. But in fact, that was the time when the interaction between the open access and open source movement and, um, the beginnings of, um, automation in music and, um, in other art forms... they were really very closely intertwined. And some of the companies became commercial later, and the Echo Nest is a perfect example, because they became part of Spotify, um, or their engine drives Spotify. And, um, the founders worked for, uh, for Spotify for a while, though they left now, I think.

Elena Razlogova (00:45:11) - Um, to start other projects. And they benefited from collaborations with entities that are entirely anti-corporate and alternative. WFMU is a station that is only interested in the music that's not popular. Um, there was an article about it, um, in The New York Times that was headlined, um, 'no hits, all the time'. So they're not interested in popular music. Um, but yet they were interested in automation, and they used it during the, um, Republican National Convention to protest the war in Iraq, for political purposes. Um, they ran automatic, or automated, radio streams, um, during these periods when, um, people were protesting on the streets. Um, so, uh, that history is completely erased from contemporary histories of, of Spotify. And another aspect that you mentioned, Sean, is, um, primitive accumulation. Companies such as Spotify and Shazam...

Elena Razlogova (00:46:13) - They used datasets that were obtained semi-illegally, um, in the early 21st century, to, uh, work out their algorithms. And almost nobody knows about this now. And I think that's why books like Spotify Teardown, which is a very interesting volume that goes back and, like, uncovers that early history of Spotify, are so important. Because, um, corporate histories will never tell you that, that they basically took, um, whole libraries of music recordings, either online, in places like Pirate Bay, or... like, Shazam used a company in Great Britain to, uh, digitize the records of a small, um, record company, and then used that data to work out their, um, music recognition algorithm. Um, and that was not legal at the time; the law was kind of murky, and they played in the interstices of it. Um, and, um, and now they're in the business of protecting their intellectual property, but the way it was created, uh, was through the idea that, um, information should be free. So they used that idea, and they used the work of open access and open source advocates, um, to make themselves into the huge corporate entities of today. Um, I think it's a really important story.

Jonathan Sterne (00:47:48) - Yeah. I'll, um, I'll add to that. I have one clarification, which is LANDR doesn't really make suggestions; iZotope makes suggestions. I mean, it's not that important, except we're talking about this stuff, so let's parse it. Um, okay. So everything Elena just said is also happening with speech and language. So when everything went online with COVID, a lot of disability access advocates said, we need transcription on Zoom meetings. Uh, which I'm, like, totally in agreement with. And actually, I found in teaching this year, uh, it isn't just around, say, people who are hard of hearing, or people with ADHD or something like that, where, like, the transcriptions are really useful for the 'what just happened?', but also non-native speakers, or maybe just tired people. Okay. So you have a need for transcription. So there's one of two things you can do. Like, historically, what you would do is hire a person to transcribe, right?

Jonathan Sterne (00:48:52) - It's the same thing as, like, um, what do you call it, simultaneous translation, right? These are skills. These are things people can do. Did institutions do that? No, because there's this, uh, service, um, you know, and there are many companies now, that, like, more or less instantly turns speech into text. So in this very Zoom meeting, we could turn on closed captioning, and have it done by a person, or have it done automatically. Um, the company that provides the automatic captioning for, um, Zoom is Otter.ai. Otter, uh, stores their data on Amazon, on Amazon Web Services servers. Um, and if you look at the user agreement for Otter, they say you own the content of what you do. Which is, of course, you know, that sort of awful word again from the Silicon Valley industry: all this stuff we care about, like what we're saying to one another, the music we listen to, uh, the journalism we read, that's 'content', that's not important, what's important is all this other stuff. What we don't own is the product of machine learning performed on the data that goes through their servers. And that includes things like their attempts

Jonathan Sterne (00:50:11) - To voiceprint us, to identify us, and to be able to connect our speech and the sound of our voices with other things. Now, some of this is just, like, outright fantasy. It's like phrenology. Like, I could, from the tone of someone's voice, determine whether they're lying or not. That's as likely as, um, a machine learning system being able to make accurate genre distinctions. It's very unlikely. Um, but then, in the last year, this, like, call for access has actually produced a massive theft of data from people, that they don't even really know is happening. Um, and I've written about this in a piece I'll have coming out with Mehak Sawhney, who I know Joel knows, uh, hopefully by December the piece will be out, in California. Um, uh, and so we looked at that, and we also looked at, uh, low-resource languages: languages where there's not, um, a large corpus of recorded audio to train an AI system.

Jonathan Sterne (00:51:13) - Um, and in both cases, like, sort of arguments for access and inclusiveness also have been co-opted by these industries to sort of collect more data. And, you know, this is, yeah, as Sean put it, like, classic primitive accumulation: take something that's commonly owned and turn it into private property. Um, so, I mean, I think it's pretty amazing. It's like, you know, it's like your credit history. Like, you don't really have the right of access to your own voiceprint. Like, I mean, we have more rights to our fingerprints than we have to our voiceprints right now. Um, even though the voiceprint is a little bit more complicated in terms of, uh, being able to, uh, identify people and being useful, and things like that, at least for now.

Elena Razlogova (00:51:58) - Yeah. Um, I think... and that reminds me of work by, I'm going to butcher the name, uh, Xiaochang Li, um, from Stanford, on the history of voiceprinting, which basically shows a very similar pattern to what's happening with machine learning now: it wasn't working, and yet it was used in, um, in law, and it was advertised as something that worked, because there were people in power for whom it was necessary, um, to pretend, or to use the technology, even though it was flawed. Um, and I think something similar is happening now. Also, I forgot to say something in response to the earlier question about methodology. Because what happened with, um, the collaboration with WFMU and the Echo Nest, um, that I was working on, is that, and it's probably true of how Jonathan described his, uh, attempt to research iZotope, when you call, um, and they don't return your messages.

Elena Razlogova (00:52:58) - People who, who made it in the corporate world don't return my messages. And even LANDR: I think the only reason I was able to speak to someone there was that Jonathan was my collaborator. He has a little bit of a higher status in the Montreal community, and then they did give us an interview; before, it was completely impossible for me to get it. That was another kind of benefit of collaborating with Jonathan, that you can work around it. Um, uh, and, um, with the Echo Nest and, um, their early attempts at, um, music recognition: they worked with an academic from Columbia who published a lot of articles, who is available for an interview. And there were a bunch of people who worked on, um, early, um, hacks, uh, with, uh, the Echo Nest who are also available. Um, so you don't need to have the head of the company if you're interested in, um, researching its history. You just need to research around it, and you'll get a pretty clear picture that way as well.

James Parker (00:54:02) - Do you mean Dan Ellis?

Elena Razlogova (00:54:06) - Yeah,

James Parker (00:54:07) - it's, it's, I'm going to try and join some dots here. I don't know if I'm going to be able to do it acrobatic. We had enough, but, you know, so you, you, you mentioned Dan Dan Ellis sort of by implication, and we've spoken with Xiaochang Li as well. And her work is fantastic. And, um, Liz Pelly about Spotify, about Spotify And

James Parker (00:54:31) - Mara Mara's work. When we spoke with Mara mills, you know, she also made that, that point about access, but you're making Jonathan. So, you know, so there's a growing community of scholars, activists, journalists, uh, researchers, and so on, clearly working, you know, in this vicinity. But you know, you, you pointed out the analysis of special and cause he's sort of, he's sort of eat inside the infrastructure rather than sort of speculating on it from the outside somehow. But you know, you point out, well, we have less rights in relation to our voiceprint than, uh,

James Parker (00:55:11) - Then our fingerprint and, you know, it seems, so what I'm trying to say is that there's a growing community, but the fact that we know the names that you mentioned, the also the names that we've found suggests maybe that it's still quite small and, you know, um, and that there's quite a lot of work to be done to kind of attain the kind of status and profile for this work that would enable us to maybe have some rights in relation to our voice sprints in the same way that we do fingerprints and so on. So in other words, I'm trying to say that, how do, how, how should we think about the, kind of the current status of, um, work on, I don't, I mean, I don't know if we should call it Machine Listening yet. That's maybe another conversation that we should have, but like, is it, is it the case that we're, you know, this community is small and quote unquote behind, um, and if that's the case, um, you know, do you see promising signs that it's growing or do you see particular points of orientation that we should, you know, we in so far as we are, we begin to, you know, be mobilizing around or problems that we should be addressing?

James Parker (00:56:37) - Um, yeah, I might leave it there. Cause that was, that was already a bit of a, um, a muddle.

Jonathan Sterne (00:56:49) - Sure. Um, so there's probably a point to be made here about academics' ocularcentrism, or, God, it's not ocularcentrism, it's like ocular-preferism, I don't know what the right word is, but there are more people working on the visual side of it than the audio side, from, like, a critical or cultural standpoint. And that's pretty much always the case. Um, you know, I think the community will get bigger, more people will be interested. I'm teaching my first graduate seminar on critical approaches to AI in sound next year. Uh, we'll be using some of your curriculum. Uh, where did you put your curriculum?

James Parker (00:57:29) - Um, online, you know. Sorry to jump in, I know you do.

Jonathan Sterne (00:57:33) - I always put my, I would, I always put my syllabi up. Yeah.

James Parker - Shout out to this, because, you know, Sean's background is in open source and public education and things. And, you know, as an academic in Australia, um, uh, in a law faculty, I first came across... I was basically inducted into sound studies via your website and the curriculum that you shared on that. And that's how I sort of self-taught, basically, with that as a resource. And I just want to say thank you for that, but also just, uh, you know, I guess, um, draw out the connections, I suppose, between, you know, the political importance of making something, you know, producing something like a curriculum, putting it online, sharing it, um, especially on the internet, where it crosses borders and so on and so on. So yeah, sorry, that's a little... it sort of belongs in brackets, but yeah. Continue.

Jonathan Sterne (00:58:31) - Well, thanks James.

Jonathan Sterne (00:58:34) - Um, but yeah, yeah, yeah. I absolutely agree. Like, you know, I don't understand why people don't post syllabi in general. Um, although I also think part of that, and academics aren't that good at this, including me, though I've started doing it, is, like, crediting other people when you're borrowing ideas from their syllabus, just to say, like, I didn't come up with this sui generis. Just as we're very good on our citation politics in our writing. But I especially think, you know, um, it's especially important for white dudes, um, but not just, you know, white men, to do it, just to show that, like, our ideas come from other places. Okay. So, I think the community is growing. I think the number is not so important either. Like, if you say, well, what can we do about this?

Jonathan Sterne (00:59:25) - Well, I mean, stuff can be legislated. There is an active discourse in Canada right now about banning public facial recognition, for instance. Will that actually happen? Well, I don't know. Will the Liberals stay in power? Will they stay interested in the subject? Like, that's always a question, right? And, I mean, you can see the same thing in the United States with machine learning. The Obama White House was interested in it. I'm sure the Biden White House will be interested in it. Um, and so there's these moments where there is a possibility for intervening in policy, and there it's, like, you know, that standard, straightforward, like, um, white papers, and then, like, getting on the phone and calling legislators. And, like, it's not sexy, and it's not stuff that academics are always rewarded for. I guess it depends on what your

Jonathan Sterne (01:00:15) - Position looks like, but there are ways to, to, um, get people, get people engaged and change things. And then of course there's consciousness raising and just educating people about it, which isn't enough on its own. Like some consciousness to me, consciousness raising sort of happens in the middle of activism, right? Like you need enough people who are already energized around the subject to have a community, to bring people into. Um, and then you have to make more people aware of how this stuff works. And I feel like we're in a moment, I mean the pendulum, the public discourse is always quite simplistic and that's a risk of using terms like bullshit is that it sounds very dismissive. Like there isn't a bunch of research and thinking behind it. Uh, but I certainly think that there's opportunities for organizing and especially around the enclosure stuff, because it's about the conversion of like non-private goods into private goods.

Jonathan Sterne (01:01:10) - It is precisely the kind of market behavior that can be, and is, regulated all the time. Uh, Tarleton Gillespie and, um, uh, Luke Stark... actually, it was Luke Stark at Western who said, uh, that, uh, right now the petroleum industry is better self-regulated than the machine learning industry. And I just want that to sink in for a minute, right? The industry that is, like, really, really effectively contributing to climate change is more effectively regulated and self-regulated than data expropriation at the moment. Um, and, you know, it's not that effectively regulated or self-regulated, as we can see. So, uh, there's a long way to go, and there's a lot of precedent. Um, and we're not going to get... hell, I mean, you know, the flip side of it is, the tech industry is producing all these billionaires who have tremendous political influence. So it's going to have to be a lot of organizing, but it can be done. Go ahead.

Elena Razlogova (01:02:15) - Um, I just want to add that maybe local organizing, or, like, city- or statewide organizing, is also the way to go. Cause I know in California there were really successful movements for banning, um, facial recognition, for example. And here we can take cues from the ocularcentrists, because the work has been done there; they've gotten a little farther toward stopping these forms of surveillance. But, um, doing it at the, uh, scale of a city or a province or a state seems like it's feasible, and then maybe expanding. Um, so yeah, white papers and local organizing, I think, is the way to go.

James Parker (01:03:01) - I suppose then the question is, what are the white papers about, though? You know, um, so, you know, what is the field, insofar as you understand it, um, at the moment? Either work that you've been doing, or work that, you know, you think needs to be done but hasn't been done. Maybe this is also an opportune moment to talk about the phrase Machine Listening. Cause I noticed that it crops up in one of the papers, but it's not sort of an organizing theme. And sometimes I wonder if it's a distraction, because, um, you know, well, maybe we should just talk about expropriation or primitive accumulation and be agnostic as to the medium or the sensory mode.

Joel Stern (01:03:45) - We're obviously, I mean, we're obviously quite invested in it, because, you know, we call the project Machine Listening, a curriculum. And so we kind of want the term to be suggestive, and to sort of do some work, and to, um, you know, in a way be a provocation: to sort of think about, you know, how the qualifier 'machine' transforms, you know, the verb 'listening', or sort of what it actually means to put those two terms together in this context. I think that was kind of something that, in a way, we've, um, really wanted to speak with you both about, you know. Because you, you have used the term in certain ways. You know, what you think it could mean, whether it's sort of, um, what it suggests for you, and kind of what the sort of critical horizons are, or, you know, how it might operate as a sort of fantasy or desire, in the way that artificial intelligence does. Um, you know, because we think there's sort of quite a lot of potential, you know, obviously to mobilize people around, not just this term, but that the term can sort of help us do that.

Elena Razlogova (01:05:06) - Okay. Well, I can start. I would be interested in what you guys think this thing is. Um, but,

Elena Razlogova (01:05:13) - To my mind... I don't think I used it in my articles until, until the LANDR article, but it seems like there are some practices that definitely fall under the category, such as song recognition, speech recognition. Um, music recommendation not so much, because some of it is just text analysis. Um, so, basically, using AI or machine learning or neural networks to recognize something in the sound, the way, um, image recognition works: to me, that sounds like Machine Listening. But also, maybe, um, that phrase could be used to explore metaphors, because there's a new feature in Apple's, um, OS that just came out, um, Mac or iOS, that allows you to find, um, objects that are tagged. And it's described as listening, that your iPhone is listening for these objects, even though there's no audio involved. But the metaphor is still, not a visual metaphor, but an audio metaphor. So it can be expanded to that level also, I think.

Jonathan Sterne (01:06:29) - Yeah, I'd, I'd sort of go both ways on the term, right? On one level, machines don't listen, in the same way that they don't see, right? That's... you know, I'm, like, with the quote unquote hard media archaeologists, right, that the computer works on a different logic than a human brain. I think that's an important observation to maintain at a historical moment where words like intelligence and learning are being used as if it's the same thing as people. At the same time, listening's a social relationship. It's not a... it's always been a social relationship. It's never not been. Um, a sound always has to have multiple causes; a sound is only a sound because there's a percipient. Uh, so listening's always a relationship. So of course, you know, yeah, maybe, like, in a very metaphorical or discursive way, but also in a real way, since listening is always a social relationship. Like, you know, my phone can listen for my keys, if I can get one of those $15 tags for them.

Jonathan Sterne (01:07:29) - Uh, and so, you know, for me... so the other part of this is that computer scientists... like, I've been working on material about, uh, and with engineers for years, but actually I don't view the primary audience, the people I want to engage with around this work going forward, as engineers. But one of the things I want to do before I move away from that space is, um, a paper on listening and Machine Listening, because people in natural language processing and in music information retrieval don't agree on this term. And so I, and several research assistants this summer, are actually going to look at, like, what do they think listening is? And do they include it or exclude it from sort of their discourses about what they're doing? And then contrast that with the very rich philosophical and intellectual history of listening in sound studies, where there's a lot of reflection and, again, no agreement, no consensus on what listening is. I'm very firm in my positions after doing this for a long time, but I also recognize that, like, not everybody agrees with me.

James Parker (01:08:47) - Is one of the people you have in mind, by any chance, um, Richard Lyon? Um, because he's got this book on machine hearing, I don't know if you know it. He's, um, he has this section, like, just a little block, like, you know, a pop-out from the text, where he goes, oh, I don't like 'machine listening'; machine listening is a phrase used by, kind of, the music guys. And, uh, you know, I think, in the way that he describes machine hearing, this preference for machine hearing, it's sort of... he's trying to draw on the kind of biologistic, um, kind of associations that come with hearing, whereas, you know, listening is kind of parked in the culture camp, basically. And so, as far as I understand, you know, the phrase Machine Listening starts to get used regularly out of the computer music sort of scene, um, and particularly Robert Rowe's book. Um, and so, yeah, it's just interesting the way that that particular version of that argument played out. But, I mean, I'll be fascinated to see that research. I mean, on some level, I sort of feel like you have to have some historical kind of fidelity, and, you know, watch the phrase sort of gather meaning as it grows. And it does seem that it's kind of begun to... you know, a lot of people that we've spoken to

James Parker (01:10:13) - Have been using it, and they don't necessarily know why. Um, partly because it's just there in the ether. But I also think it's the kind of term that people can invent, you know, almost autonomously: there's machine learning, so there's Machine Listening. We spoke to an Australian academic, um, a computer scientist, who seemed to have simultaneously invented the phrase Machine Listening, as far as she was concerned, at about the same time that Robert Rowe was beginning to use it, um, at MIT. So it just seems sort of there for the taking on some level. But I also feel like, um, it can be loaded up with meaning, and that may be something worth doing — to begin to load it with certain kinds of meaning early, um, you know, to say that Machine Listening includes as an object the technical machine learning stuff, but also the bullshittery and the kind of Wizard of Oz parlor tricks, and that the field of Machine Listening isn't just a technical field, but is also a field that comprises and includes all of that political, discursive

James Parker (01:11:37) - You know, you could also add economic and extractive and whatever, um, work as well. And so part of my inclination is to sort of pick it up from the hard sciences, or computer music, and then begin to kind of load it with associations. But I don't know whether that's, um,

Joel Stern (01:11:54) - But also the social relations that you mentioned, Jonathan, that come from living in a world in which our kind of smart speakers appear to be listening to us, even if what they're technically doing is something else entirely. You know, with voice user interfaces, you sort of speak to devices and they speak back, and that produces a social relation that is similar to the one of being listened to.

James Parker (01:12:24) - We've been toying with the term "listening effects" or something, to try and get at that experience. Like, you know, to some extent it's not very satisfying to say, ah, but it doesn't really listen, um, because the experience is of being listened to, you know. And so that's also something worth noticing, and that is doing certain kinds of cultural and political work as well. So yeah — because I've encountered a couple of times where that "hey presto, it doesn't really listen" falls flat. Uh, and I think, yeah, I think you sort of need to do both. Sure. And where do you come down on the term?

Sean Dockray (01:13:15) - My jury is still out. I'm listening. Yeah.

Elena Razlogova (01:13:21) - This reminds me again, like, to cross the boundaries — I understand that it's important to keep listening to the audio field, but it really reminds me of the, um, Rogerian therapist app, which wasn't about listening at all. But the way James described it, it seems like the importance of connection, and, uh, the effect of a human that you're speaking to, was captured in that very first one, um, ELIZA — the app that, like, everybody teaches in introductory courses on computers and history. Um, so I would still say that there are things outside of the sound wave field that could definitely be incorporated into the idea of Machine Listening — emotional connection.

Jonathan Sterne (01:14:14) - And I think it's our responsibility to do that, right? I mean, otherwise you're basically ceding the ground to people who say — and not all computer scientists believe this by any measure, but there's a certain aggrandizement of computer science that says it is now the master discipline, and if you understand data, you can understand everything about the world, right? Which reduces sound to, like, audio that can be processed, right? When in fact listening is meaningless without all these relational dimensions. So I think that's really important, right? And I think that ELIZA does listen. Kate Crawford has this whole thing about, um, Twitter basically being ambient sound — like, you know, you would have it on your desktop in the same way that people would have a radio on or a TV on in the background as they're doing something else. And I think that's actually right.

Jonathan Sterne (01:15:12) - Right — it's metaphorical, but it's also, like, structural. Um, and I think that that part of a theory of listening is really important for understanding Machine Listening, because otherwise we're saying that the technical explains everything else, and I don't think that that's true. Um,

James Parker (01:15:34) - How do you avoid, um, kind of essentialising — like, the audio-visual litany, basically? Like, if Twitter is kind of sonic because it's, you know, ambient, and it's a stream — um, you know, cause whenever I think of that kind of thing, I always think, ah, the audio-visual litany: you're sort of forcing a very specific and historically situated account of sound onto this in order to make the metaphor work. And so I don't know where that gets us.

Jonathan Sterne (01:16:16) - First of all, thank you. You are the first person who's ever accused me of actually using the audio-visual litany...

James Parker (01:16:29) - God, I didn't mean you — you know, I know...

Joel Stern (01:16:33) - I know, I'm going to just go with it.

Joel Stern (01:16:35) - But the point of coming up with a theory is the potential it could be used against you, obviously.

Jonathan Sterne (01:16:40) - Obviously. Great. No, I don't actually think about it that way. I think of it as historically situated, and about the way practices move from one field to another. So, like, let's take Crawford's point, which is: it's not "this is how sound works," it's "this is how radio and television worked in the domestic sphere." And this is something that's even a fairly well understood thing in television practice, but not so much in television theory, right? The role of television sound — you don't always have to be facing it and looking at it, um, in order for it to do its work. It's also the structure of television soundtracks, right? Think about sporting events, where announcers, play-by-play people, like, raise their voices as something might happen, and then they keep yelling after it happens; or, like, they boost up the crowd noise in order to draw a person back to the screen. Same thing with, um, narrative music in television.

Jonathan Sterne (01:17:37) - Um, and so these practices are built into — like, again, it's not the technology, it's all of the things around it: the way people interact with it in the domestic space, versus how people then design it in anticipation, and in cultivation, of those interactions. And, uh, social media platforms like Twitter — I mean, the metaphor I like to use for Facebook is a house party. Not an audio metaphor, but just going with, uh, Kate Crawford's point: it's that it works socially in ways like radio and television sound did, which is a different thing. It's not that the sound fills up the space or whatever; it's that this, like, coming to and moving away from — you know, a distracted relationship as you're doing something else — already exists as a set of embodied, embedded, systematized media practices with histories and contexts and all that.

Jonathan Sterne (01:18:41) - And obviously, like any metaphor, it's not universally applicable. So yeah, if you say, dude, it's like sound because sound envelops you, or sound is all around, or sound is ambient — then no. But if you say it's like the way certain kinds of television sound worked — and then, obviously, you know, she was writing about, like, Australia and the United States and Western Europe, and it may not work that way everywhere. Um, and I would say the same thing for social media, right? Like, it's very different interacting with Twitter on a desktop and Twitter on a mobile phone, for instance.

Elena Razlogova (01:19:17) - Yeah. Or in a continuous wifi zone versus intermittent internet. Yeah.

Jonathan Sterne (01:19:26) - Yeah, yeah. Or, like, back in the days when people actually interacted with it by text message — which I don't think anybody does anymore; maybe they do, I don't know. Uh, but it certainly was very important. Like, Andre Brock and his stuff on Black Twitter covers this pretty well. Um, right. And that difference — now, retrospectively, like, of course Twitter's not like text messages — wasn't necessarily apparent from the interface for, like, a very significant user base early in its history.

Jonathan Sterne (01:20:03) - So now we're like way off Machine Listening. Yeah.

Elena Razlogova (01:22:49) - Well, um, actually, should we talk about the possible visual article? I know it's not about machine listening, but I wanted to mention it because I'm still thinking about it. So this is something on automated seeing — we're thinking of doing a companion piece on automated seeing, machine seeing, um, that would consider certain automated methods of editing photographs in their historical context. And one example is — I'm forgetting the name now — uh, the rule of thirds; you guys know what it is. Um, so basically it's a rule that tells you where to place the most important object in the photograph, and there's software that emerged a while ago now, like five or seven years ago, that got some pushback from photographers, that allows you to do that automatically. And this is a great example, where there were artists, um, especially photographers and filmmakers, who broke this rule of thirds and became famous by breaking it, such as

Elena Razlogova (01:23:57) - Stanley Kubrick — his entire oeuvre as a photographer and a filmmaker is based on placing the object in the center. Um, and, uh, there are companies, such as Polaroid, who also encouraged users, in their promotional materials, to put the object in the center. Whereas this software kind of makes the choice for you of what is important in the picture — and in a particular, um, Enlightenment tradition, which then gets enshrined in the software and makes only certain forms of art possible. So we're thinking of doing something with that, and having a companion piece that considers that particular, uh, method of machine editing.

Sean Dockray (01:24:44) - Most of that stuff is built into the cameras now, right? But —

Elena Razlogova (01:24:47) - Uh, yeah, yeah. Instagram is based on that. Yep. Yeah.

Joel Stern (01:24:54) - But it's also, like, Zoom's background noise reduction tools, which work pretty well when you're having a conversation. But, you know — cause I was teaching a sonic art class on Zoom last semester, and we were working with a lot of different forms of experimental audio and sound, and playing music to each other. And, uh, you know, it was so amazing — it was like hearing the Zoom algorithm at work, as it was filtering these different pieces of music to try to, you know, separate foreground from background and decide what is or isn't important in the scene. I mean, it really became an amazing kind of study of mediation in a very new form. Um, but yeah, it was also sort of obvious to kind of hear, yeah, what was considered to be important in the audio signal.

Elena Razlogova (01:25:52) - Exactly. Did you turn on original sound? There's a setting for, like, if you want to make music on it.

Joel Stern (01:25:59) - Yeah, no, we did all sorts of stuff like that. And, um, actually, some of those experiments culminated in a work for the last Machine Listening live session that we did, um, coding ... and control, where one of the artists we worked with, um, Mattin, made a score for Zoom. The score was for, um, a whole group of participants, you know, in the same session, to improvise with the audio settings — um, to kind of bring their microphones and speakers up to just the threshold of feedback and then improvise with the audio settings, changing the, um, you know, background and foreground and noise reduction. And it was actually a really quite beautiful, subtle and interesting soundscape that was produced, just out of these kinds of Zoom audio artifacts — the Zoom engine trying to kind of deal with, um, these different sound events. Yeah.

Elena Razlogova (01:27:01) - And they're also incredibly time-specific, because of course Zoom keeps changing — the next version of Zoom will not produce the same effect. Unlike John Cage's radio experiments: the radio is pretty much the same through all of those experiments, but Zoom wouldn't be the same way.

Joel Stern (01:27:23) - That's right. Yep.

Jonathan Sterne (01:27:25) - There's probably a whole article in that one sentence, Elena. Let me think about that. Yeah. I mean, the backgrounds that James and Elena have on — it's amazing how, you know, simultaneously powerful these computer judgments are, and how utterly incompetent, right? People's arms and shoulders disappear and reappear into the backgrounds, and, like, the outlines of people's heads and stuff. So on one level it works really well, and on the other it just gets basic stuff wrong, right? And, uh — yeah, there you go. Great. Um, I'm gonna do the same thing as soon as I stop talking. But yeah, so it's the same kind of thing with both the visual and the auditory: simple questions like foreground and background, which human beings are very good at figuring out, the machinery just really struggles with — or it doesn't, depending on your tolerance for those kinds of edge errors.

James Parker (01:28:31) - Um, is there anything else anybody wants to talk about before we wrap up? I'm going to try to rush us to wrap up — I just don't want us to run beyond the time.

Elena Razlogova (01:28:42) - One thing that came to my mind — and again, it's not exactly about Machine Listening, but it is about Zoom and similar software, where it's the medium of transnational talking, basically — is that there was a debate about considering Zoom a public utility, after several pro-Palestinian talks were canceled by the Zoom corporation. The protest panel organized in relation to the first cancellation was then canceled by the corporation as well, and it kept going on and on and on. And people were saying, well, this is like a highway — it's an infrastructure that needs to be publicly owned. And I think that is also a way to think about the sound that we're producing through this highway, because it would give us additional rights if we consider Zoom, and other software like that, as a public utility that can be regulated in a different way — and hopefully protect our privacy in the process.

Joel Stern (01:29:48) - Exactly. That's awesome. Yeah, thanks for making that point. And I mean, I was horrified about the cancellation of that panel. And, um, I actually saw Leila Khaled give a speech in London about 20 years ago, you know, before anything was ever live streamed. Um, and it's amazing that a company has the power to, in a way, curtail a conversation like that. Um, but yeah, we've thought a lot about the, sort of, political implications of doing the Machine Listening project largely via Zoom, as we have been.

James Parker - But I think they're changing, aren't they? Because initially, when we were thinking of it as a curriculum, as universities were being Zoomified, it felt kind of ready-made for us to be working in and against, to some extent. But I feel like that is beginning to change; that move doesn't feel so interesting a year later.

