

Jonathan Sterne and Elena Razlogova
Auto-transcribed by reduct.video with edits by Zoë de Luca and James Parker

Elena Razlogova (00:00:00) - I’ll start as the junior scholar, here. My work has been originally in the history of American radio, and my first book, The Listener’s Voice: Early Radio and the American Public (2011), is on radio and its audiences between the twenties and the end of the forties. After that, I got interested in freeform radio and its forays into open access and open source experimentation in the nineties and two thousands. And I discovered that, even though, in the discourse, freeform radio is considered the opposite of machine listening and the machine making of music, it is actually quite intertwined with the history of algorithmic music. And so that’s what my work is focusing on. As far as Machine Listening goes, the LANDR project is an outcome of my work on the Montreal music scene and the algorithmic experiments there (because LANDR is based in Montreal). My work on Shazam was also part of that. It’s through these two cases, where one is on recognizing songs algorithmically, and LANDR is mastering music algorithmically, that I came to the project.

James Parker (00:01:21) - Fantastic. Thanks so much, Elena. And Jonathan?

Jonathan Sterne (00:01:32) - My name is Jonathan Sterne. I guess I’m the not-junior scholar. I’ve been writing about sound in one way or another for over two decades. My first book, The Audible Past (2003), was on the origins of sound reproduction technologies in the late 19th and early 20th centuries, and specifically thinking about them not as, like, transformative agents, but as cultural artefacts, basically, that reflected existing practices, politics and ideas about sound, because that’s what I felt was needed at the time. I mean, the book was also conceived in the nineties, when we had sort of one of the earlier waves of computer-based technophilia that we’re sort of now currently trying to recover from. My second book was on the MP3 format, and it’s a bit of a conceit, because it’s a hundred-year history of what was then a 19-year-old format, with the idea that to understand MP3s there’s a whole sort of (at that time) not very well attended to history of the relationship between theories of how humans hear and how sound works, and the design of sound technologies and media aesthetics.

Jonathan Sterne (00:03:03) - So that’s basically MP3: The Meaning of a Format in, like, two sentences. I finished that project in Silicon Valley, and I sort of looked up and I’m surrounded by all these businesses. The thing that got me into the MP3 was, like, every sound technology has this sort of model of human beings in it. And then I looked around and, like, there were all these other people trying to design new sound technologies, and so over the last decade I’ve been sort of studying audio signal processing and the politics of audio signal processing, and the machine learning stuff is really an outgrowth of that, as well as the work on sound technology. And I’m just going to plug, I have a new book coming out (that’s not about this); it’s called Diminished Faculties: A Political Phenomenology of Impairment, which is really much more about impairment, disability and sound. And that’s another major thread of that first book, The Audible Past, because ideas about d/Deafness and not hearing, and sort of even damaged ears, were sort of built into sound technologies as well. So that’s the other thread of my research that maybe we won’t get into as much today.

James Parker (00:04:28) - And another project on time shifting?

Jonathan Sterne (00:04:29) - Oh yeah. The time stretching book with Mara (Mills) that’s ongoing. It’s a little bit on pause just because Mara is finishing her first book. That’s got a nice COVID hook because all these professors suddenly freaked out when they learned that their undergrads were like speeding up their recorded lectures but the book is about basically speeding up and slowing down sound and also a bit of pitch shifting. The way I explain it to non-experts is: imagine that film studies has existed for several decades, but that nobody’s ever written a history of slow motion or high-speed playback and that’s sort of what we’re doing. So we start with blind listeners in the 1930s. And by the end of it, we’ve written about avant-garde composers, broadcasters, ‘Sir Nose D Voidoffunk’, Auto-Tune, ornithologists and all sorts of other people that are interested in speeding up and slowing down sound and advertisers and students and broadcasters. So there you go.

James Parker (00:05:37) - Oh, that all sounds amazing. And similarly, Elena, so you’ve sort of joined forces on this project on LANDR, and written a couple of papers now. I mean, maybe the thing to do is to briefly introduce LANDR, and, yeah, just sort of go from there, see where the conversation takes us back into various tributaries in your previous work and so on. I mean, did somebody want to have a go at saying what LANDR is, or how you arrived at this project, or how you ended up working together on it?

Elena Razlogova (00:06:11) - LANDR is a company based in Montreal that actually started as Mix Genius and tried to mix music algorithmically, but that didn’t work. And then, in the end, it began to advertise itself as a company that uses machine learning to master music. We both came to this project separately, I think, and then we met. I met some of the people from LANDR through my work in the music scene, and Jonathan was interested in the question of whether you can algorithmically master music, so we got together and we interviewed one of the founders of the company, who is now no longer a part of the company, and who gave us a flamboyant, almost PR-guy explanation of the public version of what LANDR is. And then we went from there. Maybe Jonathan can continue.

Jonathan Sterne (00:07:23) - Yeah. I was actually on a panel with Mix Genius a few years before it became LANDR. I don’t know if it was, like, in the basement of that church in Mile End or whether it was Sala Rossa. It was for one of the many Montreal festivals, when they were, like, sure that they were going to automate stuff, automate the mixing of music. And then I sort of stumbled into it a few years later as a natural outgrowth of all this interest in signal processing, because it was one of the first cases where I’d seen a company claim that they had successfully used AI to automate signal processing. And just for those listeners who don’t know what signal processing is: basically, it’s the cooking part of audio. So you have audio that’s either recorded or synthesized. (We’ll leave synthesizers out of it for now because it’s easier to understand.) So, like, my voice goes into this microphone, it gets turned into electricity, gets turned into data, eventually comes out of your phones or your speakers, or however you’re listening to me. Signal processing is everything that happens to that signal between the time it enters the mic and comes out the speakers.

Jonathan Sterne (00:08:41) - For instance, it might be compressed, which means that the distance between the louder and quieter variations in my voice changes; the frequency balance might change. Maybe somebody would add artificial echo (not on Zoom, but if this were, you know, some kind of glossy production). So signal processing is like color balance in video or something: it’s a pretty subtle process, but without it, nothing looks or sounds right, and it sort of defines the sound and look of media. LANDR was the first company that I came across that actually claimed to use it successfully. Elena and I have known each other since forever, and we’re both in the same sort of Montreal milieu, at some of the same dinner parties, and hang out sometimes. And so, I don’t know how we did it, but we sort of both discovered that the other was interested in it. And then it became, like, a field trip, and then it became a couple of papers.
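As a rough illustration of the compression Sterne mentions, here is a deliberately simplified Python sketch. It is hypothetical, not LANDR’s or any product’s actual algorithm: a real compressor tracks a running signal envelope with attack and release times, where this toy works sample by sample.

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Very simplified dynamic range compressor.

    Samples above the threshold have their overshoot reduced by the
    given ratio; everything below passes through unchanged.
    """
    out = []
    for s in samples:
        if s == 0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))  # sample level in dBFS
        if level_db > threshold_db:
            # Reduce the overshoot: with ratio 4, 4 dB over the
            # threshold comes out as 1 dB over.
            excess = level_db - threshold_db
            new_db = threshold_db + excess / ratio
            out.append(math.copysign(10 ** (new_db / 20), s))
        else:
            out.append(float(s))
    return out

# Loud peaks are pulled down; quiet material is untouched.
processed = compress([0.05, 0.9])
```

The effect is exactly the “distance between louder and quieter” shrinking that Sterne describes: the 0.9 peak is attenuated while the 0.05 sample passes through unchanged.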

Elena Razlogova (00:09:41) – It was supposed to be just one paper originally. Yeah.

James Parker (00:09:45) - Is it going to be more papers?

Jonathan Sterne (00:09:47) - No, I think, two is enough on LANDR. But it’s not necessarily our last collaboration.

James Parker (00:09:58) - Is it that if you’re in Montreal, you know, you sort of heard about LANDR? Because I hadn’t heard about LANDR, myself. I mean, is it that they are – apart from this kind of supposedly revolutionary technology – also grounded in the Montreal music scene. Were they something that people knew about?

Elena Razlogova (00:10:27) - A lot of musicians knew them. They employed a lot of people. A lot of musicians in Montreal are either unemployed or live on grants (it’s becoming less possible, but it has been possible for a few years). They employed musicians and radio DJs I knew. So I found out about them from people I knew through CKUT, which is the McGill radio station. And then they also sponsored events at the Pop Montreal music festival, which made them even more involved in the community. They used these events, as we explain in the article, to promote the company: on one hand, to make it seem more human and more underground, and alluring to small-time musicians, people who may be wanting to create their own music in their garage and then use LANDR for mastering, for example.

Sean Dockray (00:11:33) - May I ask a simple, not simple question: what is mastering and what makes it more of an attractive thing to do than how they started out?

Elena Razlogova (00:11:53) - Oh yes. Technical explanation is much more your thing.

Jonathan Sterne (00:12:01) - Yeah, it is much more my thing. So, mixing is about setting the relative levels of sounds in a recording. When most people make recordings today, not everything’s recorded at once. Unless you’re talking about putting a mic in front of an orchestra or something, it’s: a bunch of musicians go to a studio and then, either at the same time or in sequence, they get recorded onto different tracks, so that you can, after the fact, balance, say, the levels of the drums and the guitar and the synthesizer and the vocals. That’s mixing. There are a lot of reasons why you can’t automate mixing. And in general, I think this has something to do with sound and big data, because you also see it in things like music information retrieval: just looking at the sound doesn’t give you the information you need to make culturally meaningful decisions.

Jonathan Sterne (00:13:05) - So for instance, if the bass player in the band is the most famous member of the band, like, let’s say they’re the lead singer, they’re the best looking, they’re the one that’s promoted by the label, their track might be louder in the mix than if the bass player is your usual not-famous person in the band and not the most important face. So there are all these extra-musical decisions that get made in mixing. Plus, in an actual mix, if you’re dealing with a group of multiple musicians, as opposed to, like, a single artist working at home, you’re also negotiating competing visions for how the music’s supposed to go. Now, obviously, if you’re, like, an individual electronic music producer or sound artist, that’s a different proposition. But then you’re almost certainly recording things serially as opposed to in a single shot, unless it’s, like, ‘live improvised’ or something. (I’m doing that in scare quotes.)

Jonathan Sterne (00:14:11) – So, mastering is after the mix is done and you have a stereo recording. (Although, if you’re doing mastering for video or gaming or something, it might be 5.1 or 7.1, depending on the platform and how it’s going to be distributed.) Mastering is the final polishing. To use a publishing metaphor, mastering is like typesetting and page proofs. It’s the moment where the music will sound like what it sounds like when it comes out of the speakers, and a mastering engineer’s job is basically to make the mix translate across as many contexts as possible. So it sounds as good coming out of the crappy speaker of a mobile phone as it does coming out of a big sound system, and you might master specifically for, like, ‘I know this is going to primarily circulate on social media’, or ‘I know this is an ad and it’s going to mostly come out of TV speakers’. So it really depends on the mastering engineer, who applies sort of contextual knowledge. But there’s another reason that mastering could be automated more than mixing, which is that most musicians don’t know what it is. So it’s absent for some musicians, who either record themselves or someone records them, and they don’t have interaction with the mastering engineer. Often mastering sessions are unattended: you send the material to a mastering engineer, and they send it back to you. So, socially speaking, it’s easier to automate, because you’re removing a person the musician usually doesn’t interact with anyway. So it’s not a totally technical explanation, but that’s sort of what mastering is or was. And it has a nice name. It sounds finished. Like, I’ve always said the highest degree should be master, not doctor. Although master also has some pretty awful historical connotations… and, you know, it does multiple duties. It’s, like, part of the history of chattel slavery, but also S and M; it’s got multiple connotations. So mastering has this allure to it too. There’s this wizard-like dimension to how mastering engineers are understood in the industry.
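One concrete, if drastically simplified, piece of what “making the mix translate” involves is bringing the track to a target loudness. Here is a toy Python sketch using plain RMS level as a stand-in for the perceptual loudness measures (LUFS) real mastering tools use; the -14 dB target is just an illustrative figure, not a claim about any platform or product.

```python
import math

def rms_db(samples):
    """Root-mean-square level of a signal, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def normalize_to(samples, target_db=-14.0):
    """Scale the whole mix so its RMS level hits target_db."""
    gain_db = target_db - rms_db(samples)
    gain = 10 ** (gain_db / 20)
    return [s * gain for s in samples]

mix = [0.1, -0.1, 0.2, -0.2]  # a quiet stand-in for a finished mix
mastered = normalize_to(mix, target_db=-14.0)
```

Real mastering chains add limiting, EQ and per-context targets on top of this, which is precisely the contextual judgment Sterne attributes to the engineer.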

Joel Stern (00:16:50) - I love the section in one of the essays where you sort of say that one of the main functions of LANDR is to sort of give the musician confidence that the work has been mastered, you know, that it’s just sort of an external signifier and that in some ways that’s enough, regardless of any change to the audio.

Jonathan Sterne (00:17:17) - It’s true for mastering engineers too, though. There are lots of interviews with mastering engineers where they’ll say ‘I didn’t touch it. I didn’t touch the audio. It was perfect. It was good. I didn’t do anything. I just called it.’

Joel Stern (00:17:29) – Sometimes mastering is simply saying it’s already finished.

Jonathan Sterne (00:17:34) - Yeah.

Elena Razlogova (00:17:37) - Yeah. And then another point that we make in the article is that LANDR doesn’t do that. LANDR never leaves the recording alone. And I think the difference between the live engineer and the algorithm is psychological too, because Montreal musicians describe a local mastering engineer as a psychoanalyst: somebody who helps you through your insecurities. And, of course, it’s really difficult for an app to embody that.

Joel Stern (00:18:14) – Although that’s what most apps are, aren’t they? They’re psychiatrists and psychoanalysts, in social media platforms anyway. They’re ways to kind of calibrate your psychology against everyone else.

James Parker: Well look, so: they try and fail as Mix Genius because there’s something about music, music as opposed to data, that sort of makes that impossible, or maybe people are just better at hearing mixing or something; they can tell that it’s a bad mix, so they won’t accept the kind of automated version. So then they rebrand, and some of the promotional claims on the marketing and the website are kind of extraordinary, revolutionary: ‘LANDR is the place to create, master and sell your music’, ‘the creative platform for musicians’.

James Parker (00:19:15) - They even say that your music will sound just like it was produced by Timbaland, which is a weird thing to say, because he’s a producer, not a masterer, anyway. So, okay, there’s a marketing claim going on, and what’s the move? What do you do with that? Is it to pull the curtain back and reveal what’s really going on? What do you think? How do you confront a company like LANDR? What is it that interested you, as researchers, or politically speaking, in the context of the Montreal music scene or global Silicon Valley, or however it is you want to think about it?

Jonathan Sterne (00:20:07) - Well, we really looked at it from a bunch of different angles. So the first thing, in classic sort of Derridean fashion, the structuring absence of the whole thing, is corporate secrecy. Like, they’re not going to tell us how it actually works. Although we did a little fishing around the edges, and we have a pretty good guess of how it worked at the time of the article. But we did, I don’t know what you’d call it, like, a multimodal study of it. We did everything else we could with it. So I worked with it. We interviewed people. You went around town, Elena. You want to say a bit about how it connected with your work on the music scene and their promotion?

Elena Razlogova (00:20:55) – So we figured out what LANDR people did in the community and how they emerged. A lot of things that they did were pretty cool. Like, they organized workshops on how to survive as a beginner musician, they invited artists to speak on panels, they worked festivals. MUTEK is another electronic music festival that they were connected to. Unfortunately, the other thing we found out is that they tended to invite people to help them improve the software, and they paid some of these people. And here we interviewed one person who wishes to remain anonymous, so we can’t really get into that specifically. What happened is that the software turned out to be, say, bad at electronic music initially.

Elena Razlogova (00:21:56) - And then a few months later it became better, because it got a lot of criticism. Then they paid somebody to do a bunch of test tracks, and then it became better. And then they could advertise the software as improving through AI, through machine learning. But they would tend to employ these people and then kind of leave them, throw them out. So people would leave their regular jobs to get a high-paid position at LANDR and then leave. And this kind of happened gradually. They began as this grassroots small company, kind of an upstart and revolutionary, and gradually grew into this monster. A lot of musicians in the community were bitter about interactions with the company, and sceptical about its claims. And in the meantime, LANDR became more of a multinational entity, with offices in LA and Berlin and connections to Hollywood. One of their biggest businesses now is doing sound for ads and TV shows.

Elena Razlogova (00:23:10) - So, yeah, so this is kind of my angle. Another important thing that I think we both are interested in is this conceit that data for machine learning will eventually be complete. This was part of the point that Justin Evans made to us during the interview: that machine learning is based on learning from big data, and once we have the big data in the cloud, we will be all set. And that, I think, is a conceit that cannot be achieved. And that was one of the points that we tried to make in this article. Some things will always be left out, for political reasons or for technical reasons.

James Parker (00:23:52) - In the sense that the data set that they’re using to train the machine learning won’t include some genre of music from wherever it might be, or some subculture or some scene or something; so it’s premised on the hoovering up of all music into the cloud for that claim to work?

Elena Razlogova (00:24:17) - Exactly. So they would say, ‘well, eventually all the genres will be available to us.’ But in fact, not only will they not be available, but also the company only focuses on genres that it can monetize. Of course. So hip hop and electronic music are the ones that they focused on initially. And, some genres will never be interesting enough because they’re too niche or too experimental. The program cannot work with experimental music at all.

Joel Stern (00:24:48) - Yeah. Because experimental music, to be properly experimental, also has to subvert its own conventions. It’s not operating within the constrictions of genre as a notion. I mean, hearing people talk about it as the kind of logical end point of understanding, like, if we get all the genres, we get all the music, it’s quite a reductive starting point anyway. Sorry, I was going to say something about this, because I’ve been using iZotope, you know, for a few months, and I was actually feeling more and more guilty about how I’ve been using it when I was reading your piece, and more and more duped, in a certain way. Because with iZotope, with Neutron and Nectar and other apps, when you get them to do the automated mastering or automated mixing, your music plays for a few seconds, and this ear appears on the screen and it says, listening, listening, listening, listening, and then it sort of transforms your audio and processes it. I’m not sure if you were going to say something about genre, Jonathan, but I was sort of trying to move the conversation onto this fantasy that the application is listening, and the way that that listening is symbolized and represented and signified, and, you know, what that’s actually doing, and how much of it is just playing on the desire of the user to believe that something magical is happening there.

Jonathan Sterne (00:26:55) - Or even just: finally, someone’s listening to my audio, someone’s listening to my music. Finally, I have one audience member; it’s the computer. I’ll just say one quick thing on genre: we have to remember that genre itself is not an organic category. It is also an industrial product, an attempt to segment markets and produce predictable music sales. Popular music scholars who’ve looked at the music industry, like Keith Negus, documented this. David Brackett has this wonderful, fairly new book on the history of genre that goes all the way back in the recording industry. But genre is old enough that people really live it, right? Like, technically, the guitar sounds in certain metal and certain punk genres are exactly the same. And yet people who are devotees of those genres won’t necessarily accept the music from the other genre.

Jonathan Sterne (00:27:51) - You could say the same thing about certain synthesizer tones that are used in funk and ambient music, for instance, right? It’s the exact same setting on the exact same piece of gear, but these two things don’t translate. As of now, there’s no way in machine learning to figure out the difference between those two things. We briefly mentioned iZotope in one of the articles because I got to see a presentation of theirs at NAMM, which is the big corporate trade show where companies basically present to other companies. And there’s just a lot of bullshit about machine learning in general. I mean, this is something that’s well documented in the broader literature: Kate Crawford, Mary Gray and Siddharth Suri, Tarleton Gillespie, all these people have documented at great length the amount of human labor behind machine learning, and the sort of discursive expansiveness around automation, claiming things are automated when they’re actually not.

Jonathan Sterne (00:28:58) - And where they’re claiming no humans are involved when in fact there are. LANDR and iZotope are no different. My favorite anecdote from that iZotope presentation was a progress bar; it was basically the same thing as the spinning pizza on an Apple system or something. It’s just doing its thing. It’s busy. And he’s like, well, now it’s doing some machine learning, right? And it’s the same thing with the iZotope plugins. The thing that bewilders me is why it only analyzes 30 seconds of the track. If you’re doing anything arty or whatever, you might have huge dynamic and temporal shifts. Like, why can’t it analyze all five minutes, or all 20 minutes, or all one hour? Wouldn’t that be a better solution? Well, one reason is it’s not actually doing machine learning.

Jonathan Sterne (00:29:57) - It’s probably doing music information retrieval, like, recognition based on categories that may have been formed at the company from machine learning. But it’s not doing machine learning on your computer; it’s just picking presets and maybe modifying them a little bit. The one thing I like about iZotope, in general, about all their automated plugins, is that what results is not the mastered or finished mix or the finished track, but suggestions. And that, to me, is not a trivial difference, because it’s more honest, and it’s more like presets, in the sense that presets are also a suggestion. You get a synthesizer that can make, again, air quotes, ‘any sound imaginable’, you pick a preset, and you’re like, well, I want a little more echo or a little less echo or some other part of it.

Jonathan Sterne (00:30:55) - And you start messing around with it. The interface is understandable, and iZotope lays out all of its processes for you; LANDR conceals it and mystifies it. So it’s really about preserving the social relationship of absence around mastering and co-opting it. Whereas with iZotope, at least there’s an opportunity for learning and seeing how the decisions are made. Which I also think, like, we say a lot of paranoid things (I mean that not in the pathological sense, but in the hermeneutic sense) about Machine Listening, and it certainly deserves it, especially when we talk about natural language processing and voice printing and stuff like that. But it’s also true that musicians have been using automation basically since forever in one way or another, if you include frets and reeds and things like that, and doing creative things with it. I have no problem with people interacting with automated machinery to make creative decisions.

Jonathan Sterne (00:32:04) - Like, I have no problem with that whatsoever. So with iZotope, I think the AI stuff is mostly marketing. Like, yeah, maybe they use machine learning at corporate. They won’t take my calls, so I can’t get closer to that. But what they’re doing on my computer is not machine learning, so that’s just bullshitty marketing. But it’s no different than the wood-grain panels you see on software plugins. There’s no reason for that wood panel, except that it makes you feel a certain way about the sound. It’s just a picture, a badly drawn picture of wood on your screen. So that’s the difference there.
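Sterne’s guess a few turns back, that the plugin extracts features from a short excerpt and picks the nearest preset rather than “doing machine learning” on your computer, can be sketched as a toy in Python. Everything here (the preset names, their “fingerprints”, and the crude features) is invented for illustration and is not how any real plugin works:

```python
import math

# Hypothetical preset "fingerprints": (RMS level in dB, brightness proxy in 0..1).
PRESETS = {
    "warm":   (-18.0, 0.2),
    "loud":   (-10.0, 0.5),
    "bright": (-14.0, 0.8),
}

def features(samples):
    """Crude stand-in for real MIR features: overall level, plus a
    zero-crossing rate used as a rough brightness proxy."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    level_db = 20 * math.log10(rms) if rms > 0 else -120.0
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return level_db, crossings / max(len(samples) - 1, 1)

def pick_preset(samples):
    """Return the preset whose fingerprint is nearest to the track's
    features (dB differences are scaled down to a comparable range)."""
    lvl, bright = features(samples)
    return min(
        PRESETS,
        key=lambda p: (PRESETS[p][0] - lvl) ** 2 * 0.01
                      + (PRESETS[p][1] - bright) ** 2,
    )
```

A rapidly alternating (spectrally bright) signal lands on the “bright” preset; a quiet, smooth one lands on “warm”. Any machine learning in such a pipeline would have happened earlier, at the company, in choosing the categories and fingerprints; nothing is learned on the user’s machine.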

James Parker (00:32:50) - I’ll ask a follow-up on that. I sort of don’t know where it’s going, but let’s see what happens. So when you’re speaking, Jonathan, you know, you said bullshit, honesty, concealing, ‘I don’t have a problem with’, these kinds of things, and there’s a kind of political... I don’t know, it’s not anger exactly, but you sound a bit pissed off, right? But that’s not the tone of the article. The articles read like you’re partly pushing back on that kind of tone in relation to AI discourse in general, and you’re saying, well, look, actually, there are some interesting anecdotes of artists who are doing interesting things, and you said, oh, you know, I have no problem with automation.

James Parker (00:33:52) - And I just wondered if you could maybe elaborate a little bit. I mean, I’m not asking you to say why you write one way and speak another way, or maybe you don’t at all, but just to elaborate on how you both think about the politics of an organization like LANDR. It does sound like you have a bone to pick with them now, and I just wonder if you could say a little bit more about what that bone is and how it relates to the moments when you’re a bit more sympathetic to the project?

Jonathan Sterne (00:34:28) - Well, I’ve been talking a lot, Elena?

Elena Razlogova (00:34:31) - I wanted to make a distinction between being angry at machine learning and being angry at LANDR. I’m angry at LANDR because I think they misrepresent some things about what they’re doing, and they profit from it. But I also like the idea of automation, and I agree completely with Jonathan that automation is something that musicians or photographers (we should talk about ocularcentrism later), but basically artists, have used for decades and decades, even centuries. So there’s nothing especially wrong with that. Holly Herndon and her ‘AI baby’ project is the example that I usually give when I teach: it’s an artist collaborating with a machine. It’s an interesting experiment, and there should be more of that experimentation. But LANDR is doing something different, because they are mystifying their technology and then trying to profit from the mystique. They don’t really create art. You can even argue that they try to level the field that makes certain art possible, because they create a standard for sound that is a little bit flatter than existed before, in terms of loudness and other technical aspects. I think another point that we make in the articles is that the idea of what finished sound is, is also different now because of automatic mastering: some tracks that would have been considered finished before will now seem imperfect, in the minds of musicians, unless they’re run through some kind of automation process.

Jonathan Sterne (00:36:28) - Yeah. It’s adding an expectation, I think. And I could be wrong, because it’s been a few months since I’ve done this, but when you upload something to SoundCloud, it says, like, do you want us to master this for you now? And what the hell does that even mean? Right? It probably means just pumping up the levels. And it’s true: when you upload to any platform, it expects certain levels for the audio and it does things to them, and different platforms ask for different things and set different standards. Okay. So the bullshit part of it is the venture capital, Silicon Valley lying about what you do in order to make it seem cooler, more advanced, more automatic, more brilliant than anything else. AI is a buzzword. It’s the ‘i’ prefix of the 2020s.

Jonathan Sterne (00:37:19) - I guarantee it in the tech industry. Like, I remember in the mid-2010s, when people who were doing machine learning, like, actual computer scientists, had their work rebranded as artificial intelligence by corporations like Microsoft, like Google, like Facebook, right? Because AI sounds more powerful than machine learning. Machine learning is also a metaphor, so it’s marketing BS; in some cases it’s outright lying about the level of automation. And in the case of LANDR, it’s also working on the classic Silicon Valley disruption model: let’s take an industry, in this case mastering engineers, let’s try to automate their jobs, and then let’s try to profit from it. And it’s important to note, and this is again a point we make in one of the articles, as the venture capital has gone through another round of funding and stuff, they’re behaving more and more like a corporation, and less and less like a part of the Montreal music scene or any other music scene.

Jonathan Sterne (00:38:27) - And they’re also expanding out, right? So all this platformization they’re doing, they’re just searching for profitability. I don’t know if they’re making a profit or not; they’re making income. But yeah, this is all about the venture capitalists getting return on investment, and the founders getting return on, like, IP that they can sell off. Now, I mean, all companies are in business to make a profit, but it’s different when your profit is actually dependent on the quality of your service, like a mastering engineer. Right. And you could even argue, to a certain extent, iZotope is bound by that because they’ve sort of chosen a market niche. So that’s the problem part of it.

Jonathan Sterne (00:39:21) - I mean, along with Holly Herndon, we could go back to someone like George Lewis, who was doing it long before a computer could really do something like machine learning at a rate that would be useful for a musician. Right. Anyway, he’s been improvising with machines for decades and arguing about it with other sort of philosophers of improvisation. I always ask: machine learning for whom and for what purpose? Right? So the idea, this is Shoshana Zuboff’s term, ‘inevitabilism’, right? That there’s this inevitable march towards

Jonathan Sterne (00:40:01) - automation is just not true. It’s a corporate ambition. It’s just another version, a business version, of Manifest Destiny. I’ll just leave it there. So, I have real political concerns about dishonesty, corporate concentration, extraction of profit at the expense of people’s quality of life. And I have a real belief, I mean, the whole reason I’m interested in this at all is that I, in some very embarrassingly romantic way for an academic, believe in music and believe in, like, people talking with one another. I mean, I guess at a certain level I’m a humanist, right? Like, I like humans, and even that as a proposition is debatable at this point in history. But I want to believe in that anyway. So there’s that, and yeah, part of it is just the difference between speaking and talking. I really think now is a time in history for academics to speak plainly if what they’re dealing with is political and can be easily comprehended. And I think words like bullshit are really important for us to utter when we’re talking about corporations that conceal what they do.

Sean Dockray (00:41:23) - I felt like in that quote near the beginning of the article, where the person from LANDR is talking about the short runway that they have, after getting a round of venture capital, to producing some data points, you can almost feel the pressure that puts on people who are living in the Montreal music scene, the squeeze that venture capital generates. When you were talking about the fact that LANDR just produces suggestions, it occurred to me that actually, when you choose a suggestion, you may be training the model for the future. That the choice you make, potentially, you know, gets you wrapped up in the labor. And Jonathan, you mentioned the way that, through automation, mastering engineers are sort of being put out of work at some level.

Sean Dockray (00:42:31) - And I actually wanted to shift to something that came up in Elena’s article, and also again in this one. It’s about the way that, aside from the dishonesty of a lot of these companies, there’s the point about primitive accumulation and the way that they capitalize on open source: that they take open source software or communities, that they kind of buy them up or make them non-functional in order to take them as property. Sorry, I’m not saying it in a very elegant way, but this point that you make, I think, is just super important. And it’s the thing that makes me almost most angry, even more so than the dishonesty: that we actually build these things in our own kind of communities that are not for profit, and then the capacity to be doing these things gets taken away and sold off for profit. I was just hoping that you could talk to us a little bit more about that. I’m thinking in particular of Echo Nest and WFMU and the Free Music Archive, but if you would just talk a little bit about the way that these social practices are already built into Machine Listening, even before Machine Listening comes up as a thing.

Elena Razlogova (00:44:14) - Yeah, I think the period of maybe the last years of the nineties and the first decade of the 21st century is a little bit misunderstood, because it’s considered the birth of corporate AI in music, especially since Shazam started at that point. And then by the end of the decade, you have iPhones and apps and everything. But in fact, that was the time when the interaction between the open access and open source movement and the beginnings of automation in music and other art forms was really very close, and some of the companies became commercial later. Echo Nest is a perfect example, because they became part of Spotify, or their engine drives Spotify. The founders worked for Spotify for a while. They’ve left now, I think, to start other projects.

Elena Razlogova (00:45:11) - They benefited from collaborations with entities that are entirely anti-corporate and alternative. Like, WFMU is a station that is only interested in music that’s not popular. There was an article about it in The New York Times that was headlined ‘no hits, all the time’, so they’re not interested in popular music. And yet they were interested in automation, and they used it during the Republican National Convention to protest the war in Iraq, for political purposes. They ran automatic or automated radio streams during these periods when people were protesting on the streets. That history is completely erased from contemporary histories of Spotify. And another aspect that you mentioned, Sean, is primitive accumulation by companies such as Spotify and Shazam.

Elena Razlogova (00:46:13) - They used datasets that were obtained semi-illegally in the early 21st century to work out their algorithms. And almost nobody knows about this now. And I think that’s why books like Spotify Teardown, which is a very interesting volume that goes back and uncovers that early history of Spotify, are so important, because corporate histories will never tell you that they basically took music, like whole libraries of music recordings, either online in places like Pirate Bay, or, as Shazam did, through a company in Great Britain that digitized the records of a small record company, and then used that data to work out their music recognition algorithm. And that was not legal at the time; the law was kind of murky and they played in the interstices of it. And now they’re in the business of protecting their intellectual property, but the way it was created was through openness to the idea that information should be free. So they used that idea, and they used the work of open access and open source advocates, to make themselves into the huge corporate entities they are today. I think it’s a really important story.

Jonathan Sterne (00:47:48) - Yeah, I’ll add to that. I have one clarification, which is that LANDR doesn’t really make suggestions; iZotope makes suggestions. I mean, it’s not that important, except we’re talking about this stuff, so let’s parse it. So everything Elena just said is also happening with speech and language. When everything went online with COVID, a lot of disability access advocates said, we need transcription on Zoom meetings, which I’m totally in agreement with. And actually I found in teaching this year that it isn’t just around, say, people who are hard of hearing or people with ADHD or something like that, for whom the transcriptions are really useful for following what just happened, but also non-native speakers, or maybe just tired people. Okay. So you have a need for transcription. So there’s one of two things you can do: like, historically, what you would do is hire a person to transcribe, right?

Jonathan Sterne (00:48:52) - It’s the same thing as, like, what do you call it, simultaneous translation, right? These are skills. These are things people can do. Did institutions do that? No, because there’s this service, you know, and there are many companies now that more or less instantly turn speech into text. So in this very Zoom meeting, we could turn on closed captioning and have it done by a person or have it done automatically. The company that provides the automatic captioning for Zoom is Otter.ai. Otter stores their data on Amazon, the Amazon Web Services servers, and if you look at the user agreement for Otter, they say you own the content of what you do, which is of course, you know, that sort of awful word again from the Silicon Valley industry: all this stuff we care about, like what we’re saying to one another, the music we listen to, the journalism we read, that’s ‘content’. That’s not important. What’s important is all this other stuff. What we don’t own is the product of machine learning performed on the data that goes through their servers. And that includes things like their attempts

Jonathan Sterne (00:50:11) - to voiceprint us, to identify us, and to be able to connect our speech and the sound of our voices with other things. Now, some of this is just outright fantasy. It’s like phrenology. Like, I could, from the tone of someone’s voice, determine whether they’re lying or not? That’s as likely as a machine learning system being able to make accurate genre distinctions. It’s very unlikely. But then, in the last year, this call for access has actually produced a massive theft of data from people that they don’t even really know is happening, and I’ve written about this in a piece I’ll have coming out with Mehak Sawhney, who I know Joel knows. Hopefully by December the piece will be out in California. And so we looked at that, and we also looked at low-resource languages, languages where there’s not a large corpus of recorded audio to train an AI system.

Jonathan Sterne (00:51:13) - And in both cases, sort of, arguments for access and inclusiveness have also been co-opted by these industries to collect more data and, as Sean put it, like classic primitive accumulation: take something that’s commonly owned and turn it into private property. So I mean, I think it’s pretty amazing. It’s like your credit history: you don’t really have the right of access to your own voiceprint. Like, I mean, we have more rights to our fingerprints than we have to our voiceprints right now, even though the voiceprint is a little bit more complicated in terms of being able to identify people and being useful and things like that, at least for now.

Elena Razlogova (00:51:58) - Yeah, and that reminds me of some work by Xiaochang Li from Stanford on the history of voice printing, which basically shows a very similar pattern to what’s happening with machine learning now, which is that it wasn’t working, and yet it was used in law and it was advertised as something that worked, because for people in power it was necessary to pretend, or to use the technology even though it was flawed. And I think something similar is happening now also. I forgot to say something in response to the question about methodology, because of what happened with the collaboration between WFMU and Echo Nest that I was working on, and it’s probably true of how Jonathan described his attempt to research iZotope: you call and they don’t return your messages.

Elena Razlogova (00:52:58) - So people who made it in the corporate world don’t return my messages, even LANDR. I think I was able to speak to them only because Jonathan was my collaborator (he has a little bit of a higher status in the Montreal community). And then they did give us an interview. Before, it was completely impossible for me to get one; that was another kind of benefit of collaborating with Jonathan. But you can work around it. With Echo Nest and their early attempts at music recognition, they worked with an academic from Columbia who published a lot of articles and who was available for an interview. And there were a bunch of people who worked on early hacks who were also available. So you don’t need to have the head of the company if you’re interested in researching its history, you just need to research around it. And you’ll get a pretty clear picture that way as well.

James Parker (00:54:02) - Do you mean Dan Ellis?

Elena Razlogova (00:54:06) – Yeah.

James Parker (00:54:07) - I’m going to try and join some dots here. I don’t know if I’m going to be able to do it acrobatically. You mentioned Dan Ellis sort of by implication, and we’ve spoken with Xiaochang Li as well, and her work is fantastic. And Liz Pelly about Spotify, and Mara Mills’s work. When we spoke with Mara Mills, you know, she also made that point about access that you’re making, Jonathan. So there’s a growing community of scholars, activists, journalists, researchers and so on, clearly working in this vicinity. Dan Ellis is a bit different, because he’s inside the infrastructure rather than sort of speculating on it from the outside somehow. But you point out, well, we have less rights in relation to our voiceprint than our fingerprint. So what I’m trying to say is that there’s a growing community, but the fact that we know the names that you mentioned, also the names that we’ve found, suggests maybe that it’s still quite small, and that there’s quite a lot of work to be done to attain the kind of status and profile for this work that would enable us to maybe have some rights in relation to our voiceprints in the same way that we do fingerprints and so on. So in other words, I’m trying to say: how should we think about the current status of work on, I don’t know if we should call it Machine Listening yet, that’s maybe another conversation that we should have, but is it the case that this community is small and, quote unquote, behind? And if that’s the case, you know, do you see promising signs that it’s growing, or do you see particular points of orientation that we, insofar as we are a ‘we’, should begin to be mobilizing around, or problems that we should be addressing?

Jonathan Sterne (00:56:49) - Sure. So there’s probably a point to be made here about academics’ ocularcentrism, or, God, it’s not ocularcentrism, it’s like ocular-preferism, I don’t know what the right word is, but there are more people working on the visual side of it than the audio side from a critical or cultural standpoint, and that’s pretty much always been the case. You know, I think the community will get better, bigger, more people will be interested. I’m teaching my first graduate seminar on critical approaches to AI in sound next year, and we’ll be using some of your curriculum, since you put your curriculum online.

James Parker (00:57:29) - You know, I’m sorry to just shout out to this, because, you know, Sean’s background is in open source and public education and things. And, you know, as an academic in Australia, in a law faculty, I basically got inducted into sound studies via your website and the curriculum that you shared on it. That’s how I self-taught, basically, with that as a resource. And I want to say thank you for that, but also just, you know, I guess draw out the connections between the political importance of making something, producing something like a curriculum, putting it online, sharing it, especially on the internet where it crosses borders and so on. So yeah, sorry, that sort of belongs in brackets, but continue.

Jonathan Sterne (00:58:31) - Well, thanks James.

Jonathan Sterne (00:58:34) – But yeah, I absolutely agree. Like, I don’t understand why people don’t post syllabi in general. Although I also think part of that, and academics aren’t that good at this, including me, though I’ve started doing it, is crediting other people when you’re borrowing ideas from their syllabus, just to say, like, I didn’t come up with this sui generis. Just as we’re very good on our citation politics in writing, I especially think, you know, it’s especially important for white dudes, but not just, you know, white men, to do it, just to show that our ideas come from other places. Okay. So I think the community is growing. I think the number is not so important either. Like, if you say, well, what can we do about this?

Jonathan Sterne (00:59:25) - Well, I mean, stuff can be legislated. There is an active discourse in Canada right now about banning public facial recognition, for instance. Will that actually happen? Well, I don’t know. Will the Liberals stay in power? Will they stay interested in the subject? Like, that’s always a question, right? And I mean, you can see the same thing in the United States with machine learning. The Obama White House was interested in it; I’m sure the Biden White House will be interested in it. And so there are these moments where there is a possibility for intervening in policy, and there it’s, like, you know, that standard straightforward stuff: white papers, and getting on the phone and calling legislators. And it’s not sexy, and it’s not stuff that academics are always rewarded for. I guess it depends on what your position looks like, but there are ways to get people engaged and change things. And then of course there’s consciousness raising and just educating people about it, which isn’t enough on its own. Like, some consciousness raising sort of happens in the middle of activism, right? Like, you need enough people who are already energized around the subject to have a community to bring people into, and then you have to make more people aware of how this stuff works. And I feel like we’re in a moment, I mean, the public discourse is always quite simplistic, and that’s a risk of using terms like bullshit: it sounds very dismissive, like there isn’t a bunch of research and thinking behind it. But I certainly think that there are opportunities for organizing, and especially around the enclosure stuff, because it’s about the conversion of non-private goods into private goods.

Jonathan Sterne (01:01:10) - It is precisely the kind of market behavior that can be and is regulated all the time. Tarleton Gillespie and Luke Stark, actually, at the University of Western Ontario, said that right now the petroleum industry is better self-regulated than the machine learning industry. And I just want that to sink in for a minute, right? The industry that is really, really effectively contributing to climate change is more effectively regulated and self-regulated than data expropriation at the moment, and, you know, it’s not that effectively regulated or self-regulated, as we can see. So there’s a long way to go, and there’s a lot of precedent, and we’re not going to get, hell, I mean, you know, the flip side of it is the tech industry is producing all these billionaires who have tremendous political influence. So it’s going to have to be a lot of organizing, but it can be done. Go ahead.

Elena Razlogova (01:02:15) - I just want to add that maybe local organizing, or city- or state-wide organizing, is also the way to go. ’Cause I know in California there were really successful movements for banning facial recognition, for example. And here we can use ocularcentrism, because the work has been done there; they’ve gotten a little farther toward stopping these forms of surveillance. But doing it at the scale of a city or a province or a state seems like it’s feasible, and then maybe expanding. So yeah, white papers and local organizing, I think, is the way to go.

James Parker (01:03:01) - I suppose then the question is, what are the white papers about, though? You know, so: what is the field, insofar as you understand it at the moment, either work that you’ve been doing or work that you think needs to be done but hasn’t been? Maybe this is also an opportune moment to talk about the phrase Machine Listening. ’Cause I noticed that it crops up in one of the papers, but it’s not sort of an organizing theme. And sometimes I wonder if it’s a distraction, because, you know, well, maybe if we just talked about expropriation or primitive accumulation and were agnostic as to the medium or the sensory mode…

Joel Stern (01:03:45) - We’re obviously quite invested in it, because, you know, we call the project the Machine Listening curriculum. And so we kind of want the term to do some work and to, you know, in a way be a provocation: to think about how the qualifier ‘machine’ transforms the verb ‘listening’, or sort of what it actually means to put those two terms together in this context. I think that was something that, in a way, we really wanted to speak with you both about, you know, because you have used the term in certain ways. What do you think it could mean, what does it suggest for you, and what are the critical horizons? Or, how might it operate as a sort of fantasy or desire, in the way that artificial intelligence does? Because we think there’s quite a lot of potential to mobilize people around, not just this term, but the term can sort of help us do that.

Elena Razlogova (01:05:06) - Okay. Well, I can start. I would be interested in what you guys think this thing is, but…

Elena Razlogova (01:05:13) - To my mind, I don’t think I used it in my articles until the LANDR article, but it seems like there are some practices that definitely fall under the category, such as song recognition and speech recognition. Music recommendation, not so much, because some of it is just text analysis, so technically it’s not using AI or machine learning or neural networks to recognize something in the sound. Recognizing something in the sound, the way image recognition works for images: to me, that sounds like Machine Listening. But maybe the phrase could be used to explore metaphors, because there’s a new feature in Apple’s Mac OS that just came out that allows you to find objects that are tagged. And it’s described as listening: your iPhone is listening for these objects, even though there’s no audio involved. The metaphor is still not a visual metaphor but an audio metaphor. So it can be expanded to that level also, I think.

Jonathan Sterne (01:06:29) - Yeah, I’d sort of go both ways on the term, right? On one level, machines don’t listen, in the same way that they don’t see, right? There I’m with the, quote unquote, hard media archaeologists: the computer works on a different logic than a human brain, and I think that’s an important observation to maintain at a historical moment where words like intelligence and learning are being used as if they meant the same thing as they do for people. At the same time, listening’s a social relationship. It’s always been a social relationship; it’s never not been. A sound always has to have multiple causes; a sound is only a sound because there’s a percipient. So listening’s always a relationship. So of course, you know, yeah, maybe in a very metaphorical or discursive way, but also in a real way, since listening is always a social relationship, my phone can listen for my keys, if I can get one of those $15 tags for them.

Jonathan Sterne (01:07:29) - For me, the other part of this is computer scientists. I’ve been working on material about and with engineers for years, but actually I don’t view engineers as the primary audience, the people I want to engage with around this work going forward. But one of the things I want to do before I move away from that space is work on listening and Machine Listening, because people in natural language processing and music information retrieval don’t agree on this term. And so I, and several research assistants, this summer are actually going to look at: what do they think listening is? And what do they include or exclude from their discourses about what they’re doing? And then contrast that with the very rich philosophical and intellectual history of listening in sound studies, where there’s a lot of reflection and, again, no agreement, no consensus on what listening is. I’m very firm in my positions after doing this for a long time, but I also recognize that, like, not everybody agrees with me.

James Parker (01:08:47) - Is one of the people you have in mind, by any chance, Richard Lyon? Because he’s got this book, Machine Hearing, I don’t know if you know it, and he has this section, like, just a little block popped out from the text, where he goes: oh, I don’t like ‘machine listening’. Machine Listening is a word, a phrase, used by the music guys. And, you know, in the way that he describes machine hearing, this preference for machine hearing, he’s trying to draw on the kind of biologistic associations that come with hearing, whereas listening is kind of parked in the culture camp, basically. And as far as I understand, the phrase Machine Listening starts to get used regularly out of the computer music scene, particularly Robert Rowe’s book. And so, yeah, it’s just interesting the way that that particular version of the argument played out. But I mean, I’ll be fascinated to see that research. On some level, I sort of feel like you have to have some historical fidelity and, you know, watch the phrase sort of gather meaning as it grows. And it does seem that it’s kind of begun to.

James Parker (01:10:13) - A lot of people that we’ve spoken to have been using it and they don’t necessarily know why, partly because it kind of is just there in the ether. But I also think it’s a term that people can invent, you know, autonomously. Like, they just thought: there’s machine learning, so there’s Machine Listening. We spoke to an Australian academic, sort of a computer scientist, who seemed to have simultaneously invented the phrase Machine Listening, as far as she was concerned, at about the same time that Robert Rowe was beginning to use it at MIT. So it just seems sort of there for the taking on some level. But I also feel like it can be loaded up with meaning, and maybe something worth doing is to begin to load it with certain kinds of meaning early, you know: to say that Machine Listening includes as an object the technical machine learning stuff, but also the bullshittery and the kind of Wizard of Oz parlor tricks. And that the field of Machine Listening isn’t just a technical field, but a field that comprises and includes all of that political, discursive,

James Parker (01:11:37) - you could also add economic and extractive and whatever, work as well. And so part of my inclination is to sort of pick it up from those, you know, the hard sciences or computer music, and then begin to load it with associations. But I don’t know whether that’s…

Joel Stern (01:11:54) - But also the social relations that you mentioned, Jonathan, that come from living in a world in which our smart speakers appear to be listening to us, even if what they’re technically doing is something else entirely. You know, with voice user interfaces, you speak to devices and they speak back, and that produces a social relation that is similar to the one of being listened to.

James Parker (01:12:24) - We’ve been toying with the term ‘listening effects’ or something to try and get at that experience. Like, to some extent it’s not very satisfying to say, ‘ah, but it doesn’t really listen’, because the experience of being listened to remains, and so that’s also something worth noticing, and it is doing certain kinds of cultural and political work as well. Because I’ve encountered a couple of times where that ‘hey presto’ falls flat, and I think, yeah, I think you sort of need to do both. Sure. And where do you come down on the term?

Sean Dockray (01:13:15) - My jury is still out. I’m listening. Yeah.

Elena Razlogova (01:13:21) - This reminds me again, like, to cross the boundaries… I understand that it’s important to keep listening tied to the audio field, but it really reminds me of the Rogerian therapist app, which wasn’t about listening at all. But the way James described it, it seems like the importance of connection, and the effect of a human that you’re speaking to, was captured in the very first ELIZA, which everybody teaches in introductory courses on the history of computing. So I would still say that there are things outside of the sound wave field that could definitely be incorporated into the idea of Machine Listening.

Jonathan Sterne (01:14:14) - And I think it’s our responsibility to do that, right? I mean, otherwise you’re basically ceding the ground to people who say, and I mean, not all computer scientists believe this by any measure, but there’s a certain aggrandizement of computer science that says it is now the master discipline, and if you understand data, you can understand everything about the world, right? Which reduces sound to, like, audio that can be processed, when in fact listening is meaningless without all these relational dimensions. So I think that’s really important, right? And I think ELIZA does listen. Kate Crawford has this whole thing about Twitter basically being ambient sound: like, you know, you would have it on on your desktop in the same way that people would have a radio on or a TV on in the background as they’re doing something else. And I think that’s actually right.

Jonathan Sterne (01:15:12) - Then it’s metaphorical, but it’s also, like, structural. And I think that that part of a theory of listening is really important for understanding Machine Listening, because otherwise we’re saying that the technical explains everything else, and I don’t think that that’s true.

James Parker (01:15:34) - How do you avoid essentialising, the audiovisual litany, basically? Like, if Twitter is kind of sonic because it’s ambient and it’s a stream? Whenever I think of that kind of thing, I always think, ‘ah, the audiovisual litany!’ You’re sort of forcing a very specific and historically situated account of sound onto this in order to make the metaphor work. And so, I don’t know, I don’t know where that gets us…

Jonathan Sterne (01:16:16) - First: thank you. You are the first person who’s ever accused me of actually using the audiovisual litany...

James Parker (01:16:29) - God, I didn’t mean you…

Joel Stern (01:16:33) - I know, I’m going to just go with it.

Joel Stern (01:16:35) - But the point of coming up with a theory is the potential that it could be used against you, obviously.

Jonathan Sterne (01:16:40) - Obviously. Great. No, I don’t actually think about it that way. I think of it as historically situated, and about the way practices move from one field to another. So, let’s take Crawford’s point, which is not ‘this is how sound works’; it’s ‘this is how radio and television worked in the domestic sphere’. And this is something that’s even a fairly well understood thing in television practice, but not so much in television theory, right? The role of television sound: you don’t always have to be facing and looking at the TV in order for it to do its work. It’s also the structure of television soundtracks, right? Think about sporting events, where announcers, play-by-play people, raise their voices as something might happen, and then they keep yelling after it happens, or they boost up the crowd noise in order to draw a person back to the screen. Same thing with narrative music in television.

Jonathan Sterne (01:17:37) - And these practices are built into – again, it’s not the technology, it’s all of the things around it: the way people interact with it in the domestic space, and how people then design it in anticipation and cultivation of those interactions. And social media platforms like Twitter work similarly. I mean, the metaphor I like to use for Facebook is a house party – not audio, but just going with Kate Crawford’s point, it’s not that Twitter works like sound so much as that it works socially in ways that radio and television do, which is a different thing. It’s not that the sound fills up the space or whatever; it’s that this coming to and moving away from, this distracted relationship while you’re doing something else, already exists as a set of embodied, embedded, systematized media practices with histories and contexts and all that.

Jonathan Sterne (01:18:41) - And obviously, like any metaphor, it’s not universally applicable. So yeah, if you say, ‘dude, it’s like sound because sound envelops you, or sound is all around, or sound is ambient,’ then no. But if you say, ‘it’s like the way certain kinds of television work’ – obviously she was writing about Australia and the United States and Western Europe, and it may not work that way everywhere – then yes, and I would say the same thing for social media, right? Like, it’s very different interacting with Twitter on a desktop and Twitter on a mobile phone, for instance.

Elena Razlogova (01:19:17) - Yeah. Or in a continuous Wi-Fi zone versus with intermittent internet. Yeah.

Jonathan Sterne (01:19:26) - Yeah. Or like back in the days when people actually interacted with it by text message, which I don’t think anybody does anymore – maybe they do? I don’t know, but it certainly was very important; André Brock and his stuff on Black Twitter covers this pretty well. And that difference – now, retrospectively, of course Twitter’s not like text messaging – wasn’t necessarily apparent from the interface for a very significant user base early in its history.

Jonathan Sterne (01:20:03) - So now we’re like way off Machine Listening. Yeah.

Elena Razlogova (01:22:49) - Well actually, should we talk about the possible visual article? I know it’s not about machine listening, but I wanted to mention it because this is something we’re thinking of doing as a companion piece, on automated seeing – machine seeing – that would consider automated editing of photographs in historical context. One example is the rule of thirds (you guys know what it is): basically, it’s a rule about where to place the most important object in the photograph, and there’s software that emerged a while ago now, like five or seven years ago, that allows you to apply it automatically, and that got some pushback from photographers. And this is a great example, because there were artists, especially photographers and filmmakers, who broke this rule of thirds and became famous by breaking it…

Elena Razlogova (01:23:57) – Stanley Kubrick – his entire oeuvre as a photographer and filmmaker is based on placing the object in the center, and there are companies such as Polaroid who also encouraged users, in their promotional materials, to put the object in the center. Whereas this software makes the choice for you of what is important in the picture, and does so within a particular Enlightenment tradition that then gets enshrined in the software and makes only certain forms of art possible. So we’re thinking of doing something with that: a companion piece that deals with that particular method of machine editing.

Sean Dockray (01:24:44) - Most of that stuff is built into the cameras, right?

Elena Razlogova (01:24:47) - Yeah. Instagram is based on that.

Joel Stern (01:24:54) - But it’s also like Zoom’s background noise reduction tools, which work pretty well when you’re having a conversation. But I was teaching a sonic art class on Zoom last semester, and we were working with a lot of different forms of experimental audio and sound and playing music to each other. And it was so amazing – it was like hearing the Zoom algorithm at work as it filtered these different pieces of music, trying to separate foreground from background and decide what is or isn’t important in the scene. It became a really amazing study of mediation in a very new form, but yeah, it was also sort of obvious to hear what was considered to be important in the audio signal.

Elena Razlogova (01:25:52) - Exactly. Did you turn on original sound? There’s a setting for if you want to make music on it.

Joel Stern (01:25:59) - Yeah, we did all sorts of stuff like that. And actually, some of those experiments culminated in a work for the last Machine Listening live session that we did, on Improvisation and Control, where one of the artists we worked with, Matin, made a score for Zoom. The score asked a whole group of participants in the same session to improvise with the audio settings – to bring their microphones and speakers up to just the threshold of feedback and then play with the settings, changing the background and foreground and noise reduction. And the soundscape it produced was really quite beautiful and subtle and interesting, made just out of these Zoom audio artefacts – the Zoom engine trying to deal with these different sound events. Yeah.

Elena Razlogova (01:27:01) - They’re also incredibly time specific, because the next version of Zoom will not produce the same effect. Unlike John Cage’s radio experiments – the radio stayed pretty much the same through all of those experiments, but Zoom wouldn’t be the same way.

Joel Stern (01:27:23) - That’s right. Yep.

Jonathan Sterne (01:27:25) - There’s probably a whole article in that one sentence, Elena – let me think about that. I mean, with the virtual backgrounds that James and Elena have on, it’s amazing how simultaneously powerful these computer judgments are, and how utterly incompetent. People’s arms and shoulders disappear and reappear into the backgrounds, and so do the outlines of people’s heads. So on one level it works really well, and on the other it just gets basic stuff wrong. I’m going to do the same thing as soon as I stop talking. But yeah, it’s the same kind of thing with both the visual and the auditory: simple questions like foreground and background that human beings are very good at figuring out, the machinery just really struggles with – or it doesn’t, depending on your tolerance for those kinds of edge errors.

James Parker (01:28:31) - Is there anything else anybody wants to talk about before we wrap up?

Elena Razlogova (01:28:42) - One thing that came to my mind – and again, it’s not exactly about Machine Listening, but it is about Zoom and similar software, which have basically become the medium of transnational talking – is that there was a debate about considering Zoom a public utility after several pro-Palestinian talks were canceled by the Zoom corporation. And then the protest panel organized in response to the first cancellation was also canceled, and it kept going on and on and on. People were saying, well, this is like a highway, an infrastructure that needs to be publicly owned. And I think that is also a way to think about the sound we’re producing through this highway, because it would give us additional rights if we considered Zoom and other software like it a public utility that can be regulated in a different way – and hopefully protect our privacy in the process.

Joel Stern (01:29:48) - Exactly. That’s awesome – thanks for making that point. I mean, I was horrified about the cancellation of that panel. I actually saw Leila Khaled give a speech in London about 20 years ago, before anything was ever live streamed, and it’s amazing that a company now has the power to curtail a conversation like that. But yeah, we’ve thought a lot about the political implications of doing the Machine Listening project largely via Zoom, as we have been.

James Parker - But I think they’re changing, aren’t they? Because initially, when we were thinking of it as a curriculum, as universities were being Zoomified, it felt like a context ready-made for us to be working in and against, to some extent. But I feel like that is beginning to change; that move doesn’t feel so interesting a year later.