Nolen Gertz
Duration: 37 min
Views: 1768
34 likes
Published: November 3, 2020

Transcript

[00:00:10] Right. Sorry if my computer is too old. I'm borrowing it from my chair, so I guess you can blame him. Thank you, Anne, for taking care of the slides and for inviting me, and thank you to everyone who is here tonight. As you can see from the first slide, you can reach me at Ethicist for Hire, if you feel the need to tweet about all these technical difficulties, which are of course to be expected: I'm known as the anti-technology guy, so it makes sense for technology to attack back. All right, next slide.
[00:01:00] If anything I say doesn't make sense, you can just buy my book. And if it does make sense, you can also buy the book. Here is a brief overview of the book, if you're not familiar with it already. I've also written a more recent book, just called Nihilism, with MIT Press. So I have all your nihilism needs satisfied. Next slide.
[00:01:33] Because I am a philosophy professor, I of course like to start by asking a why question. So I thought it would be good to start with the standard one: why do we use technology? Next slide.
[00:01:51] If you think about some contemporary examples of technology, it's interesting to look specifically at how the advertising of technologies tries to answer why you use them. Netflix, of course, suggests it's because you need to do something to get away from the world. Google, meanwhile: I don't think even Google knows why you need a Google Home. And of course, the Google Assistant on my phone just activated; it's like they're listening. The Amazon drone, in the age of coronavirus, will probably become even more ubiquitous soon. But my favorite is the lower left, where it isn't even necessarily obvious what's being advertised. That little black robot puck in the corner is a Roomba. But what they're telling you, of course, is that they're not advertising a vacuum. They're not advertising suction power. What they're really advertising is an experience: with this little robot in your house, you'll finally get the time to have a couch fort with your family. Even the dog will be happy. This is really what they're selling. In other words, they're selling you leisure. Next slide.
[00:03:25] And of course, since Aristotle, we know that leisure is very important. In fact, Aristotle says that leisure is the end of politics, that this is really what the state should be organized for. And the way to achieve leisure, according to Aristotle, is of course through slaves: if you have a slave, they do all the work, and then you get to have leisure. Next slide.
[00:03:53] Karl Marx similarly thinks that leisure is what we should aim for. But unlike Aristotle, he thinks that slaves should of course have leisure too. What he wants to do is basically flip things upside down and say: shouldn't the state be the slave that does all the work, so that the rest of us can have leisure? And if you look at that bottom quote, you can see that his description of a communist society really isn't all that different from what most people probably do on the internet, or try to do on the internet anyway. Next slide.
[00:04:28] So what this raises for me is a question: technology is, of course, giving us the leisure that we've always been looking for, but is it also providing us the liberation that Aristotle and Marx thought went hand in hand with that leisure? Next slide.
[00:04:50] So let's take a step back and ask the even more fun question: what actually is technology? What are we talking about when we talk about technology? Next slide. I apologize, by the way, that I like Photoshop so much. That isn't what Nietzsche looked like.
[00:05:08] Broadly speaking, there are three different ways of thinking about technology. You can think about it from a deterministic perspective: technology makes the world worse. You can think of it from a utopian perspective: technology makes everything better. And you can think about it from an instrumentalist perspective: technology doesn't do anything. It doesn't make anything worse or better. It just is. It's like a screwdriver; it just does whatever you do with it. Next slide.
[00:05:42] Now, normally I would have asked you to raise your hands and vote for one of these. But since we're online, I thought it would make more sense to give you a concrete example of how to think about these different perspectives. So I decided to look at Tinder and how to analyze it from these three perspectives: Tinder is dangerous, Tinder is a tool, Tinder is liberation. Next slide.
[00:06:16] So, what does it mean to think of Tinder as utopian? Next slide. On Tinder's own website, they use very utopian language, saying that it actually empowers users, and they give you fantastic numbers to create just that impression: that Tinder is clearly making the world better. Next slide.
[00:06:39] But as you can see here, if you go behind the scenes into the decision trees behind Tinder, it really doesn't seem like Tinder itself is doing anything to the world at all; everything comes down to the user, right? Next slide.
[00:06:55] So again, you could say that Tinder really doesn't do anything other than give us a new way to do what we've always done. Just as we look people up and down at a bar, on Tinder you just swipe left and right with your thumb. So it really doesn't do anything that's all that new.
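To make the instrumentalist reading concrete, here is a minimal sketch of the kind of decision logic such an app reduces to. This is a toy reconstruction based only on the publicly known rule that a match requires a mutual right swipe; it is not Tinder's actual code:

```python
# Toy sketch of swipe-matching logic (hypothetical, not Tinder's implementation).
right_swipes: set[tuple[str, str]] = set()  # (swiper, target) pairs recorded so far

def swipe(swiper: str, target: str, direction: str) -> bool:
    """Record a swipe; report a match when both users have swiped right."""
    if direction != "right":
        return False
    right_swipes.add((swiper, target))
    return (target, swiper) in right_swipes  # mutual right swipe -> match

swipe("alice", "bob", "right")
print(swipe("bob", "alice", "right"))  # True: "It's a Match!"
```

On this reading, the technology is just a few conditionals; whatever happens, happens because of what users choose to do with it.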
[00:07:18] Next slide.
[00:07:24] You could also say, of course, that Tinder is dangerous. I've always been amused by the extra button you can see here: even when you reach a match, there is still that little button asking, why not just keep swiping anyway? Next slide.
[00:07:42] So it probably shouldn't surprise us that researchers like Jeanette Purvis have argued that Tinder actually is dangerous, specifically in the form of addiction: people can't stop swiping, and it seems to operate not unlike gambling in Vegas. Next slide.
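That gambling comparison is about the reward schedule: matches arrive at unpredictable intervals, which is the variable-ratio schedule slot machines use and the one behavioral psychology finds hardest to quit. A toy simulation of that idea, with an assumed match probability, might look like this:

```python
import random

def simulate_swiping(num_swipes: int = 500, match_rate: float = 0.016, seed: int = 0):
    """Toy variable-ratio reward schedule: each swipe 'pays out' (a match)
    with a small, fixed probability, so the gaps between rewards are
    unpredictable -- the same structure as a slot machine."""
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(num_swipes):
        since_last += 1
        if rng.random() < match_rate:  # a match!
            gaps.append(since_last)
            since_last = 0
    return gaps

print(simulate_swiping())  # irregular gaps between matches: you never know which swipe pays off
```

The 0.016 match rate is only an assumption here; a few slides from now we will see where a number like that might come from.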
[00:08:04] So this raises yet another question: how do we actually use technology? Next slide.
[00:08:13] So now, if we go back to the perspectives I gave you at the beginning, you should probably realize by now that if a philosophy professor asks you which of the three options it is,
[00:08:23] Next slide.
[00:08:25] The answer is, of course, going to be none of the above. It's actually something completely different. It is this incredibly ugly word that we've come up with: post-phenomenology. And I've again turned to Photoshop, I apologize, and made for you this sort of cyber rabbit-duck, or duck-rabbit, to really summarize the post-phenomenological perspective: technology is not good, it is not bad, it is both and it is neither, simultaneously. Or as Don Ihde calls it, it is multistable. Next slide, please.
[00:09:05] Traditionally, we would say that subjects and objects exist independently of each other. Since we're speaking at a French conference, we should of course talk about Descartes: this is the sort of Cartesian dualism that you're probably familiar with. Post-phenomenology starts from the perspective that this is wrong. Subjects and objects are not separate; they do not belong to separate realms. They belong to each other. They actually create each other. They are co-constituted by each other. The example I've given you here is a fork. In each context, how I use the fork determines what the fork is, and simultaneously determines what I am. So as a parent, I often say to my son, you're using the fork wrong. But as a phenomenologist, I of course have to say to my son, you're doing a good job answering the call of the fork. That's really what is trying to be captured here: the fork has no essence. In other words, it is how we use it, just as I have no essence; I am how I relate to the fork in that context. Next slide.
[00:10:17] So Don Ihde, the American philosopher who came up with post-phenomenology, says that we have what could be described as four different human-technology relations. The first he calls embodiment relations. The example here would be glasses, as I'm wearing right now. The glasses become part of me, so that I would say I'm looking at my screen, when in reality I'm looking at my glasses looking at the screen. In hermeneutic relations, which is what all of us are doing right now and what caused all the technical difficulties, I relate to the world through my computer, and importantly, the computer seems to become part of the world. I think that I'm talking to you, when really I'm talking to my computer and it's my computer that's talking to you. I really have no idea what you're hearing or seeing; I just have what Ihde calls hermeneutic faith that what I think you're experiencing is what you're experiencing. In alterity relations, I am actually focused on the technology, to the exclusion of the world. This is probably an experience you've had if you've ever been walking behind someone on their iPhone who pays no attention to the fact that there are people right behind them who can hear them. In background relations, on the other hand, I relate to the world to the exclusion of the technology. For example, when I came home tonight, I expected my Wi-Fi to be working; I just take it for granted. It's only when it isn't working, as of course seems to happen more often nowadays, when my life increasingly depends on Wi-Fi, that the background relation breaks, and the taken-for-grantedness of the relationship is brought to the fore. This is what background relations are all about: the world I think of is actually a technological world, but if the technologies are operating the way they're supposed to, then I shouldn't be thinking about them. Next slide.
[00:12:28] So, as Don Ihde describes all this... Next slide. Yeah, thank you. What he shows is that technologies are not neutral, that we cannot think of them from a neutral perspective. What they really show, he says, is that we have a contradictory wish: on the one hand, we want what the technologies give us; we want what they reveal. But at the same time, we don't want the technologies themselves. What we like is when the technologies conceal themselves. In other words, we want the power of technologies, but we don't want technologies to have the power; we want to have the power. Next slide.
[00:13:18] So he shows us what this looks like on a continuum. There is, again, the anti-essentialism: you cannot say that a technology simply is any one of these relations; rather, it exists on a continuum, from technologies that are quasi-me, that are like me, to technologies that are quasi-other, that are like another being. And in each of these relations, we have a revealing and a concealing. In embodiment relations, I have empowerment: my glasses let me see things I could not see otherwise. But at the same time, they belittle me, insofar as when my glasses don't work, I am increasingly blind without them. In hermeneutic relations, I have knowledge that I would not have otherwise, so I can project myself into situations that I couldn't otherwise, but the technology can also betray you. This would be the example of Chernobyl. If you haven't seen the HBO miniseries or lived through the '80s, spoiler alert: they trusted that the radiation was being measured properly, but really it was the measuring device itself that had stopped working. In alterity relations, think about robots here: we are fascinated by the technologies. In Enschede, where I live, for example, we now have a big robot by the train station that has kind of become an art project. But at the same time, there's that sort of threat: the more advanced the robots become, the more in danger we are and the more we'll need to compete with them. Next slide.
[00:14:56] So, to sum this up: in traditional philosophy, we say humans perceive, humans act, and technologies are means to human ends. Again, the instrumentalist perspective. But in post-phenomenology, we say technologies are not mere means; they actually mediate how we see the world and how we act in the world. Next slide.
[00:15:19] So now, if we go back to our Tinder example from earlier, we could say that Tinder, of course, mediates our search for pleasure. This is what Tinder does. This is what Tinder is. Next slide.
[00:15:35] But if you remember the numbers I showed you earlier from Tinder's website, we might begin to ask... Next slide.
[00:15:45] ...whether it is really matches that are providing us with the pleasure. I know this is going to be surprising coming from a philosophy professor, but I actually did do the math here. Next slide.
[00:16:00] Yeah, and this is where you can begin to question whether it's really the hookup app, as it has come to be known in the United States, and whether it's really dates or matches that we're looking for on Tinder. Next slide.
[00:16:18] So again, maybe we've simply taken for granted that if there is a reward operating behind Tinder, it must be some sort of sexual reward, when the numbers themselves seem to suggest it may be something else entirely. Next slide.
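As a rough reconstruction of that back-of-envelope math, assuming the figures Tinder has publicized in its press materials (around 1.6 billion swipes and 26 million matches per day; treat both numbers as assumptions and check the current ones yourself):

```python
# Assumed figures, as publicized by Tinder around this time:
swipes_per_day = 1_600_000_000   # ~1.6 billion swipes per day
matches_per_day = 26_000_000     # ~26 million matches per day

match_rate = matches_per_day / swipes_per_day
print(f"{match_rate:.3f}")       # ~0.016, i.e. matches on ~1.6% of swipes
print(f"{1 / match_rate:.0f}")   # ~62: roughly one match per 62 swipes
```

If only about one swipe in sixty produces a match, it is hard to believe the match itself is the reward doing all the work; the swiping must be doing something for us on its own.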
[00:16:37] So maybe we should say that, yes, Tinder mediates our pleasure, but maybe it's a different kind of pleasure than we expected. Next slide.
[00:16:46] And perhaps what's really operating in our addiction to Tinder is our addiction to judging people. If you think about Facebook, and look at the lower right corner, you can remember that Facebook's original incarnation was basically a hot-or-not website. And that became the powerhouse it is today. So again, the desire to judge other people became a behemoth that took over the world. Next slide.
[00:17:22] So again, we can think about how this applies to other platforms, like Kickstarter, Airbnb, and Uber. They don't seem to be operating in the way we might expect, because all of them seem to actually operate in terms of judging people rather than helping people. Next slide.
[00:17:46] So this brings us to my colleague Peter-Paul Verbeek, who argues that we should move beyond post-phenomenology's simply describing how technologies mediate practical life, and instead think about how technologies mediate ethical life, raising more normative questions. Next slide.
[00:18:08] So rather than asking, as in traditional ethics, what should I do?, if we take post-phenomenology seriously, we have to start asking, what should technology do? The point is that if our traditional ethics does not take technology seriously as a mediating influence, then our ethics really does not understand the world we live in. Next slide.
[00:18:36] So for me, this raises the new question of what technology means. I don't know if you remember, for example, when Apple Maps tried to compete with Google Maps, and people started driving into rivers, or one person drove onto an airport runway. And as this headline captures, the response was: it was a mishap, right? What else was I supposed to do? It told me to turn right. And I'm fascinated by this idea that a disembodied voice on a device tells you to turn right and you just do it. Next slide.
[00:19:14] So what does all that mean?
[00:19:17] And if we think about this in terms of Facebook: Mark Zuckerberg, again, tried to be very open about how technology is meaningful, and for him, it's all about bringing people together. Next slide.
[00:19:34] But of course, people who are on Facebook and are seeing Facebook in the news might have a different experience: it's not bringing the world closer together, it's actually tearing the world apart. What started out as a way to judge women on Harvard's campus has become a way to destroy democracy.
[00:19:56] Next slide.
[00:20:02] So, you might remember Cambridge Analytica. This is election night, so of course I thought we should also talk about the death of democracy a little bit. I am American, after all, so I'm sharing my anxieties with you now. You might remember that after the 2016 election, there were questions about the role Cambridge Analytica played in helping Trump get elected. What's interesting is that Cambridge Analytica, as many tech companies do, went to Twitter to apologize, but as you can see here, by the time they got to tweet five of their thread, they had basically taken their apology back, saying that no, advertising doesn't make anyone do anything; people are smarter than that. So if you blame Cambridge Analytica, it's you who think people are stupid, not us, right? Next slide. But weirdly, if you went to Cambridge Analytica's website at the same time they were tweeting that, it said something very different: that data drives all that we do. So which is it? Do we do what advertisements tell us, or are we smarter than that?
[00:21:13] So I wanted to turn to a contemporary philosopher who could really help us think about this. Next slide.
[00:21:24] For me, the most contemporary philosopher I could think of is, of course, the 19th-century German philosopher Friedrich Nietzsche, who tells us that we shouldn't be surprised we're having these problems, since clearly, even though we are coming to know more and more about the world, that doesn't mean we are coming to know more and more about ourselves. Next slide.
[00:21:51] So again, an increase in knowledge doesn't mean an increase in self-knowledge. Next slide. Nietzsche asks us to think about this: why is it that we don't know ourselves? And he gives us two answers, which are really interesting. On the one hand, he gives you a sort of Marxist answer: the problem is that we are simply too busy. We're busy little bees, honey-gatherers of the spirit, as he puts it. So if we just had the Marxist revolution already, we would finally come to know ourselves. Interestingly, I don't know if Nietzsche ever read Marx; I don't think there's any evidence to suggest that he did. But he offers a second answer, which I think also gives a good answer to the question of why Marx was wrong in his prediction about the revolution. Nietzsche says: maybe we're lacking the earnestness; we're too cowardly; we don't want to know ourselves. In other words, it's quite possible that we like having a boss, because it gives us someone to complain about, someone to blame, rather than having the revolution and all the responsibility that comes with it. Next slide.
[00:23:04] In other words, we make ourselves busy in order to avoid ourselves. So why would we do that? Next slide.
[00:23:14] Yes. You knew it was coming: nihilism. That's the answer Nietzsche gives us. So what is nihilism? Next slide.
[00:23:24] I apologize if you were expecting a picture of Trump there. He'll come up later. So, Nietzsche tells us that the problem is that we live in this great world of progress, but this progress doesn't actually mean that we are necessarily becoming happier. He thinks that what's really happening is that we are becoming more and more sick of each other. And again, this was long before coronavirus. Next slide.
[00:23:54] So again, he is happy to admit that we are living in a time of progress, but that doesn't mean we're living in a time of human progress. Next slide.
[00:24:07] The way I like to think about this might be a little antiquated now, but remember being on public transportation? Remember 100 years ago, when we used to be able to ride on a bus or a train? You get on, you take your backpack and put it next to you to take the seats so no one has to sit next to you. You take out your laptop, your iPhone, you put on your earbuds, and you create this sort of technological bubble to avoid having contact with other people. It's like a little Trump inside of you, right? You use technology to build the wall so you don't have to deal with any other human contact. It's the idea that I don't want to see other people, touch other people, smell other people, and I certainly don't want to taste other people; I just want to get to where I'm going with as little human contact as possible. And I think for Nietzsche, this is basically what it means to be human now; this is how we think about life: I just want to get to where I'm going with as few problems with other people as possible. Next slide.
[00:25:13] And this is more or less the dream of virtual reality, right? That I can create an entirely different world where I don't have to deal with any other people at all. And we're kind of experiencing that right now, on this platform. Next slide.
[00:25:30] So why would this happen? Interestingly, Nietzsche says that maybe the problem is actually morality itself. That we have basically become such good people, such responsible people, that we are becoming less and less human because of it. Next slide.
[00:25:51] What he's basically saying here is that to be responsible is really just to be regulatable, to be predictable. This is really what we mean: I can trust you because I know what you're going to say, I know where you're going to be. I don't need technology to start profiling you; this is what morality itself has been doing since at least Nietzsche's time. Next slide.
[00:26:17] So again, there's this idea that we use technologies not just to make each other into good people, but to make ourselves into good people. Your own smartwatch can really help you breathe; you just do what it tells you and you become a better person. Next slide.
[00:26:33] Or again, take lighting systems. Here in the Netherlands, this is Philips in Eindhoven, we have smart lighting systems to make more productive workers, which of course is always sold as happier workers. And this has become so successful that Philips now sells it as a home product, the Philips Hue, which has become a best-selling product, where again you can control your own mood through lighting. Next slide.
[00:27:04] Or again, there's the idea that you use technology to determine your own mental health, or, more importantly, that technology like Facebook wants to know your mental health. After Facebook launched live video, people started killing themselves on Facebook Live, and Facebook wanted not to turn the system off, of course, but rather to be able to predict whether you were suicidal, so they could get you off the platform. Next slide.
[00:27:38] And if you remember BBC's Sherlock, what was fascinating to me was that he was basically the living embodiment of facial recognition software. Next slide.
[00:27:53] It was this BBC show that really made facial recognition literally sexy, and interestingly enough, everyone on the show wanted to be read by Sherlock, just like this idea of wanting to be read by facial recognition algorithms, so I can finally know who I am, of course. Next slide.
[00:28:19] And now, of course, we have the fun of coronavirus apps, where we can track each other and surveil each other in order to create a safe environment, but a safe environment policed by technology. Next slide.
[00:28:38] Where again, I can trust you, because the technology tells me whether I can trust you. Next slide.
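For the curious, here is a drastically simplified sketch of the decentralized exposure-notification idea behind many of these apps. This illustrates the general scheme only, not any specific app's protocol; real systems, such as the Google/Apple exposure-notification framework, derive rotating identifiers from daily keys and add timing and signal-strength checks:

```python
import os

def new_beacon() -> bytes:
    """Each phone periodically broadcasts a short-lived random identifier
    over Bluetooth, and records the identifiers it hears nearby."""
    return os.urandom(16)

def check_exposure(beacons_i_heard: set[bytes], published_infected_beacons: set[bytes]) -> bool:
    """After a positive test, a user's identifiers are published; every
    other phone checks locally whether it ever heard one of them."""
    return not beacons_i_heard.isdisjoint(published_infected_beacons)
```

The point, for our purposes, is exactly the one on the slide: whether you can trust the stranger next to you becomes a question your phone answers for you.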
[00:28:51] So, as Nietzsche told us, this is really what religion is all about. This is what the priests help us with: they see our suffering and they try to alleviate it, basically by trying to help us not kill ourselves. But what's interesting, next slide, is that he says they're offering us salvation from the very suffering that they're helping to create. So they're making the world even worse, while also trying to offer us salvation. Because if you think about it, if you tell people that there's a better world after death, then of course this world is going to pale in comparison, and why wouldn't you immediately want to kill yourself? So, according to Nietzsche, the religion then has to keep you in this world, and keep you accepting the suffering, even though it's the very suffering that they're inflicting on you. Next slide.
[00:29:47] So he says that there are different mechanisms that priests have for helping us alleviate suffering. Self-hypnosis: I can put myself into a stupor, either by drinking or, of course, by sleeping, which, in coronavirus times, you're probably very familiar with. Mechanical activity: I can just put myself to work to distract myself from reality. Petty pleasures: I can find little ways to make myself feel better, typically through charity, which Nietzsche points out is also often a way to make yourself feel powerful, since if you can give to charity, that means you have the power to give to those less fortunate than you. Herd formation, what Nietzsche is probably most famous for: the idea that I get lost in a crowd, so I don't have to be by myself. And orgies of feeling, which he says are the most dangerous, where occasionally we just explode; you've probably seen this wherever you live, with the recent protests against masks and coronavirus rules, this need to just explode. Next slide.
[00:30:56] So in my book, I suggested that what Nietzsche was saying about priests, about offering salvation from the very suffering they were inflicting on us, could also be applied to tech companies today, and that we can basically find in tech companies examples of each of these mechanisms. Next slide.
[00:31:24] Yeah, so we have techno-hypnosis, which you've probably become very familiar with in lockdown time, where you just Netflix yourself to death. Next slide.
[00:31:37] We have data-driven activity, where of course you just do whatever the technology tells you to do, so you can relieve yourself of responsibility. Next slide.
[00:31:47] We have charitable platforms, so I can give to people all over the world. Next slide.
[00:31:57] We of course have social media networks that even use Nietzschean language, like followers, to really give you this impression of: why would I ever want to be by myself? Next slide.
[00:32:13] There, I warned you Trump would appear at some point. So yes, we have the desire for trolling, even up to doxing and swatting, or, in America, what we might call Trumping, where again we use technology to destroy people. Next slide.
[00:32:39] So this raises, of course, the question: what shall we do about all this? Next slide.
[00:32:50] Well, one answer we get is from Nick Bostrom, who, if you don't know him, is a professor at Oxford and director of the Future of Humanity Institute. His answer to Nietzsche's God is dead is simply to say that technology is God; in other words, to embrace technology. Next slide.
[00:33:10] This is where we get transhumanism, if you haven't heard of it before: the idea of creating a post-human, a being who would be a technological being, such that you would be able to augment yourself in any way you want, to create the you you always could have been, if only nature got out of the way and technology let you be. And what's interesting, of course, is that Bostrom tells us this isn't Brave New World, this isn't eugenics; that actually people have misread Brave New World, and that Aldous Huxley was simply telling us that if you have the right people behind the technology, this would all be great. Next slide.
[00:33:53] Now, what's interesting is that if you actually read Brave New World, and I don't want to be on record here criticizing an Oxford professor, but if you read Brave New World, I think Bostrom is wrong in his interpretation. I think Huxley is pretty clear that we don't want technology that perfects us if it means we lose the suffering that can actually help us appreciate what it means to be human. Hannah Arendt similarly worried in the 1950s that psychology was increasingly helping to create what she called a desert world, where we focus so much on alleviating suffering that we stop asking the political question of what it is about the world that makes us suffer, and only ask the individualistic question of why I suffer and what I can do to stop it. Next slide.
[00:34:52] Nietzsche similarly says that this is really what we should be aiming to do: not to think about how I can simply fix things, but to really appreciate who we are, and to think about who I am and what it means to be human. Next slide.
[00:35:14] So now, if we look at Bostrom again: on his approach, the bioconservatives are the bad guys, who simply assume nature is good, as opposed to the transhumanists, who say nature is flawed. Next slide.
[00:35:31] And who say that with technology, we can create a new nature.
[00:35:37] So what's the problem with that? Next slide.
[00:35:45] Well, the very logic of this thinking is bound to lead us to conclude that whatever new nature we create is itself going to be flawed in new ways, and that we again need to use technology to improve it. Next slide.
[00:36:01] So what is this going to do? It's going to lead us into a never-ending struggle against flaws. We're never going to be satisfied. We're going to constantly be using technologies to solve the problems that were created by the last technology. Next slide.
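The structure of that regress can be put in a few lines. Here is a toy sketch, where the argument's premises are stood in for by hypothetical helper functions, and the loop is bounded only so that the sketch terminates:

```python
def is_flawed(nature: str) -> bool:
    """Premise: by the transhumanist's own logic, any nature counts as flawed."""
    return True

def apply_technology(nature: str) -> str:
    """Premise: each technological fix just yields a new nature to evaluate."""
    return f"improved({nature})"

def perfect(nature: str, steps: int = 5) -> str:
    """The perfection project as a loop that can never exit on its own terms."""
    for _ in range(steps):
        if not is_flawed(nature):
            return nature  # never reached, given the first premise
        nature = apply_technology(nature)
    return nature

print(perfect("nature"))  # improved(improved(improved(...))), and so on, forever
```

Each pass through the loop is a new technology fixing the flaws created by the last one, which is exactly the new nihilism of the next slide.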
[00:36:15] So in other words, we're going to be left with a new nihilism. So technology isn't going to solve nihilism, it's just going to create a newer version of it. Next slide.
[00:36:26] And of course, this is what Nietzsche tried to warn us about over 100 years ago: even though God is dead, we're going to just keep creating new gods, today technological gods, and we are going to be stuck in the same problem over and over and over again. Next slide.
[00:36:48] So, in closing, the issue here really is that we see technology as the way to solve life's problems, and this leads us into thinking that life itself is a problem to be solved by technology. Next slide.
[00:37:02] Thank you. That's all I had to say, and I tried to finish on time. I think I did it. So yes, that's it. Thank you.