The most terrifying podcast interview I've done
and how to argue with an "it is what it is" defeatist
Do you know those conversations where you have raised an ethical point, an issue where you’re questioning whether such-and-such should exist or be happening in the world today, and the other person engages by stubbornly arguing from the fact that it does exist…so, um, end of story?
You’re doing the moral wrestle. They are luxuriating in a type of ontological “it is what it is”-ism. You see this kind of bad-faith argument happen when a normal person debates something with a politician. Or when a normal person debates something with Jordan Peterson.
Now, it’s fine to contend both perspectives. Great, even! A certain amount of acceptance of “what is” is required for any sensible discussion. But those who take the ontological line often use it to shut down the moral discussion. Me, I think these ontological folk are scared of vagaries, and perhaps of the fact that they don’t have a very sophisticated or developed moral compass (!). They’d rather put up their hands in resignation and cling to the masts of absolutes. Even if those absolutes are a sinking ship. Witness bad-faith (ultimately petrified) climate change resisters who say after some tedious back and forth: “Oh well, governments are corrupt and Big Business isn’t going to change, so why should I bother!”
(This, BTW, is why I love newslettering! Can you imagine being able to discuss these kinds of gritty, micro things anywhere else??)
This battle comes up, too, when discussing AI and transhumanism, as it did for me in a podcast I have just released with Australian-born, Oxford-based transhumanist Elise Bohan.
Transhumanism? What is this?
It’s an intellectual and technological movement that advocates - and is currently well on its way to making reality - technology that will enable us to upgrade our intelligence and experience beyond current human limits. We’re talking things like bioengineering our bodies to live forever, replacing awkward sex with humans with no-dramas sex with robots, external artificial wombs so that women don’t have to do pregnancy or childbirth, as well as intellectual upgrading to solve complex problems. But of course, when this surpassing of human limits happens - it’s called “the singularity” - we may no longer have control of this super/uber-intelligence (it will be smarter than us). And. So. It might also kill us.
Here are some moral questions the normal/average person will put to a transhumanist, then:
Hang on, how has this even been “allowed” to happen? Are we going to continue to let it go ahead, without checks and balances, without a moral/ethical discussion… that needs to start with, Do we even want this?
Why weren’t “we” (the bulk of humanity) consulted on this, given it will affect us all materially and existentially? It will quite possibly wipe humanity out and is regarded as the most serious existential risk we face. Shouldn’t we have a say!?
Who is managing the moral and emotional ramifications of this AI development? The tech bros? God forbid! How are we going to manage the fallout as people’s jobs dry up? You say it will give us more leisure time…but who said this is what we want? Most people can’t handle an empty weekend.
Who says the robots or these superintelligent beings, made in our image, won’t do what we tend to do when we think we are superior, namely trash the place (and people) once we hit singularity? Haven’t we learned anything from Frankenstein’s monster?
In the transition stage, will only the rich and powerful be able to access this technology? Hello, massive equity issues!!?
Some of the most exploitative men on the planet are driving this transhumanist project. Elon Musk has a company that’s about to start human trials of brain-implantable computer chips for - cough - therapeutic purposes. Google’s Larry Page and Amazon’s Jeff Bezos are investing in “labs” dedicated to the “reprogramming” of human biology to defeat ageing, while PayPal’s Peter Thiel is making plans to have his body stored in liquid nitrogen, cryogenically preserved until medical science reaches the stage when he can be revived and his resurrected body augmented and enhanced. Shouldn’t this ring alarm bells and elicit an “ABORT ABORT” command?
I mean, the list of moral questions is long. What have I missed?
The transhumanist line is often this:
We need newer, better AI technology because we can’t fix the problems that all the other technology we built in the past has created. Human intelligence simply isn’t advanced enough to handle the complexities of the “hyperobjects” we built. It’s too clunky to undo the climate crisis, capitalism, social media implosions, pandemics and the rest. We are dumb enough to make the mess, but not smart enough to clean it up; we will need AI upgrading for that. More robots!
(Which begs the response: Fixing a problem with the same consciousness that caused it is the definition of insanity, right?)
The train has already left the station. The technology is already invented and the transition is underway. We are hurtling to a superhuman future already (ah shit, didn’t we tell you…ooops!), so we better board the train and ensure it doesn’t crash us into our own extinction event.
(Which is the ultimate ontological dead-ender, right? Can’t beat ‘em, better join ‘em!)
Our ape brains are so limited and human nature is so short-termist, selfish and self-destructive that we can’t be as good as we’d like to be. Even our morality will need to be mind-mapped by higher intelligences if we want to live our best lives.
We are meant to evolve. Transhumanism is the “it is what it is” inevitable next step.
There is a chance all this could lead to an incredible flourishing. This is certainly the position that Elise Bohan takes. Imagine what we could do if we could solve the mess, live forever, not have to work?
To which I would say, yes, this could be possible, but it will require the moral and ethical discussion now, to ensure it all leads to flourishing not extinction (and I’m not sure AI will ever be able to handle our moral quandaries, especially if it’s created in our image). So my average person point stands.
In my interview with Elise, we both concede each other’s points. And we establish we are both coming from a place of deep love of humanity and a longing for human flourishing. We just have different ideas of how this should be done. Or, rather, whether just because it is being done, it should still be done. And how much emphasis should be put on the ethical implications. Me, I think it should be 95% ethical reflection, 5% tech advancing; Elise, I would guess, would say our ape brains can’t do the kind of ethical reflection I speak of, so…build the robots!
Honestly, I don’t know what to make of it, where to go next.
Do you? Does anyone? I heard Indigenous academic Tyson Yunkaporta, in response to the horrible quandary of the First Nations predicament in 2022, say, “We have to stand in it”. Yes, we have to stand in, not collapse into, where we are at.
But, then what? This is where we have choice and it’s where magic, flourishing and the best of humanity can happen. This is where we have to get alive to the fired up, caring, brave, deeply committed moral questions. We must ask them, demand answers and stand up to the bad-faith “it is what it is” crew who remain in fear, flinch from being courageous and responsible and, instead, waste precious time skewing arguments. And causing more havoc.
(And, if nothing else, I hope this post helps you reframe those horrible arguments with bad-faith debaters - they are operating from fear and potentially from a deficit of moral capacity.)
I obviously flesh out all the above and more in my chat with Elise who has a brilliant mind and has recently published a book on the matter: Future Superhuman: Our Transhuman Lives in a Make-or-Break Century. Wonderfully, she is not a bad-faith debater and we go to all the places in a non-poke-eye-with-fork way. And I feel it’s relevant to say she is 32 (is this an appropriate thing to flag?).
I’m not sure how our chat will land…it’s big and wild. I’d love your feedback.
Sarah xx
Hey Sarah,
I have so many thoughts on all of this!!
Your interview with William MacAskill was the first time I’d come across longtermism and the Effective Altruism (EA) movement. That conversation left me feeling very uneasy and with many questions, so I went away and did some further reading, and it led me down a deep and very dark rabbit hole... Shortly after your podcast interview with William MacAskill was released, this brilliant piece by Émile P. Torres (a long-time critic of both longtermism and EA) was published on Salon. I imagine you may have already come across it, as it’s been shared widely, but I’d urge anyone else who’s listened to your interviews with William MacAskill and/or Elise Bohan to take some time to read this piece in full:
Understanding ‘Longtermism’: Why This Suddenly Influential Philosophy Is So Toxic
https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/
This article chilled me to the bone and left me feeling completely despairing over the influence that both longtermism and the EA movement already have and are cultivating further, backed by huge money and some very powerful supporters in high places.
I’ve just listened to your interview with Elise Bohan, who I felt spoke with the same hyper-confidence and sense of moral certainty as William MacAskill, which I guess should come as no surprise, given that the humbly-named Future of Humanity Institute has strong ties to both longtermism and EA (the Institute's Director, Nick Bostrom, is known as "the father of longtermism")… I also found this interview deeply depressing, but I loved that you pushed back on many of the ideas that Elise raised.
One of the most disturbing quotes (of many) in this interview was the following:
“The moral part that I worry most about [is] if we prioritise ourselves as frightened people in a transition time too much, we may actually be denying trillions of future people a bright and sustainable future because I do believe that for all its attendant risks, AI and other technologies will be the key to solving the climate crisis.”
As you rightly pointed out: “we’re trying to fix the problem with the same consciousness that caused the problem in the first place.”
I was also grateful that you countered her assertions about humanity being on an upward trajectory of “progress” with the point that the likes of Steven Pinker and Bill Gates “don’t point to the loss of human flourishing” through this so-called progress. For anyone wanting to dive deeper into this (and learn more about why both Pinker and Gates are dangerous for a host of other reasons), I can highly recommend this excellent episode of the Citations Needed podcast:
https://citationsneeded.medium.com/episode-58-the-neoliberal-optimism-industry-and-development-shaming-the-global-south-cf399e88510e
I also liked your point about much of the thinking around transhumanism, AI and related fields being heavily intellectual, but not incorporating intuition. It makes me wonder if perhaps some of the people working in these spaces are just not used to navigating the world with their gut, intuition and heart, and are driven purely by logic and reason (a very warped sense of logic and reason, in my opinion, but you know what I mean!) – and possibly by having read far too many sci-fi books in their childhood, and perhaps having lost any connection that they once may have had to the rest of the living world. The fact that most of them seem to see all past civilisations as inferior to this one also highlights to me the disturbingly narrow and skewed lens through which they appear to view the world.
At the end of this interview, you asked for suggestions on other people working in these areas that you could interview. I would really love to see you interview Émile P. Torres, the author of the Salon piece above (@xriskology for anyone on Twitter). I listened to this great interview with them after listening to your interview with William MacAskill and think that interviewing them would offer some much-needed balance to all of this:
Life According to Longtermism
https://podcasts.apple.com/au/podcast/griftonomics/id1624729935?i=1000576185765
Another person I would love to hear you interview is Timnit Gebru (@timnitGebru), who is an AI computer science expert, industry whistleblower, and advocate for diversity in technology, and another fierce critic of longtermism and EA (there are too many privileged, wealthy, white people being platformed in these discussions too, so it would also be great to hear from someone who can offer a completely different perspective).
For anyone wanting to understand the arguments against longtermism (as opposed to ‘long-term thinking’, which is a very different thing), I would also recommend diving into these articles:
Defective Altruism
https://www.currentaffairs.org/2022/09/defective-altruism
Why Longtermism Is the World’s Most Dangerous Secular Credo
https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo?fbclid=IwAR3KVZAi-QZ9fjUqRAMK71vP4xW04PTGXhfHtSbgCc763ab7kmVmUXnyHn8
The Dangerous Ideas of Longtermism and Existential Risk
https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk
How Longtermism Is Helping the Tech Elite Justify Ruining the World
https://theswaddle.com/how-longtermism-is-helping-the-tech-elite-justify-ruining-the-world/
Philosophical Longtermism Is More Than I Can Take
https://www.scmp.com/comment/opinion/article/3189759/philosophical-longtermism-more-i-can-take
I share many of the same moral and ethical questions and concerns that you have about AI and transhumanism and agree that we should not just accept this future as a given (honestly, if this is the future we're headed for, kill me now). What alarms me greatly is that these technologies are being developed by many of the same individuals who are supportive of ideologies like longtermism (Elon Musk being one of them). It was very telling when Elise Bohan commented in your interview with her that we'll need both big business and governments to work together to shape the future of these technologies – it felt like she was accidentally saying the quiet part out loud...
I’m really interested to hear where your research into all of this is going to take you, Sarah, but hope that you will look at interviewing some people who are talking about these important issues but are not aligned with the frankly bat-shit crazy and utterly terrifying cult-like movements of longtermism and effective altruism. To quote Julia Steinberger (@JKSteinberger), who has also been speaking out strongly against these movements recently: “[Longtermism is] an omnicidal, imperial ideology of endless domination and exploitation”.
I think we desperately need people in the public eye to challenge these ideologies before they gain even more power and influence.
Brilliant timing! Thank you.