38 Comments
Sep 28, 2022 · edited Sep 28, 2022 · Liked by Sarah Wilson

Hey Sarah,

I have so many thoughts on all of this!!

Your interview with William MacAskill was the first time I’d come across longtermism and the Effective Altruism (EA) movement. That conversation left me feeling very uneasy and with many questions, so I went away and did some further reading, which led me down a deep and very dark rabbit-hole... Shortly after your podcast interview with William MacAskill was released, this brilliant piece by Émile P. Torres (a long-time critic of both longtermism and EA) was published on the Salon website. I imagine you may have already come across it, as it’s been shared widely, but I’d urge anyone else who’s listened to your interviews with William MacAskill and/or Elise Bohan to take some time to read this piece in full:

Understanding ‘Longtermism’: Why This Suddenly Influential Philosophy Is So Toxic

https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/

This article chilled me to the bone and left me feeling completely despairing over the influence that both longtermism and the EA movement already have and are cultivating further, backed by huge money and some very powerful supporters in high places.

I’ve just listened to your interview with Elise Bohan, who I felt spoke with the same hyper-confidence and sense of moral certainty as William MacAskill, which I guess should come as no surprise, given that the humbly-named Future of Humanity Institute has strong ties to both longtermism and EA (the Institute's Director, Nick Bostrom, is known as "the father of longtermism")… I also found this interview deeply depressing, but I loved that you pushed back on many of the ideas that Elise raised.

One of the most disturbing quotes (of many) in this interview was the following:

“The moral part that I worry most about [is] if we prioritise ourselves as frightened people in a transition time too much, we may actually be denying trillions of future people a bright and sustainable future because I do believe that for all its attendant risks, AI and other technologies will be the key to solving the climate crisis.”

As you rightly pointed out: “we’re trying to fix the problem with the same consciousness that caused the problem in the first place.”

I was also grateful that you countered her assertions about humanity being on an upward trajectory of “progress” with the point that the likes of Steven Pinker and Bill Gates “don’t point to the loss of human flourishing” through this so-called progress. For anyone wanting to dive deeper into this (and learn more about why both Pinker and Gates are dangerous for a host of other reasons), I can highly recommend this excellent episode of the Citations Needed podcast:

https://citationsneeded.medium.com/episode-58-the-neoliberal-optimism-industry-and-development-shaming-the-global-south-cf399e88510e

I also liked your point that much of the thinking around transhumanism, AI and related fields is heavily intellectual but doesn’t incorporate intuition. It makes me wonder whether some of the people working in these spaces are simply not used to navigating the world with their gut, intuition and heart, and are driven purely by logic and reason (a very warped sense of logic and reason, in my opinion, but you know what I mean!) – and possibly by having read far too many sci-fi books in their childhood and having lost any connection they once may have had to the rest of the living world. The fact that most of them seem to see all past civilisations as inferior to this one also highlights the disturbingly narrow and skewed lens through which they appear to view the world.

At the end of this interview, you asked for suggestions on other people working in these areas that you could interview. I would really love to see you interview Émile P. Torres, the author of the Salon piece above (@xriskology for anyone on Twitter). I listened to this great interview with them after listening to your interview with William MacAskill and think that interviewing them would offer some much-needed balance to all of this:

Life According to Longtermism

https://podcasts.apple.com/au/podcast/griftonomics/id1624729935?i=1000576185765

Another person I would love to hear you interview is Timnit Gebru (@timnitGebru), who is an AI computer science expert, industry whistleblower, and advocate for diversity in technology, and another fierce critic of longtermism and EA (there are too many privileged, wealthy, white people being platformed in these discussions too, so it would also be great to hear from someone who can offer a completely different perspective).

For anyone wanting to understand the arguments against longtermism (as opposed to ‘long-term thinking’, which is a very different thing), I would also recommend diving into these articles:

Defective Altruism

https://www.currentaffairs.org/2022/09/defective-altruism

Why Longtermism Is the World’s Most Dangerous Secular Credo

https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo?fbclid=IwAR3KVZAi-QZ9fjUqRAMK71vP4xW04PTGXhfHtSbgCc763ab7kmVmUXnyHn8

The Dangerous Ideas of Longtermism and Existential Risk

https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk

How Longtermism Is Helping the Tech Elite Justify Ruining the World

https://theswaddle.com/how-longtermism-is-helping-the-tech-elite-justify-ruining-the-world/

Philosophical Longtermism Is More Than I Can Take

https://www.scmp.com/comment/opinion/article/3189759/philosophical-longtermism-more-i-can-take

I share many of the same moral and ethical questions and concerns that you have about AI and transhumanism and agree that we should not just accept this future as a given (honestly, if this is the future we're headed for, kill me now). What alarms me greatly is that these technologies are being developed by many of the same individuals who are supportive of ideologies like longtermism (Elon Musk being one of them). It was very telling when Elise Bohan commented in your interview with her that we'll need both big business and governments to work together to shape the future of these technologies – it felt like she was accidentally saying the quiet part out loud...

I’m really interested to hear where your research into all of this is going to take you, Sarah, but I hope you will look at interviewing some people who are talking about these important issues without being aligned with the frankly bat-shit crazy and utterly terrifying cult-like movements of longtermism and effective altruism. To quote Julia Steinberger (@JKSteinberger), who has also been speaking out strongly against these movements recently: “[Longtermism is] an omnicidal, imperial ideology of endless domination and exploitation”.

I think we desperately need people in the public eye to challenge these ideologies before they gain even more power and influence.

Sep 27, 2022 · Liked by Sarah Wilson

Brilliant timing! Thank you.


I’m amazed at how far transhumanism work is progressing. I too am not marvelling at the technological/scientific/medical feats – I’m perplexed at how ideas and actions with huge risks to the whole of humanity are legal. It also seems really odd to me that work requiring deep cognitive thought and problem-solving is concurrently really light on critical/holistic thinking. How can the proponents not have considered the precautionary principle? How can they be unaware of, or okay with, all the glaring ethical blind spots? I can only conclude that, similar to geoengineering and climate-change inaction, the movement has been thoroughly hijacked by corrupt corporate interests. Whatever altruism they are peddling is either a lie or is coming from (possibly well-intentioned but tragically ill-informed) individuals who are victims of a culture of loneliness, dysfunction, the disintegration of the family, and separation from the natural world.

The incredibly perceptive author Jerry Mander critiques technological advancement in his book In the Absence of the Sacred. He argued in the ’90s for (amongst other things) the urgent need for a citizens’ assembly to debate the direction of new technologies that we allow into our communities. Cos at the moment the only prerequisite is: can some bloke make a buck? It’s imperative we find a better model. I’d love it if it were legislated that all new products and research-and-development work needed to transparently complete a Holistic Decision Making Process (an enlightening strategy developed by Dan Palmer from the Victorian permaculture movement in Australia), where the reasons for doing or creating something are stripped back to their deepest essence and evaluated from there. But I suspect we’d need to abolish capitalism first!


Reading about these billionaire men and their outrageous and pointless plans makes me think their penises must be extremely small. Sorry to be so puerile, but what else would drive this obsession with wealth, power and these crazy ideas that make them think they are God-like figures, changing humanity/nature/life? What the actual fuck?


Hi Sarah,

I began listening to this episode yesterday and felt that squirmy, uncomfortable feeling, perhaps at the exuberance and confidence with which Elise talked about the future of AI, so I stopped. I will go for a walk and listen to it today, now with this Substack and comment thread for more context.

I am currently reading a wonderful novel called Klara and the Sun, by Kazuo Ishiguro, that explores these moral issues in AI. (I also live in Bondi and would be happy to pass it on to you if you like.)

Thanks for always wrestling with these difficult topics in such a beautiful way.

Laura


I've met versions of 'it is what it is' people many times over the years. I find them the most triggering, the people who make my heart sink so low that I've now learned to get out of engaging before my soul drowns. They have a certainty which reminds me of sociopaths, a complete void where the obvious 'should' (to me) lies – that we can progress with a paradigm shift in consciousness, one that many people are aware of. Social and traditional media don't echo the paradigm shift that already exists; it just isn't galvanised – a global mass scattering of mindsets that hasn't been cultivated into a more singular form of power that can challenge the current system.

The soul- and heart-sinking is fear. Being faced with such a 'clean-lined', 'logical', 'cut off' sense of non-emotional, absolute certainty in 'absolutes' makes me feel like it's inevitable, and it feels like a death. Those who control all the social media gateways to mass information – how we think, what we know – are on the side of continuing the corporate systems, which have forever been fixing problems with the same consciousness that caused them; that's what expecting year-on-year economic growth is, what building an extra highway is, ad infinitum. To us it is nonsensical. To them, it's a robotic inevitability. Adam Curtis's 'The Century of the Self' is all about how we got here, and his more recent documentaries (on BBC iPlayer) focus on what is in motion, and its power. Adam Curtis's ability to step right outside the matrix is also why I understand why most people are watching Love Island. The blue pill goes down easier.


Sarah, your points 3, 5 and 6 are truly terrifying. At the risk of adopting the ‘it is what it is’ attitude that you caution against in this newsletter, can any of us really see the ‘tech bros’, Musks and Bezoses of the world implementing this technology in a fair, ethical and benevolent way? Can we really restrain our capitalist tendencies enough to avoid this technology creating further inequality? And when the 99% of the population who cannot afford to take advantage of this technology find themselves with abundant ‘leisure time’ (i.e. unemployed), will we embrace a fair economic model such as a UBI, or will our politicians and other leaders simply lament the increasing number of ‘welfare bludgers’? I would love to imagine we can overcome these challenges (although I’m still strongly opposed to this technology), but I am sceptical.


A couple of things come to mind when reading this...

If resource overshoot is a thing then the materials needed to grow a digital world, e.g. copper & silicon, are about to cost more energy to produce than makes sense. So from that perspective it's hard to see a future filled with digital/electronic entities.

There are those who say meta-systemic cognition can be learned, and that this development can equip humans to go beyond rational, linear thinking and deal with complex, non-linear systems. Those who have tapped into that potential are likely the best equipped to deal with the sorts of issues that AI is supposedly needed for.


Thank you for raising my awareness.


Wow, a big topic. I’m thinking “it is what it is” could be framed as a beautiful starting point, and I would invite them to that. I often subscribe to the pathway of awareness -> acceptance -> action, and “it is what it is” to me is an acceptance mantra. Then, perhaps, we can accept what is and lovingly choose a bridge thought of “I accept what is, and from that place, I act”. It seems like a calmer and more rational place to act from. Finding that common ground may just help bring others on that journey of action.


Wow, sounds big, and a subject I’ve given little to no thought to, so I look forward to listening to the podcast.


Very briefly, to your last query – yes, it is absolutely appropriate and relevant, because it's hopeful. That there are minds of only 32 years thinking and working on these things is reassuring; it's good to know that the world is in the hands of at least some safe, thoughtful, considered change-makers.

P.S. Thank you for what you do.


Related to the "inevitable" march towards the AI singularity – this was an interesting dissent to the new Zuckerberg-funded Kempner Institute at Harvard (a Part 6 addition to the above, $500M). (Keep in mind this is a student-run publication.) https://www.thecrimson.com/article/2022/10/4/dissent-artificial-intelligence-kempner/


Thanks Sarah for another thought-provoking and deeply moving interview and for facilitating this ongoing conversation.

I’ve been thinking about it a lot and talking about it with family and friends, wrestling with the uneasiness that comes with the uncertainty that transhumanism raises for my future and my children’s future.

Then today I listened to an interview between Thomas Hubl and Christiana Figueres, which is part of the Collective Trauma Summit. They talked about the collective trauma we are experiencing as a consequence of the unmitigated damage that climate change is having on our planet and the ways we can approach this. Christiana shared her insights into the power of emergence as a path through the painful experience of uncertainty. Before the Paris Agreement was successfully signed, she had no idea how the world’s economies could come to an agreement. And yet it happened through a process of trust and commitment that it would. She also talked about her personal discovery of the Buddhist teachings and how they helped her heal from her own personal trauma that she experienced as a result of her work in the climate change movement.

This conversation left me feeling more hopeful that it is possible to choose a path towards AI and transhumanism that will lead us to positive outcomes for our planet. We just need people like Christiana Figueres to guide us there.

I’m sure you’ve got her on your list of people to interview!!


Still trying to process your last podcast. Scary, heartbreaking & so, so close it’s terrifying. Thank you for bringing this to a discussion that was intelligent & thought-provoking… We hear you: love & cherish our one & precious life, as we may lose it faster than we think.
