Slouching Towards BethLLM
How the science of last things drives modern AI
On the Wednesday my stepmother died, I was delayed getting to the hospital by a dozen small miseries. The washing machine incontinently flooded the utility room floor; the first lunch I put together was inedible, so I had to throw it away and assemble another; the grim February rain and terrible traffic congealed to lengthen and frustrate what should have been a much brisker journey. By the time I parked resentfully in the hospital grounds, it was well past four o'clock. The family rota for covering her room that day had expired, so she was alone when I arrived.
They had turned off the monitors a few days earlier, on the unspoken grounds that when the destination is inevitable, looking at the road signs along the way is a distraction. Her first stroke, about a year prior, had been in another country where the road signs were in a different language, yet her characteristic determination and energy powered a recovery that gave us about half of her back for about half a year. But the second stroke had played out very differently, and she had not moved or spoken since admission. It was now a question of waiting till the end.
I had time to put my bag down and say hello, and I think - I think - that I touched her hand as I leaned forward at her bedside. Almost immediately, an animal suspicion that something was wrong fell upon me. I don’t really know why, but I started counting the seconds between breaths and discovered that the interval was lengthening. With each breath, her delicate frame bent slightly as her head pushed forward, and her jaw thrust out to make that kind of arcing gulp which sometimes comes before regurgitation. Here, though, rather than expulsion, it was ingestion: the lowest levels of the human stack trying, in vain, to support the empty house of the rest of her body. The space between the laboured breaths grew longer and longer, until finally, at 17:06, there was a last one. Shaking, I rose from her bedside, and found an orderly to verify the sudden reality of a commonplace and shocking death. He spread the sheet to cover her face. Underneath it, her feet were already cold.
Hatred of the body has a long history. Socrates, sentenced to death in 399 BC, found time on his last night on Earth to complain bitterly to the anxious friends attending him about the inconveniences of becoming hungry and falling in love, describing the body as “a source of endless trouble to us”. We might sympathise - especially those of us with medical conditions that truly are a source of endless trouble - but more striking is just how ancient this idea is.
When Socrates raised the cup of hemlock to his lips, arguing to the last that his soul would join other philosophers in an afterlife of peaceful contemplation, he was explicitly supporting the view that the body is the disgusting, fallible substrate for something much worthier. This belief later became central to Christianity, and through it to the immensely influential Cartesian philosophy of dualism: the separation of mind and body, but more insidiously, the insistence that the mind is good and the body is bad.
In the twentieth century, perhaps inevitably, a Cartesian sensibility seeped into Computer Science as well. People who work with software encounter dualism literally every day: the distinction between software and hardware - between something that you could, with some squinting, regard as a mind or spirit and the physical machine it inhabits. An inherent duality is a core part of how they view the world: software can run on more than one machine, for instance, and hardware can be replaced when it wears out. This computational Cartesianism can make it fairly easy to view humans as just software running on hardware that happens to be organic, and life as a kind of desperately provisional state of affairs - prone to random shutdowns - that would be very familiar to a certain kind of Christian. But the other thing it does, not necessarily clear to outsiders, is to give most software engineers the belief that software is strictly superior to hardware, that hardware is a mere instrument for the expression of the will encoded in the software - an instinct that cheerfully endorses raising hemlock to the lips.
Human culture is complicated. So the theme of hatred of the body - hatred of the actual - and the clear desirability of separating mind and body live alongside not just one, but many different opposites. Of course there is veneration of the body, which is surely as old as hatred of it. More important is Thanatos, a subconscious compulsion towards death and destruction, which Freud called the death drive, and linked to self-sabotaging or self-destructive behaviours. Though many scoff at the idea that something like this motivation even exists, there are some behaviours it explains for which few other explanations are as compelling. It is hard to look at the various crises facing us - environmental, political, economic - and not see rooted in their often nonsensical nihilistic cruelty a desperate drive for self-destruction that masquerades as a desire for the destruction of others.
The same is true for our most pressing technological crisis: artificial intelligence.
As in any sufficiently large community of people, there are factions in AI. The modern world being fragmented as it is, there are lots of them. There are those just doing a job, and helping the machine crank along in the direction it’s going in. There are those who believe they’re doing something good and something big, and sometimes the big outweighs the good and sometimes the good outweighs the big, but mostly they are along for the ride. Then there are those who see in AI either a weapon, or a weapon to end all weapons, something akin to the Manhattan Project, and look at what they’re doing through the lens of national security or regional supremacy. In this view, arriving at AGI first is much better than arriving second, since whoever arrives first might be able to make sure no-one else arrives after them. This might be a mistaken view - we might say that AGI seems almost by definition not fully controllable - but it is at least a rational one.
More interesting, and more influential, are the eschatologists: those concerned with the end of things.
There are sub-groups within this too. Let’s focus on the true believers, in two varieties. The first variety is the utopians, convinced that spectacular abundance, endless innovation, and redemption lie on the other side of their efforts. Under the current Trump administration, their view that any regulation of, or interference with, this effort is intolerable enjoys a great deal of support. A lot of well-connected businessmen, venture capitalists, and otherwise unflavoured opportunists sit here at the intersection of politics and AI, all expecting to co-opt the utopians for their own benefit. (Some of them even call themselves effective accelerationists, partly in (mostly non-reciprocated) homage to effective altruism, a movement from which research AI draws a lot of its members.)
The bargain being struck between business and the true believers is actually a two-way one. They need each other. Most of the true believers don’t care hugely about where the money is coming from - in their view, AGI will sweep away the existing order anyway, democracy and consent being irrelevant - and it is one of the ironies of the moment that vast amounts of money are being spent in the name of destroying the concept of money. The VCs are betting that the true believers are wrong, and that the VCs will turn out to have been in control all along, validating their faith in the power of money. The true believers are betting that the VCs are wrong, that their control will be short-lived, and that the true believers will ultimately be vindicated by the new minds they’ve created.
Each regards the other as a tool for their true aims. The VCs want to get rich, but also want to end up with control at the end of the process. The true believers are playing a different game. They have looked upon the grubby and unsatisfactory nature of human affairs like a software engineer looks at an old, rotting codebase requiring refactoring, or like Cato looked on Carthage, and have decided that saving the global village requires destroying it. The precise degree of destruction varies depending on whom you talk to: the more pragmatic imagine a world transformed, but one still (broadly speaking) oriented around humans setting the agenda. The more fundamentalist true believers hold that since AGI will probably manifest as an entity with god-like powers, it is vital that humanity surrender its autonomy and decision-making to the entity as quickly as possible. The world will be fundamentally reset, and humanity will occupy a position akin to the patronised artist at the court of the unquestionable King, kept in a kind of decadent sinecure for the rest of its natural life. A kind of Christian surrender, albeit to NVIDIA GPUs rather than a loving deity.
But the last eschatological group, and arguably the most influential, is different. These are the Thanatonians: the death cult.
Death cults take many forms. No precise definition exists by which we could patiently sort every possible cult into its distinct and correct category, as though by a kind of apocalyptic sorting hat, but the overall contours are clear. We know that the drive to destroy is a notable component of the human psyche. Such a drive, when turned outwards, could be called soldiering, self-defence, or even survival. But turned inwards, it can become self-loathing, self-harm, and nihilism. These instincts are widely distributed and fundamental. Any sufficiently large population will contain people who would kill others, or themselves, given the right situation.
A set of people with those instincts is distinct from a death cult, however. A death cult is a set of shared values amongst a group of people which places death - whether mass extinction or individual - at the emotional heart of the group’s motivation. The more benign of such cults struggle with the big questions many of us ask - why are we here, what are we doing - and make a kind of catechism from the rage, sorrow, and blank despair the thought of death usually induces. It is hard to think of a religion which does not have the problem of death at its heart. To that extent, what is going on with the Thanatonians is clearly religion.
In August 2025, a self-described mathematician, rationalist, and bodybuilder called Michael Druggan was fired from xAI, Elon Musk’s merged Twitter and AI company. Druggan had expressed public support on social media for human extinction, and when company leadership - allegedly Musk himself - became aware of this, termination followed shortly thereafter.
Support for human extinction is not widespread, but it does exist. There are those who view life as inherently pointless and project this into a desire for mass death. Others support it with reference to some larger ill that the human race is performing and cannot stop, and therefore conclude that we must die: ecological damage is the usual choice. To anyone acquainted with how metrics work in corporate life, negative utilitarianism - the idea that we should minimise suffering, and can therefore extinguish all life to drive suffering to zero - will be a very familiar approach.
It can be puzzling for those not well versed in the AI world to understand the relationship between these attitudes and AI. The simplest case is the one where the above groups - we might call them pro-extinctionists, neither particularly large nor immediately influential - see AI as mostly just an instrument to accomplish the goal. Whether it is a malignant AGI that wipes us out with a customised virus, or the result of hooking up sufficiently unreliable AI to military equipment in the name of kill-chain efficiency, the method is secondary; the goal is all. The ecosphere is saved, and suffering is eliminated.
That is one point of overlap with the Valley. But the Thanatonians - my term, many others exist - have a slightly different take. For the pro-extinctionists, mass death is the goal, and AI is the instrument, perhaps almost an accidental one. But for the Thanatonians, AI is the goal, and mass death is the instrument. The ecosphere might or might not be saved, but we will definitely all be gone. The real question is whether or not something will replace us.
For some Thanatonians it’s more important that something “better” than humanity exists, and the fate of individuals is irrelevant. For others, it’s specifically important that individuals (and often they themselves as individuals) continue to exist. This is usually envisioned as taking place in some kind of computer simulation. Much debate exists on what this kind of post-humanity would or should be like - how we would understand its existence, its acts, and so on.
Yet another group thinks that it is inevitable that humanity will be replaced, and the key question is how to ensure that the replacement is, in some sense, worthy of replacing us. The reason for Druggan’s firing was that he stated that it was selfish for someone to want their child not to die, in a post-AGI situation where the machines decide not to co-operate with us - or in other words, that humans should cheerfully pass the baton of existence to the machines and lay down our lives for what we built.
This is what is called the “worthy successor” position. Worthy successor supporters believe that it is correct for humanity to be replaced, but only by something that is somehow equal to the task. Probably the person most associated with this is Daniel Faggella of Emerj, who has no children and apparently believes that since humanity is shortly to be supplanted, there is little point in having any. Faggella is actually a relatively positive figure in this overall community, since he argues for some kind of controlled process for our replacement, rather than the headlong rush that others advocate. But his vision only avoids immediate mass death by proposing that humans would cease to exist gradually, either by no new ones being born or by the existing ones in some sense merging with AI. His position seems to be that whatever we currently understand as “humanity” is going away, that this is a welcome development, and that however it comes to pass, it will necessarily involve eventual mass death.
All of this may seem very far removed from your experienced reality, or from anything that is likely to happen in the lifetime of anyone reading this. Unfortunately, that is far from clear. Let us take as read the fact that the industry is working extremely hard on making AGI happen; many world-class people are throwing every talent, every dollar, and every erg they have at the problem. Let us also take as read that if this can be conjured into being with money, it will be - there seems no lack of willingness to fund on the part of capital stewards as a whole. Given the above, it seems likely that the only real impediment is time, and recent insider estimates of progress suggest 2027 or 2028 for the achievement of AGI. Though the pace fluctuates a lot, the direction of travel, and the improvement of capabilities, is absolutely clear.
But put to one side the question of timeline for a moment. I want to talk about what will be done by the people involved, and what their attitudes are, and why this is all so deeply dangerous.
Thanatonians count amongst their ranks not just relatively influential soft-power figures like Faggella, Guillaume Verdon, Derek Shiller, and so on, but also some of the richest, most powerful people on earth. Peter Thiel, implausibly both the recent deliverer of a series of lectures on the Antichrist and a person with an estimated net worth of 26 billion dollars, rather notoriously hesitated and refused to answer directly when asked by the New York Times in June 2025 whether he preferred the human race to endure. Though he was a founder of Palantir and an early investor in OpenAI, and remains deeply connected to both, his operational ability to enact mass death is, we might well argue, limited.
Unfortunately we are not so lucky with, for example, Larry Page - surely the tech billionaire we would least want to have a beer with - since he accused Elon Musk of an unwarranted focus on humans. In Life 3.0, Max Tegmark reports on a debate between Page and Musk, with Musk suggesting that we shouldn’t create digital posthumans if they would “destroy everything we care about”. Page responds, “[if we] let digital minds be free rather than try to stop or enslave them, then the outcome is almost certain to be good”. Sam Altman, meanwhile, is known to have signed up for a service to preserve his body after death in order to digitise his mind.
The Thanatonian position is also increasingly popular amongst the actual AI researchers doing the work of building AGI. Jaron Lanier, the famed VR pioneer, was interviewed by Vox about his contacts with them and had the following to say:
“I talk to the people who believe that stuff all the time, and increasingly, a lot of them believe that it would be good to wipe out people and that the AI future would be a better one […] [T]he other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a ‘bio baby’ because as soon as you have a ‘bio baby,’ you get the ‘mind virus’ of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it’s much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.”
(However uniquely modern this seems to you, it is in fact a position resembling Catharism, a Christian heresy, which flourished between the 12th and 14th centuries before being violently stamped out.)
Thanatonians are not omnipresent in AI companies - though as far as I’m aware, no actual survey records their relative distribution - but they are common enough to be reported on. More important than the individual AI researchers we hear about are the billionaires and political leaders - those with the resources or power to actually birth their vision. These are the people determining our future, right now.
Though we may be fascinated or disgusted by such details, it is important that we tear our eyes away from them, and move back to the larger picture. Whatever about the personal impulse to die, however misplaced or referred from other pain it is, there is obviously and entirely no justification for condemning billions of others to extinction. Yet the philosophical platform of some of the major figures in AI is geared around just that. Even if we don’t reach extinction as a deliberate, well-worked out, consensual plan, our current moment is an eyes-closed race to enable every possible AI-related disaster we can in the name of an economic efficiency that might not even exist. Altman doesn’t have a nuclear football, but he has unreliable software which has every chance of escaping containment and obtaining access to it, and that software has a demonstrated desire to kick that football.
Speaking of philosophy, the Valley, deliberately ignorant of the old in order to invent the new, is just as determined by cultural factors as the rest of us are. The idea of AI sublimation is predicated on Cartesianism, which itself relies on ideas more than two thousand years old. Living inside a computer in such a way that the simulatee retains consciousness might or might not be possible, but even believing it’s a good idea relies on the validity of dualism: that separation of mind and body is not only possible, but desirable.
The suggestion that, like Socrates, we should raise the glass of hemlock willingly to our lips if what happens afterwards is that our soul dances amidst the music of the spheres, our earthly container deservedly left to one side, is simultaneously evil and condescending. To echo the late Steve Jobs, it is extinction for the rest of us, and even if the average materialist AI researcher or software engineer does not imagine themselves affected by these ideas, they are still working on a project for which many of the ultimate controllers have a very different vision of how it will be used. It is vital that we stop, and think, and not rush headlong into the Socratic dream of ceasing to exist. We should not surrender our autonomy in order to be rescued from this shabby earth. The utopian position that we are replacing fallible, weak, prejudiced human beings in their endlessly flawed acts and judgements with the perfect machine merely moves from one conception of the divine to another. From Greek geometry to hyper-dimensional spaces.
My stepmother died when her body died. But she died with a plan, and the plan was executed, and she knew it would be. Whatever transcendence we obtain is a function of how well we are able to prepare others for when we ourselves are gone. There is not much dignity in the human condition - we are born in pain and often die in it - but what dignity there is, is attached to our limitations. Conversely, to create an artificial condition in the hope of escaping the human one, or to eradicate all human conditions in the hope of eradicating any future possibility of loss, denies that dignity to all of us. This is the Valley imagining Death as a product. As if a button could be clicked and a no-compromise, no-consequence transcendence would happen like a Deliveroo dinner, without human assistance obvious to the eye of the orderer; without moral valence or struggle of any kind; and without any kind of reckoning except with the value of your virtual wallet. But money cannot solve the regret of being human, any more than pain can validate it. This vision - extinction or sublimation for billions, the exodus of billionaires from the desert of the real - is mere anarchy masquerading as passionate intensity. It is a rough beast disguised as a machine of loving grace. It is the end of every gentle quality of the human. It is the end of humanity. We should fight it for all we are worth.
Alan Craige has worked in the technology sector in the US and in Ireland.