Warning: Today’s message is about a thought experiment, not an empirical examination. Normal service will resume tomorrow.
The rise of AI shows no sign of stopping, and warnings about the machines keep piling up. The Wikipedia page on the existential risk from AI has a long list of things that can go wrong and traces warnings about catastrophic risks from AI back to Alan Turing. So if AI has even a small chance of destroying humanity, our planet, or both, shouldn't we think about investing in mitigating or preventing these risks?
Charles Jones of Stanford University tried to put some numbers on this by building a cost-benefit model. He started with an analogy from the insurance sector. The statistical value of a life in the US is $10 million. This means that the government or an insurance company should be prepared to pay $100,000 to reduce a single person's risk of death by 1%. Spread over the next ten years, prevention measures that reduce mortality by 1% over that decade are worth $10,000 per year. Or, to reverse it, a $10 million life insurance policy that pays out if you die within the next ten years, when your risk of dying over that period is 1%, should cost you about $10,000 a year.
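The insurance analogy above is simple enough to check directly. This is just a sketch of the expected-value arithmetic in the paragraph, not anything from Jones's paper:

```python
# Back-of-envelope check of the insurance analogy:
# value of a 1% mortality reduction at the US statistical value of a life.
VSL = 10_000_000       # statistical value of a life in the US, in dollars
risk_reduction = 0.01  # a 1 percentage-point lower chance of death
years = 10             # horizon over which the risk reduction applies

value_total = VSL * risk_reduction    # worth $100,000 in total
value_per_year = value_total / years  # or $10,000 per year over the decade

print(f"total: ${value_total:,.0f}, per year: ${value_per_year:,.0f}")
# → total: $100,000, per year: $10,000
```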
OK, now calculate how much money the government should spend on reducing the existential risks of AI if the risk that AI destroys the planet or humanity in the next ten years is 1%. Of course, we do not know how large the risk really is, nor do we know how effective any mitigation measure will be. So let's assume that by taking measures against extreme AI risks, we can cut the disaster risk in half. How much should we spend every year on AI disaster mitigation?
You will not believe the results in the chart below. In this baseline scenario, the government should spend 15.3% of GDP every year to prevent the existential risks of AI. That is four times as much as the US government currently spends on defense.
If the risk of AI killing the human race in the next ten years is only half as large, we still ought to spend 8.3% of GDP every year to prevent that outcome. If our mitigation measures are less effective and reduce the existential risk by 20% instead of half, we still ought to spend 5.9% of GDP on AI mitigation.
And none of this takes into account that if we destroy humanity in the next ten years, we lose all future generations yet unborn. The calculation above is entirely selfish, considering only our own survival over the next ten years. If we also value the lives of our children and grandchildren, we should spend a whopping 29.5% of GDP every year on mitigating catastrophic AI risks.
Optimal spending to reduce the existential risk of AI
Source: Jones (2025)
Do you think we will ever spend anything near enough to prevent the existential risks of AI? Of course not.
This is just a theoretical exercise, but it tells us that if we took existential risks seriously, we would have a completely different government budget.
Or to reverse it: the fact that we spend next to nothing on existential risk prevention shows that, as a society, we live carelessly in the moment without worrying about the future. As always, we will underprepare and then throw huge amounts of money at the problem once we are in a crisis. That has been the story of humanity. And so far, we have usually avoided societal collapse. Let's hope it stays that way.

#Preventing #Terminator #scenario


