Why the people building AI won’t do anything about it
US Senator Bernie Sanders of Vermont and Professor Geoffrey Hinton – widely considered the ‘Godfather of AI’ – argued last week at Georgetown University that artificial intelligence (AI) and robotics are not inherently bad, but that the people driving this technological revolution are the richest people in the world. Musk and Bezos, they suggest, do not lie awake at night worrying about ordinary people. They don’t worry about working people. They don’t spend hundreds of billions of dollars to extend life expectancy, tackle global warming, shorten the work week or ensure quality health care. They just want more wealth and more power.
It’s clear that Sanders thinks their motivations are incompatible with a just future. So the question is not whether AI is inherently good or bad; it’s who controls it and who benefits from it.
As Professor Hinton pointed out, if we had a political system run for the benefit of the people, it would of course make sense to pour our money and time into developing a very powerful AI that could solve our problems and do the work for us. But we don’t live in that kind of world, so we have to be very careful.
Ask the people at the coal face
If a chef refused to eat his own food, a boatbuilder refused to board his own boat, or the chief executive of an aircraft manufacturer never flew on his own planes, you would have doubts.
According to a Guardian investigation published this week (November 23, 2025), that is essentially the situation we now have in the field of generative artificial intelligence (GenAI).
The article draws on interviews with more than a dozen frontline AI workers: the human evaluators, trainers, and data labelers who literally teach ChatGPT, Google Gemini, Meta AI, Grok, and others how to sound coherent and “safe”.
You might be surprised to learn that training AI is actually a very human, manual, and grimy affair.
And almost without exception, these people at the coal face are the very ones now telling their families, friends, and even their own children to stay away from the technology they help build.
According to the Guardian, the people closest to the models have no confidence in them. They see the constant hallucinations, hidden biases and downright dangerous outputs up close. As a result, many have banned generative AI in their own homes and actively discourage others from using it.
Interviewees also noted that speed crushes safety. The relentless pressure for fast turnaround times, vague instructions from AI industry bosses, and unrealistic deadlines meant that quality was sacrificed so that companies could release new features and versions, maintain the perception of rapid progress, and keep stock valuations rising.
According to the interviewees, AI hallucinations are increasing, not decreasing. An independent NewsGuard audit conducted between August 2024 and August 2025 found that the top 10 models now almost never say “I don’t know” (non-response rates fell from 31 percent last year to 0 percent this year), while the share of responses that confidently repeat dangerous falsehoods has almost doubled (from 18 percent to 35 percent).
Disturbingly, sensitive topics, including medical questions, historical controversies, and hate-speech detection, are being handled by unqualified, low-paid contractors with no domain expertise, paid pennies per task and racing against the clock. Obviously, if the data fed into these models is rushed, inconsistent, and often toxic, the people cleaning that data must conclude that the end product can never be truly reliable.
According to the same frontline workers, AI feedback loops are broken: the biases and errors they report go unaddressed. The Guardian cited an example in which Gemini refused to answer basic questions about Palestinian history while happily giving long answers about Israel.
For investors
It is of course difficult to make predictions years out, but we can reasonably conclude that regulators will eventually respond. Regulatory risk has therefore shifted from ‘possible’ to ‘when, not if’.
Some suggest we can expect a wave of EU AI Act enforcement in 2026, plus likely US federal legislation in 2026-2027 once the new Congress is in session.
And if foundation models are deemed harmful, regulators will eventually demand mandatory third-party audits, massive transparency requirements, and possibly legal liability for hallucinations that cause harm.
The idea that the best data, the most money and the most computing power will inevitably produce an unassailable lead can be undone by a simple caveat: if the data pipeline is rushed, the data going in may be nonsense. If that’s true, scaling – which is what the AI boom is now about – won’t be a get-out-of-jail-free card.
That’s because corporate adoption is more likely to slow down than speed up. If CIOs, who are already nervous about hallucinations, take The Guardian’s findings seriously, how can they not pause their rollouts?
The hype-driven cheap financing for land grabs – the ‘land-and-expand leads to total platform domination’ flywheel – will only deliver decent returns for investors if enterprise customers believe the outputs are reliable enough for mission-critical workflows. The Guardian suggests that trust may be eroding.
Keep in mind, however, that stock momentum can remain disconnected from fundamentals for a long time. In this biggest hype cycle in history, bad news is already routinely dismissed. All eyes are on up-and-to-the-right charts for monthly active users and compute capital expenditure (capex) guidance. That disconnect could persist for quarters, possibly even a few years.
Still, it’s worth keeping an eye on the ‘half-life’ of each new AI release and asking yourself whether the ‘wow’ is diminishing.
We’ve already noted that general purpose technologies (GPTs) have generated tremendous excitement throughout history, lowering the cost of capital and creating massive economies of scale that have led to oversupply. The Guardian’s evidence suggests AI owners are running a ‘move fast and ship’ model, treating safety and quality as a public relations (PR) tick box. Commoditization and regulation are not unlikely.
For many investors it won’t make any difference, but if you’re long a stock trading at 80 to 120 times sales, it will. Our suggestion is not to exit entirely, but to rebalance. After taking advice, consider taking some profits off the table and diversifying them into other asset classes, such as stocks and fund managers with exposure to defensive and growth sectors that do not depend on the AI boom.

