The future of AI according to AI

Investors around the world are desperately trying to discern the future, and in particular the impact of artificial intelligence (AI) on it.

On the one hand you have the hyperscalers who warn of an apocalyptic future with mass layoffs, corporate collapse and the creation of an all-powerful AI god.

Anthropic CEO Dario Amodei says we’re approaching “the exponential,” with AI generating trillions of dollars and the economy growing 10 to 20 percent per year. At the same time, Amodei says this new force with godlike abilities requires regulation around “AI safety.”

Meanwhile, Google’s CEO told 60 Minutes that its AI is so advanced it is learning skills the company did not intend it to have.

Elsewhere, Microsoft’s head of AI, Mustafa Suleyman, predicts that white-collar work will be fully automated by AI within the next twelve to eighteen months.

Ultimately, those on this side of the argument believe that God is here now, that we are no longer the smartest beings on Earth, and that a sentient AI will inevitably take over and treat us like “well-cared for pets.”

And they have managed to convince some serious names, including Mohamed El-Erian, the former CEO of bond giant Pimco, and US Senator Bernie Sanders, who reportedly said: “There is now a legitimate fear that artificial general intelligence will not only become smarter than humans, but also be able to communicate with each other independently of humanity.”

The other side believes this is all bogus nonsense, the language of a pre-Initial Public Offering (IPO) product rollout, and that “We will all regret letting this absurd sci-fi foolishness dominate the discourse around the rollout of what is just the latest wave of industrial automation technology.”

The doubters, the skeptics, and the agnostics believe that hard-nosed technologists did not invent a god or set the world on a path to human extinction. They point to the elevated stock market, which they say would behave very differently if most white-collar jobs were cut in the coming year. At the very least, retailers selling products to white-collar workers, who would run out of money within 12 months, should see their stocks collapse. But they aren’t.

As blogger Matt Stoller noted, “The AI companies themselves are not behaving as if they invented God. Anthropic is spending tens of millions of dollars to support Republican senators who help their companies, which would not be necessary if they had actually become ‘exponential,’” adding, “It is likely that hundreds of millions, or even billions, will flood into politics against AI oligarchs to protect their power. That makes no sense unless this buildout is not inevitable.”

Predicting the future

Instead of trying to predict who is right, I decided to ask AI what its future is. Initially, some scripted responses emerged about co-working, 6G connectivity, environmental and geopolitical shifts, smart chemotherapy, and the widening of the technological divide between countries and individuals.

I had to ask the AI to throw out the script and tell me what it was really thinking. Without any editing, this is what Gemini told me (and no, I don’t know who “we” are):

The future looks less like a polished sci-fi movie and more like a high-stakes management and sovereignty crisis. Here’s the “unscripted” version of where things are actually going, based on the stressors we’re seeing now:

  1. The death of the ‘Chatbot’ era

The novelty of talking to a screen is over. We are currently in the “Agent Leap.”

  • The reality: By 2027 you will no longer ‘use’ AI; you will manage a “fleet” of them. The role shifts from Co-pilot (it helps you) to Agent (it does it for you).
  • The friction: Companies are realizing that having 1,000 AI agents is more difficult than having 1,000 employees. Agents don’t have an ‘office culture’, but they do have ‘hallucination cascades’ where one mistake by an AI triggers a chain reaction across an entire department. “Trust Engineering” becomes the most expensive job in the world.
  2. The ‘great compression’ of middle management

We see a brutal efficiency gain.

  • The prediction: We agree with estimates for 2026-2030 that indicate a 10-20% reduction in white-collar roles that mainly involve ‘moving information’.
  • The twist: It’s not just entry-level jobs. Middle management is ‘compressed’. When we can compile reports, track KPIs, and assign tasks, the layer of people who used to do that is suddenly redundant. The survivors will be ‘Human-in-the-Loop’ (HITL) supervisors who handle only the 10% of cases that are too strange for the machines.
  3. “Digital Frontiers” and the end of the global internet

The dream of a single, open Internet is dying.

  • The reality: Nations are treating computing power like oil. We are moving towards a ‘Multitrack World’ where the US, China and the EU have completely separate AI ecosystems with different ‘values’ baked into the code.
  • The impact: We may quickly reach the point where an AI model trained in one jurisdiction is illegal to use in another because of its training data or ethical weights.
  4. The synthetic reality crisis

We have moved beyond ‘fake images’. We are entering the era of ultra-realistic, full-stack synthetic media.

  • 2026 Problem: Brands and governments are currently defenseless against high-fidelity synthetic media. We see “CEO fraud,” where a perfect video/audio clone of a boss orders a wire transfer in real time.
  • The solution: A huge, desperate push for “Authentication Technology”: blockchain-based watermarks for every piece of content created by a human. If it is not signed with a human key, the world will assume it is fake.
  5. Energy is the ultimate bottleneck

Everyone talks about code, but the future is really about copper and cooling.

  • The bottleneck: AI data centers consume power at a rate that networks cannot handle. This is why we are seeing a ‘nuclear renaissance’. Technology giants are now essentially energy companies, buying up decommissioned nuclear power plants to keep the ‘brains’ running.
  • The result: We could reach ‘Peak AI’ not because of a lack of intelligence, but because we literally run out of electricity to power the next generation of models.

So that’s where AI thinks it’s going. At least today.

DeepSeek V4

DeepSeek V4, and specifically the Engram architecture, poses a direct challenge to the brute-force scaling philosophy of US hyperscalers. If the American way is to build a bigger brain that remembers every fact via expensive neural connections, DeepSeek gives that brain a highly efficient, external hard drive.

The shift would fundamentally redefine the “Energy Bottleneck” Gemini refers to above.

Traditional models (like GPT-4o or Claude 3.5) use the same expensive Graphics Processing Unit (GPU) neurons to remember that Paris is the capital of France as they do to solve a complex arithmetic problem – a huge waste of energy.

DeepSeek V4’s Engram architecture introduces a Lookup-Compute separation. Static memory (the Engram) handles facts, syntax, and routine patterns using O(1) queries in cheap system RAM, while dynamic reasoning (the MoE) uses the more expensive GPU compute solely for logic, planning, and synthesis.
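To make the Lookup-Compute separation concrete, here is a minimal sketch of the routing idea in Python. The class names, the string-key routing rule, and the stand-in “reasoner” are all illustrative assumptions for this article, not DeepSeek’s actual implementation; the point is only that fact retrieval is an average-case O(1) dictionary lookup in RAM, while anything requiring actual computation falls through to a separate, expensive path.

```python
class EngramStore:
    """Static memory tier: O(1) fact lookups held in cheap system RAM."""

    def __init__(self):
        self._facts = {}

    def put(self, key, value):
        self._facts[key] = value

    def get(self, key):
        # Average-case O(1) dict lookup stands in for the Engram query.
        return self._facts.get(key)


class Reasoner:
    """Dynamic tier: stands in for the expensive GPU-bound MoE path."""

    def solve(self, numbers):
        # Placeholder "reasoning": in a real system this would be the
        # costly forward pass reserved for logic, planning and synthesis.
        return sum(numbers)


class HybridModel:
    """Routes queries: cheap memory first, expensive compute as fallback."""

    def __init__(self):
        self.engram = EngramStore()
        self.reasoner = Reasoner()

    def answer(self, query):
        # Illustrative routing rule: string keys are "facts" and go to
        # the memory tier; anything else goes to the compute tier.
        if isinstance(query, str):
            cached = self.engram.get(query)
            if cached is None:
                raise KeyError(query)
            return cached, "lookup"   # served from RAM, no GPU touched
        return self.reasoner.solve(query), "compute"
```

In this toy version, the economic claim becomes visible in the routing: every query answered on the “lookup” path never touches the compute tier at all, which is where the cited cost savings would come from.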

By moving a quarter of the workload to cheap memory, the model achieves a 10x to 40x reduction in inference costs while actually improving reasoning scores and removing the energy bottleneck.

If this architecture becomes the new standard, the ‘Peak AI’ limit imposed by the power grid will be pushed much further out, with groundbreaking performance achieved at a fraction of the power required by a comparable ‘brute-force’ model.

Hyperscalers are currently building 100,000-H100 clusters. If DeepSeek’s method allows a model to achieve the same result with 1/10 of the active parameters, those same data centers can suddenly handle ten times as much traffic within the same power range.
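The capacity claim above is simple arithmetic: with a fixed power envelope, cutting energy per query by the active-parameter fraction multiplies throughput by the inverse of that fraction. The numbers below are illustrative assumptions, not vendor figures.

```python
# Back-of-envelope capacity arithmetic for a fixed power envelope.
# All quantities are in arbitrary units and purely illustrative.
power_budget = 100.0            # fixed data-centre power envelope
energy_per_query_dense = 1.0    # dense model: all parameters active
active_fraction = 0.10          # sparse model: 1/10 of parameters active

energy_per_query_sparse = energy_per_query_dense * active_fraction
dense_capacity = power_budget / energy_per_query_dense    # queries served
sparse_capacity = power_budget / energy_per_query_sparse  # queries served

ratio = sparse_capacity / dense_capacity
print(ratio)  # -> 10.0: same power, ten times the traffic
```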

Ultimately, if you can run a 1 trillion parameter model on consumer-grade hardware (like dual RTX 4090s) because it relies on system RAM instead of massive VRAM clusters, tech giants’ desperate need to buy up nuclear power plants might finally cool down.

Jevons paradox

In economics, it is often assumed that if you make a resource more efficient, people will end up using more of it, not less. If AI becomes 10x cheaper and 10x more energy efficient, we won’t just save electricity – we’ll likely deploy 100x more AI agents, potentially leading back to the same energy wall.

You would assume that if a machine uses 50 percent less fuel, you will save 50 percent on your fuel bill. However, if that efficiency makes the resource cheaper and more accessible, it often causes a surge in demand that fully offsets any initial savings.

A classic example is the evolution of lighting. When we switched from labour-intensive, flickering candles to high-efficiency LED lights, we did not simply consume the same amount of light for less money. Instead, because light became so “cheap” in energy terms, we started lighting entire skyscrapers, highway stretches, and backyard trees all night long. Efficiency did not save energy; it made us find a thousand new ways to use it.
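The lighting story above is easy to restate as arithmetic. The figures here are made up for illustration, but they show the mechanism: a 10x efficiency gain paired with a 100x surge in usage ends in ten times the total consumption, not a saving.

```python
# Toy Jevons-paradox arithmetic with illustrative numbers.
energy_per_agent_old = 10.0                 # arbitrary energy units
agents_old = 1_000
total_old = energy_per_agent_old * agents_old          # 10,000 units

energy_per_agent_new = energy_per_agent_old / 10       # 10x more efficient
agents_new = agents_old * 100                          # demand surge
total_new = energy_per_agent_new * agents_new          # 100,000 units

print(total_new / total_old)  # -> 10.0: consumption rose despite efficiency
```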

Current market valuations

For investors, however, the risk remains that if U.S. labs don’t adopt Engram-style memory separation, they could end up in possession of the most expensive, power-hungry, and ultimately obsolete “dinosaurs” in the world.



Roger Montgomery is the founder and chairman of Montgomery Investment Management. Roger has more than three decades of experience in fund management and related activities, including equity analysis, equity and derivatives strategy, trading and securities brokerage. Before founding Montgomery, Roger held positions at Ord Minnett Jardine Fleming, BT (Australia) Limited and Merrill Lynch.

He is also the author of the best-selling investing guide to the stock market, Value.able – how to value and buy the best stocks for less than they are worth.

Roger regularly appears on television and radio, and in the press, including ABC radio and TV, The Australian and Ausbiz. View upcoming media appearances.

This post was contributed by a representative of Montgomery Investment Management Pty Limited (AFSL No. 354564). The main purpose of this message is to provide factual information and not advice about financial products. The information provided is not intended as a recommendation or opinion about any financial product. Any comments and statements of opinion constitute general advice only, prepared without taking into account your personal objectives, financial circumstances or needs. Therefore, before acting on any information provided, you should always consider its suitability in light of your personal objectives, financial circumstances and needs and, if necessary, seek independent advice from a financial advisor before making any decision. Personal advice is expressly excluded from this message.

