Time to rethink AI exposure, deployment and strategy
This week, Yann LeCun, Meta’s recently departed Chief AI Scientist and one of the founders of modern AI, presented a technically informed view of the evolving AI risk and opportunity landscape at an evidence session of the UK Parliament’s All-Party Parliamentary Group on Artificial Intelligence (APPG AI). This post is built around LeCun’s testimony to the group, with quotes taken directly from his comments.
His comments are relevant to investment managers because they cover three areas that capital markets often consider separately but should not: AI capabilities, AI control and AI economics.
The dominant AI risks no longer center on who trains the largest model or secures the most advanced accelerators. They increasingly concern who controls the interfaces to AI systems, where information flows are mediated, and whether the current wave of LLM-focused capital expenditure will deliver acceptable returns.
Sovereign AI risk
“This is the biggest risk I see in the future of AI: the capture of information by a small number of companies through proprietary systems.”
For states, this is a national security concern. For asset managers and companies, it is a dependency risk. When research and decision-support workflows are mediated by a limited number of proprietary platforms, trust, resilience, data confidentiality, and bargaining power weaken over time.
LeCun identified federated learning as a partial mitigation. In such systems, the central model never needs to see the underlying training data; participants train locally and exchange only model parameters.
In effect, this allows the resulting model to perform “… as if it were trained on the entire set of data… without the data ever leaving (your domain).”
However, this is not a lightweight solution. Federated learning requires a new kind of setup: trusted orchestration between the parties and the central model, plus secure cloud infrastructure at national or regional scale. It reduces data-sovereignty risk, but it does not eliminate the need for sovereign cloud capacity, reliable energy supply, or sustained capital investment.
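To make the mechanics concrete, here is a minimal Python sketch of federated averaging, a common federated-learning scheme. The toy linear-regression task, the two-party setup, and all function names are illustrative assumptions, not the specific system LeCun described.

```python
# Minimal federated-averaging sketch (illustrative): each party trains locally
# and shares only parameter vectors; the raw data never leaves its own scope.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One party's local training: plain gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only parameters cross the trust boundary

def federated_round(global_w, parties):
    """Orchestrator: average local updates, weighted by each party's data size."""
    updates = [local_update(global_w, X, y) for X, y in parties]
    sizes = np.array([len(y) for _, y in parties], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Two "sovereign" data holders with private datasets of different sizes.
parties = []
for n in (100, 300):
    X = rng.normal(size=(n, 2))
    parties.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):  # repeated rounds converge toward the shared optimum
    w = federated_round(w, parties)
print(w)  # ~[2.0, -1.0], learned as if trained on the pooled data
```

Even this toy exhibits the orchestration burden: some party must run the aggregation step, and every participant must trust the averaged parameters it receives.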
AI assistants as a strategic vulnerability
“We can’t afford to have those AI assistants under the sole control of a handful of companies in the US or from China.”
AI assistants are unlikely to remain simple productivity tools. They will increasingly mediate daily information flows and shape what users see, ask and decide. LeCun argued that the concentration risk in this layer is structural:
“We need a wide diversity of AI assistants for the same reason we need a wide diversity of news media.”
The risks sit primarily at the state level, but they also matter for investment professionals. Beyond obvious abuse scenarios, a narrowing of information perspectives by a small number of assistants risks reinforcing behavioral biases and homogenizing analysis.
Edge compute does not eliminate cloud dependency
“Some will run on your local device, but most will have to run somewhere in the cloud.”
From a sovereignty perspective, edge deployment can move some workloads onto local devices, but it does not eliminate jurisdictional or control issues:
“There’s a real question here about jurisdiction, privacy and security.”
LLM capabilities are overstated
“We are fooled into thinking that these systems are intelligent because they are good at language.”
The problem is not that large language models are useless. It is that fluency is often mistaken for reasoning or an understanding of the world, a crucial distinction for agentic systems that rely on LLMs for planning and execution.
“Language is simple. The real world is messy, noisy, high-dimensional and continuous.”
For investors, this raises a familiar question: how many current investments in AI are building sustainable intelligence, and how many are optimizing the user experience around statistical pattern matching?
World models and the post-LLM horizon
“Despite the achievements of current language-oriented systems, we are still far from the kind of intelligence we see in animals or humans.”
LeCun’s concept of world models centers on learning how the world behaves, not just how words relate to one another. Where LLMs optimize next-token prediction, world models predict the consequences of actions. That distinction separates surface-level pattern replication from models that are more causally grounded.
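A minimal sketch of that distinction, under the simplifying assumption that an LLM exposes a next-token interface while a world model exposes a state-transition interface (all class and function names here are hypothetical):

```python
# Toy interfaces (hypothetical) contrasting the two objectives: an LLM maps a
# token context to a likely next token; a world model maps (state, action) to
# a predicted next state, which is what planning requires.
from typing import Callable

class ToyLLM:
    def next_token(self, context: list[str]) -> str:
        """Continue the text: pick a likely next token from language statistics."""
        raise NotImplementedError

class ToyWorldModel:
    def predict(self, state: dict, action: str) -> dict:
        """Predict the consequence: the state after taking `action` in `state`."""
        raise NotImplementedError

def plan(model: ToyWorldModel, state: dict,
         actions: list[str], score: Callable[[dict], float]) -> str:
    """Simulate each candidate action and choose the best predicted outcome."""
    return max(actions, key=lambda a: score(model.predict(state, a)))
```

Only the second interface supports planning: it lets a system evaluate the predicted consequences of an action before committing to it.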
The implication is not that today’s architectures will disappear, but that they may not be the architectures that ultimately deliver durable productivity gains or investment returns.
Meta and the risk to open platforms
LeCun acknowledged that Meta’s position has changed:
“Meta used to be a leader in offering open source systems.”
“We have lost ground in the past year.”
This reflects broader industry dynamics rather than a simple strategic reversal. While Meta continues to release models under open-weight licenses, competitive pressure and the proliferation of capable model architectures, highlighted by the rise of Chinese research groups such as DeepSeek, have reduced the durability of a purely architectural advantage.
LeCun framed the concern not as criticism of one company, but as a systemic risk:
“Neither the US nor China should dominate this space.”
As value shifts from model weights to distribution, platforms increasingly favor proprietary systems. From the perspective of sovereignty and dependence, this trend deserves the attention of both investors and policymakers.
Agentic AI: deployment ahead of governance maturity
“Agent systems today have no way to predict the consequences of their actions before they act.”
“That’s a very bad way to design systems.”
For investment managers experimenting with agents, this is a stark warning. Premature deployment risks hallucinations that propagate through decision chains, and action loops that run without adequate controls. Although technical progress is rapid, governance frameworks for agentic AI remain underdeveloped relative to the professional standards of regulated investment environments.
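As a sketch of what a more mature control loop could look like, the hypothetical Python below gates every proposed action behind a consequence check. The stub forward model, the notional-limit policy, and all names are illustrative assumptions, not an established governance framework.

```python
# Hypothetical guarded agent step: no action executes until a forward model
# (a stub here) predicts its consequences and a policy gate approves them.

def predict_consequences(state: dict, action: str) -> dict:
    """Placeholder forward model; per LeCun, today's agents lack this step."""
    raise NotImplementedError("plug in a world model or simulator here")

def approved(predicted: dict, limits: dict) -> bool:
    """Policy gate, e.g. block any trade above a notional limit."""
    return predicted.get("notional", 0.0) <= limits["max_notional"]

def guarded_step(state: dict, proposed_action: str, limits: dict):
    predicted = predict_consequences(state, proposed_action)
    if not approved(predicted, limits):
        return ("escalate_to_human", predicted)  # fail closed, not open
    return ("execute", proposed_action)
```

The key design choice is that the loop fails closed: a rejected or unverifiable prediction escalates to a human instead of executing.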
Regulation: applications, not research
“Do not regulate research and development.”
“You create regulatory capture by big tech.”
LeCun argued that poorly targeted regulation entrenches incumbents and creates barriers to entry. Regulation should instead focus on deployed applications and their outcomes:
“Any time AI is deployed and can have a major impact on people’s rights, regulation is needed.”
Conclusion: preserve sovereignty and avoid capture
The immediate AI risk is not runaway general intelligence. It is the capture of information and economic value within proprietary, cross-border systems. Sovereignty, at both the state and corporate level, is central, and that means putting security first when deploying LLMs in your organization: a zero-trust approach.
LeCun’s testimony shifts attention away from headline model releases and toward who controls data, interfaces, and compute. At the same time, much current AI investment spending remains anchored in an LLM-centric paradigm, even though the next phase of AI will likely look materially different. That combination creates a familiar environment for investors: an elevated risk of misallocated capital.
In times of rapid technological change, the greatest danger lies not in what the technology can do, but in where dependency and economic interest ultimately accumulate.
#Strategy #LLMBoom #MaintainSovereignty #AvoidCapture #CFAInstitute #EnterprisingInvestor

