Where AI ends and investment judgment begins – CFA Institute Enterprising Investor


Artificial intelligence is changing the way investment professionals generate ideas and analyze investment opportunities. AI can now not only pass all three levels of the CFA exam but also autonomously complete long, complex investment analysis tasks. Yet a close reading of the latest academic research, reinforced by Yann LeCun’s recent testimony before the British Parliament, reveals a more nuanced picture for professional investors and points to a more structural shift.

Three structural themes appear in academic articles, industry studies, and regulatory reports. Together they suggest that AI will not simply enhance investors’ skills. Instead, it will revalue expertise, increase the importance of process design, and shift competitive advantage to those who understand AI’s technical, institutional, and cognitive limitations.

This post is the fourth installment in a quarterly series on AI developments relevant to wealth management professionals. Based on insights from contributors to the bimonthly newsletter Improved Intelligence in Investment Management, it builds on previous articles to provide a more nuanced view of AI’s evolving role in the industry.

Capability outpaces reliability

The first observation is the widening gap between capability and reliability. Recent studies show that frontier reasoning models can pass CFA Level I through III mock exams with exceptionally high scores, undermining the idea that memorized knowledge provides a sustainable advantage (Columbia University et al., 2025). Similarly, large language models perform increasingly well on benchmarks for reasoning, mathematics, and structured problem solving, as reflected in new cognitive scoring frameworks for AGI (Center for AI Safety et al., 2025).

However, a growing body of research warns that benchmark success masks vulnerability in real-world scenarios. OpenAI and Georgia Tech (2025) show that hallucinations reflect a structural trade-off: attempts to reduce false or fabricated answers inherently limit a model’s ability to answer rare, ambiguous, or underspecified questions. Related work on causal extraction from large language models further indicates that strong performance in symbolic or linguistic reasoning does not translate into robust causal understanding of real-world systems (Adobe Research & UMass Amherst, 2025).

This distinction is crucial for the investment sector. Investment analysis, portfolio construction and risk management do not operate on the basis of stable ground truths. The outcomes are regime dependent, probabilistic and very sensitive to tail risks. In such environments, results that appear coherent and authoritative, yet are incorrect, can have disproportionate consequences.

The implication for investment professionals is that AI risk will increasingly resemble model risk. Just as backtests routinely overstate real-world performance, AI benchmarks tend to overestimate decision reliability. Firms that deploy AI without adequate validation, governance, and control frameworks risk embedding latent vulnerabilities directly into their investment processes.

From individual skills to institutional decision quality

The second theme is that AI commoditizes investment knowledge while simultaneously increasing the value of the investment decision process. Evidence from AI use in production environments makes this clear. The first large-scale study of AI agents in production shows that successful deployments are simple, tightly constrained, and constantly monitored. In other words, AI agents today are neither autonomous nor causally “intelligent” (UC Berkeley, Stanford, IBM Research, 2025). In regulated workflows, smaller models are often preferred because they are more controllable, predictable, and stable.

Behavioral research reinforces this conclusion. Kellogg School of Management (2025) shows that professionals underuse AI when its use is visible to supervisors, even when it improves accuracy. Gerlich (2025) finds that frequent AI use can erode critical thinking through cognitive offloading. Left unmanaged, AI therefore creates a double risk of both underutilization and over-reliance.

For investment organizations, the lesson is therefore structural: the benefits of AI accrue not to individuals but to investment processes. Leading firms are already embedding AI directly into standardized research templates, monitoring dashboards, and risk workflows. Governance, validation, and documentation are becoming more important than raw analytical firepower, especially as regulators themselves adopt AI-based supervision (State of SupTech Report, 2025).

In this environment, the traditional idea of the ‘star analyst’ is also weakening. Repeatability, verifiability, and institutional learning are becoming the true sources of sustainable investment success. Such an environment requires a clear change in how investment processes are designed. In the wake of the Global Financial Crisis (GFC), investment processes became largely standardized, with a strong focus on compliance.

However, the emerging environment requires that investment processes be optimized for decision-making quality. This shift is significant in scale and difficult to achieve because it depends on managing individual behavioral change as a fundamental layer of organizational adaptability. This is something the investment industry has often tried to avoid through impersonal standardization and automation, and is now attempting again through AI integration, wrongly framing a behavioral challenge as a technological one.

Why the limitations of AI determine who captures value

The third theme focuses on AI’s limitations, rather than treating adoption merely as a technological race. On the physical side, infrastructure constraints are becoming binding. Research shows that only a small portion of announced US data center capacity is actually under construction, with grid access, electricity generation, and transmission timelines measured in years rather than quarters (JPMorgan, 2025).

Economic models reinforce why this matters. Restrepo (2025) shows that in an economy driven by artificial general intelligence (AGI), output becomes linear in compute rather than in labor. Economic returns therefore accrue to the owners of chips, data centers, and energy. Control over computing infrastructure, and over the platforms that manage its allocation, becomes the decisive factor in capturing value, because labor drops out of the growth equation.
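The intuition behind this result can be sketched with a stylized production function. This is a simplified illustration of the mechanism, not the exact model in Restrepo (2025): assume compute $K$ and labor $L$ become perfect substitutes once AGI can perform any task labor can.

```latex
% Stylized AGI production function (illustrative assumption, not
% Restrepo's exact specification): compute K and labor L are
% perfect substitutes, so output is linear in compute.
\[
  Y = A\,(K + L), \qquad
  \frac{\partial Y}{\partial K} = A, \qquad
  \lim_{K \to \infty} \frac{L}{K + L} = 0
\]
```

Because the marginal product of compute stays constant at $A$ while compute can be scaled and labor cannot, growth comes from accumulating $K$, and labor’s share of output shrinks toward zero, shifting returns to the owners of compute.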

Institutional limitations also require more attention. Regulators are rapidly expanding their AI capabilities, raising expectations for explainability, traceability and control in the investment industry’s use of AI (State of SupTech Report, 2025).

Finally, cognitive limitations emerge. As AI-generated research spreads, consensus will emerge more quickly. Chu and Evans (2021) warn that algorithmic systems tend to reinforce dominant paradigms, increasing the risk of intellectual stagnation. When everyone optimizes on similar data and models, differentiation disappears.

For professional investors, the widespread adoption of AI increases the value of independent judgment and process diversity, as both become increasingly scarce.

Implications for the investment sector

AI’s growing role in automating investment workflows makes clear what it cannot eliminate: uncertainty, judgment and responsibility. Companies that design their organizations around this reality are more likely to remain successful for the next decade.

Taken together, the evidence suggests that AI will act as a differentiator rather than a universal improvement, widening the divide between companies that design for reliability, governance, and constraints, and those that do not.

On a deeper level, the research points to a philosophical shift. AI’s greatest value may lie less in prediction than in reflection: challenging assumptions, exposing disagreements, and asking better questions rather than simply providing faster answers.


References

Almog, D., AI Recommendations and Non-Instrumental Image Concerns, preliminary working paper, Kellogg School of Management, Northwestern University, April 2025

van Cast, S., et al., State of SupTech Report 2025, December 2025

Chu, J. and J. Evans, Slowed Canonical Progress in Large Fields of Science, PNAS, October 2021

Gerlich, M., AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking, Center for Strategic Business Forecasting and Sustainability, 2025

Hendrycks, D., et al., A Definition of AGI, https://arxiv.org/pdf/2510.18212, October 2025

Kalai, A., et al., Why Language Models Hallucinate, OpenAI, arXiv:2509.04664, 2025

Mahadevan, S., Large Causal Models from Large Language Models, Adobe Research, https://arxiv.org/abs/2512.07796, December 2025

Patel, J., Reasoning Models Pass the CFA Exams, Columbia University, December 2025

Restrepo, P., We Will Not Be Missed: Work and Growth in the Age of AGI, NBER Chapters, July 2025

UC Berkeley, Intesa Sanpaolo, Stanford, IBM Research, Measuring Agents in Production, https://arxiv.org/pdf/2512.04123, December 2025
