Attention Bias in AI-Driven Investing – CFA Institute Enterprising Investor

The benefits of using artificial intelligence (AI) in investment management are obvious: faster processing, broader information coverage and lower research costs. But there is a growing blind spot that investment professionals should not ignore.

Large Language Models (LLMs) are increasingly influencing the way portfolio managers, analysts, researchers, quantitative experts and even Chief Investment Officers summarize information, generate ideas and formulate trading decisions. However, these tools learn from the same financial information ecosystem, which is itself highly skewed. Stocks that attract more media attention, analyst attention, trading volume and online discussions dominate the data on which AI is trained.

As a result, LLMs may systematically favor large, popular, highly liquid companies, not because fundamentals warrant it, but because attention does. This introduces a new and largely unexamined source of behavioral bias in modern investing: bias embedded in the technology itself.

AI predictions: a mirror of our own biases

LLMs gather information and learn from text: news articles, analyst commentary, online discussions, and financial reports. But the financial world does not generate text evenly across stocks. Some companies are discussed constantly, from multiple perspectives and by many voices, while others appear only occasionally. Big companies dominate analyst reports and media coverage, while technology companies grab headlines. Highly traded stocks generate constant commentary, and meme stocks attract intense attention on social media. When AI models learn from this environment, they absorb these asymmetries in reporting and discussion, which can then be reflected in predictions and investment recommendations.

Recent research suggests just that. When LLMs are asked to forecast stock prices or provide buy/hold/sell recommendations, they exhibit systematic biases in their output, including latent biases related to firm size and sector exposure (Choi et al., 2025). For investors using AI as input to trading decisions, this creates a subtle but real risk: portfolios could inadvertently tilt toward what is already crowded.

Indeed, Aghbabali, Chung, and Huh (2025) find evidence that this crowding is already underway: after the release of ChatGPT, investors increasingly trade in the same direction, suggesting that AI-assisted interpretation drives convergence in beliefs rather than diversity of views.

Four biases that may be hidden in your AI tool

Other recent work documents systematic biases in LLM-based financial analyses, including foreign biases in cross-border forecasts (Cao, Wang, and Xiang, 2025) and sector and size biases in investment recommendations (Choi, Lopez-Lira, and Lee, 2025). Building on this emerging literature, four potential channels are particularly relevant to investment professionals:

1. Size bias: Large companies receive more analyst and media attention, which means LLMs have more textual information about them, which can translate into more confident and often optimistic forecasts. Smaller companies, by contrast, may be treated conservatively simply because less information about them exists in the training data.

2. Industry bias: Technology and financial stocks dominate business news and online discussions. If AI models internalize this optimism, they can systematically assign higher expected returns or more favorable recommendations to these sectors, regardless of valuation or cycle risk.

3. Volume bias: Highly liquid stocks generate more trading commentary, news flow and price discussion. AI models may implicitly prefer these names because they appear more often in training data.

4. Attention bias: Stocks with a strong social media presence or high search activity tend to attract disproportionate attention from investors. AI models trained on internet content can pick up on this hype effect, amplifying popularity rather than fundamentals.

These biases matter because they can distort both idea generation and risk allocation. If AI tools overweight household names, investors may unknowingly reduce diversification and miss under-researched opportunities.
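One way to check for such a tilt is to compare the size profile of AI-generated ideas against the investable universe. The sketch below is a minimal, self-contained illustration: the universe, the market caps, and the attention-biased pick process are all simulated (the tickers and distribution parameters are invented for illustration), and a "tilt ratio" well above 1 flags a size bias in the idea pipeline.

```python
import random

random.seed(0)

# Hypothetical universe: 1,000 stocks with simulated market caps.
# All names and numbers here are illustrative, not real data.
universe = [{"ticker": f"STK{i:04d}", "mkt_cap": random.lognormvariate(8, 1.5)}
            for i in range(1000)]

# Suppose an AI tool surfaced 50 ideas. An attention-biased tool would
# oversample large caps; we simulate that by weighting picks by market cap.
weights = [s["mkt_cap"] for s in universe]
picks = random.choices(universe, weights=weights, k=50)

def median_cap(stocks):
    """Median market cap of a list of stock records."""
    caps = sorted(s["mkt_cap"] for s in stocks)
    mid = len(caps) // 2
    return caps[mid] if len(caps) % 2 else (caps[mid - 1] + caps[mid]) / 2

# A large-cap tilt ratio well above 1 flags a size bias in the idea pipeline.
tilt = median_cap(picks) / median_cap(universe)
print(f"large-cap tilt ratio: {tilt:.2f}")
```

In practice the same comparison could be run against real AI output and the actual benchmark universe; the diagnostic is the ratio, not the simulation.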

How this is reflected in real investment workflows

Many professionals are already integrating AI into daily workflows. Models summarize filings, extract key statistics, compare peer companies and draft preliminary recommendations. These efficiencies are valuable. But if AI consistently emphasizes large, liquid, or popular stocks, portfolios can gradually tilt toward crowded segments without anyone consciously making that choice.

Take a small-cap industrial company with improving margins and low analyst coverage. An AI tool trained on sparse online discussion may produce cautious language or weaker recommendations despite the improving fundamentals. Meanwhile, a high-profile tech stock with a large media presence may keep attracting bullish views even as valuation risk increases. Over time, idea pipelines built on such output can narrow rather than broaden the opportunity set.

Related evidence suggests that AI-generated investment advice can increase portfolio concentration and risk by steering users toward dominant sectors and popular assets (Winder et al., 2024). What seems efficient on the surface can quietly reinforce herd behavior underneath.

Accuracy is only half the story

Debates about AI in finance often revolve around whether models can accurately predict prices. But bias introduces another problem. Even if the average accuracy of the forecasts seems reasonable, the errors may not be evenly distributed across the cross-section of stocks.

If AI systematically underestimates smaller or low-attention companies, it can consistently miss potential alpha. If highly visible companies are overrated, it can push investors into crowded trades or momentum traps.

The risk is not just that AI gets some predictions wrong. The risk is that it gets them wrong in predictable and concentrated ways – exactly the kind of exposure that professional investors are trying to manage.
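The distinction can be made concrete with a toy calculation. In the sketch below, every forecast-error number is invented purely to illustrate the argument: the pooled average error looks mild, but bucketing by size reveals a systematic underestimate of small caps that the aggregate figure hides.

```python
import statistics

# Illustrative one-period forecast errors (forecast minus realized return, in %)
# for two size buckets. These numbers are made up to illustrate the point,
# not empirical estimates.
errors_large = [1.0, -1.5, 0.5, -0.5, 1.5, -1.0]   # noisy but sign-balanced
errors_small = [-4.0, -3.0, -5.0, -3.5, 2.0, 1.5]  # skewed negative

# Pooled mean error looks mild...
pooled_bias = statistics.mean(errors_large + errors_small)  # -1.0

# ...but bucketing shows where the errors concentrate: small caps are
# systematically underestimated while large caps are not.
bias_large = statistics.mean(errors_large)  # 0.0
bias_small = statistics.mean(errors_small)  # -2.0
print(pooled_bias, bias_large, bias_small)
```

The same bucketed-bias check can be applied to any model's forecast history, grouping by size, sector, or attention proxies instead of a single split.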

As AI tools move closer to frontline decision-making, this distributional risk becomes increasingly relevant. Screening models that silently encode attention bias can shape portfolio construction long before human judgment intervenes.

What practitioners can do about it

When used carefully, AI tools can significantly improve productivity and analytical breadth. The key is to treat them as input, not authority. AI works best as a starting point – surfacing ideas, organizing information and speeding up routine tasks – while the final judgment, valuation discipline and risk management are heavily human-driven.

In practice, this means paying attention not only to what AI produces, but also to patterns in its output. If AI-generated ideas repeatedly cluster around large-cap names, dominant industries, or highly visible stocks, that clustering itself may signal embedded bias rather than opportunity.

Periodically stress-testing AI results by expanding the screens to under-examined companies, less-followed sectors, or segments that receive less attention can ensure that efficiency gains do not come at the expense of diversification or differentiated insight.
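As one illustration of such a stress test, the sketch below re-runs a hypothetical idea screen with a cap on analyst coverage, forcing under-followed names into view. The tickers, scores, and the `analyst_count` field are all invented for this example; in practice the pool and the coverage data would come from the firm's own screening pipeline.

```python
# Hypothetical AI-generated idea pool: each record carries a model score
# and an analyst-coverage count. All values here are illustrative.
ideas = [
    {"ticker": "MEGA1", "score": 0.91, "analyst_count": 38},
    {"ticker": "MEGA2", "score": 0.88, "analyst_count": 41},
    {"ticker": "MID1",  "score": 0.86, "analyst_count": 12},
    {"ticker": "SML1",  "score": 0.84, "analyst_count": 3},
    {"ticker": "SML2",  "score": 0.82, "analyst_count": 2},
]

def screen(ideas, top_n=3, max_coverage=None):
    """Rank ideas by score; optionally cap analyst coverage to
    surface under-researched names."""
    pool = ideas if max_coverage is None else [
        i for i in ideas if i["analyst_count"] <= max_coverage]
    return [i["ticker"] for i in sorted(pool, key=lambda i: -i["score"])[:top_n]]

baseline = screen(ideas)                   # dominated by heavily covered names
stressed = screen(ideas, max_coverage=15)  # under-followed names forced in
print(baseline, stressed)
```

Comparing the baseline and stressed lists side by side shows whether the screen's top ideas survive once the attention filter is removed from their favor.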

The real benefit will not accrue to investment professionals who use AI most aggressively, but to those who understand how its beliefs are formed and where it reflects attention rather than economic reality.
