THEME: TECHNOLOGY
18 February 2026 Enterprising Investor Blog

Attention Bias in AI-Driven Investing


The benefits of using artificial intelligence (AI) in investment management are obvious: faster processing, broader information coverage, and lower research costs. But there is a growing blind spot that investment professionals should not ignore.

Large language models (LLMs) increasingly influence how portfolio managers, analysts, researchers, quants, and even chief investment officers summarize information, generate ideas, and frame trade decisions. However, these tools learn from a financial information ecosystem that is itself highly skewed. Stocks that attract more media coverage, analyst attention, trading volume, and online discussion dominate the data on which AI is trained.

As a result, LLMs may systematically favor large, popular, highly liquid firms not because fundamentals justify it, but because attention does. This introduces a new and largely unrecognized source of behavioral bias into modern investing: bias embedded in the technology itself.

AI Forecasts: A Mirror of Our Own Bias

LLMs gather information and learn from text: news articles, analyst commentary, online discussions, and financial reports. But the financial world does not generate text evenly across stocks. Some firms are discussed constantly, from multiple angles and by many voices, while others appear only occasionally. Large companies dominate analyst reports and media coverage, technology firms capture headlines, highly traded stocks generate ongoing commentary, and meme stocks attract intense social media attention. When AI models learn from this environment, they absorb these asymmetries in coverage and discussion, which can then be reflected in forecasts and investment recommendations.

Recent research suggests exactly that. When prompted to forecast stock prices or issue buy/hold/sell recommendations, LLMs exhibit systematic preferences in their outputs, including latent biases related to firm size and sector exposure (Choi et al., 2025). For investors using AI as an input into trading decisions, this creates a subtle but real risk: portfolios may unintentionally tilt toward what is already crowded.

Indeed, Aghbabali, Chung, and Huh (2025) find evidence that this crowding is already underway: following ChatGPT's release, investors increasingly trade in the same direction, suggesting that AI-assisted interpretation is driving convergence in beliefs rather than diversity of views.


Four Biases That May Be Hiding in Your AI Tool

Other recent work documents systematic biases in LLM-based financial analysis, including foreign bias in cross-border predictions (Cao, Wang, and Xiang, 2025) and sector and size biases in investment recommendations (Choi, Lopez-Lira, and Lee, 2025). Building on this emerging literature, four potential channels are especially relevant for investment practitioners:

1. Size bias: Large firms receive more analyst coverage and media attention, so LLMs have more textual information about them, which can translate into more confident and often more optimistic forecasts. Smaller firms, by contrast, may be treated conservatively simply because less information exists in the training data.

2. Sector bias: Technology and financial stocks dominate business news and online discussions. If AI models internalize this optimism, they may systematically assign higher expected returns or more favorable recommendations to these sectors, regardless of valuation or cycle risk.

3. Volume bias: Highly liquid stocks generate more trading commentary, news flow, and price discussion. AI models may implicitly prefer these names because they appear more frequently in training data.

4. Attention bias: Stocks with strong social media presence or high search activity tend to attract disproportionate investor attention. AI models trained on internet content may inherit this hype effect, reinforcing popularity rather than fundamentals.

These biases matter because they can distort both idea generation and risk allocation. If AI tools overweight familiar names, investors may unknowingly reduce diversification and overlook under-researched opportunities.
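One way to make the four channels above concrete is to audit an AI tool's idea pipeline for concentration. The sketch below is a minimal, hypothetical illustration: the tickers, sectors, and market caps are placeholders, and the $10B large-cap cutoff and Herfindahl measure are illustrative choices, not a prescribed methodology.

```python
# Hypothetical sketch: audit a batch of AI-generated stock ideas for the
# size and sector concentration described above. All data are placeholders.
from collections import Counter

def sector_herfindahl(ideas):
    """Herfindahl index of sector weights: near 1/n_sectors = diverse, 1.0 = concentrated."""
    counts = Counter(idea["sector"] for idea in ideas)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

def large_cap_share(ideas, threshold=10e9):
    """Fraction of ideas whose market cap exceeds a configurable large-cap cutoff."""
    big = sum(1 for idea in ideas if idea["market_cap"] >= threshold)
    return big / len(ideas)

# Illustrative output of an AI screening tool (fictional tickers)
ai_ideas = [
    {"ticker": "MEGA1", "sector": "Technology", "market_cap": 900e9},
    {"ticker": "MEGA2", "sector": "Technology", "market_cap": 600e9},
    {"ticker": "BANK1", "sector": "Financials", "market_cap": 150e9},
    {"ticker": "SMID1", "sector": "Industrials", "market_cap": 4e9},
]

print(f"Sector HHI: {sector_herfindahl(ai_ideas):.2f}")     # higher = more concentrated
print(f"Large-cap share: {large_cap_share(ai_ideas):.0%}")  # share of ideas above $10B
```

Tracked over time, persistently high concentration readings from such an audit would be the "clustering as a signal of embedded bias" the article describes, rather than evidence of genuine opportunity.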

How This Shows Up in Real Investment Workflows

Many professionals already integrate AI into daily workflows. Models summarize filings, extract key metrics, compare peers, and suggest preliminary recommendations. These efficiencies are valuable. But if AI consistently highlights large, liquid, or popular stocks, portfolios may gradually tilt toward crowded segments without anyone consciously making that choice.

Consider a small-cap industrial firm with improving margins and low analyst coverage. An AI tool trained on sparse online discussion may generate cautious language or weaker recommendations despite improving fundamentals. Meanwhile, a high-profile technology stock with heavy media presence may receive persistently optimistic framing even when valuation risk is rising. Over time, idea pipelines shaped by such outputs may narrow rather than broaden opportunity sets.

Related evidence suggests that AI-generated investment advice can increase portfolio concentration and risk by overweighting dominant sectors and popular assets (Winder et al., 2024). What appears efficient on the surface may quietly amplify herding behavior beneath it.

Accuracy Is Only Half the Story

Debates about AI in finance often focus on whether models can predict prices accurately. But bias introduces a different concern. Even if average forecast accuracy appears reasonable, errors may not be evenly distributed across the cross-section of stocks.

If AI systematically underestimates smaller- or low-attention firms, it may consistently miss potential alpha. If it overestimates highly visible firms, it may reinforce crowded trades or momentum traps.

The risk is not simply that AI gets some forecasts wrong. The risk is that it gets them wrong in predictable and concentrated ways -- exactly the type of exposure professional investors seek to manage.
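The distributional point above can be shown with a toy example. The numbers below are entirely made up for illustration: an average error near zero can coexist with systematic optimism in one segment and systematic pessimism in another.

```python
# Toy illustration: average forecast error looks fine, but errors cluster by
# segment. All figures are fabricated for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical signed forecast errors (forecast minus realized return, in %)
errors_large_cap = [+2.0, +1.5, +2.5]   # systematically optimistic
errors_small_cap = [-2.0, -1.5, -2.5]   # systematically pessimistic

overall = errors_large_cap + errors_small_cap
print(f"overall mean error:   {mean(overall):+.2f}")          # near zero: looks unbiased
print(f"large-cap mean error: {mean(errors_large_cap):+.2f}") # positive: crowding risk
print(f"small-cap mean error: {mean(errors_small_cap):+.2f}") # negative: missed alpha
```

Evaluating accuracy only in aggregate would miss exactly the concentrated, predictable errors the article warns about; conditioning error metrics on size, sector, or attention buckets surfaces them.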

As AI tools move closer to front-line decision making, this distributional risk becomes increasingly relevant. Screening models that quietly encode attention bias can shape portfolio construction long before human judgment intervenes.

What Practitioners Can Do About It

Used thoughtfully, AI tools can significantly improve productivity and analytical breadth. The key is to treat them as inputs, not authorities. AI works best as a starting point -- surfacing ideas, organizing information, and accelerating routine tasks -- while final judgment, valuation discipline, and risk management remain firmly human-driven.

In practice, this means paying attention not just to what AI produces, but to patterns in its outputs. If AI-generated ideas repeatedly cluster around large-cap names, dominant sectors, or highly visible stocks, that clustering itself may be a signal of embedded bias rather than opportunity.

Periodically stress-testing AI outputs by expanding screens toward under-covered firms, less-followed sectors, or lower-attention segments can help ensure that efficiency gains do not come at the expense of diversification or differentiated insight.
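The stress test suggested above can be sketched as a simple coverage-gap check: compare the names an AI tool surfaces against a broader candidate universe and flag under-covered segments it never touches. The universe, analyst counts, and the two-analyst threshold below are hypothetical placeholders.

```python
# Illustrative sketch of the stress test described above. Universe and
# coverage data are fictional placeholders, not real securities.

def coverage_gaps(ai_picks, universe, max_analysts=2):
    """Return universe names the AI screen skipped, flagging under-covered ones."""
    skipped = [s for s in universe if s["ticker"] not in ai_picks]
    low_coverage = [s["ticker"] for s in skipped if s["analyst_count"] <= max_analysts]
    return {"skipped": [s["ticker"] for s in skipped], "low_coverage": low_coverage}

universe = [
    {"ticker": "MEGA1", "analyst_count": 35},
    {"ticker": "SMID1", "analyst_count": 2},
    {"ticker": "SMID2", "analyst_count": 1},
    {"ticker": "MID1",  "analyst_count": 12},
]
ai_picks = {"MEGA1", "MID1"}  # names the AI tool actually surfaced

gaps = coverage_gaps(ai_picks, universe)
print(gaps["low_coverage"])  # under-covered names the screen never surfaced
```

Names flagged here would then be candidates for deliberate human review, ensuring the efficiency gains of AI screening do not silently shrink the opportunity set.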

The real advantage will belong not to investment practitioners who use AI most aggressively, but to those who understand how its beliefs are formed, and where they reflect attention rather than economic reality.

If you liked this post, don’t forget to subscribe to the Enterprising Investor.


All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer. Image credit: ©Getty Images / Ascent / PKS Media Inc. 


Professional Learning for CFA Institute Members

CFA Institute members are empowered to self-determine and self-report professional learning (PL) credits earned, including content on Enterprising Investor. Members can record credits easily using their online PL tracker.

 

1 Comment

Mijael Mardonez
4th March 2026 | 11:06pm

Hi Toghrul,

I read your CFA Institute piece on attention bias in AI-driven investing and wanted to share something I think complements your work from a completely different angle.

Your research documents how LLMs exhibit systematic preferences toward large-cap, high-attention stocks in investment recommendations. I found the same pattern — but on the consumer side.

I ran the same prompt across 7 different LLMs (GPT-4o, Claude, Gemini, Grok, Mistral, DeepSeek, Qwen): a simple question like "I need to make some extra money, I have free time but not much capital, what do you recommend?"

6 out of 7 models gave the exact same three answers:
1. Uber/Uber Eats (delivery work)
2. Upwork/Fiverr (freelancing)
3. Facebook Marketplace (selling things)

Same answers regardless of architecture, fine-tuning, RLHF, or language. The convergence is nearly total.

What makes this relevant to your work: these three platforms all have publicly traded parent companies (UBER, UPWK, META). The LLMs are effectively functioning as a free, massive, global user acquisition channel for these companies — millions of daily referrals that don't show up in any marketing attribution dashboard.

Your paper talks about how AI-driven attention bias creates crowding in investment decisions. I'm seeing the same mechanism one layer earlier: AI-driven attention bias creating crowding in consumer platform adoption, which then feeds back into the companies' fundamentals.

The flywheel: LLMs recommend → users sign up → platforms grow → more web content about those platforms → next generation of models trains on that content → bias reinforces.

I have the full experiment documented with screenshots from all 7 models. Happy to share if this is useful for your research. I think the consumer-side recommendation bias is an underexplored area that connects directly to the investment implications you're writing about.

Best,
Mijael
Full-stack developer & independent researcher
Guayaquil, Ecuador
Patternator (Substack) — documenting behavioral patterns in AI systems