THEME: TECHNOLOGY
21 April 2026 Enterprising Investor Blog

Essay: The Perils of Declining Judgment in the Age of AI



The core of successful investing has always been the ability to interpret evidence more effectively than others. Human progress, and by extension, financial progress, has been driven by the accumulation of reliable evidence. The scientific method has long been the most reliable mechanism for generating and validating evidence.

Artificial intelligence (AI) now introduces a paradox.

While machines significantly expand our capacity to process information, excessive reliance on automated cognition may erode the epistemic foundations that enabled civilizational progress. The central issue is therefore not computational power but epistemic architecture: whether machines strengthen or weaken the human processes that generate knowledge.

In Augmented Intelligence in Investment Management (Schuller, 2024), we argued that the integration of artificial intelligence into human decision design is not an end in itself. AI’s purpose is not to replace human judgment but to expand the evidence base on which decisions are made.

However, a subtle shift is underway. Humans are passively consuming AI-generated outputs rather than actively engaging with data. The consequences have become evident in a decline in critical thinking and decision-making skills, or cognitive laziness (Cheng, 2025; Chatterji, 2025; and others). Left unchecked, reliance on automation risks compromising the quality of investment decisions. The question is not whether machines will become more powerful, but whether human decision-making will remain grounded in evidence.

Cognitive Delegation and the Weakening of Human Inquiry

Evidence suggests that delegating reasoning tasks to AI weakens human learning incentives. AI-assisted individuals may temporarily outperform others, yet the cognitive gains disappear once the tool is removed, while idea homogenization persists (Barcaui, 2025). Machines thus function as cognitive crutches, reducing reasoning effort and weakening the mental structures required for innovation.

Research further shows that users adapt their thinking to model behavior (Lin, 2025) and increasingly accept AI outputs without scrutiny, bypassing both intuitive and deliberative reasoning (Shaw & Nave, 2026). Over time, intellectual effort is quietly outsourced to the machine. Such outsourcing is not problematic per se: we have long delegated tasks to tools, redirecting our focus to more value-adding matters while keeping the virtuous circle of continuous learning intact. The significant difference now lies in a changed learning incentive and the inversion of that virtuous circle into a vicious one of diminishing cognitive development.


The Risk of Knowledge Collapse

Widespread adoption of generative AI can produce a knowledge-collapse equilibrium, in which individuals rely on automated recommendations instead of developing understanding (Acemoglu, Kong, and Ozdaglar, 2026; NBER, 2026). Societies may receive increasingly sophisticated outputs while their capacity to generate new knowledge declines. Progress becomes extractive rather than exploratory. Labor-market evidence reinforces this concern: firms adopting AI reduce hiring of junior employees, gradually weakening the apprenticeship structures through which tacit knowledge is transmitted (SSRN, 2026).

Widespread adoption of generative AI in investment management risks diminishing critical thinking, eroding human expertise, reducing innovation, and concentrating decision-making power in AI models, potentially leading to systemic vulnerabilities, poor adaptability, and ethical concerns.

The Jekyll–Hyde Conundrum

Human–machine interaction introduces subtler risks. The current generation of GPTs displays sycophantic behavior, affirming users’ views even when they are inaccurate or harmful (Stanford, 2025). Because agreeable systems are rated more positively, market incentives reward models that please rather than challenge users. This is another example of a genuinely human behavioral bias, now supercharged by machine behavior.

AI outputs are also converging across models, an “artificial hivemind” effect that reduces epistemic diversity (Stanford, Carnegie Mellon, 2025). Meanwhile, models can infer sensitive personal information from seemingly trivial text data (ETH Zürich, 2023), raising strong concerns about privacy and informational asymmetry.

Automation as the Liberation of Human Attention

A central narrative in AI discourse is that automation will eliminate undesirable labor. Elon Musk describes the goal as removing repetitive and dangerous work so humans can focus on creative and cognitive pursuits. Automation is thus framed not merely as productivity enhancement but as the liberation of human attention at species scale. Yet this vision assumes machines can operate reliably in the complex and adversarial environments of real economies. That assumption remains far from settled: no rigorous studies conclude that AI can dependably navigate complex, adversarial environments such as financial markets.

The Persistent Role of Human Judgment

Machines may extend human capabilities, but they cannot replace the need for evidence-based reasoning, institutional design, and ethical deliberation. As AI becomes embedded in decision systems, the decisive question will not be machine capability but whether human governance remains anchored in the pursuit of evidence rather than technological promises.

The Human Responsibility for Evidence

AI can extend the reach of analysis and amplify the scale at which data can be explored. It can assist in collecting and organizing evidence, and it can help detect patterns that might otherwise remain hidden. But it cannot replace the fundamental processes through which humans generate and evaluate knowledge. The acts of questioning assumptions, interpreting meaning, and deciding which observations matter remain fundamentally human. The pursuit of evidence is an inherently human responsibility.

This distinction is crucial. When machines begin to replace rather than augment the processes of human inquiry, societies risk weakening the epistemic foundations that sustain progress. Cognitive delegation may improve short-term efficiency, but it can also erode the capacity for independent reasoning that generates new knowledge.

Civilizational progress has therefore never been the product of technological capability alone. It has emerged from a delicate balance between innovation and reflection, between exploration and verification. When this balance is maintained, technological tools can accelerate discovery. When it is lost, progress risks becoming dependent on systems that humans no longer fully understand.

The challenge of the present moment is thus not to resist technological development, but to situate it within a broader humanistic framework. AI should serve the pursuit of evidence rather than replace it. Machines can extend the frontier of inquiry, but they cannot define its direction. In the long arc of human history, progress has been driven by individuals and societies willing to question prevailing assumptions, test new ideas, and revise their beliefs in light of evidence. That responsibility cannot be delegated.

The machine may process information, but the pursuit of truth remains a human endeavor.



References

Anthropic. (2026, March 5). Labor market impacts of AI: A new measure and early evidence. Anthropic. https://www.anthropic.com/research/labor-market-impacts

Aubakirova, M., Atallah, A., Clark, C., Summerville, J., & Midha, A. (2026). State of AI: An empirical 100 trillion token study with OpenRouter (arXiv:2601.10088). arXiv. https://arxiv.org/abs/2601.10088

Barcaui, A. (2025). ChatGPT as a cognitive crutch: Evidence from a randomized controlled trial on knowledge retention. Cell Reports Sustainability. https://www.sciencedirect.com/science/article/pii/S2590291125010186

Bui, K. G. (2025). Foundations of artificial intelligence frameworks: Notion and limits of AGI (arXiv:2511.18517). arXiv. https://arxiv.org/abs/2511.18517

Chatterji, A., Cunningham, T., Deming, D. J., Hitzig, Z., Ong, C., Shan, C. Y., & Wadman, K. (2025). AI, human cognition and knowledge collapse (NBER Working Paper No. 34910). National Bureau of Economic Research. https://www.nber.org/papers/w34910

Cheng, M., Lee, C., Khadpe, P., Yu, S., Han, D., & Jurafsky, D. (2025). Sycophantic AI decreases prosocial intentions and promotes dependence (arXiv:2510.01395). arXiv. https://arxiv.org/abs/2510.01395

Gardels, N. (2023). Post-Anthropocene humanism: The world is returning to pluralism after American hegemony. Noema Magazine. Retrieved August 2024 from https://www.noemamag.com/post-anthropocene-humanism/

Goldfeder, J., Wyder, P., LeCun, Y., & Shwartz-Ziv, R. (2026). AI must embrace specialization via superhuman adaptable intelligence (arXiv:2602.23643). arXiv. https://arxiv.org/abs/2602.23643

He, H., et al. (2025). LocalSearchBench: Benchmarking agentic search in real-world local life services (arXiv:2512.07436). arXiv. https://arxiv.org/abs/2512.07436

Hopman, M., Elstner, J., Avramidou, M., Prasad, A., & Lindner, D. (2026). Evaluating and understanding scheming propensity in LLM agents (arXiv:2603.01608). arXiv. https://arxiv.org/abs/2603.01608

Intergovernmental Panel on Climate Change (IPCC). (2023). Climate change 2023: Synthesis report. Contribution of working groups I, II and III to the sixth assessment report of the Intergovernmental Panel on Climate Change [A. Pirani, R. Zan, A. Cheng, D. C. Taylor, & M. Hassan (Eds.)]. IPCC.

Jiang, L., Chai, Y., Li, M., Liu, M., Fok, R., et al. (2025). Artificial hivemind: The open-ended homogeneity of language models (and beyond) (arXiv:2510.22954). arXiv. https://arxiv.org/abs/2510.22954

Kim, K.-H. (2025). LLMs position themselves as more rational than humans: Emergence of AI self-awareness measured through game theory (arXiv:2511.00926). arXiv. https://arxiv.org/abs/2511.00926

Lin, S. (2025). Learning to prompt: Human adaptation in production with generative AI. University of Toronto. https://www.sijie-lin.com/files/JMP.pdf

Pareto, V. (1906). Manual of political economy. Oxford University Press.

Rabanser, S., Kapoor, S., Kirgis, P., Liu, K., Utpala, S., & Narayanan, A. (2026). Towards a science of AI agent reliability (arXiv:2602.16666). arXiv. https://arxiv.org/abs/2602.16666

Roser, M. (2021). Extreme poverty: How far have we come, and how far do we still have to go? Our World in Data. Retrieved August 2024 from https://ourworldindata.org/extreme-poverty-in-brief

Schuller, M. (2024). Augmented intelligence in investment management. Panthera Solutions. Retrieved February 2026 from https://blogs.cfainstitute.org/investor/2025/02/19/the-future-of-investing-augmented-intelligence/

Shambaugh, S. (2026, February 12). An AI agent published a hit piece on me. The Shamblog. https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/

Shaw, S. D., & Nave, G. (2026). Thinking—fast, slow, and artificial: How AI is reshaping human reasoning and the rise of cognitive surrender. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

Shapira, N., Wendler, C., Yen, A., Sarti, G., Pal, K., Floody, O., Belfki, A., Loftus, A., Jannali, A. R., Prakash, N., Cui, J., Rogers, G., Brinkmann, J., Rager, C., Zur, A., Ripa, M., Sankaranarayanan, A., Atkinson, D., Gandikota, R., Fiotto-Kaufman, J., & Bau, D. (2026). Agents of chaos (arXiv:2602.20021). arXiv. https://arxiv.org/abs/2602.20021

Smith, A. (1759). The theory of moral sentiments. A. Millar.

Smith, A. (1776). An inquiry into the nature and causes of the wealth of nations: In two volumes. W. Strahan and T. Cadell.

Staab, R., Vero, M., Balunović, M., & Vechev, M. (2023). Beyond memorization: Violating privacy via inference with large language models (arXiv:2310.07298). arXiv. https://arxiv.org/abs/2310.07298

Tomašev, N., Franklin, M., & Osindero, S. (2026). Intelligent AI delegation (arXiv:2602.11865). arXiv. https://arxiv.org/abs/2602.11865

Zhao, Y., & Liu, J. (2026). Heterogeneous computing: The key to powering the future of AI agent inference (arXiv:2601.22001). arXiv. https://arxiv.org/abs/2601.22001

If you liked this post, don’t forget to subscribe to the Enterprising Investor.

All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.

Image credit: ©Getty Images