This report addresses the ethical concerns and risks of AI washing in finance, providing crucial questions for stakeholders to evaluate managers’ AI claims and ensure transparency, integrity, and the genuine application of AI in investment strategies.

Report Overview
The rapid rise of artificial intelligence (AI) in finance has brought both real innovation and misleading marketing claims. Many financial services firms feel pressure to appear technologically advanced to stay competitive.
Although some firms genuinely apply machine learning and AI to improve investing, others make claims that do not match reality. These firms may use buzzwords such as “AI-driven” or “machine learning–enabled” without truly integrating these tools into their investment processes. Consequently, clients and investors may be misled into believing they are investing in innovative, cutting-edge strategies when they are not.
The CFA Institute report “AI Washing: Signs, Symptoms, and Suggested Solutions” addresses the growing concern around AI washing (AIW) — the practice of making false or exaggerated claims about the use of AI in financial products or services. It examines what AIW is, why firms engage in it, and how it affects clients and the broader development of AI, and it touches on the ethical, regulatory, and technical measures that can help address it. It also offers guidance to asset owners on how to distinguish genuine AI use from inflated claims in the marketplace.
According to NVIDIA’s “State of AI in Financial Services: 2025 Trends” report, 57% of respondents in a global survey of financial professionals are using or considering AI for data analytics, and generative AI usage has risen sharply to 52% from 40% in 2023. In addition, 37% report AI-driven operational efficiencies, and 32% believe AI offers a competitive advantage. Use of AI in trading and portfolio optimization has increased to 38% from 15%, while its application in pricing, risk management, and underwriting has grown to 32% from 13%.
Barriers to AI Adoption
True AI in finance involves systems that process large datasets, learn patterns, and make decisions — such as predicting asset prices or optimizing portfolios. These efforts require serious investment in talent, technology, and time. Many investment firms, however, either lack the resources or are unwilling to overhaul their existing processes to meaningfully incorporate AI. Instead, they may add small AI elements (e.g., a chatbot or language model) but advertise their strategy as “AI-powered,” which is deceptive if these tools do not play a central role.
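To make the distinction concrete, consider what even the simplest "learning from data" step looks like. The toy sketch below (not from the report, and far simpler than any production system) fits a linear model to synthetic factor data to recover a return-generating pattern; genuine AI-driven strategies apply this idea at vastly greater scale and complexity, which is precisely why a bolted-on chatbot does not qualify.

```python
import numpy as np

# Hypothetical illustration: "learning" a pattern from noisy synthetic data.
# Feature names and values are invented for the example.
rng = np.random.default_rng(seed=0)

# Synthetic factor exposures (500 observations, 3 factors) and a target
# return series generated from known weights plus noise.
X = rng.normal(size=(500, 3))
true_weights = np.array([0.5, -0.2, 0.1])
y = X @ true_weights + rng.normal(scale=0.05, size=500)

# Ordinary least squares: the minimal "pattern learning" step.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(weights, 2))  # estimates should lie near the true weights
```

The point of the sketch is the gap it exposes: a real strategy must show the data, the model, and the measured improvement, whereas an AI-washed one cannot point to any equivalent of the fitted `weights` driving decisions.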
Barriers to real AI adoption in investing are high. Financial data is often messy and sparse, and financial outcomes are notoriously hard to predict. Unlike other industries where data is more abundant and easier to model, investment forecasting requires handling noisy, volatile, and complex inputs. Consequently, many firms hesitate to disrupt existing models that already perform well.
AIW is particularly dangerous because it undermines explainable AI (XAI) — a movement focused on making AI systems more transparent, understandable, and trustworthy — especially for non-technical users. If firms exaggerate or hide how they use AI, it becomes harder for stakeholders to assess the real value or risks of these tools.
This report asserts that investors deserve transparency about what technologies are being used, how they work, and whether they deliver value. Firms should avoid overhyping their use of AI just to attract clients or compete with rivals. Instead, they should be transparent about how they use AI, what it adds to their process, and what limitations exist.
Asset managers or asset owners must be able to provide sufficient detail regarding why and how they implement AI technology in their process, what specific frameworks they use, and what results or improvements they observe from using AI. This recommendation is in line with the ethical principles of transparency and duty to clients as set out in the CFA Institute Code of Ethics and Standards of Professional Conduct.
To help stakeholders — especially asset owners and clients — spot AIW, this report provides a list of targeted questions that cover technical details (e.g., what algorithms are used, what data they train on, how performance is measured) and organizational aspects (e.g., who is leading the AI efforts and whether they have the right background).
Key Takeaways
- AIW is the practice of overstating or misrepresenting how firms use AI in their financial products and services.
- Many firms face pressure to appear “tech savvy” and use AI buzzwords to attract clients — even when AI is not a core part of their process.
- Real AI use requires deep expertise, substantial resources, and clear evidence of impact on investment decisions or performance.
- AIW undermines XAI by making AI processes less transparent and more difficult for stakeholders to evaluate.
- The report offers a practical questionnaire to help asset owners and clients detect AIW through due diligence.
- Firms must uphold marketing integrity and transparency, ensuring that claims about AI use reflect reality — not hype — in line with the CFA Institute Code of Ethics and Standards of Professional Conduct.