The report underscores that transparent, explainable AI is vital in finance — not only for regulatory compliance but also for institutional trust, ethical standards, and risk governance. Although automated tools help, human oversight and organizational alignment are indispensable.
Executive Summary
Decision-making systems orchestrate our world, powered by artificial intelligence (AI) and machine learning (ML). These AI-based systems help underwriters and credit analysts to assess risk, portfolio managers to optimize security allocation, and individuals to select investment and insurance products. As the digital economy grows, so does the need for immense computing power. This power comes at a cost, however: Systems based on deep learning algorithms in particular can become so complex that even their developers cannot fully explain how these systems generate decisions. This, in essence, is the “black-box problem,” which makes it difficult to trust an AI system’s decisions, assess model fairness, and meet regulatory demands. Consequences include actual or perceived discrimination against protected consumer groups and violation of fair lending rules.
This problem has led to the consideration of various proposed solutions — the most well known being explainable AI (XAI) technologies — to create a cognitive bridge between human and machine. XAI refers to AI and ML techniques, or capabilities, that seek to provide human-understandable justifications for the AI-generated output. Implicit in explainable AI is the question “explainable to whom?” In fact, defining “whom” (or the user group) is essential to determining how the data are collected, what data can be collected, and the most effective way of describing the reason behind an action. This report focuses on the human behind human–machine collaboration. The objective is to generate discussion on the best way to support the needs of diverse groups of AI users. As such, this report explores the role of XAI in modern finance, highlighting its applications, benefits, and challenges, with insights from recent studies and industry practices. It presents a detailed analysis of the explainability needs of six stakeholder groups, the majority of which are nontechnical users. The analysis includes matching their needs with their job responsibilities and assessing the most relevant XAI methods. Finally, the report reviews some alternative approaches to XAI — evaluative AI and neurosymbolic AI.
With its focus on AI explainability, this study represents a deeper analysis of transparency and explainability issues raised in earlier CFA Institute works. These publications include “Ethics and Artificial Intelligence in Investment Management” (Preece 2022) and “Creating Value from Big Data in the Investment Management Process” (Wilson 2025).
Key Takeaways

1. The Need for AI Explainability in Finance
- Credit scoring and lending: Deep learning models can provide more detailed assessments by using alternative data (e.g., credit card transactions, social media), but they require explainability to ensure fairness, transparency, and regulatory compliance.
- Investment and portfolio management: AI can enhance financial analysis, asset allocation, and risk management by detecting patterns in large datasets to improve modeling and decision making, but lack of explainability and model “hallucinations” can lead to misinformed decisions and financial losses.
- Insurance: AI can speed up underwriting, boost fraud detection, and enhance customer service, but its use raises concerns about unintended bias and discrimination created through correlations with sensitive personal attributes. Examples of nonpersonal characteristics that may indirectly correlate with protected attributes include zip codes, which can proxy for socioeconomic status or ethnicity, and purchasing history, which can proxy for gender or ethnicity.
- Regulatory challenges: AI-driven systems present oversight difficulties caused by limited transparency in data sources and decision-making logic.
2. Explainability Techniques
This report categorizes XAI methods into two main types:
- Ante-hoc (built-in explainability) models:
  - Designed to be inherently interpretable (e.g., decision trees, linear regression, rule-based systems)
  - Provide global explainability, offering transparency into how a model works overall
  - Useful for regulatory and risk management applications where interpretability is prioritized over predictive accuracy (a decision-tree sketch follows this list)
- Post-hoc (after-the-fact explainability) models:
  - Applied to black-box models (e.g., deep learning, ensemble methods) to generate explanations after predictions are made
  - Examples:
    - Feature attribution methods (SHAP, LIME): Determine which input factors influenced an AI decision (a SHAP sketch follows this list)
    - Visual explanations: Heatmaps, partial dependence plots, and attention maps that illustrate AI reasoning
    - Counterfactual explanations: Explain how a decision could have changed under different circumstances (e.g., “If income were $5,000 higher, the loan would be approved”)
    - Rule-based and simplification approaches: Approximate black-box models with more interpretable versions
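To make the ante-hoc idea concrete, the sketch below fits a depth-limited decision tree to synthetic credit data and prints its complete rule set, which is global explainability in its simplest form: the entire decision logic can be read end to end. The feature names, labeling rule, and depth limit are illustrative assumptions, not taken from the report.

```python
# Hypothetical ante-hoc example: a shallow decision tree fit to synthetic
# credit data, with its full rule set printed for review.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2_000
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, n),
    "debt_to_income": rng.uniform(0.05, 0.60, n),
})
# Synthetic approval rule used only to generate labels for the demo.
y = ((X["income"] > 55_000) & (X["debt_to_income"] < 0.35)).astype(int)

# A depth-limited tree trades some accuracy for a rule set a reviewer can
# audit in full; this is the interpretability-versus-accuracy trade-off
# mentioned above.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```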
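For post-hoc feature attribution, the following sketch applies SHAP's TreeExplainer to a gradient-boosted classifier and prints, for one applicant, how much each input pushed the score above or below the model's average output. The feature names, labels, and model choice are assumptions made for illustration only; the sketch requires the shap package.

```python
# Hypothetical post-hoc example: SHAP feature attribution for one applicant
# scored by a gradient-boosted classifier trained on synthetic credit data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2_000
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, n),
    "debt_to_income": rng.uniform(0.05, 0.60, n),
    "credit_history_years": rng.integers(0, 30, n),
    "recent_delinquencies": rng.poisson(0.3, n),
})
# Synthetic approval rule used only to generate labels for the demo.
y = ((X["income"] > 55_000)
     & (X["debt_to_income"] < 0.35)
     & (X["recent_delinquencies"] == 0)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer returns per-feature contributions (in log-odds units) that
# sum to the gap between this applicant's score and the average score.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])[0]

for feature, value in zip(X.columns, contributions):
    print(f"{feature:>22}: {value:+.3f}")
```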
3. XAI Applications
This report addresses the following key examples; the list should not be construed as exhaustive.
- Credit scoring and lending: XAI methods, such as SHAP and LIME, can help financial institutions justify loan approvals or denials (a counterfactual sketch follows this list).
- Algorithmic trading and investment strategies: Visual techniques, such as heatmaps, can help traders understand how models generate buy/sell signals.
- Fraud detection and anti–money laundering (AML): Feature attribution techniques are used to improve the interpretability of fraud detection models.
- Regulatory compliance and risk management: Regulators require clear explanations for AI-driven financial decisions, ensuring accountability and fairness.
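The counterfactual style of justification noted earlier (“If income were $5,000 higher, the loan would be approved”) can be produced with a simple search over a candidate feature. The sketch below does this against a logistic regression fitted to synthetic data; the features, thresholds, and single-feature search are simplifying assumptions for illustration only.

```python
# Hypothetical counterfactual search: find the smallest income increase that
# flips a denial into an approval, holding other inputs fixed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 1_000
income = rng.normal(60_000, 15_000, n)
debt_ratio = rng.uniform(0.05, 0.60, n)
X = np.column_stack([income, debt_ratio])
# Synthetic approval rule used only to generate labels for the demo.
y = (income - 40_000 * debt_ratio > 50_000).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

applicant = np.array([[55_000.0, 0.30]])
print("Approved as-is?", bool(model.predict(applicant)[0]))

# Scan income increments until the model's decision flips.
for extra_income in range(0, 50_001, 500):
    candidate = applicant.copy()
    candidate[0, 0] += extra_income
    if model.predict(candidate)[0] == 1:
        print(f"If income were ${extra_income:,} higher, the loan would be approved.")
        break
else:
    print("No income increase up to $50,000 flips the decision.")
```

In practice, counterfactual tools typically search over several features jointly and constrain the suggested changes to ones that are plausible and actionable for the applicant.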
4. Key Challenges in Implementing XAI
- Technical challenges:
  - Lack of standardized evaluation metrics: No universal benchmarks exist to assess the quality of AI explanations, leading to inconsistent evaluations.
  - Real-time decision-making constraints: Delivering instant, understandable explanations during fast-paced transactions remains difficult.
- Regulatory challenges:
  - Privacy risks: Detailed explanations can unintentionally reveal sensitive personal or financial data.
  - Absence of universal explainability standards: Differing regional regulations (e.g., EU versus US) create compliance challenges for firms that operate internationally.
- User experience challenges:
  - Overreliance on AI explanations (algorithmic appreciation): Users often trust AI outputs without critical evaluation, leading to confirmation bias.
  - Limited user-friendly tools: Most XAI tools are built for technical users, lacking accessible interfaces for business users, regulators, and customers.
5. Alternative Approaches to XAI
Beyond standard XAI frameworks, the report explores the following:
- Evaluative AI: Focuses on hypothesis-driven decision making rather than direct AI recommendations, promoting human engagement
- Neurosymbolic AI: Integrates rule-based reasoning with deep learning to improve interpretability while retaining predictive power (a simplified rules-plus-model sketch follows)
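One simple pattern in the spirit of this integration is to place explicit, auditable rules alongside a learned risk score so that every decision can cite the rule, or the score, that drove it. The sketch below shows that pattern only; it is not the report's architecture, and all rules, thresholds, and fields are invented for illustration.

```python
# Hypothetical rules-plus-model hybrid for AML screening: symbolic rules sit
# alongside a learned risk score, and the explanation cites whichever drove
# the outcome.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    counterparty_on_sanctions_list: bool
    ml_risk_score: float  # assumed output of a separately trained model, in [0, 1]

# Symbolic layer: ordered, human-readable rules.
RULES = [
    ("counterparty matches sanctions list", lambda t: t.counterparty_on_sanctions_list),
    ("amount exceeds $10,000 reporting threshold", lambda t: t.amount > 10_000),
]

def decide(t: Transaction, score_threshold: float = 0.8) -> tuple[str, str]:
    """Return (decision, explanation) combining rules with the learned score."""
    for name, rule in RULES:
        if rule(t):
            return "flag", f"rule fired: {name}"
    if t.ml_risk_score >= score_threshold:
        return "flag", f"model risk score {t.ml_risk_score:.2f} at or above {score_threshold}"
    return "clear", "no rule fired and model risk score below threshold"

print(decide(Transaction(amount=12_500, counterparty_on_sanctions_list=False, ml_risk_score=0.35)))
print(decide(Transaction(amount=900, counterparty_on_sanctions_list=False, ml_risk_score=0.91)))
```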
XAI presents a transformative opportunity for financial institutions to enhance transparency, regulatory compliance, and trust in AI-driven decision making. Although challenges such as overreliance on explanations, privacy risks, and model complexity persist, strategic adoption of XAI can help financial firms navigate these obstacles effectively. By developing standardized frameworks, tailoring explanations to stakeholders, balancing interpretability with performance, and ensuring privacy protection, financial institutions can realize XAI's full potential while maintaining ethical and responsible AI practices. Future research should focus on developing hybrid models that balance accuracy with interpretability, creating standardized benchmarks for evaluating XAI methods, and improving computational efficiency in real-time financial applications.