While financial services firms continue to accelerate AI adoption, governance maturity is lagging. Legacy frameworks around models, data, and technology were not designed for today’s AI landscape: probabilistic models, opaque third-party dependencies, and, increasingly, autonomous agentic systems. As a result, firms attempting to scale AI using traditional governance approaches may find themselves exposed to risks that are difficult to detect, quantify, or control.
Weak AI governance can translate directly into misinformed investment decisions, security vulnerabilities, and ultimately, financial and reputational losses. Conversely, firms that build effective governance frameworks can better align AI with business objectives, manage downside risks, and create a more durable competitive advantage.
To address this challenge, I propose a two-tiered AI governance framework that integrates program-level oversight with use-case-specific controls. Much like the complementary top-down and bottom-up approaches in investing, this structure enables both consistency at scale and precision in execution.
The program-level component centers on three core actions:
- Discover your AI assets in order to govern them effectively
- Establish enterprise-level governance structures and mechanisms
- Focus enterprise-level governance on a few critical domains
Discover: A foundational step is establishing comprehensive inventories of AI assets, use cases, and agents. These inventories serve as the building blocks for governance processes at both the program level and the use-case level, and they should be linked into the enterprise’s overarching governance and risk management mechanisms and tools. Looking ahead, it is becoming critical to apply to AI agents some of the same institutional and organizational processes we commonly apply to managing people, which is nearly impossible without these inventories in place.
Establish: This category covers oversight mechanisms, including policies and procedures, risk appetite statements, chains of authority and escalation, and an enterprise AI literacy program. These elements define the “rules of the road” and act as a first line of defense against the internal and external pressures that will inevitably arise during AI implementation.
Focus: The rapid proliferation of AI governance frameworks and controls can create the impression that effective governance requires a “boil the ocean” approach. In practice, this is neither feasible nor necessary. AI governance should instead be deliberately scoped and aligned with an organization’s specific risk profile, operating model, and strategic priorities. The objective is not completeness, but effectiveness.
Security, Data & Model Governance
Against this backdrop, several themes have emerged as consistently underdeveloped in current governance efforts. In particular, security, data, and models warrant focused attention as foundational domains for building scalable and resilient AI governance programs.
- Security: Security processes for AI agents have lagged broader AI security and governance efforts, reflecting their added complexity and relatively recent adoption. This gap is material: only 26% of organizations report having comprehensive AI security governance policies in place, according to a December 2025 Cloud Security Alliance survey.
While AI security is a rapidly evolving discipline, several foundational practices are emerging at the program level. These include enhancing detection capabilities for anomalous process creation, scripting activity, and unexpected outbound traffic to AI models; applying established cyber testing techniques such as red-teaming and simulation; and increasingly leveraging AI itself to move toward AI-audited code as a baseline capability.
However, program-level controls are insufficient on their own. Effective governance also requires use-case-specific safeguards. For agentic systems, this may include securing tokens and credentials in dedicated vaults rather than embedding them within agents, along with implementing controls tailored to agent autonomy and interaction patterns. Emerging agent-specific frameworks (e.g., CSA ATF, A2AS) provide additional guidance, though adoption remains uneven. As agentic architectures become more prevalent, these gaps are likely to become a source of operational and security risk.
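The vaulting pattern above can be sketched as follows. The `CredentialVault` class is a hypothetical stand-in for a real secrets manager, and the scope name is invented for illustration; the key idea is that the agent never holds a long-lived embedded key and instead requests a short-lived, narrowly scoped token per task.

```python
import time

class CredentialVault:
    """Illustrative stand-in for a managed secrets/vault service.
    A real vault would authenticate the caller before minting a token."""
    def __init__(self, ttl_seconds: int = 300) -> None:
        self._ttl = ttl_seconds

    def issue_token(self, agent_id: str, scope: str) -> dict:
        # Mint a scoped, expiring token instead of handing out a static key.
        return {
            "agent_id": agent_id,
            "scope": scope,  # least-privilege scope for this task only
            "expires_at": time.time() + self._ttl,
        }

def call_downstream_api(vault: CredentialVault, agent_id: str) -> bool:
    """The agent fetches credentials at call time; nothing is embedded in its code."""
    token = vault.issue_token(agent_id, scope="read:market-data")
    # Check expiry on every use rather than trusting a long-lived credential.
    return time.time() < token["expires_at"]
```

The design choice worth noting is that revocation and rotation become vault-side operations: disabling a compromised agent means refusing to issue it new tokens, rather than hunting for keys baked into agent code or prompts.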
- Data: Proprietary data remains one of the few durable sources of competitive advantage in an increasingly commoditized AI landscape. While models and tools are becoming widely accessible, enterprise data, particularly when proprietary and well-governed, continues to differentiate outcomes. AI, in turn, creates new pathways to extract and scale that value.
Realizing this value, however, requires treating data as a first-class asset: not as a one-off initiative, but as a governed, repeatable organizational capability. This is where many firms fall short. According to the 2026 Precisely/LeBow State of Data Integrity and AI Readiness survey, 43% of data leaders cite data readiness as the primary barrier to aligning AI with business objectives, a finding echoed across multiple industry studies.

Importantly, “AI-ready” data is not a universal standard. Requirements vary significantly by use case, making it essential to pair enterprise-level governance with context-specific data preparation. At the program level, several practices consistently underpin AI-ready data. These include accelerating data quality initiatives, potentially leveraging AI and agentic approaches to address quality at scale, and establishing a semantic layer that captures business context, metadata, and sensitive data classifications. Such capabilities enable organizations to govern data in line with regulatory, operational, and ethical requirements.
Ultimately, firms that fail to operationalize data governance as a core capability and not a supporting function will struggle to translate AI investment into measurable business outcomes.
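The semantic-layer idea above can be sketched in a few lines. The column names, sensitivity labels, and allow-list policy here are all hypothetical; the substance is that business meaning and sensitivity classifications live alongside the data, so AI pipelines can be gated mechanically rather than by ad hoc review.

```python
# Hypothetical semantic-layer entries: business meaning plus a sensitivity tag.
SEMANTIC_LAYER = {
    "cust_txn_amt": {"meaning": "customer transaction amount", "sensitivity": "confidential"},
    "cust_tax_id":  {"meaning": "customer tax identifier",     "sensitivity": "restricted"},
    "mkt_close_px": {"meaning": "market closing price",        "sensitivity": "public"},
}

# Which classifications may flow into AI pipelines is a governance policy decision.
ALLOWED_FOR_AI = {"public", "confidential"}

def columns_cleared_for_ai(columns: list[str]) -> list[str]:
    """Return only columns whose classification permits AI use.
    Unclassified columns are excluded by default (fail closed)."""
    cleared = []
    for col in columns:
        entry = SEMANTIC_LAYER.get(col)
        if entry and entry["sensitivity"] in ALLOWED_FOR_AI:
            cleared.append(col)
    return cleared

# Restricted and unclassified columns are filtered out:
cleared = columns_cleared_for_ai(["cust_txn_amt", "cust_tax_id", "new_feature"])
```

The fail-closed default matters: a new, unclassified field stays out of model training and agent context until someone classifies it, which is the repeatable-capability behavior the text describes.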
- Models: Traditional model governance frameworks were largely designed for deterministic models trained on proprietary enterprise data and controlled by the end-user organization. In contrast, the foundational large language models (LLMs) prevalent today are probabilistic, trained on massive datasets with limited transparency, and operated on third-party infrastructure that is subject to change. Newer AI-specific model risks have also emerged, including prompt injection, model theft, and, in agentic settings, behavioral drift and stochasticity (where the same inputs can produce different outputs).
As a result, model governance must evolve to address these and other emerging risks. Best practices to consider include continuous model observability and autonomy controls, given the rapid pace at which AI models typically operate. Looking ahead, we are also likely to see more independent evaluations of foundational LLMs, perhaps as a component of future regulatory frameworks.
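A minimal sketch of continuous observability paired with an autonomy control might look like the following. The metric (anomalous-outcome rate against an approved baseline) and the thresholds are illustrative assumptions; real deployments would track richer behavioral signals.

```python
from collections import deque

class BehaviorMonitor:
    """Illustrative observability check: flag an agent whose anomalous-outcome
    rate drifts beyond a tolerance from its approved baseline."""
    def __init__(self, baseline_rate: float, tolerance: float, window: int = 100):
        self.baseline = baseline_rate
        self.tolerance = tolerance
        # Rolling window of recent outcomes (1 = anomalous, 0 = normal).
        self.outcomes: deque[int] = deque(maxlen=window)

    def record(self, anomalous: bool) -> None:
        self.outcomes.append(1 if anomalous else 0)

    def drifted(self) -> bool:
        if not self.outcomes:
            return False
        observed = sum(self.outcomes) / len(self.outcomes)
        return abs(observed - self.baseline) > self.tolerance

monitor = BehaviorMonitor(baseline_rate=0.02, tolerance=0.05)
for _ in range(20):
    monitor.record(anomalous=True)  # sustained anomalies, e.g., behavioral drift
if monitor.drifted():
    # Autonomy control: pause the agent pending human review rather than
    # letting it continue to act at machine speed.
    pass
```

The pairing is the point: observability detects the drift, and the autonomy control converts detection into a pause-and-escalate action fast enough to match the speed at which agents operate.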
The Governance Edge
As the technology continues to evolve, so too must the frameworks that govern it. Firms that treat AI governance as a strategic capability, rather than a compliance exercise, will be better positioned to capture its upside while containing its risks.
If you liked this post, don’t forget to subscribe to the Enterprising Investor.
All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.
Image credit: ©Getty Images