THEME: TECHNOLOGY
23 March 2026 Enterprising Investor Blog

When AI Trades, Who Is Responsible?


At 3:07 a.m., a portfolio rebalances. Volatility breaches a threshold, correlations shift, and liquidity buffers tighten automatically. No portfolio manager is awake. No committee convenes. Yet trades execute — in the firm’s name, and on behalf of its clients. 

This is no longer hypothetical. In mid-2025, Man Group’s quant equity unit confirmed it had deployed AlphaGPT, an agentic AI system that autonomously generates, codes, and backtests trading ideas. BlackRock’s Aladdin platform runs risk analytics and rebalancing workflows across more than $21 trillion in assets.

The architecture for autonomous action is here. The governance question: Has oversight kept pace?

The real question facing investment professionals isn’t whether agentic AI will enter portfolio management, but whether it can be deployed in a way that strengthens decision quality, accountability, and fiduciary responsibility — or quietly undermines them.

The Governance Model We Know

Investment management has traditionally relied on human decisions: portfolio managers (PMs) propose trades, risk teams challenge exposures, investment committees approve allocations, and compliance reviews activity after the fact. Authority, accountability, and documentation are tied to identifiable decision points. For any material trade, the key questions are: Who approved it? Why? What assumptions supported it?

This framework assumes decisions are human-initiated and reviewable. Agentic AI disrupts that assumption.

From Execution to Architecture

Traditional AI and analytics inform decisions. Agentic AI executes them. These systems don’t merely recommend. They evaluate, decide, and act within predefined constraints. A portfolio may rebalance based on volatility shifts, adjust hedge ratios as correlations evolve, or tighten liquidity buffers in response to deteriorating market depth. No human presses “execute.” The system acts because it’s been designed to.
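That evaluate-decide-act loop can be sketched in a few lines. Everything below — the parameter names, threshold values, and the escalation rule — is a hypothetical illustration of the pattern, not any firm's actual logic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Guardrails:
    """Illustrative constraint set; all names and values are hypothetical."""
    vol_threshold: float = 0.25   # annualized volatility that triggers de-risking
    escalation_vol: float = 0.40  # beyond this, the agent must hand off to a human
    max_turnover: float = 0.05    # largest weight change the agent may make per cycle

def agent_step(vol: float, equity_weight: float, g: Guardrails) -> tuple[float, str]:
    """One autonomous cycle: evaluate the regime, then act within constraints."""
    if vol <= g.vol_threshold:
        return equity_weight, "no action"           # inside the normal regime
    if vol > g.escalation_vol:
        return equity_weight, "escalate to human"   # outside the agent's mandate
    cut = min(g.max_turnover, equity_weight)        # de-risk, bounded by the turnover limit
    return equity_weight - cut, f"de-risked by {cut:.2%}"
```

Note that every branch the agent can take is fixed at design time: the governance decision is the `Guardrails` object, not any individual trade.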

The critical decision, therefore, is no longer the trade itself. It is the design of the system — the guardrails, thresholds, escalation rules, and constraints within which the system operates. This represents a profound shift: In traditional models, the decision equals the trade. Humans initiate action. Accountability resides at execution. In agentic systems, the decision equals system design. The system initiates action. Accountability resides in the architecture. Governance moves upstream.



Redefining Accountability

When decisions emerge from system behavior rather than human instruction, accountability becomes more complex — but no less critical. Portfolio managers remain accountable for outcomes, even as day-to-day decisions are embedded within agent logic rather than trade tickets. Risk leaders shift from retrospective reporting to forward-looking guardrail design, stress testing, and behavioral monitoring. The key question is no longer “What did the PM do yesterday?” but “What is the system permitted to do tomorrow?”

Investment committees move toward meta-decisions: determining where autonomy is acceptable, how it is controlled, and what evidence is required before expanding it. Model governance teams become fiduciary gatekeepers, responsible not only for validating models but also for validating entire decision systems — their objectives, constraints, failure modes, and change-control processes.

Consider a scenario where a portfolio gradually builds unintended concentration risk. No individual trade breaches limits, yet risk accumulates over time. Performance deteriorates, and questions arise: Who is accountable?
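The drift mechanism is easy to reproduce numerically. In this toy simulation (asset count, per-trade limit, and number of rebalances are all invented for illustration), every individual rebalance respects the per-trade limit, yet concentration — measured here by the Herfindahl index, the sum of squared weights — climbs steadily.

```python
def herfindahl(weights):
    """Concentration measure: sum of squared weights (higher = more concentrated)."""
    return sum(w * w for w in weights)

def simulate_drift(n_assets=10, per_trade_limit=0.02, n_rebalances=12):
    weights = [1.0 / n_assets] * n_assets   # start equal-weighted
    start_hhi = herfindahl(weights)
    for _ in range(n_rebalances):
        # Each cycle tilts into asset 0, funded pro rata from the rest.
        # No single weight moves by more than per_trade_limit per cycle.
        weights[0] += per_trade_limit
        for i in range(1, n_assets):
            weights[i] -= per_trade_limit / (n_assets - 1)
    return start_hhi, herfindahl(weights), weights[0]
```

No single change exceeded 2% of the portfolio, yet after twelve compliant cycles the top position has more than tripled, from 10% to 34%, and concentration is up by roughly two-thirds. Trade-level surveillance sees nothing; only trajectory-level monitoring does.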

The CFA Institute Code of Ethics and Standards of Professional Conduct requires members to act with loyalty, prudence, and care, and to have a reasonable and adequate basis for investment actions. These obligations do not diminish when the initiating agent is a machine. But the locus of “reasonable basis” shifts — from trade rationale to system design rationale. In an agentic environment, accountability does not disappear. It becomes distributed across design, approval, and oversight.

The Risk of Governance Drift

The greatest risk of agentic AI is not prediction error. Investment professionals are accustomed to model risk and regime shifts. The greater risk is governance drift — the gradual expansion of autonomy without corresponding evolution in oversight. Parameters are widened to reduce overrides — and never tightened again. Use cases expand beyond original mandates because the system “seems to work.” Dashboards replace structured challenge. Outputs are treated as presumptively correct.

The pattern is not theoretical. In January 2025, Two Sigma Investments settled SEC charges totaling $90 million after a researcher was found to have modified live algorithmic trading models without adequate oversight for nearly four years.

The SEC explicitly attributed the failure to inadequate internal controls over automated systems — a textbook case of governance drift in an algorithmic environment. Over time, decision authority shifts from humans to systems — not by design, but by inertia. Yet fiduciary responsibility remains firmly with the institution.

Are Existing Structures Fit for Purpose?

Most governance frameworks are designed for human decision-making. They assume clear approval points, identifiable decision-makers, and post-trade surveillance that can detect issues after the fact. Agentic systems operate continuously, executing thousands of micro-decisions at machine speed. The UK Financial Conduct Authority’s 2025 multi-firm review of algorithmic trading firms found persistent weaknesses — outdated policies, unclear accountability structures, and insufficient testing — across the industry.

The question for every investment organization becomes: Are your governance structures designed for machine-initiated decisions that still carry your fiduciary signature? If not, the work required is architectural, not cosmetic.

Designing Governance for Agentic Systems

Treat guardrails as governance decisions, not technical settings. Risk limits, factor bands, liquidity thresholds, ESG exclusions, and escalation triggers should be owned and approved through the same channels as investment policy statements. They are expressions of risk appetite and fiduciary duty.

Separate design authority from execution responsibility. Who has authority to modify the agent’s logic? Who approves parameter changes or expanded use cases? These decisions require structured challenge and documented rationale. No single team should be able to quietly increase autonomy.
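One way to make "no single team can quietly increase autonomy" concrete is to treat every parameter change as a structured record that cannot be applied without documented rationale and multi-party approval. The roles and the dual-approval policy below are assumptions for illustration, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GuardrailChange:
    parameter: str
    old_value: float
    new_value: float
    rationale: str
    approved_by: frozenset  # approving roles, e.g. {"risk", "investment_committee"}

# Hypothetical policy: widening any guardrail needs both risk and the committee.
REQUIRED_APPROVERS = frozenset({"risk", "investment_committee"})

def apply_change(params: dict, change: GuardrailChange) -> dict:
    """Apply a guardrail change only if it carries rationale and dual approval."""
    if not change.rationale.strip():
        raise ValueError(f"{change.parameter}: documented rationale required")
    if not REQUIRED_APPROVERS <= change.approved_by:
        missing = REQUIRED_APPROVERS - change.approved_by
        raise PermissionError(f"{change.parameter}: missing approvals {sorted(missing)}")
    updated = dict(params)  # never mutate the approved config in place
    updated[change.parameter] = change.new_value
    return updated
```

The point of the record is auditability: when a regulator later asks why the turnover limit was widened, the rationale and the approving roles travel with the change itself.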

Validate the system, not just the model. Validation must cover behavior under stress, boundary conditions, and rare events — not just predictive accuracy. Investment and risk committees should schedule periodic “agent challenge” sessions focused on scenario analysis and failure modes.

Shift oversight from trades to trajectories. For a global equity strategy running an agentic rebalancer, the investment committee’s standing agenda should include:

  • how often the agent operates near its constraint boundaries,
  • where human overrides and escalations have occurred, and
  • how system behavior evolves as market regimes shift.
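A trajectory-level metric for that standing agenda might look like the sketch below: rather than reviewing individual trades, the committee reviews how often the agent's exposure sat near its limit. The 90% band is an arbitrary illustrative choice, not a recommended threshold.

```python
def boundary_proximity(exposures, limit, band=0.90):
    """Fraction of decision cycles where exposure was within `band` of its limit.

    A rising value signals the agent is increasingly governed by its
    constraints rather than its objective -- a prompt for human review.
    """
    if not exposures:
        return 0.0
    near = sum(1 for e in exposures if e >= band * limit)
    return near / len(exposures)
```

Tracked quarter over quarter, a drift in this ratio is exactly the kind of pattern that trade-by-trade surveillance misses.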

What Practitioners Should Do Now

For portfolio managers, CIOs, and risk officers, a practical starting point is to map where agentic behavior already exists — or is planned — in the investment process. Then ask:

  • Where are decisions effectively being taken by systems rather than people?
  • Do current policies, committees, and documentation reflect that reality?
  • If a client, regulator, or board asked, “Who is accountable for this pattern of trades?” could the answer be given clearly?

Agentic AI does not eliminate accountability. It relocates it.

The Competitive Advantage of Trust

The future of investment management will not be defined by automation alone. It will be defined by trust. The firms that navigate this well will not be those with the most sophisticated models, but those with the clearest, most deliberate governance. The real competitive advantage lies in aligning intelligent systems with fiduciary duty.

Because in an autonomous world, the ultimate responsibility remains human.

If you liked this post, don’t forget to subscribe to the Enterprising Investor.

All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer.

Image credit: ©Getty Images