Governing AI Agents: What Your Organisation Is Missing
According to PwC's 2025 AI agent survey, 79% of senior executives confirmed that AI agents are already being adopted in their organisations. Yet adoption is only part of the picture. The bigger question for compliance, legal, and data protection professionals is whether governance frameworks are keeping pace with the risks these systems introduce.
The honest answer, for most organisations, is no.
First, it is important to recognise that AI agents are not the same as chatbots. They are not large language models waiting to respond to a prompt; they are systems designed to operate with a degree of autonomy - planning and carrying out multi-step tasks, making decisions or recommendations, retrieving information from multiple data sources, and interacting with internal systems and third-party tools. The key distinction is that they act within systems rather than simply generating outputs. That difference changes the risk profile entirely, and with it what effective governance has to look like - including, for many organisations, who is responsible for it. It is precisely this shift that has driven the emergence of the AI Officer as a dedicated governance role.
The biggest risk for any organisation is not using AI agents, but deploying them without the frameworks, skills, and oversight needed to manage their impact.
Five Governance Risks with AI Agents (and how to mitigate them)
The challenge for most organisations is translating regulatory requirements into day-to-day operational practice. The five risks below - each paired with a practical mitigation - provide a foundation for organisations looking to govern AI agents effectively, regardless of which jurisdictions they operate in.
1. Access that extends further than intended
AI agents typically require access to data and systems to function. The problem is that, over time, more access gets added without updating the original risk assessment. Permissions accumulate, scope drifts, and before long the agent is operating across a far wider data landscape than was ever sanctioned, putting compliance with GDPR data minimisation principles at risk.
The solution lies in defining least-privilege access from the outset, backed by periodic reviews to ensure permissions remain aligned to the agent's defined purpose. Access granted at deployment should be treated as dynamic, not permanent.
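To make that concrete, here is a minimal sketch of what least-privilege enforcement could look like in code. It assumes a hypothetical per-agent scope record (the `AgentScope` fields and the 90-day review cadence are illustrative, not a standard): every data source the agent touches is checked against the permissions approved at deployment, so scope drift surfaces as an explicit failure rather than a silent expansion.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class AgentScope:
    """Permissions approved at deployment - the agent's sanctioned data landscape."""
    agent_id: str
    approved_sources: frozenset
    approved_on: date
    review_interval_days: int = 90  # illustrative review cadence

    @property
    def review_due(self) -> date:
        return self.approved_on + timedelta(days=self.review_interval_days)

def check_access(scope: AgentScope, source: str, today: date) -> None:
    """Refuse any source outside the approved scope, and block use once a review is overdue."""
    if source not in scope.approved_sources:
        raise PermissionError(
            f"{scope.agent_id} requested '{source}', which is outside its approved scope"
        )
    if today > scope.review_due:
        raise PermissionError(
            f"Access review for {scope.agent_id} overdue since {scope.review_due}; re-approval required"
        )

# Example: an agent approved for support data quietly starts reading HR records.
scope = AgentScope("support-agent", frozenset({"crm_notes", "ticket_history"}), date(2025, 1, 6))
check_access(scope, "crm_notes", date(2025, 2, 1))  # within scope: passes silently
try:
    check_access(scope, "hr_records", date(2025, 2, 1))
except PermissionError as exc:
    print(exc)  # scope drift surfaces as an explicit failure
```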
2. Accountability that belongs to everyone - and therefore no one
When an AI agent triggers actions across multiple workflows, accountability tends to become fragmented across departments. For example, technical controls sit with the IT team, policies with Compliance, and the team using the agent owns the application. When something goes wrong, the organisation as a whole bears the liability without anyone having been clearly responsible for the outcome.
To counteract this, every AI agent deployment needs a named owner - someone with responsibility for oversight, escalation, and sign-off. Accountability should be documented, not assumed.
3. Decision-making that cannot be traced
Multi-step autonomous decision-making creates a traceability problem. When an agent plans and executes a sequence of actions, it can be genuinely difficult to reconstruct what data was used, what path was chosen, and why. During a regulatory audit or in response to a Data Subject Access Request, that opacity becomes a serious compliance liability.
AI agents should generate appropriate audit logs and decision records by design, not as a retrofit. The ability to explain an agent's actions is a regulatory expectation.
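As an illustration of what "by design" might mean in practice, the sketch below writes one structured record per agent step as JSON Lines. The field names are assumptions for the example rather than a standard schema, but the principle holds: each record captures what was done, what data fed it, and why, so a multi-step task can be reconstructed later.

```python
import json
from datetime import datetime, timezone

def log_decision(log_path, *, agent_id, task_id, step, action, data_sources, rationale):
    """Append one structured decision record per agent step (JSON Lines format)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "task_id": task_id,            # groups all steps of one multi-step task
        "step": step,
        "action": action,              # what the agent did
        "data_sources": data_sources,  # what data fed the decision
        "rationale": rationale,        # why this path was chosen
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Example: two steps of one task, reconstructable later for an audit or a DSAR.
log_decision("decisions.jsonl", agent_id="claims-agent", task_id="T-1042", step=1,
             action="retrieve_policy", data_sources=["policy_db"],
             rationale="claim references policy number")
log_decision("decisions.jsonl", agent_id="claims-agent", task_id="T-1042", step=2,
             action="recommend_payout", data_sources=["policy_db", "claims_history"],
             rationale="claim within policy limits; no fraud flags raised")
```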
4. Changes after deployment
Prompt updates, model changes, new data sources, additional tool integrations - any of these can gradually expand what an AI agent does, often without a formal reassessment of the associated risks. The result is function creep: the agent ends up processing data or taking actions that were never part of the original approval.
Organisations need formal change controls for AI agent deployments. Any material update should trigger a review of risk, governance measures, and approvals before it goes live. The same rigour applied at deployment should apply throughout the lifecycle.
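One way to make "material update triggers review" mechanical, sketched below under the assumption that an agent's configuration (prompt version, model, data sources, tools) can be serialised: fingerprint the approved configuration at sign-off and treat any mismatch at runtime as a change requiring re-approval.

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """Stable hash of an agent's approved configuration."""
    canonical = json.dumps(config, sort_keys=True)  # canonical form so key order doesn't matter
    return hashlib.sha256(canonical.encode()).hexdigest()

approved = {
    "prompt_version": "v3",
    "model": "model-2025-01",
    "data_sources": ["crm_notes", "ticket_history"],
    "tools": ["email_draft"],
}
approved_hash = config_fingerprint(approved)  # recorded at sign-off

# Later, someone adds a tool without going back through review...
live = {**approved, "tools": ["email_draft", "refund_issuer"]}

if config_fingerprint(live) != approved_hash:
    print("Configuration drift detected - material change requires review before go-live")
```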
5. Small errors that scale at speed
Where AI-driven decisions affect customers or employees, even minor errors can propagate quickly and at scale. An error that would touch a single case in a manual process can affect thousands of automated decisions before anyone notices. The regulatory and reputational consequences can be severe, and trust, once lost, is difficult to recover.
Human review thresholds and escalation mechanisms for higher-risk decisions are not bureaucratic overhead - they are essential safeguards. The question is not whether humans should be in the loop, but where and at what threshold.
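A minimal sketch of such a threshold gate follows, assuming a hypothetical upstream risk score (the scale and thresholds are illustrative): low-risk decisions execute automatically, higher-risk ones are routed to a named human reviewer, and the highest-risk ones are blocked outright.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    action: str
    risk_score: float  # produced upstream; the scoring model is out of scope here

REVIEW_THRESHOLD = 0.7   # above this, a human must approve before execution
BLOCK_THRESHOLD = 0.95   # above this, the agent may not act at all

def route(decision: Decision) -> str:
    """Decide where a proposed action goes: auto-execute, human review, or blocked."""
    if decision.risk_score >= BLOCK_THRESHOLD:
        return "blocked"
    if decision.risk_score >= REVIEW_THRESHOLD:
        return "human_review"  # escalate to a named reviewer, not a shared inbox
    return "auto"

print(route(Decision("cust-88", "adjust_credit_limit", risk_score=0.82)))  # -> human_review
```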
Global AI Governance Frameworks
For organisations deploying AI agents, the absence of a unified international framework for AI governance creates significant operational complexity. Since agents routinely work across multiple systems, datasets, and geographies at once, no single regulatory regime can serve as the sole reference point. Governance must therefore be designed to scale - meeting the requirements of each jurisdiction an organisation touches, and built to withstand the most demanding of them.
United Kingdom
Rather than introducing standalone AI legislation, the UK has opted for a principles-based model that draws on existing legal frameworks, with the UK GDPR remaining central to this approach. Article 22 is a particular focal point for organisations using AI agents, applying where systems make solely automated decisions with legal or similarly significant effects on individuals. This places a clear obligation on organisations to understand where automation is taking place, ensure meaningful human involvement where required, and provide individuals with a route to challenge decisions made about them.
European Union
The EU AI Act establishes a risk-based regulatory framework in which the obligations placed on organisations depend on both the intended use of an AI system and the level of risk it poses to individuals. The General Data Protection Regulation (GDPR) runs alongside it, continuing to govern how personal data is processed.
Although the EU AI Act entered into force in August 2024, its obligations are phasing in over time, with most of the substantive requirements - including those for high-risk systems - applying from August 2026. That timeline signals a move from legislative intent to active enforcement at both EU and member state level. For organisations working with AI agents, the practical implication is that governance cannot be an afterthought. Classification of the agent, the context of its deployment, and the organisation's position within the broader AI lifecycle will each determine what oversight, documentation, and controls are required.
Canada
At the federal level, Canada has yet to enact dedicated AI legislation. Bill C-27 - which included the Artificial Intelligence and Data Act (AIDA) - died on the order paper when Parliament was prorogued in early 2025, leaving the national regulatory picture unresolved. In the interim, AI governance is primarily shaped by the Personal Information Protection and Electronic Documents Act (PIPEDA), with provincial frameworks such as Quebec's Law 25 adding further obligations for some organisations.
The practical consequence is that governance programmes need to be both substantive enough to satisfy current requirements and flexible enough to incorporate new obligations as legislation develops.
United States
Federal AI legislation does not yet exist in the US. Instead, a fragmented picture has emerged, with state-level laws developing in parallel with existing privacy, consumer protection, and anti-discrimination frameworks. An AI Executive Order issued by President Trump in December 2025 sought to address this fragmentation - using executive authority to discourage further divergence at state level and to lay the groundwork for a potential national approach. It does not, however, constitute binding federal law.
In the meantime, state-level requirements remain in force, with particular relevance in high-risk domains including employment, financial services, and consumer decision-making.
Organisations with operations across multiple states should prioritise governance frameworks that are scalable and supported by reusable documentation - both to manage their current obligations and to allow rapid adaptation as the federal picture takes shape.
Developing Your AI Agent Governance
Given the pace at which AI agent deployments are accelerating, and the complexity of the regulatory environments in which they operate, organisations cannot afford to treat governance as an afterthought. The following principles provide a practical foundation.
Assign clear ownership. Decide who is responsible for approving AI agent use cases, signing off risk assessments, and overseeing ongoing performance and change. This may sit with an AI Officer or form part of an existing role, such as the DPO. What matters is not the job title - it is that accountability is clearly defined and documented.
Define the boundaries before deployment. Set clear limits on what an AI agent is designed to do. Document the agent's purpose and intended outcomes, the decisions it can make or influence, actions that are explicitly out of scope, and the data sources and tools it is permitted to access. Defined boundaries help prevent scope creep and support data minimisation.
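Both this principle and the ownership principle above lend themselves to a single machine-readable record per deployment. The sketch below is illustrative rather than prescriptive - the fields are assumptions, not a standard - but making out-of-scope actions explicit is what turns "defined boundaries" into something auditable.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentManifest:
    """One record per deployment: who owns the agent and what it may do."""
    agent_id: str
    owner: str                # the named individual accountable for the agent
    purpose: str
    allowed_decisions: tuple  # decisions it may make or influence
    out_of_scope: tuple       # actions explicitly excluded
    data_sources: tuple       # sources it is permitted to access
    tools: tuple

manifest = AgentManifest(
    agent_id="hr-screening-agent",
    owner="j.smith@example.com",
    purpose="Rank inbound CVs against published role criteria",
    allowed_decisions=("shortlist_recommendation",),
    out_of_scope=("rejection_emails", "salary_offers"),
    data_sources=("applicant_tracking_system",),
    tools=("cv_parser",),
)
```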
Ensure the right capability is in place. Effective AI governance depends on having the right mix of skills and experience across technical, product, legal, and compliance functions. The capability to design, assess, and oversee AI agents throughout their lifecycle needs to be in place before deployment - not sourced reactively when something goes wrong. In practice, this is often where skills gaps are most visible.
Build human oversight that can actually intervene. Oversight should be proportionate to risk and capable of meaningful intervention - not a checkbox that exists on paper. In practice, this means validating outputs, setting escalation routes, enabling overrides, and maintaining audit trails. This is particularly important where automated decision-making may apply under UK GDPR Article 22 or equivalent provisions.
Monitor for bias and unfair outcomes. Organisations should actively identify and monitor potential bias introduced by AI agents - reviewing outputs over time, assessing patterns in decision-making, and ensuring that automated processes do not disadvantage individuals or groups without justification.
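As a simple illustration, the sketch below compares approval rates across groups from logged decisions, assuming a protected attribute is lawfully available for audit purposes. The 0.2 gap threshold is an arbitrary placeholder, and a flag is a prompt for investigation, not proof of unlawful bias.

```python
from collections import defaultdict

def outcome_rates(decisions):
    """Approval rate per group, from (group, approved) pairs in the audit log."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(rates, max_gap=0.2):
    """Return group pairs whose approval rates differ by more than max_gap."""
    groups = sorted(rates)
    return [(a, b, round(abs(rates[a] - rates[b]), 2))
            for i, a in enumerate(groups) for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > max_gap]

# Synthetic example: group_a approved 80% of the time, group_b only 50%.
log = [("group_a", 1)] * 80 + [("group_a", 0)] * 20 \
    + [("group_b", 1)] * 50 + [("group_b", 0)] * 50
print(flag_disparities(outcome_rates(log)))  # -> [('group_a', 'group_b', 0.3)]
```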
Embed governance into existing processes. AI governance should not sit in isolation from the rest of the compliance framework. AI agents should be incorporated into existing Records of Processing Activities (RoPAs), risk management documentation, privacy assessments, and security frameworks. Governance should be a living function - with risks reviewed regularly as systems, use cases, and regulations evolve.
The Cost of Waiting
The transition to agentic AI is not a future consideration - it is already underway. Organisations that delay governance work until enforcement begins will find themselves in a reactive position, facing regulatory scrutiny with inadequate documentation, fragmented accountability, and limited visibility into what their systems have been doing.
The organisations that get this right will be the ones that treat AI governance not as a constraint on deployment, but as a precondition for it. The differentiating factor is whether governance is designed into deployment from day one or applied reactively as gaps and problems appear.