
AI Compliance

The AI Agent DPIA: What Changes When AI Stops Answering and Starts Acting

Michael K. Onyekwere · 9 min read

The OpenAI API setup work I walked through recently assumes your AI is a chatbot. It answers questions. It generates text. It doesn't do anything in the world.

That assumption is breaking down fast. The AI products I'm now scoping in client conversations are agents. They book meetings, process refunds, file tickets, send emails, query databases, post to Slack, edit documents, complete transactions. They take actions in the world.

The DPIA changes from the core out.

Most teams I've spoken to in the last few months have copy-pasted a chatbot DPIA template, swapped a few words, and called it the agent DPIA. The output is a document that doesn't survive five minutes of regulator questioning. Five things change when AI moves from answering to acting, and a chatbot template covers exactly none of them.

Get articles like this every Tuesday. Compliance Engineering: practical AI compliance for engineers and founders. Free, weekly, written by a CIPP/E-certified practitioner who actually builds these systems.

Building an AI agent and want a practitioner read on the DPIA? A DPO-as-a-Service retainer is built for that work. Monthly engagement, real DPO sign-off, regulator-ready documentation.

1. The processing flow has tool-use as a first-class step

A chatbot DPIA documents:

User → app → LLM → app → user

That's the whole flow. Maybe a database read for context, maybe a logging step. Short.

An agent DPIA documents something closer to this:

User
  → app
    → LLM (planning step)
      → tool-use decision
        → external service A
        → results back
      → LLM (synthesis step)
        → tool-use decision
        → external service B
        → results back
      → LLM (final step)
        → action taken in the world
  → user notified

Each external service is a separate processing activity. Salesforce, Slack, your accounting system, Google Calendar, your CRM, your email provider. Each one is a sub-processor or first-party system that's now in the agent's data path. Each one needs:

  • Lawful basis for the data the agent sends to it
  • Sub-processor agreement, if it's a third party
  • Cross-border transfer mechanism, if data moves outside the UK or EEA
  • Retention rules for any data the tool returns

Most chatbot DPIA templates have one row for "LLM provider" in the sub-processor table. Your agent DPIA needs one row for every tool the agent can use, plus one row for the LLM, plus the inter-step routing.
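
One way to keep that table from drifting out of date is to hold it as structured data next to the code that registers the tools, so a new tool can't ship without its row. A minimal sketch in Python; the tool names, field values, and the missing_rows check are illustrative, not a prescribed format:

from dataclasses import dataclass

@dataclass
class ToolRegisterEntry:
    """One row per tool the agent can call: the four things each tool needs."""
    tool_name: str
    provider: str                  # first-party system or third-party sub-processor
    lawful_basis: str              # lawful basis for the data the agent sends to it
    sub_processor_agreement: bool  # signed agreement in place, if third party
    transfer_mechanism: str        # e.g. "UK/EEA only", "SCCs + UK Addendum"
    retention_rule: str            # what happens to data the tool returns

# Illustrative entries only. Populate from the tools your agent actually has enabled.
TOOL_REGISTER = [
    ToolRegisterEntry(
        tool_name="calendar_lookup",
        provider="Google Calendar (third party)",
        lawful_basis="legitimate interests (scheduling requested by the user)",
        sub_processor_agreement=True,
        transfer_mechanism="SCCs + UK Addendum",
        retention_rule="availability data discarded once the booking completes",
    ),
    ToolRegisterEntry(
        tool_name="crm_update",
        provider="internal CRM (first party)",
        lawful_basis="contract",
        sub_processor_agreement=False,  # first-party system, no sub-processor agreement
        transfer_mechanism="UK/EEA only",
        retention_rule="follows the existing CRM retention schedule",
    ),
]

def missing_rows(enabled_tools: set[str]) -> set[str]:
    """Tools the agent can call that have no DPIA row. Should be empty."""
    return enabled_tools - {entry.tool_name for entry in TOOL_REGISTER}

A check like missing_rows() wired into CI is one way to make sure the tool count and the DPIA scope move together.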

I've seen agent products with twelve tools enabled. Twelve tools means twelve sub-processors, twelve transfer assessments, twelve retention conversations. The DPIA scope is twelve times what a chatbot would need.

2. Article 22 is no longer optional

GDPR Article 22 governs automated decision-making with significant effects on individuals. A chatbot answering "what's your return policy?" does not make decisions about anyone. It conveys information. Article 22 doesn't engage.

An autonomous agent that processes a refund, escalates a complaint, books an appointment, files an HR ticket, or sends a customer-facing email is making decisions. The fact that a human approves the action half the time doesn't change the analysis if the human is rubber-stamping. Article 22 looks at whether the decision is based solely on automated processing. Meaningful human review changes that. Checkbox approval doesn't.

For each agent action that meets the threshold, the DPIA needs:

  • The Article 22(2) exception you're relying on (necessary for contract, authorised by law, or explicit consent)
  • The decision logic, in plain language: walk through what factors the agent weighs and what typical outcomes look like
  • Genuine human oversight: who reviews, what they review, whether they can realistically change the outcome
  • The data subject's right to contest, in language they can act on
  • A human reviewer for contested decisions, not the agent that made them

Most chatbot templates skip Article 22 entirely because chatbots usually don't trigger it. Agent templates have to engage with it head-on.
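
Whether human review is meaningful or checkbox is also something your own records can answer. A minimal sketch, with hypothetical field names, of logging review outcomes so you can show how often reviewers actually change or block what the agent proposed:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReviewRecord:
    """One human review of one proposed agent action."""
    action_id: str
    reviewer: str
    proposed_action: str
    decision: str          # "approved", "modified" or "rejected"
    review_seconds: float  # time the reviewer spent before deciding
    reviewed_at: datetime

def oversight_summary(records: list[ReviewRecord]) -> dict[str, float]:
    """Evidence for (or against) meaningful review across a period."""
    total = len(records)
    changed = sum(1 for r in records if r.decision in ("modified", "rejected"))
    times = sorted(r.review_seconds for r in records)
    return {
        "reviews": float(total),
        "changed_or_rejected_rate": changed / total if total else 0.0,
        "median_review_seconds": times[len(times) // 2] if times else 0.0,
    }

If the changed-or-rejected rate is zero and the median review time is a couple of seconds, the oversight is rubber-stamping and the Article 22 analysis should say so.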

If your agent makes consequential decisions, this is the section a regulator will read first.

3. The risk table needs autonomous-action entries

The standard AI risk table covers data breach, third-party processor compromise, inadequate retention, and prompt injection. That covers the chatbot failure modes. Agents have additional ones the chatbot template doesn't anticipate.

Agent-specific risks the chatbot template doesn't capture:

Autonomous action drift. The agent takes an action it shouldn't have because the model misinterpreted the user's intent or the tool's response. The action is real and visible: an email got sent, a refund got processed, a ticket got filed. Likelihood: medium. Severity: depends on the action, often high. Mitigation: action authorisation gates (see section 4).

Tool-use spillover. The agent uses a tool with personal data the original task didn't require. User asks "find me a meeting time with Tunde." Agent helpfully reads Tunde's full calendar including private appointments. The user got their answer. Tunde's privacy got brushed past. Likelihood: medium. Severity: medium. Mitigation: data minimisation in tool prompts, scoped tool permissions, principle of least privilege applied to tool authentication.

Action permanence. Unlike chatbot output, agent actions often can't be undone. A sent email is sent. A posted Slack message is posted. A processed payment is processed. The error window is short and expensive. Likelihood: low. Severity: high. Mitigation: pre-action confirmation for high-stakes tools, undo where the API supports it, kill switch.

Multi-step reasoning compounding. The agent makes a small reasoning error in step 1 that leads to a wrong premise in step 3 that triggers an unwanted action in step 5. None of the individual steps look wrong in isolation. Likelihood: medium. Severity: variable. Mitigation: step-level logging, anomaly detection on action patterns, action limits per session.

Inferential disclosure via tool combinations. Single-tool use is bounded. Multi-tool use lets the agent combine inputs to infer information no single tool revealed. The chatbot template doesn't anticipate this combinatorial risk because chatbots don't have it.

These belong in your risk table. Likelihood and severity will vary by agent design. The point is they need to be there.
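
Some of those mitigations are configuration rather than policy. Tool-use spillover, in particular, is mostly about what each tool is allowed to see. A minimal sketch of a per-tool scope allowlist checked before every call; the tool names and scope strings are hypothetical, not any provider's real API scopes:

# Narrowest scopes each tool genuinely needs. Anything not listed is denied by default.
TOOL_SCOPES = {
    "calendar_lookup": {"calendar.freebusy.read"},  # availability only, not event details
    "email_send": {"mail.send"},
    "crm_update": {"crm.contacts.write"},
}

class ScopeError(PermissionError):
    pass

def check_tool_call(tool_name: str, requested_scopes: set[str]) -> None:
    """Reject any tool call that asks for more than its allowlisted scopes."""
    excess = requested_scopes - TOOL_SCOPES.get(tool_name, set())
    if excess:
        raise ScopeError(f"{tool_name} requested scopes beyond its allowlist: {sorted(excess)}")

In the Tunde example above, a free/busy scope would return available slots; reading full event details would need a scope the tool never holds.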

4. Controls move from output filtering to action authorisation

Chatbots have output filtering. Scrub PII from generated text, refuse certain categories of response, redact specific patterns. That's a content control.

Agents need action controls. Different requirement, different design.

The chatbot DPIA template lists "output filtering" and moves on. Your agent DPIA needs:

Pre-action authorisation gates. Before the agent takes any action above a defined risk threshold, a human approves. Document who, how fast, and what the SLA looks like when the human is offline.
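
A minimal sketch of where that gate sits, between the agent's decision and the tool call. The risk tiers, thresholds, and tool names are placeholders; the real tiering comes out of your risk table, and request_approval and run_tool stand in for whatever approval flow and tool dispatcher you already have:

from enum import IntEnum

class Risk(IntEnum):
    LOW = 1     # e.g. read-only lookups
    MEDIUM = 2  # e.g. internal ticket creation
    HIGH = 3    # e.g. refunds, customer-facing email

ACTION_RISK = {
    "calendar_lookup": Risk.LOW,
    "ticket_create": Risk.MEDIUM,
    "refund_process": Risk.HIGH,
    "email_send": Risk.HIGH,
}

APPROVAL_THRESHOLD = Risk.HIGH  # actions at or above this wait for a human

def execute_action(action: str, payload: dict, request_approval, run_tool):
    """Actions at or above the threshold block until a named human approves."""
    risk = ACTION_RISK.get(action, Risk.HIGH)  # unknown actions default to HIGH
    if risk >= APPROVAL_THRESHOLD:
        approver = request_approval(action, payload)  # returns None on reject or timeout
        if approver is None:
            return {"status": "blocked", "action": action}
    return run_tool(action, payload)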

Tool-use logging. Capture what the agent did at each step. Every tool call, every parameter, every external response. Logged in a way that supports investigation when something goes wrong.
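
A minimal sketch of the log record itself: one structured entry per tool call, keeping parameter keys rather than values so the log isn't a second copy of the personal data. Field names are illustrative:

import json
import logging
import uuid
from datetime import datetime, timezone

log = logging.getLogger("agent.tool_calls")

def log_tool_call(session_id: str, step: int, tool: str, params: dict, result_summary: str) -> None:
    """One structured line per tool call: enough to reconstruct what the agent did."""
    log.info(json.dumps({
        "event": "tool_call",
        "call_id": str(uuid.uuid4()),
        "session_id": session_id,
        "step": step,
        "tool": tool,
        "param_keys": sorted(params.keys()),  # keys, not values: data minimisation in the log
        "result_summary": result_summary,     # e.g. "200 OK, 1 record updated"
        "at": datetime.now(timezone.utc).isoformat(),
    }))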

Action limits per session. A misbehaving agent shouldn't be able to send 200 emails before someone notices. Per-session caps on high-stakes actions.
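
A minimal sketch of a per-session budget; the caps are placeholders, not recommendations:

from collections import Counter

# Placeholder caps. Set yours per action type from the risk table.
SESSION_LIMITS = {"email_send": 5, "refund_process": 3, "ticket_create": 20}

class ActionLimitExceeded(RuntimeError):
    pass

class SessionBudget:
    """Counts high-stakes actions in one session and stops the agent at the cap."""
    def __init__(self, limits: dict[str, int] = SESSION_LIMITS):
        self.limits = limits
        self.counts = Counter()

    def spend(self, action: str) -> None:
        self.counts[action] += 1
        cap = self.limits.get(action)
        if cap is not None and self.counts[action] > cap:
            raise ActionLimitExceeded(f"{action} exceeded the session cap of {cap}")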

Kill switch. A documented way to stop the agent mid-task. Tested, not theoretical. The DPIA should reference where it lives and who can pull it. Saying "we have a kill switch" fails inspection. The DPIA needs the operational specifics: the kill switch is at /admin/agent-stop, three named operations engineers can trigger it within 60 seconds, last tested March 2026, retested quarterly.
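
The runtime half of that is small: a flag the agent checks before every step, flipped out-of-band by the people named in the DPIA. A minimal sketch, assuming an admin endpoint like the one in the example above is what sets the flag:

import threading

class KillSwitch:
    """Process-wide stop flag, checked by the agent loop before every tool call."""
    def __init__(self) -> None:
        self._stopped = threading.Event()
        self._operator: str | None = None

    def pull(self, operator: str) -> None:
        self._operator = operator  # record who pulled it, for the evidence trail
        self._stopped.set()

    def check(self) -> None:
        if self._stopped.is_set():
            raise RuntimeError(f"agent stopped by {self._operator}")

KILL_SWITCH = KillSwitch()

# In the agent loop, before each planning step and each tool call:
#   KILL_SWITCH.check()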

Reconciliation jobs. Agents drift. Reconciliation jobs check whether the agent's actions match what was authorised and surface discrepancies for review.
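
A minimal sketch of the reconciliation idea: compare the actions the tool-use log says happened against the approvals the gate recorded, and surface anything that doesn't match. Record shapes and identifiers are illustrative:

def reconcile(action_log: list[dict], approvals: list[dict]) -> list[dict]:
    """Logged high-risk actions with no matching approval record.
    Run on a schedule; a non-empty result goes to a human for review."""
    approved_ids = {a["action_id"] for a in approvals if a["decision"] == "approved"}
    return [
        entry for entry in action_log
        if entry.get("risk") == "HIGH" and entry["action_id"] not in approved_ids
    ]

# Example: one HIGH action executed with no approval record gets flagged.
flagged = reconcile(
    action_log=[{"action_id": "a1", "tool": "refund_process", "risk": "HIGH"}],
    approvals=[],
)
assert flagged and flagged[0]["action_id"] == "a1"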

Each control needs an owner, an evidence trail, and a testing cadence.

5. Residual risk acceptance is harder, and the DPIA has to admit it

The cleanest section in a chatbot DPIA is residual risk acceptance. After controls, here's what's left, here's the DPO sign-off, done.

Agent DPIAs cannot be that clean. Honest agent DPIAs say something close to this:

The agent's behaviour is emergent. Reasoning combined with tool-use can produce action sequences that neither the model designers nor the deployment team anticipated. Standard testing covers known failure modes. Unknown failure modes will surface in production. Residual risk is therefore non-zero and not fully characterisable at deployment. Mitigating practices: production monitoring with anomaly detection, monthly review of action logs against intended behaviour, defined escalation path for novel failure modes.

This is uncomfortable because it doesn't say "all risks mitigated, sign here." It says "we know there are unknowns, here's how we're managing them." A regulator who asks "is this safe?" gets an answer about how the system surfaces and manages failures, rather than a clean assertion.

Honest beats clean. Clean is what gets revisited after the first incident.

What the toolkit gives you

The Compliance Engineering Toolkit (open source, CC BY 4.0) has two DPIA templates: chatbot and autonomous agent. Fork either one, attribute, adapt to your system. The agent template covers the five things above.

The chatbot DPIA template is the right starting point if your AI is genuinely Q&A only. The autonomous agent DPIA template is the right starting point if your AI takes actions on behalf of users.

If you're not sure which side of the line your product sits on, ask: when the AI finishes responding to a user, has anything in the world outside of the chat changed? If yes, you have an agent. Use the agent template.

How this fits with the rest of the DPIA work

Writing the AI DPIA from scratch is covered in the practical DPIA writing guide. What you've just read is the agent-specific layer that goes on top of that work.

If you're earlier in the process and asking whether you need a DPIA at all, start here.

If you've already written one and want to know whether it covers your agent properly, the audit check is four questions. Does your processing flow document tool-use? Does your risk table have autonomous-action entries? Do your controls include action authorisation gates? Does your residual-risk section admit what you don't know?

If three of four are missing, you have a chatbot DPIA pretending to be an agent DPIA. That gap is where most companies sit right now.

I'd rather they fix it before a regulator does.

Frequently Asked Questions

Do I need a different DPIA for an AI agent versus an AI chatbot?

Yes. The DPIA scope changes from the core out. Where a chatbot DPIA documents one processing path (user, app, LLM, app, user) and one external sub-processor (the LLM provider), an agent DPIA documents tool-use as a first-class step. There's one row in the sub-processor table for every tool the agent can use. An agent that calls Salesforce, Slack, your accounting system, and Google Calendar has four sub-processors plus the LLM, plus inter-step routing. The risk table also needs autonomous-action entries (action drift, tool-use spillover, action permanence, multi-step reasoning errors) that don't apply to chatbots.

When does an AI agent trigger Article 22?

Article 22 governs decisions based solely on automated processing that have legal or similarly significant effects on individuals. A chatbot answering 'what's your return policy?' does not trigger it because no decision is being made about a person. An agent that processes a refund, books an appointment, escalates a complaint, files an HR ticket, or sends a customer-facing email is making decisions. A human approving the action half the time only changes the analysis if the human is genuinely reviewing rather than rubber-stamping. If your agent makes consequential decisions, the DPIA needs to identify the Article 22(2) exception (necessary for contract, authorised by law, or explicit consent), the decision logic in plain language, the human oversight design, and the data subject's right to contest.

What's the difference between a chatbot and an AI agent for compliance purposes?

The world-state question is the simple test. When the AI finishes responding, has anything in the world outside of the chat changed? If no, it's a chatbot, and it conveyed information. If yes, it's an agent, and it took an action: sent an email, posted a message, processed a payment, queried a database, edited a document, called an API. The action distinction triggers a different risk surface, different controls, different processing-flow documentation, and Article 22 considerations the chatbot DPIA doesn't engage with.

Can I use the ICO's DPIA template for an AI agent?

The ICO template is a reasonable starting point for any DPIA, but it's generic, not AI-specific, and not agent-specific. Used as-is for an AI agent, it produces a document that won't survive regulator questioning. You'll need to extend it significantly to cover tool-use processing flows, autonomous-action risks, action authorisation controls, and Article 22 analysis. The open-source autonomous agent DPIA template at github.com/Thezenmonster/compliance-engineering-toolkit (CC BY 4.0) is built specifically for this and is faster to adapt than retrofitting the ICO template.

Who signs off on the residual risk for an AI agent DPIA?

The DPO (or whoever holds DPO-equivalent responsibilities under your operating model) signs off on the residual risk acceptance. For agents, the residual risk section has to say something a chatbot DPIA doesn't have to. The agent's behaviour is partly emergent. Unknown failure modes will surface in production. The mitigation is monitoring plus escalation rather than 'all risks mitigated, sign here.' That sign-off should be supported by an explicit production monitoring plan, defined anomaly detection, and a documented escalation path. Article 36 prior consultation with the supervisory authority is required if residual risk remains high after controls.

Start with a £500 scoping review

If you need GDPR documentation, AI Act work, or a compliant AI build, the first step is a written scoping review. You get a real report, not a generic discovery call.
