Compliance Engineering
AI Agent Compliance
Chatbots are 2024. Document processing is 2025. AI agents — autonomous systems using tools, calling APIs, and making decisions across multiple systems — are 2026 and beyond.
And almost nobody has worked out what compliance looks like for them yet. That's the gap this work lives in.
Why agents change the question
An AI chatbot answers a question. The DPIA writes itself. The data flow is a single, well-understood path: user → server → LLM → server → user. You can document it on a single page.
An AI agent is different. The agent reads an email, decides whether to schedule a meeting, calls your calendar API, drafts a reply, and updates your CRM. It uses tools. It chains decisions. It accesses systems your DPIA never considered.
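To make that fan-out concrete, here's a minimal sketch. The tool names and data categories are illustrative, not taken from a real deployment; the point is that a single trigger produces several distinct data flows, each of which the DPIA has to account for.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str                 # which downstream system the agent touched
    data_categories: list     # personal data categories involved in the call

@dataclass
class AgentRun:
    trigger: str
    tool_calls: list = field(default_factory=list)

# One inbound email fans out into three downstream systems.
run = AgentRun(trigger="inbound email from customer")
run.tool_calls += [
    ToolCall("calendar_api.create_event", ["name", "email", "availability"]),
    ToolCall("email.send_draft", ["name", "email", "message content"]),
    ToolCall("crm.update_contact", ["name", "email", "interaction history"]),
]

print(f"{len(run.tool_calls)} separate data flows from a single trigger")
```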
The compliance questions multiply:
- Who is the data controller when an autonomous agent makes a decision?
- How do you DPIA a system that adapts its behaviour based on past interactions?
- What does Article 22 (automated decision-making) require when the agent escalates to a human after 3 failed attempts?
- How do you audit a tool-using LLM when each tool has its own data flow?
- What's the lawful basis when an agent reads private data to decide whether to take an action?
- How do you handle a data subject access request when the agent's memory is stored across vector databases, conversation logs, and tool-call histories?
The current DPIA template doesn't cover any of this. The current privacy notice says nothing useful. The vendor due diligence checklist doesn't ask the right questions about tool-use boundaries.
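To make the access-request question concrete, here's the shape of the problem as a sketch. The store interfaces (query, find) are placeholders rather than any particular product; a real answer has to sweep every place the agent's memory actually lives.

```python
def collect_subject_data(subject_id, vector_store, conversation_log, tool_call_log):
    """Gather everything the agent holds on one data subject,
    across each place agent 'memory' actually lives."""
    return {
        "embeddings": vector_store.query(filter={"subject_id": subject_id}),
        "conversations": conversation_log.find(subject_id=subject_id),
        "tool_calls": tool_call_log.find(subject_id=subject_id),
    }
```

None of that works unless every record was tagged with a subject identifier at write time, which is exactly the kind of design decision the DPIA should force before the agent ships.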
That's the work I'm focused on. New patterns for new systems.
What I cover
Agent governance
When an agent acts autonomously, who's responsible? How do you set boundaries? What human-in-the-loop checkpoints does GDPR Article 22 actually require? What does the EU AI Act say about autonomous systems making consequential decisions?
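One pattern worth sketching: a hard checkpoint in the agent's execution path, so anything with a legal or similarly significant effect goes to a human and everything else runs autonomously. The categories and the wiring below are assumptions for illustration, not legal advice.

```python
# Decisions with legal or similarly significant effects are escalated, not executed.
SIGNIFICANT_EFFECTS = {"credit_decision", "contract_termination", "account_closure"}

def execute(decision, action, human_review_queue):
    if decision["category"] in SIGNIFICANT_EFFECTS:
        human_review_queue.append(decision)   # the agent recommends; a human decides
        return "escalated"
    return action(decision)                   # low-impact actions run autonomously

queue = []
execute({"category": "account_closure", "customer": "C-1042"},
        action=lambda d: "done", human_review_queue=queue)
print(len(queue), "decision(s) waiting for human review")
```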
Tool use and MCP
Agents that use tools (Anthropic's Model Context Protocol, OpenAI function calling, LangChain tools) expand the data flow surface area. Each tool is a new processor relationship. Compliance documentation needs to cover the agent's decisions AND the tool calls it makes. Most current DPIAs miss this entirely.
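A simple way to close that gap is a machine-readable register of every tool the agent can call, with the processor relationship behind each one. The field names below are illustrative; map them onto your own Article 30 records and DPIA rather than treating this as a template.

```python
TOOL_REGISTER = [
    {
        "tool": "calendar_api.create_event",
        "processor": "Calendar Vendor Ltd",
        "dpa_in_place": True,
        "data_categories": ["name", "email", "availability"],
        "transfer_outside_uk_eea": False,
    },
    {
        "tool": "crm.update_contact",
        "processor": "CRM Vendor Inc (US)",
        "dpa_in_place": True,
        "data_categories": ["name", "email", "interaction history"],
        "transfer_outside_uk_eea": True,      # needs a documented transfer mechanism
    },
]

# Flag anything the paperwork hasn't caught up with yet.
gaps = [t["tool"] for t in TOOL_REGISTER
        if t["transfer_outside_uk_eea"] and not t.get("transfer_mechanism")]
print("tools needing transfer documentation:", gaps)
```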
Memory, learning, and adaptation
Agents with persistent memory or learning loops complicate retention, purpose limitation, and data subject rights. Vector embeddings of personal data are personal data. Agent fine-tuning datasets are processing activities. Current frameworks don't handle this cleanly.
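One pattern that helps: tag every memory write with a subject identifier, a purpose, and a retention date at the moment it's written. The upsert call below stands in for whatever your vector store actually exposes; the metadata is the point, not the API.

```python
from datetime import datetime, timedelta, timezone

def write_memory(vector_store, subject_id, embedding, purpose, retention_days):
    """Store an embedding with the metadata needed to honour purpose limitation,
    retention, and erasure requests later."""
    delete_after = datetime.now(timezone.utc) + timedelta(days=retention_days)
    vector_store.upsert(                      # placeholder for your store's write call
        vector=embedding,
        metadata={
            "subject_id": subject_id,         # makes erasure requests answerable
            "purpose": purpose,               # purpose limitation, checked at read time
            "delete_after": delete_after.isoformat(),
        },
    )
```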
Auditability
If a regulator asks "why did your agent make that decision?" — can you answer? Logging every tool call, every model output, every chain-of-thought step. What you keep, for how long, and what you delete on a data subject request.
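In practice that means one structured record per agent step. The schema below is an assumption rather than a standard; what matters is that every tool call and model output is logged with enough context to reconstruct the decision, and tagged so it can be found (and deleted) when a data subject asks.

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(run_id, step, tool, arguments, model_output, subject_id):
    """One line of the audit trail: enough to answer 'why did the agent do that?'"""
    return json.dumps({
        "record_id": str(uuid.uuid4()),
        "run_id": run_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "tool": tool,
        "arguments": arguments,        # redact or hash personal data where you can
        "model_output": model_output,
        "subject_id": subject_id,      # so a deletion request can find this record
    })
```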
Liability and accountability
Who's liable when an autonomous agent breaches GDPR? When it makes a discriminatory decision? When it sends a customer the wrong information? The EU AI Act and the proposed AI Liability Directive answer some of this; contracts answer the rest. Most agent deployments have neither sorted.
Newsletter
Get one issue a month on agent compliance
Compliance Engineering covers AI compliance generally. One in every four issues focuses on agents specifically — practical patterns for autonomous systems, tool use, memory, and auditability.
Free. Unsubscribe anytime. No spam.
Open-source toolkit
DPIA templates for agent systems, tool-use vendor due diligence checklists, and conformity assessment templates — published as open source on GitHub. CC BY licensed, attribution required when reused.
Coming May 2026: github.com/janus-compliance/compliance-engineering-toolkit
Working with me on agent compliance
If you're deploying an AI agent — autonomous customer support, internal workflow automation, anything that uses tools or makes decisions across systems — and you need the compliance posture sorted, the right starting point is a scoping review.
One week, fixed price, written report. I review your agent architecture, tool boundaries, decision points, and data flows, then tell you exactly what documentation, controls, and disclosures the system needs.
Book a £500 scoping review
For ongoing agent governance support, see DPO-as-a-Service.