The short answer: yes, you can use the ChatGPT API and stay GDPR compliant. But there's a massive difference between using the API properly and letting your team paste customer data into the free chat interface.
One is manageable. The other is a data protection incident waiting to happen.
I've configured LLM integrations for businesses that process personal data daily — customer support systems, document analysis pipelines, internal tools. Here's exactly what you need to get right.
The Consumer Chat vs API Distinction (This Is Everything)
Most GDPR problems with ChatGPT come from people confusing the consumer product with the API.
Consumer ChatGPT (chat.openai.com):
- Conversations stored and potentially used for model training (unless you opt out in your data controls settings)
- No Data Processing Agreement
- No guaranteed data residency
- No retention controls
- Fine for personal use. Not fine for business data.
ChatGPT API (api.openai.com):
- Data NOT used for model training by default
- Data Processing Agreement available
- Configurable retention (down to zero)
- EU data processing options available
- Can be made fully GDPR compliant
If your staff are typing customer names, order numbers, or complaints into chat.openai.com — that's the problem. The API, configured properly, is a different story entirely.
Step 1: Sign the Data Processing Agreement
This isn't optional. Under Article 28 of GDPR, any time you use a third party to process personal data on your behalf, you need a written contract — a Data Processing Agreement.
OpenAI offers a standard DPA. You'll find it in your account settings. It covers:
- What data they process and why
- Security measures they implement
- Sub-processor list (where your data might flow)
- Their obligations on data breaches (prompt notification, so you can meet your own 72-hour deadline to the regulator)
- Your right to audit
Sign it before you write a single line of code. If a regulator asks "do you have a DPA with your AI provider?" and the answer is no, everything else is irrelevant.
Step 2: Configure Zero-Retention
By default, OpenAI retains API data for up to 30 days for abuse monitoring. They don't use it for training, but they do store it temporarily.
For GDPR compliance, you want to minimise this. OpenAI offers Zero Data Retention for eligible API use cases — it's something you request rather than a toggle you flip. With zero-retention enabled, your data passes through OpenAI's systems for processing but isn't retained afterwards.
This matters because GDPR's data minimisation principle says you shouldn't store data longer than necessary. If OpenAI doesn't need to keep your customers' conversations, they shouldn't.
Step 3: Minimise What You Send
This is where most businesses get it wrong. They send entire customer records to the API when they only need a fraction of the data.
Bad approach:
"Customer John Smith (john@email.com, account #45678,
DOB 15/03/1985, address: 42 Oak Street, London)
is asking about their recent order."
Better approach:
"A customer is asking about order status. Their question:
'When will my order arrive?' Order date: March 10.
Expected delivery: March 15."
Strip out names, email addresses, account numbers, and any identifying information that isn't needed for the AI to answer the question. Send the minimum. This is data minimisation in practice — not a theoretical exercise, but an architectural decision in how you construct your API calls.
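The stripping step can be sketched in code. This is a minimal, illustrative example — the regex patterns and the `build_prompt` helper are assumptions for demonstration, not a complete PII detector; a production system would typically use a dedicated redaction library or NER model before anything leaves your infrastructure:

```python
import re

# Illustrative patterns only -- a real system needs a proper PII
# detection step, not just a handful of regexes.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),               # email addresses
    (re.compile(r"\baccount\s*#?\d+\b", re.IGNORECASE), "[ACCOUNT]"),  # account numbers
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),                  # dd/mm/yyyy dates
]

def redact(text: str) -> str:
    """Replace known PII patterns with placeholder tokens before the
    text goes anywhere near an external API."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def build_prompt(question: str, order_date: str, delivery_date: str) -> str:
    """Construct the minimal prompt: the customer's question plus only
    the order fields the model needs -- no name, no address, no account
    number."""
    return (
        f"A customer is asking about order status. Their question: "
        f"'{redact(question)}' Order date: {order_date}. "
        f"Expected delivery: {delivery_date}."
    )
```

The output of `build_prompt` is what you send to the API; the mapping from placeholder back to the real customer record stays on your side, never in the prompt.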
If you absolutely need to include personal data (medical queries, legal questions, financial advice), document exactly why in your DPIA.
Step 4: Write a DPIA
A Data Protection Impact Assessment isn't just a form to fill in. It's the document that proves you thought about the risks before you went live.
For an AI API integration, your DPIA should cover:
What personal data flows through the API? Be specific. Not "customer data" but "customer first name, support query text, and order reference number."
Why is this processing necessary? What's the lawful basis? For most business chatbots, it's legitimate interest (providing efficient customer support) or contract performance (fulfilling a service the customer signed up for).
What are the risks? Data breach at OpenAI. Unexpected data retention. Model outputs that expose other users' data (extremely unlikely with the API, but document that you've considered it).
What safeguards are in place? DPA signed, zero-retention enabled, data minimisation in API calls, encryption in transit, access controls on your side, staff training.
If you need a DPIA template specifically for AI systems, we've written a step-by-step guide.
Step 5: Update Your Privacy Notice
Your users need to know you're using an AI processor. You don't need to write a technical whitepaper — just clear, plain language:
- What AI processing you do (e.g. "We use AI to help answer your support questions faster")
- Who the processor is (OpenAI, or whichever provider you use)
- Where data is processed (relevant for international transfers)
- How long data is retained
- Their rights (access, deletion, objection)
This goes in your privacy policy. If you're using AI in a chatbot, mention it at the start of the conversation too. Transparency isn't just a legal requirement — it's increasingly what users expect.
Step 6: Handle International Data Transfers
OpenAI is a US company. If you're processing EU/UK personal data through their API, that's an international data transfer.
This is covered by:
- The EU-US Data Privacy Framework (for EU transfers)
- The UK Extension to the DPF (for UK transfers)
- Standard Contractual Clauses in OpenAI's DPA (belt and braces)
In practice, OpenAI's DPA includes SCCs, so you're covered. But document this in your records of processing activities. If the DPF is ever invalidated (like Safe Harbor and Privacy Shield before it), the SCCs provide your fallback.
What About Alternatives?
OpenAI isn't your only option. Here's how the main LLM providers compare on GDPR readiness:
Anthropic (Claude API): DPA uses Irish governing law. SCCs included. No API data used for training by default. Strong privacy positioning. Our preferred choice for most builds.
Google (Gemini API): DPA available through Google Cloud terms. EU data residency options. Covered by DPF and SCCs.
Mistral: French company, EU-based. Simplifies data residency questions. DPA available.
Self-hosted open models (Llama, Mixtral): Maximum data control — data never leaves your infrastructure. No DPA needed because there's no external processor. Higher infrastructure cost but zero transfer risk.
The choice depends on your specific needs. If data residency is your primary concern, self-hosted or EU-based providers simplify things. If you need the best model performance, OpenAI or Anthropic with proper configuration is the pragmatic choice.
The Enforcement Reality
The ICO and EU Data Protection Authorities are paying attention to AI. The Italian DPA temporarily banned ChatGPT in 2023 over GDPR concerns — specifically about the consumer product, not the API.
In February 2026, the ICO fined MediaLab.AI £247,590 for processing children's data without a DPIA. The lesson: regulators aren't waiting for complaints. They're proactively investigating AI systems that process personal data.
Using an LLM API without a DPA, without a DPIA, without data minimisation — that's the kind of thing that generates enforcement action. Not because the technology is banned, but because the basic governance steps were skipped.
Quick Compliance Checklist
Before you go live with ChatGPT API (or any LLM) in production:
- DPA signed with your LLM provider
- Zero-retention or minimum retention configured
- Data minimisation implemented in API calls (strip unnecessary PII)
- DPIA completed and documented
- Privacy notice updated to mention AI processing
- Lawful basis identified (legitimate interest or contract)
- International transfer mechanism documented (DPF + SCCs)
- Staff trained not to use consumer chat for business data
- Data deletion process in place (your side, not just theirs)
- Breach response plan includes AI processor scenarios
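The deletion item on that checklist is easy to promise and easy to forget. Here's a minimal sketch of a scheduled purge job, assuming you log AI interactions to a local SQLite table — the `ai_requests` table name and the 30-day window are illustrative; match them to your own documented retention policy:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # illustrative -- take this from your retention policy

def purge_expired(conn: sqlite3.Connection) -> int:
    """Delete logged AI requests older than the retention window.
    Returns the number of rows removed, for your audit trail."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    cur = conn.execute("DELETE FROM ai_requests WHERE created_at < ?", (cutoff,))
    conn.commit()
    return cur.rowcount
```

Run it from a daily cron job and record the returned count — "we delete after 30 days" is only defensible if you can show it actually happens.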
The Bottom Line
You don't need to avoid ChatGPT or any LLM to stay GDPR compliant. You need to use it properly — API, not consumer chat; DPA signed; retention minimised; data flows documented.
The businesses that get into trouble are the ones that skip the governance. The API call takes milliseconds. The compliance setup takes a few hours. The fine for getting it wrong can run to millions.
If you're building an AI system that processes personal data and want it done right from the start, that's what we do. We build the system AND deliver the compliance documentation. You don't get one without the other.
M.K. Onyekwere is a CIPP/E certified data protection professional and the founder of Janus Compliance. We build AI systems that are compliant from day one — chatbots, workflow automation, document processing — delivered with DPIA, privacy notices, and full documentation. Talk to us.
Frequently Asked Questions
Is using ChatGPT at work a GDPR violation?
Using the free consumer ChatGPT (chat.openai.com) for business purposes is risky and likely non-compliant. User conversations may be stored and used for model training, there's no Data Processing Agreement in place, and you have no control over data residency or retention. Using the ChatGPT API with proper configuration — DPA signed, zero-retention enabled, EU data residency — is a different matter entirely and can be made GDPR compliant.
Does OpenAI store data from API calls?
By default, OpenAI retains API input and output data for up to 30 days for abuse monitoring, but does NOT use API data for model training. You can request Zero Data Retention from OpenAI for eligible API use cases. With zero-retention enabled, your data passes through OpenAI's systems but isn't stored. This is the configuration you want for GDPR compliance.
Do I need a Data Processing Agreement with OpenAI?
Yes, if you're processing personal data of EU/UK individuals through the API. OpenAI offers a standard DPA that covers Article 28 GDPR requirements. You need to sign it before going live. It's available in your OpenAI account settings under 'Data Processing Agreement'. Don't skip this — it's not optional, and it's the first thing a regulator will ask for.
Can I use ChatGPT API for customer data?
Yes, with safeguards. Sign the DPA, enable zero-retention, document the data flows in a DPIA, minimise what you send (don't pass entire customer records when you only need a name and question), and inform your users via your privacy notice that you use an AI processor. You should also implement your own data retention and deletion policies on your side.
Is Claude (Anthropic) API more GDPR-friendly than ChatGPT?
Both can be made GDPR compliant. Anthropic's DPA uses Irish governing law and includes Standard Contractual Clauses, which some privacy professionals prefer. OpenAI offers similar protections. Neither provider uses API data for model training by default — the opt-out question applies to the consumer chat products, not the APIs. In practice, with proper configuration, both work. Choose based on your technical needs, then configure for compliance.