EU AI Act Compliance: What SMEs Must Do Before August 2026

Michael K. Onyekwere · 6 min read

Four months. That's what you have before the EU AI Act high-risk obligations become enforceable on August 2, 2026.

I keep hearing the same thing from SMEs: "We'll deal with it when it happens." That's what businesses said about GDPR in 2018. The ones who waited spent three times as much scrambling after the deadline as the ones who prepared.

Who this applies to

The Act applies to anyone who develops, deploys, or imports AI systems used within the EU. You don't need to be based in the EU.

If you're a UK business serving EU customers — your AI chatbot talks to people in Dublin or Berlin — the Act reaches you. If you're an Irish business using AI in any customer-facing capacity — it applies directly. If you use AI tools built by EU-based providers — you're in scope as a deployer.

The EU AI Office oversees enforcement at EU level. National competent authorities handle it locally — Ireland has designated 15 authorities across sectors.

The classification that matters

Everything hinges on what risk category your AI system falls into. The EU published an AI Act Compliance Checker to help with this — worth running through before you do anything else.

Prohibited (banned since February 2, 2025). Social scoring, manipulative AI that exploits vulnerabilities, real-time biometric ID in public spaces (narrow exceptions), emotion recognition in workplaces and schools. If you're running any of these, compliance is the least of your worries.

High-risk (August 2, 2026). This is where most business AI sits. Credit scoring. Fraud detection. Recruitment screening. Insurance pricing. Customer eligibility decisions. Performance monitoring. If your AI system makes or materially influences decisions about people, it's probably here.

The question I get asked most: "Is my chatbot high-risk?" It depends. A chatbot answering FAQs — limited risk, transparency obligations only. A chatbot that decides whether a customer gets a refund, qualifies for a service, or gets escalated based on risk scoring — that's starting to look high-risk.

Limited risk (transparency only). Chatbots must disclose they're AI. AI-generated content must be labelled. Deepfakes must be disclosed. The main obligation is telling people they're interacting with AI. Most customer service chatbots land here.

Minimal risk. Spam filters, inventory management, recommendation engines that don't affect rights. No specific obligations.

What high-risk compliance actually requires

Seven requirements. They sound heavy, but most of them should already be happening if you're building AI responsibly. The Act just makes them mandatory and requires you to document them.

Risk management system. Not a one-off assessment — an ongoing process. Identify risks, assess likelihood and severity, implement mitigations, monitor continuously. Update when the system changes. Document everything.
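
What does "document everything" look like in practice? Here's a rough sketch of a living risk-register entry, written in Python purely for illustration. The fields and the 1-to-5 scoring scales are my assumptions, not wording from the Act.

    # A minimal sketch of a risk-register entry; fields and scales are
    # illustrative assumptions, not terms prescribed by the AI Act.
    risk_entry = {
        "risk": "credit model under-scores applicants with thin credit files",
        "likelihood": 3,        # 1 (rare) to 5 (frequent)
        "severity": 4,          # 1 (minor) to 5 (critical)
        "mitigation": "manual review for applicants with under 12 months of history",
        "owner": "head of lending",
        "last_reviewed": "2026-03-15",
        "next_review": "2026-06-15",  # ongoing process, not a one-off assessment
    }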

Data governance. Your training, validation, and testing datasets need documented practices. What data did you use? How was it selected? How did you test for bias? If you can't answer these, start documenting now.
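
If you're not sure where to start, a simple per-dataset record covers most of it. A sketch, with field names that are mine rather than anything mandated:

    # A minimal sketch of a dataset record for the data-governance requirement.
    # Field names and values are illustrative, not prescribed wording.
    training_data_record = {
        "dataset": "loan_applications_2023",
        "source": "internal CRM export, Jan to Dec 2023",
        "selection": "completed applications only; test accounts excluded",
        "known_gaps": ["few applicants over 65", "no self-employed income data"],
        "bias_checks": ["approval-rate parity by gender, reviewed 2025-11-02"],
        "last_reviewed": "2026-03-01",
    }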

Technical documentation. A detailed description of the system — purpose, architecture, accuracy metrics, limitations, intended use, foreseeable misuse. This is the conformity assessment evidence. The EU has committed to publishing simplified templates for SMEs.

Record-keeping. Automatic logging of the system's operations. What decisions did the AI make? What inputs triggered them? What was the confidence level? You need audit trails.
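
A minimal version of that logging can be a few lines of code. This sketch appends one JSON record per decision; the field names, file path, and example values are assumptions, not a prescribed format.

    # A minimal sketch of an AI decision audit log: one JSON line per decision.
    # Field names and the file path are illustrative assumptions.
    import json
    import datetime

    def log_decision(system_id, inputs, output, confidence, model_version):
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "system_id": system_id,          # which AI system made the call
            "inputs": inputs,                # what the decision was based on
            "output": output,                # what the system decided
            "confidence": confidence,        # how sure the model was
            "model_version": model_version,  # trace decisions back to a model version
        }
        with open("ai_decision_log.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")

    # Example: a refund-triage bot escalating a borderline case.
    log_decision("refund-triage-bot", {"order_id": "A1023", "reason": "damaged"},
                 "escalate_to_agent", 0.62, "v2.4.1")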

Transparency. Clear information to deployers and users about what the system does, how it works, and what its limitations are. Not buried in a terms document — genuinely accessible.

Human oversight. Mechanisms for a human to intervene and override. This can't be rubber-stamping. The human needs to understand the AI's output well enough to make an independent judgment. If your "human oversight" is someone clicking approve on 500 decisions per hour without reading them, that won't hold up.
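
In code, meaningful oversight usually looks like a gate rather than an approve button. A sketch, assuming a confidence threshold and a rights-affecting flag you'd define for your own system:

    # A minimal sketch of human oversight as a gate, not a rubber stamp.
    # The threshold and the rights-affecting flag are illustrative assumptions.
    REVIEW_THRESHOLD = 0.85

    def route_decision(proposed_decision, confidence, affects_customer_rights):
        # Low-confidence or rights-affecting decisions go to a person who has
        # the context to make an independent judgment and can override the AI.
        if affects_customer_rights or confidence < REVIEW_THRESHOLD:
            return {"action": "hold_for_human_review",
                    "proposed": proposed_decision,
                    "confidence": confidence}
        return {"action": "auto_apply", "decision": proposed_decision}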

Accuracy, robustness, cybersecurity. The system must perform as intended and resist adversarial manipulation. Testing, validation, ongoing monitoring.

What to do right now

Inventory every AI system. Everything. The chatbot on your website. The recommendation engine in your product. The GPT integration your developer built in a weekend. The third-party SaaS tool with "AI-powered" in the marketing. All of it.
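
The inventory doesn't need special software. A list with a handful of fields per system is enough to start. This sketch uses the fields I'd want for the classification step; none of them are mandated.

    # A minimal sketch of an AI system inventory; the fields are an assumption
    # about what you'll need for risk classification, not a required format.
    ai_inventory = [
        {
            "system": "website chatbot",
            "built_by": "third-party SaaS",
            "purpose": "answer FAQs, hand off to support",
            "decisions_about_people": False,
            "uses_personal_data": True,
            "owner": "marketing",
        },
        {
            "system": "refund triage workflow",
            "built_by": "in-house GPT integration",
            "purpose": "recommend approve, deny, or escalate on refund requests",
            "decisions_about_people": True,   # likely pushes this toward high-risk
            "uses_personal_data": True,
            "owner": "operations",
        },
    ]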

Classify each one. Use the EU compliance checker. When you're not sure, assume higher risk. Over-preparing is cheaper than under-preparing.

Check what you already have. If you've done GDPR compliance properly — DPIAs, DPAs, privacy notices, retention policies — you're further along than you think. The AI Act doesn't replace GDPR. It adds to it. And a lot of the documentation overlaps.

Fill the gaps. For each high-risk system, you'll need the risk management documentation, technical documentation, logging capability, human oversight procedures, and transparency notices. For limited-risk systems, you mainly need the transparency disclosure.

Don't wait until July. The conformity assessments for high-risk systems take weeks, sometimes months. If you start in July for an August 2 deadline, you're too late. Start now.

The SME angle

The panic headlines make this sound unsurvivable. It's not — especially for SMEs.

You have fewer AI systems to audit than a large enterprise. Your documentation is simpler. The regulation explicitly provides for SMEs: simplified documentation forms, AI regulatory sandboxes (free for SMEs), and proportionate penalties.

If you run one AI chatbot and two automation workflows, this is a bounded project. Not free, but bounded. The businesses that will struggle are the ones with dozens of undocumented AI systems scattered across departments. That's probably not you.

Ireland specifically

Ireland has designated 15 competent authorities for AI Act enforcement. Which one you deal with depends on your sector — the Data Protection Commission for personal data aspects, ComReg for telecoms, the Central Bank for financial services AI.

Enterprise Ireland and Local Enterprise Offices offer digitalisation vouchers that may cover compliance assessment costs. Worth checking — having someone else pay for compliance takes the sting out.

UK businesses

The UK isn't directly subject to the EU AI Act. But if your AI system's output reaches EU users — and it probably does if you have any European customers — the Act applies to you extraterritorially.

The UK government is developing its own AI regulatory framework but moving slower than the EU. Practically speaking, if you comply with the EU AI Act, you'll be ahead of whatever the UK eventually requires. GDPR compliance is already required for both.

What non-compliance actually looks like

Fines: up to €35 million or 7% of global turnover for prohibited practices. Up to €15 million or 3% for other violations.

But the practical risk for an SME isn't a €35M fine. It's a customer complaint about an AI decision you can't explain, with no documentation, no risk assessment, and no oversight mechanism on record. That's a story that writes itself.

The businesses that prepare look professional. The businesses that scramble after August 2 look negligent.


Need to know what the EU AI Act means for your systems before August 2? Start with an AI Act Scoping Review — £500. We classify the systems, identify the documentation gap, and tell you what has to be done. If you need the broader service, see our EU AI Act Compliance page.

Frequently Asked Questions

Does the EU AI Act apply to small businesses?

Yes. The Act applies based on what the AI system does, not how big the company is. If you deploy a high-risk AI system, you must comply regardless of size. SMEs get some reduced documentation requirements, but the core obligations still apply.

What are the penalties for non-compliance with the EU AI Act?

Up to €35 million or 7% of global annual turnover for prohibited AI practices. Up to €15 million or 3% for other violations. The regulation says penalties should be proportionate to company size, but what a regulator considers proportionate can still hurt.

What counts as high-risk AI under the EU AI Act?

AI systems used in employment decisions, credit scoring, insurance risk assessment, access to essential services, education assessment, and law enforcement. AI chatbots that make decisions materially affecting customers may also qualify.

When do I need to comply with the EU AI Act?

Prohibited AI practices were banned February 2, 2025. General-purpose AI model rules applied from August 2, 2025. High-risk AI system obligations kick in August 2, 2026 — four months from now.

How much does EU AI Act compliance cost for an SME?

A basic compliance assessment starts around £2,500. Full documentation for a high-risk system — risk management, data governance, technical documentation — runs £5,000–£15,000 through a specialist. We include AI Act classification in every build from £3,500.

Start with a £500 scoping review

If you need GDPR documentation, AI Act work, or a compliant AI build, the first step is a written scoping review. You get a real report, not a generic discovery call.

EU AI Act · AI compliance · SME · AI Act 2026 · high risk AI · AI governance