There is a very common misunderstanding among UK founders about the EU AI Act. It goes like this: "We're a UK business. We left the EU. EU regulation doesn't apply to us."

This is wrong — and the mistake is expensive. The EU AI Act has extraterritorial reach by design. It applies based on where the impact of an AI system is felt, not where the company deploying it is based. If your AI system produces outputs that affect people in the EU — as customers, employees, users, or subjects of automated decisions — you are within scope.

For most UK businesses with any EU-facing operations, this means you have exposure you may not have assessed. Here is what the Act actually requires and how to figure out where you stand.

Brexit does not protect you from the EU AI Act

The EU AI Act (Regulation (EU) 2024/1689) follows the same extraterritorial model as the GDPR. The key provisions are in Article 2:

The Act applies to:

  • Providers — anyone who develops an AI system and places it on the market or puts it into service in the EU, regardless of where they are established
  • Deployers — anyone established in the EU who uses an AI system under their own authority, and anyone outside the EU whose system's output is used in the Union
  • Importers and distributors — anyone who makes an AI system available on the EU market

The critical language is in Article 2(1)(c): the Act applies where "the output produced by the AI system is used in the Union." If the output of your AI system — a decision, a recommendation, a classification, a piece of generated content — is consumed by or affects a person in the EU, you are in scope. Your company being headquartered in Darlington or London does not change that.
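
If it helps to make the test concrete, here is a minimal sketch of how the scope question might be encoded in an internal triage script. The field names are illustrative shorthand for the Article 2 tests, not terms from the Act.

    from dataclasses import dataclass

    @dataclass
    class AISystem:
        name: str
        output_reaches_eu: bool     # decisions, scores, or content consumed by people in the EU
        offered_on_eu_market: bool  # sold, licensed, or otherwise made available to EU customers

    def in_scope(system: AISystem) -> bool:
        # Simplified Article 2 logic: where the company sits is irrelevant;
        # what matters is whether the system or its output is used in the Union.
        return system.output_reaches_eu or system.offered_on_eu_market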

The Act entered into force on 1 August 2024. Prohibited practices became enforceable on 2 February 2025. High-risk obligations begin applying from 2 August 2026, with the remaining requirements phased in through August 2027.

What the EU AI Act actually covers

The Act regulates "AI systems" — defined, in essence, as machine-based systems that operate with varying levels of autonomy and infer, from the inputs they receive, how to generate outputs such as predictions, recommendations, decisions, or content that can influence physical or virtual environments.

This definition is deliberately broad. It covers:

  • Machine learning models used for any business purpose
  • Generative AI tools (ChatGPT, Claude, Gemini, Copilot, etc.) when deployed in a business context
  • Recommendation systems (for products, content, pricing)
  • Automated decision-making systems (for hiring, lending, scoring, access)
  • Computer vision, NLP, and other AI processing pipelines
  • AI tools embedded in SaaS products you use or distribute

What it does not cover (generally): AI systems used solely for military or national security purposes, AI developed and used solely for scientific research, AI used purely for personal non-professional purposes, and basic rule-based or statistical processing that doesn't involve machine learning.

The distinction matters practically. If you're using a spreadsheet formula to filter data, that is almost certainly outside scope. If you're using a trained model to score customer credit risk or generate personalised output at scale, you are inside scope.

The four risk tiers — and where most UK SMEs sit

The EU AI Act classifies AI systems by risk level. Your obligations depend entirely on which tier your systems fall into.

Unacceptable risk (prohibited) — AI systems that pose a fundamental threat to rights or safety. These are banned outright. Examples: general-purpose social scoring (by public or private actors), real-time remote biometric identification in publicly accessible spaces for law enforcement, subliminal manipulation, exploitation of vulnerabilities in specific groups.

High risk — AI systems in specific regulated domains where errors have serious consequences. This is where most compliance obligations concentrate. Examples: AI used in hiring and employment decisions, access to education, credit scoring, insurance, critical infrastructure, biometric identification, migration, legal proceedings, and AI components in safety-critical products.

Limited risk — AI systems that interact with people in ways requiring transparency. Examples: chatbots, deep fakes, emotion recognition systems. The primary obligation is disclosure — users must know they're interacting with AI.

Minimal risk — Everything else. AI used for spam filters, inventory management, productivity tools, standard recommendation engines without significant individual impact. No mandatory requirements beyond good practice.

Where most UK SMEs actually land: The majority of small businesses using standard AI tools (ChatGPT for content, AI assistants, basic automation) are in the minimal risk tier — no mandatory obligations beyond the prohibited practices rules. However, if you have any AI in your hiring process, client scoring, automated credit or insurance assessment, or if you've built a product with AI that makes consequential decisions about EU individuals, you may be in the high-risk tier. This is worth checking explicitly, not assuming.
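
One way to make the tiering operational is a first-pass screen over each system's stated use case. The sketch below is illustrative only: the keyword sets are loose stand-ins for the Act's Annex III categories and transparency triggers, and a match means "investigate further", not "classified".

    from enum import Enum

    class RiskTier(Enum):
        HIGH = "substantial provider/deployer obligations"
        LIMITED = "transparency and disclosure duties"
        MINIMAL = "no mandatory obligations"

    # Rough stand-ins for Annex III domains and transparency triggers.
    HIGH_RISK_HINTS = {"hiring", "employment", "credit", "insurance",
                       "education access", "critical infrastructure", "biometric"}
    TRANSPARENCY_HINTS = {"chatbot", "deep fake", "emotion recognition"}

    def first_pass_tier(use_case: str) -> RiskTier:
        # Prohibited practices are checked separately: they are bans, not a tier to manage.
        text = use_case.lower()
        if any(hint in text for hint in HIGH_RISK_HINTS):
            return RiskTier.HIGH
        if any(hint in text for hint in TRANSPARENCY_HINTS):
            return RiskTier.LIMITED
        return RiskTier.MINIMAL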

Prohibited practices — enforceable since February 2025

These are the hard rules. They apply regardless of risk tier and have been in force since 2 February 2025. If your AI systems do any of the following, you are already in breach:

  • Subliminal manipulation: AI that influences a person's behaviour below the threshold of their conscious awareness, causing harm. This includes AI-powered dark patterns and manipulative personalisation that exploits psychological vulnerabilities without the person realising.
  • Exploiting vulnerabilities: AI that specifically exploits the vulnerabilities of certain groups — elderly individuals, people with disabilities, those under socioeconomic pressure — to distort their behaviour.
  • Social scoring: General-purpose social scoring systems that rate people based on their behaviour or characteristics for purposes unrelated to the context in which the data was generated. This applies to private entities, not just governments.
  • Real-time biometric surveillance: Real-time remote biometric identification in publicly accessible spaces for law enforcement (with very limited exceptions). This applies differently if you're not law enforcement, but biometric AI in public-facing contexts needs careful scrutiny.
  • Emotion inference in certain contexts: AI that infers emotions of individuals in workplace or educational settings (with limited exceptions for safety purposes).
  • Scraping facial images: Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.

The practical implication for most SMEs: if your AI tools are doing any of these things in your EU-facing operations, you need to stop. The prohibitions have no grace period.
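
For audit purposes, the six prohibitions reduce to a blunt yes/no checklist per system. A sketch, with descriptive rather than statutory labels:

    PROHIBITED_PRACTICES = (
        "subliminal manipulation causing harm",
        "exploiting vulnerabilities of specific groups",
        "general-purpose social scoring (public or private)",
        "real-time remote biometric ID in public spaces (law enforcement)",
        "emotion inference in workplace or education settings",
        "untargeted scraping of facial images for recognition databases",
    )

    def prohibited_hits(answers: dict[str, bool]) -> list[str]:
        # Any hit means stop, restructure, or take advice: there is no grace period.
        return [practice for practice in PROHIBITED_PRACTICES if answers.get(practice)]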

High-risk AI obligations — what's required

If you are in the high-risk tier — either as a provider (you built or substantially customised the system) or a deployer (you're using high-risk AI in your operations affecting EU individuals) — the obligations are substantial.

For providers of high-risk AI systems, you must:

  • Implement a risk management system throughout the AI lifecycle
  • Ensure training, validation, and testing datasets meet quality criteria for accuracy, representativeness, and absence of harmful bias
  • Maintain technical documentation demonstrating conformity
  • Enable logging and traceability of the system's operations
  • Ensure transparency so deployers understand the system's capabilities and limitations
  • Enable human oversight by the deployer
  • Meet accuracy, robustness, and cybersecurity standards
  • Register the system in the EU database before deployment
  • Appoint an EU representative if established outside the EU

For deployers of high-risk AI systems, you must:

  • Use the system in accordance with the provider's instructions
  • Assign human oversight to qualified persons
  • Conduct a Fundamental Rights Impact Assessment before deploying in certain contexts
  • Ensure staff operating the system are appropriately trained
  • Inform affected individuals when consequential decisions are made about them using the system
  • Maintain logs and cooperate with supervisory authorities

This is not a light compliance load. If you're genuinely in the high-risk tier, this requires substantive programme work — not just a policy document.
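
Given how many discrete duties there are, a simple tracker that maps your role to outstanding obligations is worth more than a policy PDF. A sketch, where the labels are shorthand for the two lists above rather than Article references:

    PROVIDER_DUTIES = {
        "risk management system", "dataset quality and bias controls",
        "technical documentation", "logging and traceability",
        "transparency to deployers", "human oversight hooks",
        "accuracy, robustness, cybersecurity", "EU database registration",
        "EU authorised representative",
    }

    DEPLOYER_DUTIES = {
        "use per provider instructions", "qualified human oversight",
        "fundamental rights impact assessment", "staff training",
        "inform affected individuals", "log retention and cooperation",
    }

    def outstanding(role: str, completed: set[str]) -> set[str]:
        # Returns the duties not yet evidenced for this system.
        required = PROVIDER_DUTIES if role == "provider" else DEPLOYER_DUTIES
        return required - completed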

Are you a provider or a deployer?

The distinction matters for what you owe. In most cases, UK SMEs using AI tools built by someone else (OpenAI, Google, Microsoft, Anthropic, etc.) are deployers, not providers. The provider obligations sit with the company that built the model.

You become a provider — and take on provider obligations — when you:

  • Build your own AI system (including fine-tuning a base model on proprietary data for a specific high-risk purpose)
  • Substantially modify an existing AI system before deploying it
  • Put your own name or brand on an AI system and place it on the market
  • Build an AI-powered product that someone else uses (you're the SaaS provider, they're the end deployer)

The line is not always clean. If you've taken a general-purpose model and built a system around it that makes consequential decisions about EU individuals — automated credit decisions, hiring scoring, tenant screening — the question of whether you've "substantially modified" a model is live and worth assessing properly.
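
The triggers above can be reduced to a first-pass check, though "substantially modified" is ultimately a legal judgment, not a boolean you can compute; treat any True as a prompt to get proper advice. A sketch:

    def likely_role(built_or_fine_tuned: bool, substantially_modified: bool,
                    own_brand_on_market: bool, sells_ai_product: bool) -> str:
        # One flag per trigger in the list above; the default position is deployer.
        if (built_or_fine_tuned or substantially_modified
                or own_brand_on_market or sells_ai_product):
            return "provider"
        return "deployer"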

What to do now

The starting point is an honest inventory. List every AI system your business uses or has built. For each one, work through the five steps below (a short sketch after the list shows one way to wire them together):

1. Does this system's output reach or affect EU individuals? If no — you're out of scope for this system. If yes, continue.

2. Does this system fall into a prohibited category? Check the list above. If yes — stop, restructure, or seek advice immediately.

3. Does this system fall into the high-risk tier? If yes — assess whether you're a provider or deployer, and identify which obligations apply to you and when they become mandatory.

4. If limited risk: Ensure you have disclosure in place so EU users know they're interacting with AI. Document this decision.

5. If minimal risk: No mandatory obligations — but document the risk assessment that reached this conclusion, so you can demonstrate it if asked.
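
Composed into one decision path, the five steps look something like this. The inputs would come from the scope, prohibition, and tier checks sketched earlier; the returned strings are illustrative actions, not legal conclusions.

    def triage(reaches_eu: bool, prohibited_hit: bool,
               tier: str, role: str) -> str:
        # Steps 1-5 from the list above, in order.
        if not reaches_eu:
            return "out of scope for this system: record the reasoning"          # step 1
        if prohibited_hit:
            return "prohibited practice: stop, restructure, or take advice now"  # step 2
        if tier == "high":
            return f"high risk as {role}: map each obligation to its deadline"   # step 3
        if tier == "limited":
            return "limited risk: disclose AI interaction to EU users, document it"  # step 4
        return "minimal risk: no mandatory duties, but file the assessment"      # step 5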

Most UK SMEs with EU-facing operations will find the majority of their AI tools sit in the minimal or limited risk tiers. That doesn't mean zero work — it means the work is a risk classification exercise, appropriate disclosures, and documentation. That is achievable. But it requires actually doing it, which almost no one has done.

The businesses at real risk are those in sectors where AI makes consequential decisions about people: HR tech, fintech, insurtech, legal tech, healthcare, and any SaaS product that automates access, scoring, or decisions at scale for EU users. If that is your business, the high-risk obligations are coming into force now and through 2026-2027 — and the preparation time is not generous.

If you want to work through which tier your AI systems sit in and what your obligations actually look like, the first conversation is free.