One of the most common mistakes UK founders make with AI tools is treating the lawful basis question as a formality — something to fill in on a form and forget. It isn't. The lawful basis you choose shapes what you're legally permitted to do with data, what rights the data subject has, and what happens if the ICO comes knocking.

Get it wrong and you're not just technically non-compliant. You've structured your entire processing operation on a foundation that doesn't hold. Every downstream document — your privacy notice, your ROPA, your DPAs — reflects the wrong basis and compounds the problem.

So here are the six bases, what they actually mean, and — critically — which one most AI processes should be using.

The six lawful bases under UK GDPR

Article 6 of UK GDPR sets out six lawful bases for processing personal data. You need at least one to apply before you touch personal data. They are:

  1. Consent — the data subject has freely given, specific, informed, and unambiguous consent
  2. Contract — processing is necessary for the performance of a contract with the data subject
  3. Legal obligation — processing is necessary to comply with a legal obligation
  4. Vital interests — processing is necessary to protect someone's life
  5. Public task — processing is necessary for a task in the public interest (rarely applies to private businesses)
  6. Legitimate interests — processing is necessary for your legitimate interests, provided they're not overridden by the data subject's rights

Numbers 3, 4, and 5 are niche. Most private businesses using AI tools are choosing between 1, 2, and 6. And most are choosing 6 when they should be choosing 1 or 2.

Consent — the most misunderstood basis

Consent sounds simple: ask someone if they agree, they say yes, you process. In practice it's one of the hardest bases to get right — and one of the easiest to get wrong in a way that feels compliant but isn't.

For consent to be valid under UK GDPR it must be:

  • Freely given — no coercion or imbalance of power. Pre-ticked boxes don't count. "Agree to our terms to use the service" is almost never valid consent for data processing.
  • Specific — consented to a specific purpose, not a vague "and other uses."
  • Informed — the person genuinely understood what they were agreeing to.
  • Unambiguous — a clear affirmative action. Silence, pre-ticked boxes, or inaction don't count.
  • Withdrawable — you must make it as easy to withdraw as to give.

Here is the practical implication for AI: if you're using AI to process data about your own customers in B2B contexts — analysing their company data, processing communications, running their information through automation — consent is almost never the right basis. The power imbalance between a service provider and a client, the bundling of consent with service access, and the difficulty of making consent genuinely free all make it the wrong tool for the job.

Consent works well for: marketing emails, cookies, explicit consent as a condition for processing special category data (alongside your Article 6 basis), and situations where you genuinely want to ask and can genuinely offer a meaningful choice.

Contract — the most underused basis in AI

Processing is necessary for the performance of a contract with the data subject. This is the basis most B2B AI processing should be using — and almost no one is documenting it properly.

The key word is "necessary." Not convenient. Not useful. Necessary. You need to be able to demonstrate that without this processing, you couldn't perform the contract. For most AI use cases in service delivery, this is straightforwardly true — and the documentation just doesn't exist.

If you're using AI to process a client's data as part of delivering a service to them, and the contract covers that service, the contract basis is almost certainly the right one. This means:

  • Your contract with the client needs to actually describe the relevant processing
  • Your ROPA entry needs to reference the contract as the lawful basis
  • Your privacy notice needs to explain that contract performance is why you're processing

Most businesses have none of these three things documented correctly. The contract exists, the processing happens, but the paper trail connecting them doesn't.

Legitimate interests — the most abused basis in UK data protection

Legitimate interests is often the default choice when businesses can't identify any other basis. This is a problem.

Legitimate interests requires a three-part test:

  1. Purpose test: Is there a legitimate interest? (Almost any genuine business interest qualifies.)
  2. Necessity test: Is the processing necessary for that interest?
  3. Balancing test: Do the data subject's interests, rights, and freedoms override your legitimate interest?

Most businesses applying legitimate interest stop at step one. They identify a legitimate business interest, note it down, and proceed. They never complete the necessity or balancing tests. This isn't legal compliance — it's documentation theatre.

The balancing test in particular is where AI processing tends to fail. AI systems often process data at scale, in novel ways, with limited transparency, and with outcomes that individuals have limited ability to understand or contest. These factors all weigh heavily against the data controller in a balancing test. Legitimate interest as a basis for AI-powered profiling, automated decision-making, or large-scale data analysis is fragile ground.

The ICO has been explicit about this. Their guidance on AI and data protection notes that legitimate interest "may not be appropriate where there is a significant imbalance of power between the organisation and individuals, for example where individuals cannot reasonably expect the processing or where it is intrusive."

The practical test for your AI process

Ask these questions in order:

1. Is this processing necessary to perform a contract with the data subject? If yes → contract basis. Document which contract and why the processing is necessary for it.

2. Has the data subject given valid, specific, informed, and freely withdrawable consent for this exact processing? If yes → consent basis. Keep records of when, how, and for what.

3. Do you have a genuine legitimate interest, AND is this processing strictly necessary for it, AND do the data subject's interests, rights, and freedoms not override it? Only if all three are yes → legitimate interests basis, with a documented Legitimate Interest Assessment.

4. If you can't answer yes to any of the above — you don't have a lawful basis. You need to stop processing or restructure until you do.
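The four questions above form a strict decision order, and it can help to see them written out as one. Here is a minimal sketch in Python; the class and field names are illustrative assumptions, not legal terms of art, and the honest work is in answering the questions, not running the function.

```python
from dataclasses import dataclass

# Illustrative model of one AI process that touches personal data.
# Field names are assumptions for this sketch, not an official schema.
@dataclass
class ProcessingActivity:
    necessary_for_contract: bool   # could you perform the contract without it?
    has_valid_consent: bool        # specific, informed, freely withdrawable
    legitimate_interest: bool      # purpose test: a genuine business interest
    strictly_necessary: bool       # necessity test: no less intrusive way
    balancing_test_passed: bool    # data subject's rights do not override yours

def lawful_basis(p: ProcessingActivity) -> str:
    """Walk the test in order; the first basis that fits wins."""
    if p.necessary_for_contract:
        return "contract"
    if p.has_valid_consent:
        return "consent"
    if p.legitimate_interest and p.strictly_necessary and p.balancing_test_passed:
        return "legitimate interests (document an LIA)"
    return "no lawful basis: stop processing or restructure"
```

Note that the legitimate interests branch only returns if all three parts of the test pass; answering the purpose test alone, as most businesses do, falls straight through to "no lawful basis".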

How to document your decision

Choosing the right basis isn't enough. The ICO requires that you be able to demonstrate your decision — why you chose it, how you assessed it, and when. This is the documentation most founders completely lack.

For each AI process that handles personal data, you need:

  • A ROPA entry identifying the lawful basis
  • A privacy notice that discloses the basis to data subjects
  • If legitimate interest: a written Legitimate Interest Assessment that completes all three parts of the test
  • If consent: records of what was consented to, when, by whom, and how withdrawal is handled

None of this is complicated. All of it requires sitting down and doing it — which almost no one does until a complaint or inquiry forces the issue.
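One way to make the sitting-down-and-doing-it concrete is to keep each record as plain structured data that can live in version control next to your other compliance documents. A sketch, with entirely hypothetical field names and example values (this is not an ICO-mandated schema):

```python
# Hypothetical ROPA-style record for one AI process. Every value here is an
# invented example for illustration only.
ropa_entry = {
    "process": "AI-assisted document triage",
    "lawful_basis": "contract",
    "contract_reference": "client services agreement, data processing clause",
    "why_necessary": "service cannot be delivered without processing client documents",
    "privacy_notice_section": "how we use your data: service delivery",
    "lia_on_file": None,       # only needed if basis is legitimate interests
    "consent_records": None,   # only needed if basis is consent
}

def documentation_gaps(entry: dict) -> list[str]:
    """Return the extra documents the chosen basis still requires."""
    gaps = []
    if entry["lawful_basis"] == "legitimate interests" and not entry.get("lia_on_file"):
        gaps.append("Legitimate Interest Assessment")
    if entry["lawful_basis"] == "consent" and not entry.get("consent_records"):
        gaps.append("consent records")
    return gaps
```

The check is deliberately asymmetric: a contract-basis entry needs no LIA or consent log, but the moment an entry claims legitimate interests or consent, the corresponding record becomes mandatory rather than optional.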

What to do now

Start with the list. Write down every AI tool your business uses that touches personal data. For each one, go through the test above and identify the most defensible lawful basis. Then check whether your contracts, privacy notice, and internal records reflect that basis accurately.

If the answer to that last question is "no" — which it almost always is — that is your compliance gap. It's fixable. But it requires more than a new privacy notice. It requires the whole paper chain to be consistent: contract, notice, ROPA, and any third-party agreements with your AI vendors.

If you want to work through this for your specific business and AI tools, the first conversation is free.