AI in your business: what you're actually responsible for
If your business uses any AI tool that touches personal data, you have obligations under UK GDPR that most founders have never been told about. This is what they are and how to meet them.
The problem nobody is naming
There is a specific kind of legal exposure that UK founders using AI tools are accumulating right now, and most of them have no idea it is happening.
It is not because they are careless. It is because the guidance around AI and data protection is split across multiple regulatory frameworks, written in language that assumes you already have a compliance team, and distributed across ICO guidance documents that most founders will never read.
The result is that founders who would never dream of mishandling customer data are doing so — every day, in ways that are entirely visible to regulators — simply because no one has explained what the rules actually require when AI is in the processing chain.
This article is an attempt to fix that. It is not comprehensive legal advice. It is a plain-English description of the obligations that apply to your business if you are using AI tools that handle personal data — which, if you are using any mainstream AI productivity or automation tool, you almost certainly are.
The ICO does not excuse accidental non-compliance: both accidental and deliberate breaches can attract enforcement. The distinction that matters in practice is whether you can demonstrate you took reasonable steps.
What counts as processing personal data
Under UK GDPR, "processing" covers virtually anything you do with personal data: collecting it, storing it, organising it, analysing it, transmitting it, or deleting it. "Personal data" means any information that relates to an identified or identifiable natural person.
This matters because when you run customer emails through an AI drafting tool, you are processing personal data. When you use an AI CRM assistant that reads your contact database, you are processing personal data. When you run meeting transcripts that contain names through a summarisation tool, you are processing personal data.
Most founders think of "data processing" as something that happens in databases. It is not confined to them. It happens every time personal data moves through a system, and that very much includes AI tools.
All of the following involve processing personal data under UK GDPR:
- AI email assistants reading your inbox
- AI meeting tools transcribing calls with clients
- AI CRM tools analysing customer behaviour
- AI chatbots handling customer queries
- AI HR tools reviewing CVs
- Automation platforms that route customer data between systems
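A practical first step is a written inventory: every AI tool, the personal data it sees, and why. The sketch below shows one way to structure such an inventory in Python; every tool name, vendor, and data category in it is a placeholder assumption, not a recommendation.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in an AI tool inventory: what the tool is,
    which personal data reaches it, and why."""
    tool: str
    vendor: str
    personal_data: list[str]   # categories of personal data the tool sees
    purpose: str

# Illustrative entries -- every tool and vendor name here is a placeholder.
inventory = [
    AIToolRecord(
        tool="Email drafting assistant",
        vendor="ExampleAI Inc",
        personal_data=["customer names", "email addresses", "message content"],
        purpose="Draft replies to customer enquiries",
    ),
    AIToolRecord(
        tool="Meeting transcription tool",
        vendor="ExampleTranscribe Ltd",
        personal_data=["participant names", "voice recordings"],
        purpose="Summarise client calls",
    ),
]

# A one-line summary per tool is enough to see the shape of your exposure.
for record in inventory:
    print(f"{record.tool}: {', '.join(record.personal_data)}")
```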
The lawful basis requirement
UK GDPR requires that every processing activity has a documented lawful basis. There are six. For most AI-driven business processes, the relevant ones are:
- Consent — the data subject has given clear, specific, informed consent to the processing
- Contract — processing is necessary for the performance of a contract with the data subject
- Legitimate interests — processing is necessary for your legitimate interests, provided those interests are not overridden by the data subject's rights
The problem with most AI use cases is that founders have not identified which lawful basis applies. They have simply started using the tool. This is a compliance failure regardless of whether the actual processing causes harm: the absence of a documented lawful basis is, itself, a breach.
Legitimate interests is the most commonly claimed basis for business AI use, but it is not a free pass. It requires a three-part test: you must identify the legitimate interest, demonstrate the processing is necessary, and balance your interests against the individual's. This must be documented.
Legitimate interests is a justification, not an exemption. If you cannot write down your balancing test, you have not met the standard.
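Because the test must be written down, it helps to treat each assessment as a structured record with the three limbs as required fields. A minimal sketch, assuming one processing activity per record; the field values are illustrative, and a real assessment is reasoned prose, not one-liners:

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestsAssessment:
    """The three limbs of the legitimate interests test,
    each recorded as written reasoning, not just a verdict."""
    purpose: str     # what legitimate interest are you pursuing?
    necessity: str   # why is this processing necessary for that interest?
    balancing: str   # why don't the individual's rights override it?
    outcome: str     # e.g. "proceed", "proceed with safeguards", "do not proceed"

# Illustrative record for a hypothetical AI email-drafting workflow.
lia = LegitimateInterestsAssessment(
    purpose="Faster responses to customer support emails via an AI drafting tool",
    necessity="No comparably effective alternative at current team size",
    balancing=("Data limited to message content; vendor contractually barred "
               "from model training; customers informed via privacy notice"),
    outcome="proceed with safeguards",
)
print(f"{lia.purpose} -> {lia.outcome}")
```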
The transparency obligation
Even if you have a lawful basis, you must tell people what you are doing with their data. This is the transparency obligation under Articles 13 and 14 of UK GDPR, and it applies to AI processing as much as to any other kind.
If your privacy notice was written before you started using AI tools — which for most businesses means before 2022 — it almost certainly does not cover AI processing. It will not mention:
- That personal data is processed by third-party AI systems
- Which AI vendors receive personal data
- The purpose of AI-assisted processing
- Whether any automated decision-making takes place
- Data transfers outside the UK that result from AI tool use (common with US-based AI platforms)
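One way to audit a notice against the list above is to treat the five disclosures as a required set and check coverage. A deliberately simple sketch; the labels and the contents of current_notice are illustrative assumptions, not a real notice:

```python
# Disclosures an AI-era privacy notice should make (from the list above).
REQUIRED_DISCLOSURES = {
    "third_party_ai_processing",
    "ai_vendors_named",
    "purpose_of_ai_processing",
    "automated_decision_making",
    "international_transfers",
}

# What a pre-2022 notice typically covers -- illustrative only.
current_notice = {"international_transfers"}

missing = REQUIRED_DISCLOSURES - current_notice
print(f"Notice gaps: {sorted(missing)}")
```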
A privacy notice that does not reflect your actual processing activities is not just incomplete — it actively creates liability. It is, in effect, a document that tells your clients and customers something inaccurate about how their data is handled.
Third-party AI vendors as data processors
When you pass personal data to an AI tool, the company behind that tool typically becomes a data processor acting on your behalf. Under UK GDPR Article 28, you are required to have a written contract, known as a Data Processing Agreement (DPA), with every data processor.
Most AI platforms do publish terms that include data processing provisions. The problem is that those terms are almost never reviewed by the businesses using the platforms. And when they are reviewed, they frequently contain provisions that are incompatible with the business's own client commitments.
Specific issues to look for:
- Model training clauses — some platforms reserve the right to use your data to train their AI models. This is not compatible with most client confidentiality obligations and may not be compatible with UK GDPR depending on the lawful basis.
- Data retention periods — platforms often retain data for longer than the business that submitted it realises.
- Sub-processor chains — AI platforms often use their own sub-processors. You are responsible for understanding and accepting this chain.
- Data location — most major AI platforms are US-based. Data transfers outside the UK require specific safeguards under UK GDPR.
If you have signed a client contract that prohibits sharing confidential information with third parties, and you are then running that client's data through an AI platform, you may already be in breach of that contract — regardless of whether you have a DPA in place with the AI vendor.
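When reviewing vendor terms, it helps to record the answers to the same handful of questions for every vendor, so the findings are comparable and on file. A sketch, with a hypothetical vendor and invented findings:

```python
from dataclasses import dataclass

@dataclass
class VendorTermsReview:
    """Findings from reviewing one AI vendor's data processing terms."""
    vendor: str
    trains_on_customer_data: bool   # model training clause present?
    retention_days: int | None      # None = retention period not stated
    subprocessors_listed: bool
    data_location: str              # where the data is actually held
    dpa_signed: bool

def flags(review: VendorTermsReview) -> list[str]:
    """Turn a review into a list of issues to resolve."""
    issues = []
    if review.trains_on_customer_data:
        issues.append("terms permit model training on your data")
    if review.retention_days is None:
        issues.append("retention period not stated")
    if not review.subprocessors_listed:
        issues.append("sub-processor chain not disclosed")
    if review.data_location not in ("UK", "EEA"):
        issues.append(f"transfer outside the UK ({review.data_location}): safeguards needed")
    if not review.dpa_signed:
        issues.append("no signed DPA")
    return issues

# Illustrative review of a hypothetical vendor.
review = VendorTermsReview(
    vendor="ExampleAI Inc",
    trains_on_customer_data=True,
    retention_days=None,
    subprocessors_listed=True,
    data_location="US",
    dpa_signed=False,
)
print(flags(review))
```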
The EU AI Act dimension
The EU AI Act came into force in August 2024. For most UK founders, the immediate question is: does it apply to me?
If you are a UK business with no EU customers, no EU users, and no EU operations, the direct application is limited. But that describes fewer businesses than founders assume. The Act reaches providers and deployers outside the EU where an AI system's output is used in the EU, so if you have clients, suppliers, employees or website visitors there, your AI systems may fall within its scope to the extent they interact with those people.
The Act introduces a risk classification system for AI systems:
- Unacceptable risk — prohibited AI applications (social scoring, certain biometric surveillance)
- High risk — AI in critical infrastructure, employment decisions, credit scoring, law enforcement, and similar contexts — subject to strict obligations
- Limited risk — transparency obligations apply (e.g. AI chatbots must identify themselves as AI)
- Minimal risk — no specific obligations beyond general principles
Most small business AI use falls in the limited or minimal risk categories. But if you use AI in any employment-related decisions — including CV screening, performance assessment, or scheduling — you may be operating in the high-risk category without knowing it.
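As a rough mental model, the tiers behave like an ordered lookup from use case to obligation level. The mapping below is a simplified illustration only; actual classification depends on the Act's annexes and the specifics of your system, so treat every entry as an assumption to verify:

```python
# Simplified, illustrative mapping of AI use cases to EU AI Act risk tiers.
# Real classification depends on the Act's annexes and your specific system.
RISK_TIERS = {
    "social_scoring": "unacceptable",            # prohibited outright
    "cv_screening": "high",                      # employment decisions
    "credit_scoring": "high",
    "employee_performance_assessment": "high",
    "customer_chatbot": "limited",               # must identify itself as AI
    "email_drafting": "minimal",
}

def risk_tier(use_case: str) -> str:
    # Default to "minimal", but an unlisted use case still needs assessment.
    return RISK_TIERS.get(use_case, "minimal (verify against the Act)")

print(risk_tier("cv_screening"))   # "high" -- often a surprise to founders
```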
The 12-question compliance checklist
These are the twelve questions that determine whether your AI use is compliant. If you cannot answer yes to all of them, you have a gap that needs addressing.
- Have you identified every AI tool in your business that processes personal data?
- For each tool, have you documented what personal data is processed?
- For each processing activity, have you identified and documented a lawful basis?
- If relying on legitimate interests, have you completed and documented a balancing test?
- Does your privacy notice accurately describe AI-assisted processing?
- Does your privacy notice name or describe the AI vendors who receive personal data?
- Do you have a signed Data Processing Agreement with each AI vendor?
- Have you reviewed each AI vendor's terms for model training and data retention provisions?
- Have you assessed whether any AI vendor involves a cross-border data transfer and put appropriate safeguards in place?
- Do your client contracts permit you to pass their data to AI vendors?
- If any AI system makes or assists in decisions about individuals, have you assessed whether a DPIA is required?
- Is your Record of Processing Activities (ROPA) up to date and does it include AI-assisted processes?
Most founders, when they work through this list honestly, find they can answer yes to two or three questions. The rest are gaps. Each gap is a potential enforcement point.
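Worked through honestly, the checklist reduces to twelve yes/no answers and a gap count. A sketch with illustrative answers (two yeses, matching the pattern described above):

```python
# The twelve questions from the checklist above, answered honestly.
# Answers here are illustrative -- substitute your own.
checklist = {
    "AI tools inventoried": True,
    "Personal data documented per tool": False,
    "Lawful basis documented per activity": False,
    "Legitimate interests balancing test on file": False,
    "Privacy notice describes AI processing": False,
    "Privacy notice names AI vendors": False,
    "DPA signed with each vendor": False,
    "Vendor terms reviewed (training, retention)": True,
    "Cross-border transfers assessed and safeguarded": False,
    "Client contracts permit AI vendor use": False,
    "DPIA need assessed for decisions about individuals": False,
    "ROPA up to date, including AI processes": False,
}

gaps = [question for question, answered_yes in checklist.items() if not answered_yes]
print(f"{len(gaps)} of {len(checklist)} questions are gaps:")
for gap in gaps:
    print(f"  - {gap}")
```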
What to do next
The good news is that none of this is technically complex to fix. The documentation is not difficult to produce once you know what it needs to say. The vendor terms reviews follow a consistent pattern. The privacy notice updates are substantive but not lengthy.
The issue is that most founders do not know where to start, which gaps are most urgent, or what "good enough" looks like. That is where Lexl is useful.
A Lexl AI Compliance Audit will work through your specific tool stack, identify your specific gaps, and give you a prioritised remediation plan — in five business days, for a fixed fee. If you want us to produce the documentation as well, the Compliance Build tier covers the full output.
If you would rather start with a conversation, book a free 30-minute clarity session. No obligation. Just an honest assessment of where you stand.
Know where you stand
If reading this raised questions about your own AI tool stack, a free 30-minute clarity session will give you specific answers about your specific situation. No obligation, no invoice.