AI and UK GDPR: Practical Guidance
Practical guidance for entering workplace data into AI tools under UK GDPR. What you can and cannot do, how to assess risk, and how to document your approach.
The Core Question: Can I Put Work Data Into AI Tools?
This is the most common question teams ask when starting to use AI at work. The answer is: it depends on what data you're entering and which AI tool you're using.
Under UK GDPR, personal data is any information that can identify a living individual — names, email addresses, ID numbers, and even combinations of data that could identify someone indirectly. If you paste personal data into an AI tool, you are processing that data, and UK GDPR applies.
The key considerations are: what data you are entering (personal, sensitive, commercial, or generic), which tool you are using (free-tier tools often use your input for training; enterprise tiers typically do not), and what your lawful basis is for processing that data in this way.
Generic, non-personal data — such as asking an AI to draft a job description template or summarise publicly available guidance — carries very low GDPR risk. Entering a spreadsheet of employee names and performance ratings carries very high risk.
Free Tier vs Enterprise: Why It Matters
Most AI tools offer both free and paid tiers, and the data handling differs significantly between them.
Free-tier tools (such as the free version of ChatGPT) typically state in their terms that user inputs may be used to train future models. This means any data you enter could become part of the AI's training data and potentially surface in other users' outputs. For personal or commercially sensitive data, this is a serious GDPR concern.
Enterprise-tier tools (such as ChatGPT Enterprise, Microsoft Copilot for Microsoft 365, Google Gemini for Workspace) generally commit to not using customer data for training. They may also offer data residency guarantees (where your data is stored) and additional security controls.
Before using any AI tool with work data, check the tool's data processing terms. Look specifically for: whether inputs are used for training, where data is stored, how long it is retained, and whether you can request deletion. Document this assessment as part of your AI governance.
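One way to make that documentation habit concrete is to record each assessment in a structured form. A minimal sketch in Python, where the `ToolAssessment` record and its field names are illustrative assumptions rather than any standard template:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolAssessment:
    """Hypothetical record of an AI tool's data processing terms.

    Field names are illustrative; adapt them to your own governance log.
    """
    tool_name: str
    tier: str                        # e.g. "free" or "enterprise"
    inputs_used_for_training: bool   # per the tool's data processing terms
    storage_location: str            # data residency, e.g. "UK", "EU", "US"
    retention_period: str            # how long inputs are kept
    deletion_on_request: bool        # can you request deletion of inputs?
    assessed_on: date = field(default_factory=date.today)

    def suitable_for_personal_data(self) -> bool:
        # A simple gate: never approve a tool that trains on inputs
        # or cannot delete them on request.
        return not self.inputs_used_for_training and self.deletion_on_request

assessment = ToolAssessment(
    tool_name="ExampleAI",           # hypothetical tool name
    tier="enterprise",
    inputs_used_for_training=False,
    storage_location="UK",
    retention_period="30 days",
    deletion_on_request=True,
)
print(assessment.suitable_for_personal_data())
```

Passing this gate is a minimum bar, not an approval in itself: storage location and retention still need a human judgement call against your own risk appetite.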
Lawful Basis and Data Protection Impact Assessments
UK GDPR requires a lawful basis for any processing of personal data. The basis most commonly relied on for AI use in the workplace is legitimate interests (for general business efficiency), which requires a three-part assessment: identifying the purpose, showing the processing is necessary for it, and balancing it against the rights and interests of data subjects. Consent is harder to rely on in an employment context, because the ICO considers it unlikely to be freely given where there is an imbalance of power between employer and employee.
If your AI use involves personal data, a Data Protection Impact Assessment (DPIA) may be more than good practice: UK GDPR Article 35 requires one where processing is likely to result in a high risk to individuals, and ICO guidance lists the use of innovative technology, including AI, among the triggers. A DPIA doesn't need to be a lengthy legal document; it's a structured way to identify and mitigate risks. Our AI Risk Assessment Starter template provides a practical starting framework.
Key questions your assessment should answer:
- What personal data is involved?
- What is the lawful basis?
- What risks does this processing create for data subjects?
- What technical and organisational measures mitigate those risks?
- Is the AI tool's data processing agreement adequate?
For higher-risk processing — such as automated decision-making that affects individuals — UK GDPR Article 22 provides additional protections, including the right not to be subject to purely automated decisions with legal or significant effects.
Practical Rules for Your Team
Here are straightforward rules you can adopt immediately:
- Never enter personal data into free-tier AI tools. If you need to use AI with personal data, use an enterprise-tier tool with appropriate data processing terms.
- Anonymise before you paste. If you need AI help with a document containing personal data, remove or replace identifying information first.
- Check the tool's terms. Before adopting any new AI tool, read the data processing section. If inputs are used for training, it's unsuitable for personal or sensitive data.
- Document your approach. Keep a record of which AI tools are approved, what data may be entered into each, and who made that decision. Our AI Tool Approval Log template helps with this.
- Brief your team. Make sure everyone who uses AI tools understands the data rules. Include this in your AI acceptable use policy.
These rules don't require legal expertise to implement. They are common-sense measures that significantly reduce your GDPR exposure while still allowing your team to benefit from AI tools.