EU AI Act: What UK Organisations Need to Know

What the EU AI Act means for UK organisations that operate in the EU or supply EU customers. Key obligations, risk categories, and practical steps.

Policy · Advanced · 10 min read · Updated 21 April 2026

What Is the EU AI Act?

The EU AI Act is the world's first comprehensive AI regulation, adopted by the European Parliament in March 2024. It establishes a risk-based framework for regulating AI systems within the European Union.

The Act classifies AI systems into four risk categories: unacceptable risk (banned), high risk (subject to strict requirements), limited risk (transparency obligations), and minimal risk (no specific requirements). Most workplace AI tools fall into the limited or minimal risk categories.

Although the UK is no longer in the EU, the Act matters for UK organisations that provide AI systems or AI-enabled services to EU customers, operate in the EU, or are part of supply chains that include EU entities.

When Are UK Organisations Affected?

UK organisations may be caught by the EU AI Act if they:

  • Place AI systems on the EU market — if you develop or sell AI tools used by customers in the EU, the Act applies to those systems
  • Deploy AI systems in the EU — if you use AI tools to make decisions affecting people in the EU (even from a UK office), certain obligations may apply
  • Form part of an EU supply chain — if you provide components or data that feed into an AI system used in the EU

This is similar to how UK GDPR applies: even though the UK has its own data protection regime, UK organisations processing EU residents' data must comply with EU GDPR. The same extraterritorial logic applies to the AI Act.

If your organisation operates purely within the UK and does not serve EU customers or form part of EU supply chains, the Act does not directly apply — but watching its development is still worthwhile, as the UK government may adopt similar approaches.

The Risk Categories Explained

Unacceptable risk (banned): AI systems that manipulate behaviour, exploit vulnerabilities, enable social scoring by governments, or perform real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement).

High risk: AI systems used in areas like employment decisions, education, migration, and critical infrastructure. These must meet requirements including risk management, data governance, documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. Most workplace AI tools do not fall into this category unless they are used for recruitment, performance management, or similar high-impact decisions.

Limited risk: AI systems that interact with people (chatbots), generate content (generative AI), or use biometric data must meet transparency requirements. Users must be told they are interacting with AI or viewing AI-generated content.

Minimal risk: Most AI tools — such as spam filters, search algorithms, and general-purpose productivity assistants — have no specific obligations under the Act.
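The four tiers above amount to a simple triage order: check for banned practices first, then high-risk domains, then transparency triggers, and default to minimal. As a rough illustration only, that ordering can be sketched in Python; the tier names come from the Act, but the attribute names and keyword lists below are simplified assumptions, not legal criteria.

```python
# Illustrative sketch only: a rough triage of a described AI system into the
# Act's four risk tiers. The keyword sets are simplified examples drawn from
# the categories above, not an exhaustive or authoritative legal test.

BANNED_PRACTICES = {"social scoring", "behavioural manipulation"}
HIGH_RISK_DOMAINS = {"recruitment", "performance management", "education",
                     "migration", "critical infrastructure"}
TRANSPARENCY_TRIGGERS = {"chatbot", "generative content", "biometric data"}

def classify_risk(practices: set[str], domains: set[str],
                  features: set[str]) -> str:
    """Return a rough EU AI Act risk tier, checked in order of severity."""
    if practices & BANNED_PRACTICES:
        return "unacceptable"
    if domains & HIGH_RISK_DOMAINS:
        return "high"
    if features & TRANSPARENCY_TRIGGERS:
        return "limited"
    return "minimal"

# A customer-support chatbot with no high-risk use lands in the limited tier:
print(classify_risk(set(), set(), {"chatbot"}))  # limited
```

The order of the checks matters: a recruitment chatbot is high risk, not merely limited risk, because the more severe classification takes precedence.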

Practical Steps for UK Organisations

If your organisation may be affected, here are practical steps to take now:

  1. Map your AI use. Create an inventory of all AI systems you develop, deploy, or contribute to. Our AI Tool Approval Log template can help structure this.
  2. Assess which systems serve the EU. For each AI system, determine whether it is placed on the EU market or used to make decisions about EU-based individuals.
  3. Classify the risk level. Determine whether each affected system falls into the high, limited, or minimal risk category.
  4. Address compliance gaps. For high-risk systems, assess your position against the Act's requirements (risk management, documentation, human oversight, etc.). For limited-risk systems, ensure transparency obligations are met.
  5. Monitor UK developments. The UK government is developing its own approach to AI regulation. Stay informed through publications from the Department for Science, Innovation and Technology (DSIT).
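Steps 1 to 3 above boil down to keeping a structured inventory and filtering it for systems that carry obligations. A minimal sketch of that record-keeping, with field names that are purely illustrative assumptions:

```python
# Illustrative sketch: an AI inventory record (step 1), an EU-scope flag
# (step 2), a risk tier (step 3), and a filter for systems needing review.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    role: str         # e.g. "provider", "deployer", or "supplier"
    serves_eu: bool   # placed on the EU market, or affects EU individuals
    risk_tier: str    # "high", "limited", or "minimal"

def needs_compliance_review(record: AISystemRecord) -> bool:
    """Flag in-scope systems whose tier carries obligations under the Act."""
    return record.serves_eu and record.risk_tier in {"high", "limited"}

inventory = [
    AISystemRecord("CV screening tool", "deployer", True, "high"),
    AISystemRecord("Internal spam filter", "deployer", False, "minimal"),
]
flagged = [r.name for r in inventory if needs_compliance_review(r)]
print(flagged)  # ['CV screening tool']
```

Even a spreadsheet with these four columns covers the same ground; the point is that scope (EU exposure) and risk tier are recorded per system, so compliance gaps in step 4 can be identified system by system.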

For most UK SMEs using mainstream AI tools for internal productivity, the EU AI Act will have minimal practical impact. The organisations most affected are those developing or selling AI systems, particularly in high-risk domains.

Frequently Asked Questions