The EU AI Act: What UK Technology Companies Need to Know

The EU AI Act, which entered into force in August 2024 with phased compliance deadlines through 2027, is the world's first comprehensive AI regulation. UK technology companies might assume that post-Brexit, EU regulations are no longer their concern. That assumption is incorrect. If your AI system is used by customers in the EU, or if it processes data from EU residents, the AI Act applies to you.

The Act uses a risk-based classification system. Unacceptable-risk AI systems, such as social scoring and real-time biometric surveillance, are banned outright. High-risk AI systems, which include those used in employment, education, critical infrastructure, and law enforcement, face the most stringent requirements: risk management systems, data governance, technical documentation, transparency obligations, human oversight, and accuracy and robustness standards. Limited-risk systems require only transparency measures, while minimal-risk systems face no specific requirements.

The EU AI Act applies to any company that places AI systems on the EU market, regardless of where that company is based.

Most business AI applications fall into the limited or minimal risk categories. A chatbot that assists customers with product selection is limited risk, requiring only that users be informed they are interacting with an AI. An internal tool that summarises meeting notes is minimal risk. However, an AI system that screens job applicants or assesses creditworthiness is high risk and must comply with the full set of requirements.
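The tier examples above can be sketched as a simple lookup. This is a hypothetical illustration only, not a legal classification tool: the tier names and example use cases come from this article, and everything else (function names, the exact strings) is assumed.

```python
# Hypothetical sketch of the Act's four-tier risk model.
# Tiers and example use cases mirror the article; real classification
# requires legal analysis of the system's actual purpose and context.

RISK_TIERS = {
    "unacceptable": ["social scoring", "real-time biometric surveillance"],
    "high": ["job applicant screening", "creditworthiness assessment"],
    "limited": ["customer-facing chatbot"],
    "minimal": ["internal meeting-notes summariser"],
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known example use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    raise ValueError(f"Unknown use case: {use_case!r} - assess manually")

print(classify("customer-facing chatbot"))   # limited
print(classify("job applicant screening"))   # high
```

Anything not on the list deliberately raises an error rather than defaulting to a tier: an unclassified system should trigger a manual assessment, not a silent assumption of minimal risk.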

Implications for UK Companies

For UK companies, the practical implications are significant. If you provide AI-powered SaaS to EU customers, you need to classify your systems under the Act's risk framework. If any of your systems are high risk, you must establish a conformity assessment process, maintain technical documentation, implement a quality management system, and register in the EU database. The penalties for non-compliance are substantial: up to 35 million euros or 7% of global annual turnover, whichever is higher.
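The top penalty tier is a "whichever is higher" rule, which is worth making concrete: for large companies the 7% figure, not the fixed 35 million euros, is the binding cap. A minimal sketch (the function name is an assumption; turnover is taken to be in euros):

```python
# Maximum fine under the Act's top penalty tier:
# EUR 35 million or 7% of global annual turnover, whichever is higher.

def max_penalty(global_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

print(f"{max_penalty(1_000_000_000):,.0f}")  # 7% dominates at 1bn turnover
print(f"{max_penalty(100_000_000):,.0f}")    # fixed floor dominates at 100m
```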

The compliance timeline is staggered. Prohibited AI practices were banned from February 2025. Obligations for general-purpose AI models took effect in August 2025. High-risk system requirements apply from August 2026, with an extended deadline of August 2027 for high-risk AI embedded in products already covered by EU safety legislation. This gives UK companies time to prepare, but the preparation itself takes months. Classifying your AI systems, identifying gaps, implementing governance frameworks, and creating the required documentation cannot be done in a rush.

Key Takeaways

  • The EU AI Act applies to UK companies whose AI systems are used in the EU market
  • Classify all your AI systems under the Act's risk-based framework
  • High-risk systems require risk management, documentation, and conformity assessment
  • Penalties reach up to 35 million euros or 7% of global annual turnover, whichever is higher
  • Begin with a comprehensive AI system inventory and risk classification
  • Plan for the staggered compliance deadlines through August 2026

We recommend that UK technology companies begin with an AI system inventory: a complete catalogue of every AI and ML system you develop, deploy, or resell. For each system, assess the risk classification under the EU AI Act. Then prioritise compliance work based on the classification and the applicable deadline. This structured approach prevents the panic that comes from discovering compliance gaps close to the deadline.
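The inventory-then-prioritise approach can be sketched in a few lines. This is a hypothetical structure, not a compliance product: the field names and the month-level deadlines (taken from the article's timeline) are assumptions, and real deadlines should be checked against the Act itself.

```python
from dataclasses import dataclass
from datetime import date

# Month-level compliance deadlines from the article's timeline
# (approximate; verify exact dates against the Act).
DEADLINES = {
    "unacceptable": date(2025, 2, 1),   # prohibited practices banned
    "high": date(2026, 8, 1),           # high-risk requirements apply
    "limited": date(2026, 8, 1),        # transparency obligations (assumed)
    "minimal": None,                    # no specific requirements
}

@dataclass
class AISystem:
    name: str
    risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"

def prioritise(inventory: list[AISystem]) -> list[AISystem]:
    """Order systems with a deadline by that deadline, earliest first."""
    return sorted(
        (s for s in inventory if DEADLINES[s.risk_tier] is not None),
        key=lambda s: DEADLINES[s.risk_tier],
    )

systems = [
    AISystem("meeting summariser", "minimal"),
    AISystem("CV screening", "high"),
    AISystem("support chatbot", "limited"),
]
for s in prioritise(systems):
    print(s.name, DEADLINES[s.risk_tier])
```

Minimal-risk systems drop out of the prioritised list entirely, which mirrors the recommendation: spend the preparation months on the systems with binding deadlines.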

Want to Chat?

Contact our friendly team for quick and helpful answers.
