Why Many UK Firms Will Need to Comply with the EU AI Act
For a comprehensive, structured treatment of these obligations and how they translate into engineering artefacts, see our full guide: Compliance-as-Architecture: An Engineering Leader's Guide to the EU AI Act.
The United Kingdom is no longer a Member State of the European Union. It is therefore tempting for UK organisations to assume that the EU AI Act is a continental concern that is no longer relevant. That assumption carries significant risk.
For many UK firms, exposure under the EU AI Act is not determined by where they are incorporated, but by where their AI systems are used and where their effects are felt. The Act is structured to capture providers placing AI systems on the EU market and deployers whose systems produce outputs that are used within the EU.
By way of example, companies like OpenAI, Anthropic, and Google would be providers under the Act, as they develop AI systems and place them on the EU market under their own name or trademark.
Companies like Harvey AI and Perplexity.ai would be deployers under the Act, because they bring products to market built on models created by the providers (and, where they place those products on the market under their own name, they may qualify as providers in their own right). Interestingly, OpenAI would also qualify as a deployer, as ChatGPT is an end product used by consumers and businesses.
Extraterritorial reach in practice
A UK-based fintech offering AI-driven credit assessment to customers in France or Germany is likely within scope of the EU AI Act because its outputs directly affect individuals in the EU. The relevant question is not where the model runs; it is where the decision has effect.
Similarly, a UK SaaS provider supplying AI-enabled tooling to an EU financial institution may qualify as a provider placing an AI system on the EU market. Business-to-business status does not remove exposure.
Providers of high risk AI systems based outside the EU must also appoint an 'authorised representative' within the EU before placing their system on the market. This is a practical requirement that many UK firms will need to address.
Agentic architectures and classification
Much commentary assumes a single, bounded AI feature (eg, a deployed model that accepts some data and returns some data). In practice, many organisations are building multi-step agentic workflows with delegated task execution, tool invocation, and decision chains that span systems and jurisdictions. In such environments, it is not always obvious where the AI system begins and ends.
In an agentic architecture, compliance questions rarely attach cleanly to a single model; they attach to the design of the workflow. Risk often sits in orchestration rather than in the model itself.
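One pragmatic response is to make the workflow boundary explicit in the orchestration layer itself. The sketch below is illustrative only; every identifier (`Step`, `eu_exposed_steps`, the model and tool names) is hypothetical and not drawn from the Act or any particular framework. It tags each step of an agentic pipeline with the system it invokes and the jurisdiction where its output takes effect, so the scope question can at least be answered from the execution trace rather than guessed at after the fact:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    """One step in an agentic workflow (hypothetical schema)."""
    name: str                  # e.g. "score_credit"
    invokes: str               # model or tool invoked
    effect_jurisdiction: str   # ISO country code where the output takes effect

def eu_exposed_steps(workflow: list[Step]) -> list[Step]:
    """Return the steps whose outputs take effect in the EU.

    A trace like this does not answer the legal question, but it makes
    orchestration-level exposure visible rather than implicit.
    """
    eu = {"FR", "DE", "IE", "NL"}  # illustrative subset of Member States
    return [s for s in workflow if s.effect_jurisdiction in eu]

workflow = [
    Step("extract_income",  invokes="doc-parser-v2", effect_jurisdiction="GB"),
    Step("score_credit",    invokes="credit-llm",    effect_jurisdiction="DE"),
    Step("notify_customer", invokes="email-tool",    effect_jurisdiction="DE"),
]
```

Tagging effects rather than infrastructure mirrors the point above: what matters is not where the model runs, but where each step's decision lands.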
Governance as design constraint
The EU AI Act imposes graduated requirements for risk management, oversight, monitoring, and documentation, depending on the AI product's risk classification.
A 'high risk' system needs all of the above, while 'limited risk' systems are mainly subject to transparency obligations (such as disclosing that a user is interacting with an AI system), and 'minimal risk' systems have no specific, mandatory requirements.
Above the risk tiers, the Act also prohibits certain AI practices outright (eg, social scoring and certain forms of biometric categorisation) and imposes separate obligations on general-purpose AI models regardless of risk classification (see GPAI Obligations for Engineering Teams).
In high risk systems, the mandatory processes are not 'box ticking' exercises, but detailed, architecture-level decisions that are part of the product:
- For human oversight, AI workflows that pause at defined thresholds so a human can intervene or take the decision
- For monitoring, capturing performance data, monitoring risks, and reporting serious incidents to authorities
- For documentation, mandatory technical documentation describing intended purpose, system design, data characteristics, risk controls, testing, and compliance measures
- For risk management, documented lifecycle risk assessment and mitigation required before market placement and during operation
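To make the first two points concrete, the oversight and monitoring obligations can be sketched as gates in a decision pipeline. This is a minimal sketch under stated assumptions: the names (`Decision`, `decide`, `REVIEW_THRESHOLD`) and the threshold value are hypothetical, as the Act prescribes the existence of oversight and logging, not any particular number or schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical confidence gate below which a human must review the output.
REVIEW_THRESHOLD = 0.80

@dataclass
class Decision:
    subject_id: str
    outcome: str          # "approve" / "decline" / "pending_human_review"
    confidence: float
    audit_log: list = field(default_factory=list)

def decide(subject_id: str, score: float) -> Decision:
    """Toy credit decision with a human-oversight gate and an audit trail."""
    d = Decision(
        subject_id=subject_id,
        outcome="approve" if score >= 0.5 else "decline",
        confidence=score,
    )
    # Monitoring: every model decision is captured for later reporting.
    d.audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": "model_decision",
        "confidence": score,
    })
    if score < REVIEW_THRESHOLD:
        # Human oversight: low-confidence outputs pause for a person
        # to intervene or take the decision.
        d.outcome = "pending_human_review"
        d.audit_log.append({"event": "escalated_to_human"})
    return d
```

The design point is that the gate and the log live inside the decision path itself, which is why these controls are architecture-level choices rather than paperwork bolted on afterwards.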
For UK firms with EU exposure, governance cannot be retrofitted cheaply once systems are in production.
Implications for boards and Heads of AI
Exposure under the EU AI Act should be assessed structurally, not territorially. Cross-border AI strategy requires alignment between engineering, compliance, and commercial leadership. Architecture determines exposure as much as legal interpretation does.
With prohibited practice obligations (ie, the prohibited AI mentioned above) already in force since February 2025 and high risk system requirements applying from August 2026, the compliance window is narrowing.