Why the EU AI Act Applies to US AI Companies
For a comprehensive, structured treatment of the obligations discussed below and how they translate into engineering artefacts, see our full guide: Compliance-as-Architecture: An Engineering Leader's Guide to the EU AI Act.
The United States has no comprehensive federal AI regulation. It is therefore understandable that many US companies treat the EU AI Act as a European concern with no bearing on their operations. For a significant number of those companies, this assumption is wrong.
The EU AI Act determines scope not by where a company is incorporated, but by where its AI systems are placed on the market or put into service, and where the outputs of those systems are used. A US company whose AI-powered product is used by customers in the EU, or whose AI system produces outputs that affect individuals in the EU, is likely within scope.
This is not a theoretical edge case. It is the default position for any US technology company with international reach.
The extraterritorial principle
The EU AI Act applies to three categories of actor regardless of where they are established:
- Providers who place an AI system on the EU market or put it into service in the EU
- Deployers of AI systems who are established in the EU, or whose AI system outputs are used in the EU
- Importers and distributors who make AI systems available on the EU market
For US companies, the first two are the most relevant. If your company develops an AI product and it is accessible to EU users, or if an EU-based business uses your product, you are likely a provider placing a system on the EU market. If your company uses AI systems whose outputs affect individuals in the EU, you are likely a deployer.
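To make the role test concrete, here is a minimal sketch of that scope reasoning as a decision procedure. The function and parameter names are our own illustration, and real classification has edge cases (importers, distributors, research exemptions) that need legal review; treat it as a heuristic, not a determination.

```python
def likely_roles(develops_the_system: bool,
                 accessible_to_eu_users: bool,
                 outputs_affect_eu_individuals: bool,
                 uses_third_party_ai: bool) -> set[str]:
    """Rough heuristic mirroring the scope test described above.

    Illustrative only, not a legal determination: importer/distributor
    roles, exemptions, and borderline cases need counsel.
    """
    roles = set()
    # Building an AI product that reaches the EU market or affects EU individuals
    # points towards provider obligations.
    if develops_the_system and (accessible_to_eu_users or outputs_affect_eu_individuals):
        roles.add("provider")
    # Using someone else's AI system whose outputs affect EU individuals
    # points towards deployer obligations.
    if uses_third_party_ai and outputs_affect_eu_individuals:
        roles.add("deployer")
    return roles


# A US SaaS company that builds its own AI feature and sells to EU law firms:
print(likely_roles(True, True, True, False))   # {'provider'}
# A US company that embeds a vendor's model in internal tooling whose
# decisions affect EU candidates:
print(likely_roles(False, False, True, True))  # {'deployer'}
```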
The legal mechanism is deliberately similar to the GDPR's extraterritorial reach under Article 3. Companies that went through GDPR compliance will recognise the pattern: the regulation follows the data subject, not the data controller's jurisdiction.
How this applies to US companies in practice
The extraterritorial reach is not abstract. Consider the following scenarios, each of which is common among US technology companies.
LegalTech
A US LegalTech company offers AI-powered contract analysis and due diligence tools. EU law firms and in-house legal teams use the platform to review contracts involving EU counterparties, analyse regulatory filings, or conduct due diligence on EU targets in cross-border M&A transactions.
The AI system's outputs directly inform legal decisions affecting EU individuals and entities. The US company is placing an AI system on the EU market via its EU customer base. If the system assists with legal interpretation in areas such as immigration, asylum, or access to justice, it may also fall within the Act's high-risk classification under Annex III.
MedTech and HealthTech
A US MedTech company develops AI-driven diagnostic, triage, or clinical decision support tools. EU hospitals, clinics, or telemedicine platforms deploy the system to assist clinicians treating EU patients.
Health AI falls squarely within the Act's high-risk categories. AI systems intended to be used as safety components of medical devices, or that are themselves medical devices, fall under Annex I, Section A via existing EU product safety legislation. AI systems used to evaluate and classify emergency calls, to dispatch or prioritise emergency services, or to triage patients in emergency healthcare fall under Annex III, point 5.
The obligations for high-risk health AI are substantial: risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. For US MedTech companies already navigating FDA requirements, these are not unfamiliar concepts, but the specific requirements and evidence standards differ.
The forthcoming European Health Data Space (EHDS) regulation adds further complexity. It establishes rules for the secondary use of health data, including for AI training and development, that will interact with the AI Act's data governance requirements under Article 10.
FinTech
A US FinTech company provides AI-powered credit assessment, fraud detection, or algorithmic trading systems to EU financial institutions or to EU-resident consumers.
AI systems used for creditworthiness evaluation or credit scoring of individuals are explicitly listed as high-risk in Annex III, point 5(b). So are AI systems used for risk assessment and pricing in life and health insurance.
The EU financial services regulatory framework adds a further layer. Firms subject to MiFID II, DORA (the Digital Operational Resilience Act), or Solvency II already face supervisory expectations around algorithmic decision-making, operational resilience, and outsourcing. The AI Act creates an additional, distinct compliance obligation that does not replace these existing requirements but operates alongside them.
For US FinTech companies accustomed to a patchwork of state-level and federal regulations, the EU's approach is structurally different: a single, horizontal regulation that applies across all sectors, rather than sector-specific rules with varying thresholds.
The US regulatory vacuum
Understanding why the EU AI Act matters to US companies requires acknowledging the current US regulatory landscape.
As of early 2026, the United States has no comprehensive federal AI legislation. The NIST AI Risk Management Framework provides voluntary guidance but carries no legal obligation. State-level initiatives (such as Colorado's SB 205 on algorithmic discrimination) are emerging but remain fragmented and limited in scope.
The practical consequence is that the EU AI Act is, for many US companies, the most significant AI regulation they face. This mirrors the "Brussels Effect" observed with the GDPR: because compliance with the EU standard is required for EU market access, many global companies adopt it as their baseline, finding it more efficient to build one compliant system than to maintain separate standards for different jurisdictions.
Companies that design their AI systems to meet EU AI Act requirements will likely be well positioned for whatever federal US regulation eventually emerges. Companies that defer compliance until domestic legislation arrives may find themselves retrofitting controls into systems that were not designed for them.
The authorised representative requirement
Providers of high-risk AI systems who are established outside the EU must appoint an authorised representative within the EU before placing their system on the market. This is a specific, operational requirement, not an abstract obligation.
The authorised representative must be empowered to:
- Verify that the required conformity assessment and technical documentation have been produced
- Provide national competent authorities with documentation and information upon request
- Cooperate with authorities on any corrective action
For US companies, this means identifying and contracting with an EU-based entity that can serve in this role. It is a practical step that takes time to arrange and should not be left until the compliance deadline.
Agentic architectures complicate classification
Much of the discussion around the EU AI Act assumes a relatively simple AI deployment: a single model, a defined input, a defined output. In practice, many US companies are building multi-step agentic workflows where AI systems invoke tools, delegate tasks to sub-agents, and chain decisions across multiple models and services.
In these architectures, the "AI system" is not a single model. It is the orchestration layer, the decision chain, and the aggregate behaviour of the workflow. Risk classification becomes more complex because the risk may not reside in any individual component, but in how those components interact.
This has practical implications for compliance. A model that generates text may be minimal-risk in isolation. The same model, embedded in an agentic workflow that uses its output to make a credit decision, becomes part of a high-risk system. Classification follows the use, not the capability.
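A minimal sketch of that distinction, with hypothetical names not drawn from the Act or any library: the same text-generation component ends up in different risk tiers depending on the workflow whose output it feeds.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


@dataclass
class Component:
    name: str
    standalone_tier: RiskTier


@dataclass
class Workflow:
    """An agentic workflow: an orchestrated chain of components plus the
    purpose its aggregate output serves."""
    components: list[Component]
    intended_uses: list[str]  # e.g. "credit_scoring", "marketing_copy"


# Illustrative subset of Annex III areas; the real list is longer and
# classification ultimately needs legal review, not a lookup table.
ANNEX_III_AREAS = {"credit_scoring", "employment", "essential_services", "migration"}


def classify(workflow: Workflow) -> RiskTier:
    """Classification follows the use of the whole workflow, not the
    capability of any single component."""
    if any(use in ANNEX_III_AREAS for use in workflow.intended_uses):
        return RiskTier.HIGH
    return max((c.standalone_tier for c in workflow.components),
               key=lambda tier: list(RiskTier).index(tier))


text_model = Component("general-text-model", RiskTier.MINIMAL)

drafting = Workflow([text_model], intended_uses=["marketing_copy"])
credit = Workflow([text_model], intended_uses=["credit_scoring"])

print(classify(drafting))  # RiskTier.MINIMAL -- same model, low-stakes use
print(classify(credit))    # RiskTier.HIGH -- same model, Annex III use
```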
For a deeper treatment of how agentic architectures interact with the Act's requirements, see our engineering guide to EU AI Act compliance.
Risk classification and what it means
The EU AI Act establishes a tiered risk framework:
- Prohibited practices: Certain AI applications are banned outright (social scoring, certain biometric categorisation, emotion recognition in workplaces and schools, and others). These prohibitions have been in force since February 2025.
- High-risk systems: AI systems in areas listed in Annex III (including employment, credit scoring, access to essential services, law enforcement, and migration) face the full set of obligations: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, and robustness.
- Limited-risk systems: Systems that interact with individuals (such as chatbots) must disclose that the user is interacting with an AI system.
- General-purpose AI models: Models that can be used across a wide range of tasks face transparency and documentation obligations, with additional requirements for models posing systemic risks. These obligations apply from August 2025. For a detailed engineering breakdown, see GPAI Obligations for Engineering Teams.
For most US companies building AI-powered products in regulated industries, the high-risk category is the relevant one. The obligations are not cosmetic; they require architecture-level decisions about how systems are designed, tested, monitored, and documented.
We explored these obligations in engineering terms in our full compliance guide, including the specific requirements of Articles 9 (risk management), 10 (data governance), 11 (technical documentation), 12 (record-keeping), 13 (transparency), 14 (human oversight), and 15 (accuracy, robustness, and cybersecurity).
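As a small illustration of what such evidence can look like in engineering terms, the sketch below logs one structured event per decision made by a high-risk system. The field names and the `log_inference_event` helper are hypothetical, our own illustration of the direction Article 12 points in, not a format the Act prescribes.

```python
import json
import uuid
from datetime import datetime, timezone


def log_inference_event(*, system_id: str, model_version: str,
                        input_ref: str, output_ref: str,
                        human_reviewer: str | None = None) -> dict:
    """Append one traceable record per decision made by a high-risk AI system.

    A hypothetical sketch of Article 12-style record-keeping: enough to
    reconstruct what the system did, with which model version, and whether
    a human was in the loop. References to inputs and outputs are stored
    rather than raw personal data, in line with data-minimisation.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,            # pointer to the stored input, not the data itself
        "output_ref": output_ref,          # pointer to the stored output / decision
        "human_reviewer": human_reviewer,  # None if no human oversight step ran
    }
    with open("audit_log.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")
    return event


log_inference_event(system_id="credit-scoring-v2",
                    model_version="2026-01-15",
                    input_ref="s3://inputs/abc123",
                    output_ref="s3://outputs/abc123",
                    human_reviewer="analyst-42")
```

In a production system these records would flow to an append-only store with defined retention, but the shape of the evidence is the point: traceable, per-decision, and tied to a specific model version.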
The compliance timeline
The AI Act's obligations are phased:
- February 2025: Prohibitions on unacceptable-risk AI practices in force
- August 2025: Obligations for general-purpose AI models apply
- August 2026: Full obligations for high-risk AI systems under Annex III apply, including conformity assessment, CE marking, and registration in the EU database (high-risk systems covered by Annex I product legislation have a later deadline)
August 2026 is not far away. For US companies that need to assess exposure, classify their systems, appoint an authorised representative, redesign engineering processes, and produce the required documentation, the compliance window is already tight.
What this means practically
For US companies building or deploying AI products with EU exposure, the questions are structural:
- Which of your AI systems are accessible to, or produce outputs affecting, individuals in the EU?
- Under the Act's risk classification, which of those systems are high-risk?
- Are you currently the provider, the deployer, or both?
- Does your engineering process produce the evidence that the Act requires?
- Do you have, or need, an authorised representative in the EU?
These are not questions that can be resolved by legal counsel alone. They require alignment between engineering, compliance, and commercial leadership, because the answers depend as much on system architecture as on legal interpretation.
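One practical way to force that alignment is a shared AI system inventory that records the answer to each of these questions per system. The sketch below is a minimal, hypothetical version; the field names and the derived actions are our own illustration, not terms or obligations copied from the Act.

```python
from dataclasses import dataclass, field
from enum import Enum


class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    BOTH = "both"


@dataclass
class AISystemRecord:
    """One row in an internal inventory used to assess EU AI Act exposure."""
    name: str
    eu_exposure: bool                  # accessible to, or outputs affecting, EU individuals?
    high_risk: bool                    # falls within an Annex III area?
    role: Role                         # provider, deployer, or both for this system
    evidence_artefacts: list[str] = field(default_factory=list)  # docs engineering already produces
    authorised_rep: str | None = None  # EU authorised representative, if appointed


def open_actions(record: AISystemRecord) -> list[str]:
    """Derive outstanding compliance actions from one inventory record."""
    actions = []
    if record.eu_exposure and record.high_risk:
        if not record.evidence_artefacts:
            actions.append("produce technical documentation and risk-management evidence")
        if record.role in (Role.PROVIDER, Role.BOTH) and record.authorised_rep is None:
            actions.append("appoint an EU authorised representative")
    return actions


record = AISystemRecord(name="contract-review-agent", eu_exposure=True,
                        high_risk=True, role=Role.PROVIDER)
print(open_actions(record))
# ['produce technical documentation and risk-management evidence',
#  'appoint an EU authorised representative']
```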
We have explored this in detail for UK companies, and the structural analysis is similar. For US companies, the distance from Brussels may feel greater, but the regulatory reach is the same.
We work with organisations across the UK, EU, and internationally on AI Act compliance architecture. If your team needs to assess exposure or design compliant systems, we would welcome the conversation.