
Regulated Industries Can't Afford Generic AI Leadership

Every fractional Head of AI service claims to work across industries, and most of them can.

The question is whether they should.

In regulated industries, the consequences of wrong AI decisions are materially different from those in unregulated sectors:

A SaaS company that deploys a poorly governed AI feature loses customers and accumulates technical debt. By contrast:

  • A law firm that deploys a poorly governed AI research assistant faces professional conduct investigations.
  • An FCA-regulated fintech that deploys an agentic fraud detection system without adequate decision reconstruction faces enforcement action.
  • A MedTech company that deploys a diagnostic triage tool without proper classification faces regulatory proceedings under both the AI Act and the Medical Devices Regulation simultaneously.

The common thread is not that AI is risky. It is that regulated industries operate under sector-specific regulatory regimes that interact with the EU AI Act in ways that generic AI leadership cannot anticipate. The six categories of architectural decisions that now carry regulatory weight become even more consequential when sector-specific obligations are layered on top.

Why sector-specific expertise matters more than AI expertise alone

The standard fractional AI leader brings model expertise, architecture knowledge, and strategic capability. These are necessary. They are also insufficient when the AI system operates in a domain where the consequences of failure are governed by professional standards, financial regulations, or medical device law.

The problem is not that generic AI leaders make bad technical decisions, but that they optimise for the wrong objectives: performance, cost, and capability.

In regulated industries, the correct objective is defensibility: the ability to demonstrate to a regulator, a professional body, or a court that the system was designed, built, and operated in a way that satisfies the applicable legal requirements.

Defensibility is not a feature but an architectural property that must be designed in from the beginning, and it requires understanding not just the EU AI Act but the sector-specific regulations that interact with it.

The following three scenarios illustrate this concretely.

Scenario 1: A law firm deploys an AI research assistant

A mid-size law firm wants to deploy an AI-powered legal research assistant. The system uses a foundation model for natural language understanding, a RAG (retrieval-augmented generation) pipeline over the firm's knowledge base, and a citation verification layer that checks references against primary sources.

The regulatory landscape

This system sits at the intersection of multiple regulatory regimes.

The EU AI Act classifies AI systems used in the "administration of justice and democratic processes" as high-risk under Annex III. A legal research assistant that influences the advice given to clients may fall within this classification. If it does, the full suite of provider obligations applies: conformity assessment, risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity.

The SRA Standards and Regulations impose personal professional liability on the supervising solicitor for all advice given to clients, regardless of whether AI was involved in producing it. The solicitor must be able to demonstrate that they exercised proper oversight of the AI-assisted output.

Legal professional privilege applies to the firm's knowledge base. A RAG pipeline that processes privileged documents raises questions about whether the AI system (and by extension, the model provider) has been granted access to privileged material. If the model provider retains any data from the RAG queries, privilege may be compromised.

GDPR applies to any client data processed in the RAG pipeline. If the knowledge base contains personal data (as it inevitably will), the firm needs a lawful basis for processing that data through the AI system, and must satisfy the data minimisation principle.

What a generic AI leader does

Optimises for accuracy and cost. Selects the best foundation model. Builds a RAG pipeline with high retrieval precision. Measures hallucination rates. Implements citation checking. Ships the tool to fee earners with a brief training session.

What a regulated-industry AI leader does

Optimises for traceability, attribution, and defensibility under SRA Standards.

Ensures every AI-assisted output can be traced to the specific source documents that informed it. Builds an audit trail showing which model version, which context documents, and which prompt produced each output.

Evaluates whether the system's classification under Annex III triggers conformity assessment. Ensures client data in the RAG pipeline complies with GDPR data processing requirements.

Assesses whether the RAG architecture creates legal professional privilege issues. Designs the human oversight mechanism so that the supervising solicitor can meaningfully review the AI-assisted output, not just rubber-stamp it.

The consequence of the gap

The firm ships a tool that works well technically. A client receives advice that is partially based on AI-assisted research. The client is unhappy with the outcome and files a complaint with the SRA.

The SRA asks the supervising solicitor to demonstrate how they exercised oversight of the AI-assisted research. The solicitor cannot, because the system does not log which sources informed which outputs, what the model's confidence level was, or what the solicitor actually reviewed before signing off.

This is a compliance risk that vibe-coded legal AI tools make worse, and one the generic AI leader never saw coming: they optimised for accuracy instead of defensibility under the SRA Standards.

Scenario 2: An FCA-regulated fintech deploys agentic fraud detection

A payments company regulated by the FCA wants to deploy an agentic fraud detection system. The system monitors transaction patterns, flags suspicious activity, and in some cases autonomously blocks transactions pending human review. It uses multiple models in a chain: an anomaly detection model identifies suspicious patterns, a reasoning model assesses the context, and an action model determines whether to block, flag, or allow the transaction.

The regulatory landscape

The EU AI Act's classification here demands careful analysis. Annex III lists AI systems used to evaluate the creditworthiness of natural persons as high-risk, but explicitly exempts AI systems used to detect financial fraud. Whether this system falls within the high-risk category therefore depends on precisely what it does beyond fraud detection; if it does, the full provider obligations apply.

The FCA expects firms using algorithmic decision-making to be able to explain individual decisions to affected customers. The Consumer Duty, which came into force in July 2023, requires firms to act to deliver good outcomes for retail customers, including in their use of automated systems.

PSD2 and the Payment Services Regulations impose specific requirements on payment service providers regarding the blocking of payment transactions, including the obligation to inform the payer of the reasons for blocking.

What a generic AI leader does

Builds for speed and accuracy. Optimises false positive and false negative rates. Ships an agentic system that autonomously escalates and blocks suspicious transactions. Measures the system's performance by its detection rate and false positive rate.

What a regulated-industry AI leader does

Builds for explainability, audit trails, and FCA defensibility.

Ensures every autonomous blocking decision can be explained to the customer in plain language, as the Consumer Duty requires.

Designs the decision chain so it can be reconstructed for the regulator: which model flagged the transaction, what contextual factors the reasoning model considered, and what threshold the action model applied.

Implements human oversight escalation points calibrated to Consumer Duty obligations, ensuring that high-impact decisions (blocking large transactions, freezing accounts) always involve human review.

Evaluates Annex III classification and implements conformity assessment if required. Documents the system's autonomous decision-making policies in a form that can be demonstrated to the FCA.
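The decision-chain reconstruction described above can be sketched as an append-only trail with one logged step per model in the chain. Everything here is illustrative rather than a real fraud-detection API: the stage names, the `DecisionTrail` class, and the dict payloads are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ChainStep:
    """One link in the decision chain, logged before the next stage runs."""
    txn_id: str
    stage: str       # e.g. "anomaly", "reasoning", or "action"
    inputs: dict     # what this stage was given
    output: dict     # what it concluded, including any threshold applied
    logged_at: str


class DecisionTrail:
    """Append-only trail: a blocked transaction can be replayed step by step."""

    def __init__(self):
        self._steps: list[ChainStep] = []

    def log(self, txn_id: str, stage: str, inputs: dict, output: dict) -> None:
        self._steps.append(ChainStep(
            txn_id=txn_id, stage=stage, inputs=inputs, output=output,
            logged_at=datetime.now(timezone.utc).isoformat(),
        ))

    def reconstruct(self, txn_id: str) -> list:
        """Return the full ordered chain for one transaction, for the regulator."""
        return [s for s in self._steps if s.txn_id == txn_id]
```

The point of logging intermediate inputs and outputs, not just the final verdict, is that the reasoning connecting the stages is exactly what goes missing in the failure described below.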

The consequence of the gap

The system blocks a legitimate transaction for a customer making a large purchase abroad. The customer complains. The FCA asks the firm to explain the automated decision that led to the block. The firm cannot, because the agentic system's multi-step reasoning was not logged at the granularity needed for regulatory reconstruction. The anomaly detection model flagged the transaction, the reasoning model assessed the context, and the action model blocked it, but nobody recorded the intermediate reasoning that connected these steps.

This is a Consumer Duty breach and a potential PSD2 violation. The generic AI leader built a system that detects fraud effectively. They did not build a system that can explain its decisions to regulators.

Scenario 3: A MedTech company deploys a diagnostic triage tool

A medtech company wants to deploy an AI-powered diagnostic triage tool. The system takes patient-reported symptoms, medical history, and vital signs as input, and produces a triage recommendation: emergency, urgent, routine, or self-care. It uses a fine-tuned model trained on clinical data, integrated with the company's electronic health record system.

The regulatory landscape

The EU AI Act almost certainly classifies this as high-risk: Annex III explicitly lists emergency healthcare patient triage systems among its high-risk use cases.

The Medical Devices Regulation (MDR) applies if the system qualifies as a medical device. Software that is intended by the manufacturer to be used for diagnostic purposes is a medical device under MDR Article 2. A triage tool that influences clinical decisions likely meets this definition. If it does, the company must comply with both the AI Act and the MDR, which have overlapping but not identical requirements.

GDPR's special category provisions apply to health data processing. The system processes sensitive personal data (health information), which requires explicit consent or another specific legal basis under Article 9.

In the UK, clinical safety standards DCB 0129 (manufacturer) and DCB 0160 (deployer) apply to health IT systems, requiring clinical risk management throughout the system's lifecycle.

What a generic AI leader does

Builds for diagnostic accuracy. Selects the best model. Fine-tunes on clinical data. Measures sensitivity and specificity. Ships with disclaimers that the system provides guidance, not diagnosis.

What a regulated-industry AI leader does

Navigates the dual regulatory regime (AI Act and MDR) simultaneously. Recognises that conformity assessment requirements differ between the two and plans for both.

Ensures clinical data processing complies with GDPR special category requirements. Implements a clinical safety case following DCB 0129/0160.

Builds an evaluation framework that satisfies both AI Act conformity assessment and MDR clinical evaluation requirements. Ensures the system's outputs are framed as decision support rather than autonomous diagnosis, to manage liability and classification.

Designs the system so that clinical professionals can meaningfully override the AI's recommendations, not just acknowledge them.

The consequence of the gap

The system is deployed without an MDR classification analysis. A regulatory review determines it qualifies as an unregistered medical device. The company faces enforcement action under both MDR and the AI Act simultaneously. The Head of AI did not know to ask the question, because their experience was in AI systems, not medical devices.

The structural problem

Each scenario demonstrates the same pattern: generic AI leadership optimises for the wrong objective in regulated industries. The right objective is not accuracy or cost efficiency. It is defensibility under the specific regulatory regimes that apply to the sector, the system architecture, and the use case.

The solicitor who cannot demonstrate oversight. The fintech that cannot explain a blocking decision. The medtech company that did not know to ask whether its product was a medical device. In each case, the AI worked. The problem was that nobody with the right combination of engineering depth and regulatory awareness was making the architectural decisions.

This is the combination we built the Fractional Head of AI service around: AI leadership that understands both how the system works and what obligations that architecture creates.