What to Look for in a Fractional Head of AI (and What to Avoid)
The fractional Head of AI market has grown rapidly, and the options look similar from the outside. Most services promise AI strategy, roadmapping, and leadership on a flexible basis. The pricing is comparable. The deliverables sound the same.
The differences are structural, and they determine whether the engagement produces outcomes or produces documents.
Two fundamentally different models
The fractional AI leadership market has consolidated around two models that look similar in a pitch deck but operate completely differently in practice.
The broker model
A firm maintains a roster of AI professionals and assigns one to your company.
The firm handles sales, account management, and quality assurance.
Your fractional Head of AI is an employee or contractor of the firm, deployed to you on a part-time basis, and the firm's value proposition is "access to a vetted pool of AI talent on flexible terms".
The practitioner model
You engage directly with a specific person who has a specific track record.
There is no intermediary layer between you and the person making decisions.
They have built and shipped AI systems in production, and are accountable for outcomes, not just deliverables.
Both models have a place. The broker model works well for companies at the very beginning of their AI journey that need general guidance: where to start, which use cases to prioritise, how to build internal AI literacy.
If the company has no AI in production and no immediate plans to build anything complex, a generalist from a vetted roster can provide valuable direction.
The practitioner model is what companies need when the decisions have consequences:
- When the architecture choices determine whether the system works at scale.
- When the build-versus-buy decision affects the product roadmap for the next two years.
- When a wrong model selection wastes six months of engineering effort.
- When the company operates in a regulated sector and the AI system needs to be defensible, not just functional.
The signs that strategy advice is not enough
Several patterns indicate that a company needs a practitioner, not a strategist:
AI initiatives that stall between proof-of-concept and production
The demo worked. The stakeholders were impressed. But six months later, the feature still has not shipped.
This is almost never a technology problem. It is a decision-making problem: nobody with sufficient technical authority is making the hard calls about architecture, scope, and trade-offs that move a POC into production.
Engineering teams building features that do not move commercial metrics
The AI team is shipping. But the features do not connect to revenue, retention, or operational efficiency in measurable ways.
This happens when AI development is disconnected from commercial strategy, typically because the person setting the AI direction does not have enough business context, or the person with the business context does not have enough technical depth to evaluate what engineering is building.
Vendor relationships that produce invoices but not outcomes
The company is paying for AI tools, platforms, or consulting engagements, but the value is unclear.
Nobody has the technical depth to evaluate whether the vendor's solution is genuinely the best option, or whether a simpler (or more custom) approach would deliver better results at lower cost.
The vendor knows this and prices accordingly.
A CTO who is stretched too thin to give AI the attention it needs
The CTO is managing infrastructure, security, hiring, product development, and board reporting. AI is one of fifteen priorities. The result is that AI decisions are made reactively (responding to vendor pitches, team requests, or competitor moves) rather than proactively (based on a deliberate architectural strategy).
The builder test
When evaluating a fractional Head of AI, whether from a broker firm or as an independent practitioner, these questions quickly reveal whether you are getting a strategist or a decision-maker:
Have they personally built and shipped AI features in production?
Not managed a team that built them. Not advised a company that built them. Personally been responsible for the architecture and delivery of AI systems that are running in production today.
This is the single most important question. If the answer is no, every other qualification is secondary.
Can they make architectural decisions, not just recommend them?
A roadmap that says "implement RAG for customer support" is the easy part.
The hard part is the set of architectural decisions behind it: which embedding model to use, how to chunk documents for retrieval, whether to use a vector database or a hybrid search approach, how to handle multi-turn context, and how to evaluate retrieval quality in production.
A strategist produces the roadmap; a practitioner makes the decisions that determine whether the feature works.
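To make the last of those decisions concrete: retrieval quality can be measured rather than asserted. The sketch below is illustrative Python with hypothetical names, not a prescribed method; it computes recall@k and mean reciprocal rank over a labelled query set, so a chunking or embedding change can be gated on evidence rather than opinion.

```python
# A minimal sketch (hypothetical names, stdlib only) of measuring retrieval
# quality offline: recall@k and mean reciprocal rank over labelled queries.

def evaluate_retrieval(labelled_queries, retrieve, k=5):
    """labelled_queries: list of (query, relevant_doc_id) pairs.
    retrieve: function mapping a query string to a ranked list of doc ids."""
    hits = 0
    reciprocal_ranks = []
    for query, relevant_id in labelled_queries:
        ranked = retrieve(query)[:k]
        if relevant_id in ranked:
            hits += 1
            reciprocal_ranks.append(1.0 / (ranked.index(relevant_id) + 1))
        else:
            reciprocal_ranks.append(0.0)
    n = len(labelled_queries)
    return hits / n, sum(reciprocal_ranks) / n  # recall@k, MRR

# Gate a chunking or embedding change on measured quality, not opinion:
# recall, mrr = evaluate_retrieval(queries, my_retriever, k=5)
```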
Can they evaluate your existing architecture and tell you what is wrong with it?
Not in the abstract ("you should consider a vector database") but specifically ("your chunking strategy is producing poor retrieval because your documents have inconsistent heading structures, and your embedding model is not optimised for the domain vocabulary your users actually use").
This requires hands-on experience, not theoretical knowledge.
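A diagnosis that specific points to a fix that specific. As an illustration only (assuming Markdown-style headings; the names are hypothetical), structure-aware chunking is the kind of change that comment leads to:

```python
import re

# Illustrative sketch: split on document headings instead of a fixed
# character window, and prepend each chunk's heading so the embedding
# carries section context. Assumes Markdown-style '#' headings.

HEADING = re.compile(r"^#{1,6}\s+(.*)$", re.MULTILINE)

def chunk_by_heading(text, max_chars=1500):
    chunks, last_end, heading = [], 0, ""
    for match in HEADING.finditer(text):
        body = text[last_end:match.start()].strip()
        if body:
            chunks.append(f"{heading}\n{body}".strip()[:max_chars])
        heading, last_end = match.group(1), match.end()
    tail = text[last_end:].strip()
    if tail:
        chunks.append(f"{heading}\n{tail}".strip()[:max_chars])
    return chunks
```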
Can they evaluate vendors at the architectural level, not just the feature level?
A generalist compares vendor proposals on features, pricing, and references. A practitioner opens the technical documentation, evaluates the API design, assesses the data residency implications, and determines whether the vendor's architecture is compatible with the company's infrastructure and scaling requirements.
They know what questions to ask because they have built similar systems themselves.
Do they understand the difference between a demo and a production system?
POCs are easy. Production is hard. The gap between a demo that works in a controlled environment and a system that handles edge cases, scales under load, degrades gracefully, and produces consistent results is where most AI initiatives fail. A practitioner has navigated this gap. A strategist has not.
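One small illustration of that gap, sketched in Python with hypothetical names: a demo calls the model once and hopes; a production system bounds the call, retries transient failures, and degrades gracefully instead of showing the user a stack trace.

```python
import time

# Illustrative sketch of graceful degradation around a model call.
# model_call and the fallback message are stand-ins, not a real API.

def call_with_fallback(model_call, prompt, retries=2,
                       fallback="Sorry - I can't answer that right now."):
    for attempt in range(retries + 1):
        try:
            return model_call(prompt)
        except (TimeoutError, ConnectionError):
            time.sleep(2 ** attempt)  # back off before retrying
    return fallback  # degrade gracefully rather than fail loudly
```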
If you operate in a regulated industry, do they understand the regulatory landscape?
For companies with exposure to the EU AI Act, the six categories of architectural decisions that now carry regulatory weight mean the Head of AI needs dual literacy in engineering and regulation.
For companies in legal, financial services, or healthcare, the challenge compounds further because sector-specific regulators layer additional obligations on top.
For UK-only companies in unregulated sectors, this is less critical, but an awareness of the emerging UK AI governance framework and data protection implications is still valuable.
What to ask before you engage
Whether you are evaluating us or another provider, these questions protect you from engaging a service that cannot deliver what you need:
- Will I work directly with the person who makes decisions for my AI function, or will there be an intermediary layer?
- Has this person built and shipped AI systems in production, or is their experience advisory?
- What is the first deliverable, and is it an assessment of my current state or a forward-looking strategy deck?
- Can they evaluate my existing architecture, or only recommend new initiatives?
- If I operate in a regulated sector, do they understand the specific regulatory regime that applies?
The answers will tell you whether you are getting a practitioner or a broker.
Both have their place. But if the decisions matter, you need the practitioner.
At Systima, when you engage the Fractional Head of AI service, you work directly with us.
There is no roster. There is no broker layer.
There is no account manager sitting between you and the person making decisions for your AI function.
The engagement starts with an architecture assessment, not a readiness questionnaire; the first deliverable is an honest evaluation of where the company stands, not a forward-looking strategy deck.