Compliance-as-Architecture: An Engineering Leader's Guide to the EU AI Act
The EU AI Act is an operational maturity test. For high-risk AI systems, the majority of obligations are engineering obligations. Compliance is architecture, not paperwork.
Insights on agentic AI, governance, and building production-grade systems for regulated industries.
A .npmignore oversight shipped Anthropic's entire Claude Code source to the public npm registry. The leaked codebase reveals engineering practices that should inform how regulated teams assess their AI toolchain dependencies.
The fractional AI leadership market has two models: brokers who deploy generalists from a roster, and practitioners who make architectural decisions. Six questions reveal which you are getting.
Generic AI leadership optimises for the wrong objective in regulated industries. Three sector-specific scenarios show why defensibility, not accuracy, is the correct architectural goal.
The EU AI Act attaches obligations to architectural decisions that were previously treated as purely technical. Some are precisely defined. Others are ambiguous and unresolved. A Head of AI needs to know the difference.
A leaked spreadsheet revealed that GRC platform Delve generated 494 near-identical SOC 2 reports with pre-written auditor conclusions. The EU AI Act's conformity assessment framework was designed to make this structurally impossible.
An open-source static analysis tool that scans your codebase for AI framework usage, validates risk classifications against the EU AI Act, and reports findings directly in pull requests. Think Snyk for AI regulation.
The barrier to building bespoke legal AI has collapsed. The EU AI Act's compliance obligations have not. Every vibe-coded tool is potentially an AI system, and every builder is potentially a provider.
The Commission's Omnibus proposal could delay high-risk AI Act obligations by up to 18 months. But it is still just a proposal. Engineering teams face a genuine strategic dilemma: plan for the original deadline or the extended one?
We tested XML, Markdown, and JSON delimiters across four frontier LLMs with 600 model calls. For three of the four models, delimiter format does not matter. For MiniMax M2.5, Markdown delimiters introduce a measurable prompt-injection vulnerability.