Compliance-as-Architecture: An Engineering Leader's Guide to the EU AI Act
The EU AI Act is an operational maturity test. For high-risk AI systems, the majority of obligations are engineering obligations. Compliance is architecture, not paperwork.
Insights on agentic AI, governance, and building production-grade systems for regulated industries.
The GPAI chapter of the EU AI Act creates obligations that flow from foundation model providers to downstream engineering teams. Understanding the chain of responsibility is essential.
The EU AI Act determines scope by market reach, not place of incorporation. If your system's output is used in the EU, the Act can apply to you.
We built an open-source audit logging library for the EU AI Act. This post explains what it does, what it deliberately does not do, and why the gap between those two things matters more than the code.
Article 12 is the most technically demanding obligation in the EU AI Act. Given a decision ID from six months ago, can you reconstruct the full chain of inputs, model versions, and outputs that produced it?
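The reconstruction question above can be made concrete. Below is a minimal sketch of what "given a decision ID, replay the chain" looks like against an append-only event log; every name, field, and record here is hypothetical, and a production Article 12 log would be written to durable, tamper-evident storage at inference time rather than held in memory.

```python
# Hypothetical append-only audit log: one event per stage of a decision.
# All identifiers and fields below are illustrative, not a real schema.
EVENTS = [
    {"decision_id": "d-42", "kind": "input",
     "payload": {"features_hash": "sha256:ab12"}},
    {"decision_id": "d-42", "kind": "model",
     "payload": {"model": "risk-scorer", "version": "1.4.2"}},
    {"decision_id": "d-42", "kind": "output",
     "payload": {"score": 0.87, "threshold": 0.8}},
    {"decision_id": "d-42", "kind": "review",
     "payload": {"reviewer": "ops-team", "action": "approved"}},
]

def reconstruct(decision_id: str) -> list[dict]:
    """Return every logged event for a decision, in write order.

    Raises KeyError when no trail exists -- a gap in the log is a
    finding, not a silent empty result.
    """
    chain = [e for e in EVENTS if e["decision_id"] == decision_id]
    if not chain:
        raise KeyError(f"no audit trail for {decision_id}")
    return chain
```

The design point is the failure mode: if the lookup returns nothing, the system should surface that loudly, because an unanswerable decision ID is exactly the gap Article 12 is meant to close.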
Most engineering teams treat deployment as the finish line. Article 72 makes it a starting point. Post-market monitoring is a legal obligation, not a nice-to-have.
Article 10 requires testing for bias across protected characteristics. GDPR Article 9 restricts processing the very data you need. This tension has no clean resolution.
Article 14 does not require that every output be manually reviewed. It requires that meaningful human control exists where risk justifies it.
The EU AI Act does not primarily demand new policies. It demands evidence that risk is being managed in how AI systems are built, tested, deployed, and monitored.