AI Governance · Regulated Industries

Delve Faked 494 Compliance Reports. The EU AI Act Was Designed to Prevent Exactly This.

In late 2025, a publicly accessible Google Sheet exposed the audit report generation pipeline of Delve, a Y Combinator-backed GRC automation platform that had raised $32 million in Series A funding from Insight Partners.

The spreadsheet contained links to hundreds of draft SOC 2 and ISO 27001 reports.

An investigation by affected clients, published in February 2026, documented what the leaked data revealed.

The findings are not subtle. They describe a compliance platform that generated auditor conclusions before any auditor reviewed evidence, produced near-identical reports across hundreds of clients, and partnered with certification firms that signed whatever was put in front of them.

This is not a story about one company behaving badly. It is a story about a structural vulnerability in self-policing attestation models: the same vulnerability that the EU AI Act's conformity assessment framework was specifically designed to eliminate.

What the leaked data shows

The investigation analysed 494 SOC 2 reports and 81 ISO 27001 registration forms extracted from the leak. The evidence of systematic report generation is extensive:

Near-identical boilerplate across all clients

Section 3 of a SOC 2 report is supposed to be the company-specific description of its security programme.

In Delve's reports, 99.8% contained identical text, including the same grammatical errors ("has developed an organization-wide Information Security Policies") and the same nonsensical descriptions ("The infrastructure comprises cloud architecture including database, networking devices, virtual servers, etc."). Every client, regardless of size, industry, or technical architecture, received the same security programme description.

Pre-written auditor conclusions

The "Independent Service Auditor's Report" in Section 1 and all test procedures and conclusions in Section 4 were present in draft reports before clients had provided their company description, network diagrams, or signatures. The auditor's conclusion existed before the auditor had anything to audit. This directly violates AICPA AT-C Section 205, which requires practitioners to maintain independence and form conclusions based on their own examination.

Test values in production reports

Spreadsheet rows containing keyboard-mashed test data ("sdf", "dlkjf", single-letter entries) appeared verbatim in generated draft reports. JavaScript error messages (TypeError: Cannot read properties of undefined) appeared in spreadsheet fields. These are artefacts of an automated generation script, not of manual audit work.

Interchangeable audit firm branding

One client's report had an Accorian cover page but Accorp's firm licence number embedded in the auditor conclusion. The same template produced reports for multiple "independent" audit firms with nothing changed but the cover page.

259 identical Type II conclusions

Every SOC 2 Type II report in the dataset contained the same "could not be tested" conclusions for four specific controls, complete with the same missing word ("because there no security incidents reported during the engagement"). Not one of the 259 companies, each observed over a three-month period, had a security incident, a personnel change, a customer termination, or a cybersecurity insurance claim. The statistical improbability is less interesting than the structural implication: these conclusions were copied from a template, not formed through examination.

Certification mills behind US shell addresses

Delve marketed its auditors as "US-based CPA firms". The investigation traced the primary SOC 2 auditor, Accorp, to Indian operations using virtual office addresses in the US and UAE. The primary ISO 27001 auditor, Gradient Certification, was registered in Wyoming through a mailbox agent popular with shell companies, with its president listed at the same Delhi address as the Indian entity. A newer ISO 27001 auditor, Glocert, claimed to be headquartered in the UK but had filed dormant company accounts with Companies House for four consecutive years, reporting zero trading activity and zero revenue. These were not audit firms in any meaningful sense. They were rubber stamps.

Pre-populated fake evidence

The platform provided pre-fabricated board meeting minutes, security simulation reports, and risk assessments that clients could adopt with a single click. For employees who had not completed onboarding tasks, Delve auto-generated passing evidence for device security, background checks, and training. Trust pages displayed a complete list of "implemented" security controls before any work had been performed, and the list did not change after the compliance process was complete.

This is not an isolated incident

It is tempting to treat the Delve case as an outlier, a single bad actor in an otherwise functioning system. The evidence suggests otherwise:

The same Substack investigation noted that CompAI, a competitor operating at an even lower price point, had developed a similar reputation. CompAI's founder was documented bribing a Reddit moderator to gain control of the r/ISO27001 subreddit for marketing purposes.

The pattern extends beyond individual companies. The GRC automation market has structural incentives that reward speed and volume over verification. When the company being assessed selects and pays its own assessor, when the assessor self-polices through professional standards that nobody independently enforces, and when a platform vendor sits between them controlling the information flow and generating the artefacts, the system is vulnerable by design. Delve did not break a functioning model. It optimised the model's existing failure modes.

SOC 2's self-policing attestation structure assumes that professional standards and market reputation are sufficient checks on auditor independence. The Delve case demonstrates that this assumption fails when a sufficiently motivated intermediary can automate the production of attestation artefacts at scale, and when the auditors whose names appear on those artefacts have no commercial incentive to look closely.

Why the EU AI Act was designed differently

Engineering teams encountering the EU AI Act's conformity assessment requirements for the first time often ask: "Why can we not just self-assess? We already have SOC 2".

The Delve case answers that question.

The EU AI Act's conformity assessment framework, set out in Articles 40 through 49, structurally separates the roles that Delve collapsed into one.

No platform vendor can simultaneously generate the evidence, draft the assessor's conclusions, and control the information flow to the certification body, because the framework does not permit it.

Two assessment paths

Article 43 establishes two conformity assessment procedures for high-risk AI systems.

Internal control (Annex VI)

This allows providers of certain high-risk systems to assess their own conformity, but only when harmonised standards or common specifications exist and the provider has applied them. The provider must document compliance with every applicable requirement from Chapter III, Section 2. A notified body is not involved, but the documentation must be available to market surveillance authorities on request.

Third-party assessment (Annex VII)

This requires an independent conformity assessment by a notified body. The notified body examines the quality management system and the technical documentation, may perform testing, and issues a certificate. This path is mandatory for biometric systems listed in Annex III, point 1, whenever harmonised standards or common specifications have not been applied in full.

The critical distinction from SOC 2 is that even the self-assessment path operates under the supervision of market surveillance authorities with enforcement powers. There is no equivalent of the AICPA's purely self-regulatory model.

Notified bodies are not audit firms you choose from a menu

Articles 29 through 39 establish the notified body framework. A notified body must be accredited by a national accrediting authority, demonstrate technical competence, maintain organisational independence, and carry professional indemnity insurance. It cannot be commercially dependent on the entities it assesses. It must participate in the coordination activities of the European Artificial Intelligence Board.

Compare this to the SOC 2 model:

A company like Delve could partner with Accorp, an entity operating from India through US shell addresses, and present it to clients as a "US-based CPA firm".

The AICPA framework places no structural barrier between a platform vendor and its preferred rubber stamp.

The EU AI Act's framework, on the other hand, does: a notified body that failed to perform independent assessment would face accreditation withdrawal, not just reputational damage.

Quality management is continuous, not a point-in-time artefact

Article 17 requires providers of high-risk AI systems to implement and maintain a quality management system. The requirements are specific: systematic procedures and instructions for every stage of the system lifecycle, design control and verification techniques, quality control and assurance procedures, examination and testing before and during deployment, and a process for reporting serious incidents.

Contrast this with Delve's approach:

Trust pages displayed 100% completion before any work was done and never changed after "completion".

Evidence consisted of point-in-time screenshots and pre-populated forms. The entire process was, by design, a one-time exercise. Under Article 17, a quality management system must be demonstrably operational on an ongoing basis. Pre-populated forms and static trust pages cannot satisfy this because the requirement is architectural, not documentary.
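
What "architectural, not documentary" can look like in practice is a release gate that treats a quality management control as operational only while it keeps producing evidence. The TypeScript sketch below is illustrative only: the control names, the evidence shape, and the 30-day freshness window are assumptions for the example, not anything Article 17 or an assessor prescribes.

```typescript
// Illustrative deployment gate: a QMS control counts as "operational" only if it
// has produced evidence recently, so a screenshot captured once at onboarding
// cannot keep a control green indefinitely.
interface ControlEvidence {
  controlId: string;   // e.g. "design-review", "pre-deployment-testing" (assumed names)
  producedAt: string;  // ISO 8601 timestamp of the latest evidence record
  source: string;      // pipeline, ticket system, monitoring job, etc.
}

const MAX_EVIDENCE_AGE_DAYS = 30; // assumed freshness window

function staleControls(
  evidence: ControlEvidence[],
  requiredControls: string[],
  now: Date = new Date(),
): string[] {
  // Find the most recent evidence per control.
  const freshest = new Map<string, Date>();
  for (const e of evidence) {
    const at = new Date(e.producedAt);
    const current = freshest.get(e.controlId);
    if (!current || at > current) freshest.set(e.controlId, at);
  }
  // A control is stale if it has no evidence, or only old evidence.
  return requiredControls.filter((id) => {
    const latest = freshest.get(id);
    if (!latest) return true;
    const ageDays = (now.getTime() - latest.getTime()) / 86_400_000;
    return ageDays > MAX_EVIDENCE_AGE_DAYS;
  });
}

// A release pipeline would block deployment whenever staleControls(...) is non-empty.
```

The point of the design is that the artefact cannot go stale silently: if the underlying process stops running, the gate fails, which is the opposite of a trust page that stays at 100% forever.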

Market surveillance closes the enforcement gap

Articles 74 through 84 establish market surveillance authorities with powers to access data, request documentation, conduct system evaluations, order corrective actions, and withdraw non-compliant systems from the market. Penalties under Article 99 reach up to 35 million euros or 7% of global annual turnover.

It should be noted, however, that the EU AI Act's framework is not yet battle-tested: notified bodies are still being designated, market surveillance infrastructure is being stood up, and the Commission's Omnibus proposal could extend high-risk compliance deadlines by up to 18 months. It is entirely possible that implementation failures will introduce their own version of the problems described here.

But the structural difference remains: the AI Act's design makes a Delve-style collapse a failure of execution rather than a failure of architecture, whereas the SOC 2 model makes it a feature, because that model has no equivalent enforcement mechanism. If a SOC 2 report is fraudulent, the affected parties must pursue private remedies. There is no regulator that can compel corrective action, withdraw a non-compliant system, or impose fines. The Delve case exists in part because the SOC 2 ecosystem lacks the enforcement infrastructure to prevent it.

What this means for engineering teams

The structural lesson is not "SOC 2 is bad and the EU AI Act is good". It is that compliance artefacts are only as trustworthy as the architecture that produces them. If your compliance evidence consists of screenshots, pre-populated forms, and reports generated by the same platform that sold you the compliance programme, you do not have compliance. You have documentation.

The EU AI Act demands something different. It demands that compliance be embedded in the system's architecture, not layered on as a reporting exercise after the fact. For engineering teams building high-risk AI systems, the practical implications are:

Audit trails must be immutable and machine-verifiable

Article 12 requires automatic logging throughout the AI system's lifecycle. The logs must enable post-deployment monitoring, support incident investigation, and be retained for at least six months. Point-in-time screenshots of configuration settings do not satisfy this. Structured, tamper-evident, queryable logging infrastructure does. For a detailed treatment of what to log, retention requirements, and decision reconstruction, see our post on Article 12 for engineers. For an open-source implementation, @systima/aiact-audit-log provides the logging layer.
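
As an illustration of what "tamper-evident" means here, the sketch below hash-chains log entries so that any retroactive edit breaks verification from that point on. It is a minimal example under assumed names and entry shapes, not the @systima/aiact-audit-log API; a production system would also need canonical serialisation, durable storage, and signed checkpoints.

```typescript
import { createHash } from "node:crypto";

// One structured log entry in the AI system's lifecycle (illustrative shape).
interface AuditEntry {
  timestamp: string;                 // ISO 8601, from a trusted clock
  event: string;                     // e.g. "inference", "model_update", "human_override"
  payload: Record<string, unknown>;  // event-specific details
  prevHash: string;                  // hash of the previous entry
  hash: string;                      // hash of this entry's content plus prevHash
}

function hashEntry(content: Omit<AuditEntry, "hash">): string {
  return createHash("sha256").update(JSON.stringify(content)).digest("hex");
}

class HashChainedLog {
  private entries: AuditEntry[] = [];

  append(event: string, payload: Record<string, unknown>): AuditEntry {
    const prevHash = this.entries.at(-1)?.hash ?? "GENESIS";
    const content = { timestamp: new Date().toISOString(), event, payload, prevHash };
    const entry = { ...content, hash: hashEntry(content) };
    this.entries.push(entry);
    return entry;
  }

  // Recompute every hash; any edited or deleted entry breaks the chain after it.
  verify(): boolean {
    return this.entries.every((e, i) => {
      const prevHash = i === 0 ? "GENESIS" : this.entries[i - 1].hash;
      const { hash, ...content } = e;
      return e.prevHash === prevHash && hash === hashEntry(content);
    });
  }
}
```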

Evidence must be continuous, not point-in-time

Article 72 requires providers to establish and document a post-market monitoring system proportionate to the AI system's risk. This is not a quarterly form submission. It is runtime monitoring that produces continuous evidence of system behaviour. Our post on post-market monitoring under Article 72 covers the engineering architecture this requires.
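
A minimal sketch of what continuous evidence can look like, under assumptions of my own: a scheduled check compares a live metric against the envelope declared in the technical documentation and writes an evidence record every time it runs, not only when something goes wrong. The metric name, threshold, and function names are illustrative, not a prescribed monitoring design.

```typescript
// Illustrative post-market monitoring check, run on a schedule by the provider.
interface MonitoringEvidence {
  checkedAt: string;         // when the check ran
  metric: string;            // which behaviour was measured
  observed: number;          // live value from production
  declaredThreshold: number; // value stated in the technical documentation
  withinEnvelope: boolean;   // did the system stay inside its declared performance?
}

async function runDriftCheck(
  fetchLiveMetric: () => Promise<number>,             // e.g. rolling false-positive rate (assumed)
  declaredThreshold: number,
  persist: (e: MonitoringEvidence) => Promise<void>,  // e.g. append to the Article 12 audit log
): Promise<MonitoringEvidence> {
  const observed = await fetchLiveMetric();
  const evidence: MonitoringEvidence = {
    checkedAt: new Date().toISOString(),
    metric: "false_positive_rate",
    observed,
    declaredThreshold,
    withinEnvelope: observed <= declaredThreshold,
  };
  await persist(evidence); // every run leaves a queryable trace, breach or not
  if (!evidence.withinEnvelope) {
    // In a real system this would feed corrective action and, where applicable,
    // serious-incident reporting under Article 73.
    console.warn(`Metric ${evidence.metric} outside declared envelope`, evidence);
  }
  return evidence;
}
```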

Risk management must be system-specific

Article 9 requires a risk management system that identifies and analyses known and reasonably foreseeable risks specific to the system. Delve gave every client the same ten pre-generated risks regardless of their architecture, industry, or use case. Under Article 9, risk identification must be grounded in the actual system, tested against it, and updated throughout its lifecycle. For the full obligation set, see our engineering compliance guide.
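
One way to make system-specificity checkable is to require every risk entry to reference real components and real tests, so that a generic ten-risk template simply fails validation. The sketch below is an assumption-laden illustration of that idea, not a prescribed register format.

```typescript
// Illustrative risk-register entry: every risk must point at a concrete part of
// the system and at the tests that exercise its mitigation.
interface RiskEntry {
  id: string;
  description: string;          // the specific foreseeable harm, not boilerplate
  affectedComponent: string;    // e.g. "resume-ranking-model:v3" (assumed identifier)
  triggeringConditions: string; // when the risk materialises in this system
  mitigations: string[];
  verifyingTests: string[];     // test IDs that exercise the mitigations
  lastReviewed: string;         // risks are re-assessed across the lifecycle
}

function validateRiskEntry(
  risk: RiskEntry,
  knownComponents: Set<string>,
  knownTests: Set<string>,
): string[] {
  const problems: string[] = [];
  if (!knownComponents.has(risk.affectedComponent)) {
    problems.push(`Unknown component: ${risk.affectedComponent}`);
  }
  for (const test of risk.verifyingTests) {
    if (!knownTests.has(test)) problems.push(`No such test: ${test}`);
  }
  if (risk.verifyingTests.length === 0) {
    problems.push("Risk has no verifying test: it has not been exercised against the actual system");
  }
  return problems;
}
```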

Conformity assessment preparation starts at architecture time

Whether your system will go through internal control (Annex VI) or third-party assessment (Annex VII), the evidence base is the same: technical documentation per Annex IV, a functioning quality management system per Article 17, and logging infrastructure per Article 12. These cannot be retrofitted after the build. They must be designed in.
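
One pattern for designing this in rather than retrofitting it is a machine-readable evidence manifest whose references are verified in CI, so gaps surface long before an assessment. The structure and field names below are assumptions for the example, not an official or required format.

```typescript
// Illustrative conformity-evidence manifest: a single index of where each
// required artefact lives, checkable automatically on every build.
interface ConformityManifest {
  technicalDocumentation: {      // Annex IV
    systemDescription: string;   // path or URL to the living document
    trainingDataGovernance: string;
    performanceMetrics: string;
  };
  qualityManagementSystem: {     // Article 17
    lifecycleProcedures: string;
    incidentReportingProcess: string;
  };
  logging: {                     // Article 12
    schemaDefinition: string;
    retentionPolicy: string;
  };
}

function missingArtefacts(
  manifest: ConformityManifest,
  exists: (ref: string) => boolean, // e.g. checks the docs repo or object store
): string[] {
  // Walk every declared artefact reference and report the ones that do not resolve.
  const refs = Object.values(manifest).flatMap(
    (section) => Object.values(section) as string[],
  );
  return refs.filter((ref) => !exists(ref));
}
```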

The Delve case is a cautionary tale, but it is also a structural argument. Self-policing attestation models fail when commercial incentives override professional standards. The EU AI Act's conformity assessment framework exists because the legislators understood this. For engineering teams building systems that will be subject to conformity assessment, the lesson is straightforward: build the compliance infrastructure into the architecture, or discover later that the artefacts you produced were never worth the paper they were printed on.

If your team is building AI systems subject to the EU AI Act and you need help designing compliance architecture that will withstand conformity assessment, Systima's AI Governance and Compliance practice works with engineering teams to embed governance into system design from the start.