The EU AI Act Documentation Reality: Think Early, Don't Panic
The EU AI Act conversation has drifted between ignoring it and panicking about fines. Both miss the actual shape of the regulation. A practitioner view on how enforcement really works, why the Omnibus delay is a time gift, and why Annex IV describes artefacts that good engineering has always needed to manage a system over time. AI finally makes producing them affordable as a byproduct of the build.

The EU AI Act conversation has drifted between two failure modes. On one side, teams that have decided the regulation is someone else's problem and stopped tracking it. On the other, teams that have read the fine ceiling, seen EUR 35 million or 7% of global turnover, and internalised that number as a running clock over every AI system in the company. Both miss an important part of the regulation, I believe.
The EU AI Act is real. Annex IV technical documentation is real. The enforcement machine behind it is neither a paper tiger nor a guillotine. It looks a lot more like how the EU already regulates consumer products and food, and that shapes how a sensible engineering leader should respond in April 2026.
The pragmatic reality: why the fine discussion is overblown
In my reading, enforcement across the Union in early 2026 is still mostly precautionary: warnings, preparation, guidance, not widespread fines. Diplo's framing captures the shape well: the system looks closer to how food safety or product safety already works today. Inspectors do not check every kitchen or workshop every day. They react to complaints, spot-check the big suppliers, and pull dangerous products when they find them. That model maps almost exactly to how the AI Act is being stood up.
Enforcement runs through national Market Surveillance Authorities, the MSAs. Around 2'000 of them across the EU, most sitting inside existing sector regulators. They have real powers under Article 74: access documentation, request training and testing data, and propose joint investigations with the Commission. But they act in a procedural chain. A complaint is lodged, often by a downstream provider or user. For general-purpose AI models, the scientific panel can send a qualified alert to the AI Office under Article 90. An MSA opens an investigation, requests documentation, establishes findings, proposes corrective measures. Penalties themselves sit with the competent national authority under Article 99, at the end of that chain, not the beginning.
The fine-ceiling headlines obscure that structural detail. Article 99 sets ceilings, in each case whichever of the fixed amount or the turnover percentage is higher: up to EUR 35 million or 7% of global annual turnover for breaches of Article 5 prohibitions, EUR 15 million or 3% for most other high-risk obligations, and EUR 7.5 million or 1% for supplying incorrect, incomplete or misleading information to notified bodies or national competent authorities. Real numbers, but not automatic. They sit at the end of an investigation that has to be triggered, substantiated and then decided.
The GDPR curve is the reference point. It took years to produce high-profile fines. Meta and X are already inside the EU's broader regulatory crosshairs, but the concrete actions in 2025 and early 2026 sit under the Digital Services Act and antitrust law, not the AI Act. The January 2026 Commission order requiring X to retain Grok-related documents until the end of 2026 is a DSA measure, not an AI Act enforcement step. Formal AI Act penalty decisions are still years away. If your AI system is well-documented, observed and responsive to complaints, you are not the target. If it is opaque, silently drifting, and the first anyone hears about a problem is from a user on LinkedIn, you are more exposed than the fine-ceiling debate suggests, even if the eventual penalty is far below the headline number.
The Omnibus delay: a time gift, not an excuse
Sitting on top of this, the Digital Omnibus trilogue is active. Parliament adopted its position on 26 March 2026, and Council on 13 March 2026. Both institutions are broadly aligned on the same fixed dates: 2 December 2027 for Annex III standalone high-risk systems, and 2 August 2028 for Annex I product-embedded systems. For any of this to land before the original 2 August 2026 deadline actually bites, the trilogue needs political agreement in roughly the May to June 2026 window. If the trilogue collapses, the original deadline stands.
The OneTrust and A-LIGN analyses land on the same practical read. A delay is likely, not certain. Everyone is planning as if the window extends, but nobody serious is betting the quarter on it.
I treat the Omnibus as a time gift. Not permission to stop, not a reason to panic, just more runway to build the documentation as a byproduct of how the software is made. Teams that use the extra months to embed Annex IV artefacts into their SDLC end up in a very different place from teams that schedule a "compliance sprint" for Q3 2027 and start reconstructing documentation from git history.
Switzerland: factual, not existential
The Swiss picture is narrow. The Federal Council decided in February 2025 to go sector-specific rather than draft a standalone Swiss AI Act. A consultation draft is expected by the end of 2026, full legal effect probably not before 2029. Swiss firms that place AI systems on the EU market, or whose systems affect people in the EU, are in scope of the EU AI Act directly. Factual statement about cross-border scope, not a reason to treat the regulation as existential. The same rule applies to any non-EU provider.
What Annex IV actually asks for
Before mapping anything to tooling, it helps to read Annex IV on its own terms. Article 11 requires technical documentation for high-risk AI systems, drawn up before market placement or putting into service, and kept up to date thereafter. Annex IV specifies the contents. Nine blocks:
- General description of the system and its intended purpose.
- Detailed description of the system elements and the development process, including training data, validation and testing, and cybersecurity.
- Monitoring, functioning and control of the system, including accuracy levels, foreseeable unintended outcomes and discrimination risks.
- Appropriateness of the performance metrics for the specific system.
- The risk management system per Article 9.
- Relevant changes made through the lifecycle.
- The harmonised standards applied, or alternative solutions.
- A copy of the EU declaration of conformity.
- The post-market monitoring system per Article 72, including the monitoring plan.
Article 12 requires automatic logging of events over the lifetime of the system. Article 19 requires providers to retain those logs for at least six months, longer if Union or national law demands it, typically GDPR. Article 18 separately requires the technical documentation, the quality management system documentation and the EU declaration of conformity to be kept at the disposal of national competent authorities for ten years after the system is placed on the market or put into service.
The key insight is that Annex IV is a continuous record, not a report. A design choice documented in April 2027 for a system built in 2024 is a reconstruction, not a record. Auditors know the difference. If your documentation pipeline only starts producing output after the product ships, you are already behind the regulation, regardless of whether the Omnibus delay lands.
Why a spec-driven SDLC already delivers this
Shipwright is an open-source AI framework that runs on Claude Code and orchestrates the SDLC from requirements through deployment. It is spec-driven: every line of code traces back to a requirement, and every artefact is structured and append-only.
That design decision was made for engineering reasons, not compliance ones. The same structure happens to produce most of what Annex IV asks for, as a byproduct. Nine concrete artefacts, mapped to the regulation:
- shipwright_events.jsonl, an append-only event log of the build lifecycle (sketched below): Article 12 logging, Article 19 retention.
- Requirements Traceability Matrix, with a "Last Verified" timestamp per functional requirement: Annex IV design specification, Article 13 transparency to deployers.
- test-evidence.md, capturing unit, integration, pgTAP, smoke, end-to-end and visual regression progression: Annex IV validation and testing procedures, Article 15 accuracy and robustness.
- change-history.md, combining Conventional Commits, ADR references and version tags: Annex IV design choices, Article 18 documentation retention.
- sbom.md, listing dependencies, versions, licences and copyleft flags: Annex IV component description, Article 15 cybersecurity.
- decision_log.md in ADR format, with status, context, decision and consequences: Annex IV design choices rationale.
- compliance_overrides.log, a structured record of each hook override: Article 14 human oversight evidence.
- Phase-Quality Stop Hooks, which enforce that phases do not close with unresolved issues: Article 17 quality management system, as enforced process gates that leave a trail.
- The /shipwright-compliance detective audit, running seven groups of roughly 22 checks, including FR-evidence coverage and ADR integrity: the audit-readiness layer, run before an external auditor does.
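To make the first of these concrete, here is a minimal sketch of what appending to an append-only JSONL build log can look like. The write_event helper, the field names and the example call are illustrative assumptions for this post, not Shipwright's actual schema.

```python
# Minimal sketch of an append-only JSONL build-event log.
# Field names and the helper are illustrative, not Shipwright's actual schema.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("shipwright_events.jsonl")

def write_event(phase: str, event: str, actor: str, details: dict) -> None:
    """Append one structured event; earlier lines are never rewritten or deleted."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "phase": phase,        # e.g. "spec", "build", "release"
        "event": event,        # e.g. "fr_verified", "hook_override"
        "actor": actor,        # human or agent identifier
        "details": details,
    }
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")

write_event("build", "fr_verified", "agent:claude", {"fr_id": "FR-012", "tests": 14})
```

The point of the append-only shape is that the log is a record of what happened when, not a document anyone edits later, which is exactly the property Articles 12 and 19 care about.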
This is not magic. Structuring an SDLC so that specifications, decisions, tests, changes and overrides are captured as structured artefacts at the moment they happen is good engineering hygiene. The regulatory mapping is a side effect. The honest version of the pitch: do this because your future self will thank you, and notice that most of Annex IV is covered on the way.
What is still missing, building on the baseline
The baseline above is useful, but it does not cover every high-risk obligation. Honest gaps. Here is what Shipwright still needs:
Structured risk management artefact per functional requirement. Article 9 asks for a continuous, iterative risk management system that identifies, estimates and evaluates risks, and adopts appropriate and targeted measures, run throughout the entire lifecycle of the system. Shipwright's iterate skill uses canonical risk flags today, but a dedicated artefact alongside the RTM, carrying identification, likelihood, severity, residual risk and mitigation per FR, is not there yet.
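As a sketch of what that could look like, a per-FR risk record might carry roughly these fields; the dataclass, the 1-to-5 scales and the example values are assumptions for illustration, not a shipped Shipwright format.

```python
# Hypothetical per-FR risk record; structure and scales are illustrative only.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    fr_id: str                      # functional requirement the risk attaches to
    hazard: str                     # identified risk (Article 9 identification step)
    likelihood: int                 # 1 (rare) to 5 (almost certain)
    severity: int                   # 1 (negligible) to 5 (critical)
    mitigations: list[str] = field(default_factory=list)
    residual_likelihood: int | None = None
    residual_severity: int | None = None

    @property
    def residual_risk(self) -> int | None:
        """Residual score after mitigations, or None if not yet re-assessed."""
        if self.residual_likelihood is None or self.residual_severity is None:
            return None
        return self.residual_likelihood * self.residual_severity

entry = RiskEntry(
    fr_id="FR-012",
    hazard="Model output used for eligibility decisions without human review",
    likelihood=3,
    severity=4,
    mitigations=["Mandatory reviewer sign-off before a decision is persisted"],
    residual_likelihood=1,
    residual_severity=4,
)
```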
Data governance sheet for AI-in-the-built-system cases. Where the software being built is itself an AI system trained on personal or sensitive data, Article 10 asks for documentation of training data quality, bias checks, representativeness, labelling procedures and provenance. A per-dataset sheet carrying those fields alongside the spec is not there yet.
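A minimal sketch of what one entry in such a sheet could hold, with the fields taken from the Article 10 topics above; the dataset name and the values are invented for illustration.

```python
# Hypothetical data governance sheet entry, one record per training dataset.
dataset_sheet = {
    "dataset": "claims_2019_2024",
    "provenance": "Internal claims system export, 2019-2024",
    "personal_data": True,
    "labelling_procedure": "Two independent annotators, disagreements adjudicated",
    "representativeness": "Stratified by region and age band against the customer base",
    "bias_checks": ["Selection-rate parity by gender", "Error-rate parity by age band"],
    "known_gaps": ["Under-representation of claims filed before 2019"],
}
```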
DPIA template and workflow. Article 35 GDPR requires a DPIA whenever processing is likely to result in a high risk to the rights and freedoms of natural persons. For any high-risk AI system processing personal data, a DPIA is independently presumed necessary. What is missing is an artefact that plugs into the spec phase and updates through the SDLC, so that the GDPR and AI Act assessments trace to the same spec.
Post-market monitoring plan. Article 72 requires an active plan for monitoring performance and compliance after market placement, and the plan itself is part of the technical documentation. Generated during the release phase and updated from production telemetry, not drafted separately in Confluence after go-live.
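One way to picture that: a plan fragment that declares thresholds at release and is refreshed from production telemetry, flagging breaches for the corrective-action workflow. The structure and the refresh helper below are assumptions for illustration, not an existing Shipwright artefact.

```python
# Hypothetical post-market monitoring plan fragment: thresholds fixed at release,
# observed values refreshed from production telemetry.
monitoring_plan = {
    "system": "claims-triage",
    "release": "2.3.0",
    "review_cadence": "quarterly",
    "metrics": [
        {"name": "top1_accuracy", "minimum": 0.92, "observed": None, "window": "30d"},
        {"name": "human_override_rate", "maximum": 0.10, "observed": None, "window": "30d"},
    ],
}

def refresh(plan: dict, telemetry: dict[str, float]) -> list[str]:
    """Copy observed values from telemetry into the plan; return breached metric names."""
    breached = []
    for metric in plan["metrics"]:
        observed = telemetry.get(metric["name"])
        metric["observed"] = observed
        if observed is None:
            continue
        if "minimum" in metric and observed < metric["minimum"]:
            breached.append(metric["name"])
        if "maximum" in metric and observed > metric["maximum"]:
            breached.append(metric["name"])
    return breached

print(refresh(monitoring_plan, {"top1_accuracy": 0.89, "human_override_rate": 0.04}))
# -> ['top1_accuracy']
```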
Serious incident reporting workflow and log template. Article 73 sets three reporting windows: up to 15 days as the default, shortened to 2 days for widespread infringement or a critical-infrastructure serious incident under Article 3(49)(b), and 10 days when a death is involved, with the clock running from suspicion of a causal link, not confirmation. The log template and workflow are still in design; the goal is that the clock starts the moment the incident is classified, not the moment someone remembers to open a ticket.
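The windows themselves reduce to simple date arithmetic once the incident is classified. A minimal sketch, with the category labels invented for this example:

```python
# Sketch of the Article 73 reporting clock: the deadline runs from the moment a
# causal link is suspected and the incident is classified, not from confirmation.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOWS = {
    "default": timedelta(days=15),
    "widespread_or_critical_infrastructure": timedelta(days=2),   # Article 3(49)(b) cases
    "death": timedelta(days=10),
}

def reporting_deadline(suspected_at: datetime, category: str) -> datetime:
    """Latest moment the serious-incident report may be filed for this category."""
    return suspected_at + REPORTING_WINDOWS[category]

suspected = datetime(2026, 4, 14, 9, 30, tzinfo=timezone.utc)
print(reporting_deadline(suspected, "widespread_or_critical_infrastructure"))
# -> 2026-04-16 09:30:00+00:00
```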
None of these are shipped yet. Shipwright is going into early access soon.
A pragmatic close
The EU AI Act is not a cliff edge. Enforcement will look more like food safety inspection than a tax audit, and most teams will never see an MSA investigation. That is useful context, and it should end the panic reading of the regulation. But it is also not the reason to care about the artefacts Annex IV describes.
The real reason is older than the AI Act. Systems that run in production for five, ten, twenty years need a record of how they were built, what choices were made and why, what was tested and what was not, what dependencies they carry, and which overrides were accepted by which humans. That record is what a second engineer, a third owner, a new CISO, a forensic investigation or a cross-team refactor a decade from now actually needs. It has always been needed. Most engineering organisations have never had it, because producing it by hand was too expensive to keep current.
What is new in 2026 is that AI makes producing this record feasible as a byproduct of the build. A spec-driven SDLC with AI agents handling the traceability, the documentation, the change log and the audit trail is not a compliance solution. It is how software gets properly managed over time, at a cost a team can actually absorb. The AI Act happens to require a subset of this. The Omnibus delay, if it lands, buys time to build it well.
Annex IV is not a checklist to produce in a compliance sprint. It is a description of what a well-run engineering organisation has always needed to have, and that AI has finally made cheap enough to produce by default. Build the documentation as a byproduct of the build, notice which pieces are still missing, and work on those next. Think early, don't panic.
Sources:
- EU Digital Omnibus Proposes Delay of AI Compliance Deadlines - OneTrust
- EU AI Act Enforcement Delay - A-LIGN
- Implementation Timeline - EU Artificial Intelligence Act
- AI Regulation Meets Enforcement Reality: How the Rules Actually Work - Diplo
- Article 74: Market Surveillance - EU Artificial Intelligence Act
- Market Surveillance Authorities under the AI Act - European Commission
- There are around 2'000 AI Market Surveillance Authorities in the EU - CMS
- Enforcement of Chapter V under the EU AI Act - EU Artificial Intelligence Act
- Article 99: Penalties - EU Artificial Intelligence Act
- Annex IV: Technical Documentation Referred to in Article 11(1) - EU Artificial Intelligence Act
- Article 11: Technical Documentation - EU Artificial Intelligence Act
- Article 18: Documentation Keeping - EU Artificial Intelligence Act
- Article 19: Automatically Generated Logs - EU Artificial Intelligence Act
- Article 9: Risk Management System - EU Artificial Intelligence Act
- Article 73: Reporting of Serious Incidents - EU Artificial Intelligence Act
- Article 35 GDPR: Data Protection Impact Assessment
- Article 90: Alerts of the Scientific Panel - EU Artificial Intelligence Act
- Switzerland Sets Its Course on AI Legislation - Pestalozzi Attorneys at Law
- AI Watch: Global Regulatory Tracker - Switzerland - White and Case
- European Commission orders X to retain internal records on Grok - CADE
