Why August 2, 2026 Is the Date That Matters
The EU AI Act has been rolling out in phases since it entered into force in August 2024. Most of the attention has focused on the ban on prohibited AI practices that took effect in February 2025 and the general-purpose AI model requirements that followed in August 2025. The deadline arriving this August is different: it is the date on which the full set of requirements for high-risk AI systems under Annex III becomes enforceable, covering employment tools, credit scoring, biometrics, healthcare systems, critical infrastructure, and education. Organizations that deploy or use high-risk AI — including many internal workflow tools applied to HR decisions, contract review, and operational prioritization — face penalties of up to 15 million euros or three percent of global annual turnover for breaches of the high-risk obligations, with the Act's top tier of 35 million euros or seven percent reserved for prohibited practices. The extraterritorial scope means this applies to any organization placing AI on the EU market or using AI whose output affects EU residents, regardless of where the organization is headquartered.
Step One: Complete an AI Inventory
Before any compliance work can be scoped, organizations need to know what AI systems they are actually running. An AI inventory should identify every system in development, procurement, evaluation, and production use across the organization. The inventory should capture the system's purpose, the data it processes, the decisions it informs or makes, the teams that rely on it, and the vendor providing it. Without this baseline, risk classification is guesswork and documentation efforts will be incomplete. Many organizations discover that their real AI footprint is two to three times larger than what IT formally tracks, because teams have adopted tools through shadow procurement, browser extensions, and direct API integrations that bypass central review.
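To make the baseline concrete, the sketch below shows one way a single inventory entry could be structured in code. The `AISystemRecord` fields, the lifecycle stages, and the example resume-screening system are illustrative assumptions for this sketch, not fields mandated by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    DEVELOPMENT = "development"
    PROCUREMENT = "procurement"
    EVALUATION = "evaluation"
    PRODUCTION = "production"

@dataclass
class AISystemRecord:
    """One entry in the organization-wide AI inventory."""
    name: str
    purpose: str                    # what the system is for
    data_processed: list[str]       # categories of data it touches
    decisions_informed: list[str]   # decisions it makes or materially informs
    owning_teams: list[str]         # teams that rely on it day to day
    vendor: str | None              # None for systems built in-house
    stage: LifecycleStage
    shadow_procured: bool = False   # adopted outside central IT review

# Example entry for a hypothetical resume-screening tool
record = AISystemRecord(
    name="resume-screener",
    purpose="Rank inbound job applications",
    data_processed=["CVs", "application forms"],
    decisions_informed=["interview shortlisting"],
    owning_teams=["HR", "Talent Acquisition"],
    vendor="ExampleVendor",
    stage=LifecycleStage.PRODUCTION,
)
```

Capturing shadow-procured tools explicitly, as in the `shadow_procured` flag above, is one way to surface the gap between the formal IT register and the real AI footprint.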
Step Two: Classify Risk Tiers Accurately
The EU AI Act uses four risk tiers: unacceptable risk, high risk, limited risk, and minimal risk. Most enterprise workflow AI falls into the limited or minimal tiers, but the high-risk category is broader than many legal teams initially assume. Systems that make or materially inform decisions about employment, credit, access to essential services, or educational outcomes require the full compliance treatment. Importantly, it is the use of the system — not just its label or intended purpose — that determines classification. A general-purpose model used to rank job applications or screen contracts for risk exposure is a high-risk application regardless of how the vendor markets it. Classification decisions should be made jointly by legal, <a href='/use-cases/compliance-lead'>compliance leads</a>, and the operational teams that own the specific workflows.
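The sketch below illustrates the use-based logic in simplified form. The domain list is a loose paraphrase of Annex III categories and the `classify_by_use` helper is an assumption for illustration, not a substitute for legal review of the specific deployment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative subset of Annex III decision domains; real classification
# requires legal review of the specific use case.
HIGH_RISK_DOMAINS = {
    "employment", "credit", "essential_services",
    "education", "biometrics", "critical_infrastructure",
}

def classify_by_use(decision_domains: set[str], transparency_only: bool = False) -> RiskTier:
    """Classify a system by how it is used, not how the vendor markets it."""
    if decision_domains & HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if transparency_only:  # e.g. chatbots subject only to disclosure duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A general-purpose model used to rank job applications is a high-risk use
print(classify_by_use({"employment"}))  # RiskTier.HIGH
```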
Step Three: Build Required Technical Documentation
High-risk AI systems must maintain technical documentation covering model architecture, training data sources and governance, testing procedures, accuracy metrics, known limitations, and security measures. Auditors and national authorities increasingly expect a living document that reflects the system as deployed today, not a one-time filing. If your organization is using third-party models, the documentation burden partially shifts to the provider, but the deployer retains responsibility for ensuring the documentation exists and is accessible. Organizations should establish a documentation owner for each high-risk system and a review cadence tied to material changes in the model, the data, or the deployment context.
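One lightweight way to operationalize documentation ownership and review cadence is a per-system manifest that points to each required section and flags staleness after material changes. The structure below is a hypothetical sketch; the section names mirror the areas listed above, but the Act and its annexes define the authoritative contents.

```python
from datetime import date

# Illustrative manifest for one high-risk system's technical documentation.
tech_doc_manifest = {
    "system": "resume-screener",
    "documentation_owner": "ml-governance@company.example",
    "last_reviewed": date(2026, 5, 1),
    "sections": {
        "model_architecture": "docs/resume-screener/architecture.md",
        "training_data_governance": "docs/resume-screener/data.md",
        "testing_procedures": "docs/resume-screener/testing.md",
        "accuracy_metrics": "docs/resume-screener/metrics.md",
        "known_limitations": "docs/resume-screener/limitations.md",
        "security_measures": "docs/resume-screener/security.md",
    },
}

def needs_review(manifest: dict, material_change_dates: list[date]) -> bool:
    """Flag documentation as stale if any material change postdates the last review."""
    return any(change > manifest["last_reviewed"] for change in material_change_dates)
```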
Step Four: Implement Human Oversight Mechanisms
The Act requires that high-risk systems be designed to allow human oversight throughout operation. This is not a passive requirement: it means establishing specific interfaces, roles, escalation paths, and training programs so that responsible humans can understand system behavior, interpret outputs, intervene when necessary, and override or halt the system. Teams should consider adopting <a href='/features/policy-guardrails'>policy guardrails</a> to keep human-in-the-loop controls consistent across systems. For governance teams, this translates into concrete controls: role-based access that limits who can act on AI-generated outputs, review workflows for high-stakes decisions, and audit records that reconstruct what the system did and how a human responded. Organizations that rely on broad employee training alone, without operational controls, are unlikely to satisfy a regulator's expectation of meaningful human oversight.
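As a minimal sketch of what these controls can look like operationally, the example below records a human decision against each high-stakes output and refuses anything other than an explicit approve, override, or halt. The `OversightEvent` structure and `require_human_decision` helper are illustrative assumptions, not features of any specific product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OversightEvent:
    """Audit record linking an AI output to the human who acted on it."""
    system: str
    output_summary: str
    reviewer: str       # role-based: only authorized reviewers may act
    action: str         # "approved", "overridden", or "halted"
    rationale: str
    timestamp: datetime

def require_human_decision(system: str, output_summary: str,
                           reviewer: str, action: str, rationale: str) -> OversightEvent:
    """Block a high-stakes output from taking effect until a human records a decision."""
    if action not in {"approved", "overridden", "halted"}:
        raise ValueError("Reviewer must approve, override, or halt the output")
    return OversightEvent(
        system=system,
        output_summary=output_summary,
        reviewer=reviewer,
        action=action,
        rationale=rationale,
        timestamp=datetime.now(timezone.utc),
    )
```

Persisting these events alongside the system's own logs is what lets an organization later reconstruct both what the model did and how a human responded.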
Step Five: Establish Post-Market Monitoring
The EU AI Act requires ongoing monitoring of high-risk systems after deployment, including incident reporting, performance tracking, and logging of malfunctions. Organizations need a monitoring program that goes beyond initial validation: tracking whether the system's outputs remain accurate and unbiased over time, whether edge cases are surfacing in production that were not covered in testing, and whether there are changes in the user population or input distribution that affect performance. <a href='/features/audit-trails'>Audit trails</a> of system behavior, policy events, and exception handling are the operational evidence that demonstrates a functioning monitoring program to regulators. Organizations should define specific metrics, review cadences, and escalation criteria for each high-risk system before the August deadline rather than building these processes reactively after an incident.
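A monitoring program can be pinned down in a small, reviewable plan per system. The configuration and threshold check below are a hypothetical sketch: the metric names and thresholds are placeholders to be set per system, and the 15-day figure reflects the Act's serious-incident reporting window, which should be confirmed against the specific obligation that applies to the incident type.

```python
# Illustrative post-market monitoring plan for one high-risk system.
monitoring_plan = {
    "system": "resume-screener",
    "review_cadence_days": 30,
    "metrics": {
        "accuracy": {"threshold": 0.90, "direction": "min"},
        "selection_rate_disparity": {"threshold": 0.80, "direction": "min"},
        "input_drift_score": {"threshold": 0.20, "direction": "max"},
    },
    "escalation": {
        "owner": "ai-governance@company.example",
        "report_serious_incidents_within_days": 15,  # confirm against the applicable reporting duty
    },
}

def breached_metrics(plan: dict, observed: dict[str, float]) -> list[str]:
    """Return the metrics whose observed values cross their escalation thresholds."""
    breaches = []
    for name, rule in plan["metrics"].items():
        value = observed.get(name)
        if value is None:
            continue
        if rule["direction"] == "min" and value < rule["threshold"]:
            breaches.append(name)
        elif rule["direction"] == "max" and value > rule["threshold"]:
            breaches.append(name)
    return breaches
```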