
EU AI Act Readiness Checklist for Generative AI

The EU AI Act is moving from policy discussion to operational readiness. Here is what companies using generative AI should organize now.

TL;DR

  • Know the Key EU AI Act Dates: The EU AI Act entered into force on August 1, 2024.
  • Create an AI System Inventory: Start with a complete inventory of AI systems and AI-enabled workflows.
  • Separate Low-Risk and Higher-Risk Usage: Most generative AI use inside companies is not automatically high risk, but some use cases deserve closer review.
  • Pair these practices with governed, centrally managed AI controls across the company.

Know the Key EU AI Act Dates

The EU AI Act entered into force on August 1, 2024. Prohibited AI practices and AI literacy obligations started applying on February 2, 2025. Governance rules and obligations for general-purpose AI models started applying on August 2, 2025. The Act is broadly applicable from August 2, 2026, with some exceptions and extended timelines for certain high-risk systems. Companies should confirm obligations with legal counsel, but operational teams should not wait until the deadline to build inventory, ownership, training, and evidence processes.
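The timeline above can be encoded as data so a governance team can check, on any given date, which obligations have already started applying. This is a minimal sketch; the milestone labels are shorthand for the obligations described in this article, not legal text.

```python
from datetime import date

# Key EU AI Act milestones as described above; labels are illustrative shorthand.
EU_AI_ACT_MILESTONES = {
    date(2024, 8, 1): "Act entered into force",
    date(2025, 2, 2): "Prohibited practices and AI literacy obligations apply",
    date(2025, 8, 2): "Governance rules and GPAI model obligations apply",
    date(2026, 8, 2): "Broad applicability (with some exceptions)",
}

def milestones_in_effect(today: date) -> list[str]:
    """Return milestone labels whose start date has already passed."""
    return [label for d, label in sorted(EU_AI_ACT_MILESTONES.items()) if d <= today]
```

A readiness dashboard could call this weekly and highlight the next upcoming milestone alongside the ones already in force.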

Create an AI System Inventory

Start with a complete inventory of AI systems and AI-enabled workflows. Include public chatbots, enterprise AI assistants, productivity copilots, internal model APIs, agents, AI search tools, automated decision workflows, and vendor tools that include AI features. For each system, record owner, purpose, users, data categories, model provider, geography, retention terms, human review process, and whether the system affects people, customers, access, employment, education, credit, healthcare, safety, or legal outcomes.
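The per-system attributes listed above can be captured as a simple record type, which also makes it easy to spot incomplete entries. This is a sketch under assumptions: the field names mirror the attributes in this article, and `missing_fields` is a hypothetical helper, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of the AI system inventory; fields mirror the article's list."""
    name: str
    owner: str
    purpose: str
    users: str
    data_categories: list[str]
    model_provider: str
    geography: str
    retention_terms: str
    human_review: str
    affects_people: bool  # e.g. employment, credit, healthcare, safety outcomes

    def missing_fields(self) -> list[str]:
        """Flag blank fields so the inventory stays audit-ready."""
        return [k for k, v in vars(self).items() if v in ("", [], None)]
```

Keeping the inventory as structured records rather than free-form spreadsheet cells means completeness checks like this can run automatically whenever a new system is registered.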

Separate Low-Risk and Higher-Risk Usage

Most generative AI use inside companies is not automatically high risk, but some use cases deserve closer review. Drafting a non-sensitive internal outline is very different from using AI to influence hiring, credit, access, healthcare, education, worker management, or safety decisions. Build a risk review process that flags consequential uses, regulated data, external-facing outputs, autonomous agents, and workflows where people may rely on AI without meaningful human review.
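The risk review process described above can be sketched as a triage function. The flag names below are assumptions drawn from this article's review criteria; they are an operational screen, not legal risk categories under the Act.

```python
# Domains the article calls out as deserving closer review.
CONSEQUENTIAL_DOMAINS = {
    "hiring", "credit", "access", "healthcare", "education",
    "worker_management", "safety",
}

def needs_closer_review(use_case: dict) -> bool:
    """Return True when a use case matches any higher-risk trigger."""
    return (
        bool(CONSEQUENTIAL_DOMAINS & set(use_case.get("domains", [])))
        or use_case.get("regulated_data", False)      # regulated data involved
        or use_case.get("external_outputs", False)    # outputs leave the company
        or use_case.get("autonomous_agent", False)    # agent acts without a prompt
        or not use_case.get("human_review", True)     # no meaningful human review
    )
```

A use case that trips any flag goes to a human reviewer; everything else can follow the baseline policy, which keeps the review queue focused on consequential uses.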

Document AI Literacy Efforts

AI literacy should be treated as an operational program, not a one-time training slide. Employees should understand approved tools, restricted data, hallucination risk, human review expectations, bias concerns, and how to report problems. Keep records of training content, target audience, completion, updates, and role-specific guidance. The training should be plain enough for non-technical employees and specific enough that people know what to do differently on Monday morning.

Prepare Audit Evidence

Readiness depends on evidence. Maintain records of approved tools, risk reviews, policy decisions, access controls, data protection settings, vendor reviews, training, incident reports, and audit logs. For generative AI workflows, useful evidence includes who used the system, which model was used, what policy controls fired, whether sensitive data was masked, when a human approved an output, and what administrative changes were made. Audit trails make readiness easier because evidence is generated continuously rather than reconstructed later.

Assign Governance Owners

EU AI Act readiness touches legal, compliance, security, IT, HR, procurement, finance, and business teams. Assign clear owners for AI inventory, employee guidance, vendor review, risk classification, technical controls, incident response, and evidence collection. Without ownership, readiness becomes a spreadsheet that is updated once and forgotten. The goal is a repeatable governance process that survives new tools, new models, and new business workflows.

Free Resource

The 1-Page AI Safety Sheet

Print it and pin it next to every screen: ten rules your team should follow every time they use AI at work.

You get

A printable 1-page PDF with 10 clear do's and don'ts for AI use.

Operational Checklist

  • Assign an owner for tracking key EU AI Act dates and the obligations attached to each.
  • Define baseline controls and exception paths before broad rollout.
  • Track outcomes weekly and publish a short operational summary.
  • Review controls monthly and adjust based on incident patterns.

Metrics to Track

  • Audit evidence completeness
  • Retention exception count
  • Policy violation recurrence rate
  • Review cycle SLA adherence
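To make the first metric concrete, here is one way "audit evidence completeness" could be computed from the event log described earlier: the share of required evidence fields actually populated across logged events. The required-field list is an assumption for illustration.

```python
# Hypothetical required-evidence fields per logged AI event.
REQUIRED_FIELDS = ["user", "model", "controls_fired", "approved_by"]

def evidence_completeness(events: list[dict]) -> float:
    """Fraction (0.0-1.0) of required fields populated across all events."""
    if not events:
        return 0.0
    total = len(events) * len(REQUIRED_FIELDS)
    present = sum(
        1 for e in events for f in REQUIRED_FIELDS if e.get(f) not in (None, "")
    )
    return present / total
```

Tracked weekly, a falling score points to workflows where evidence capture is being skipped, which is usually cheaper to fix now than to reconstruct during an audit.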

Free Assessment

How Exposed Is Your Company?

Most companies already have employees using AI. The question is whether that's happening safely. Take 2 minutes to find out.

You get

A short report showing where your biggest AI risks are right now.


Article FAQs

When did the EU AI Act take effect, and when do obligations apply?
The EU AI Act entered into force on August 1, 2024. Prohibited practices and AI literacy obligations applied from February 2, 2025. GPAI-related obligations applied from August 2, 2025. The Act is broadly applicable from August 2, 2026, with some exceptions.

How should companies start preparing?
Start with an AI inventory, assign owners, classify use cases by risk, document AI literacy efforts, review vendor tools, and make sure audit evidence can be produced for important AI workflows.

Does the EU AI Act apply to companies outside the EU?
It can, if the company provides or uses AI systems connected to the EU market or EU users. Companies should confirm legal applicability with counsel, but many governance practices are useful regardless of jurisdiction.

What does AI literacy mean under the Act?
AI literacy means making sure people involved with AI have enough knowledge to use and oversee AI appropriately for their role, including understanding risks, limitations, and responsible use expectations.
