
The EU AI Act in Practice: What Companies Actually Need to Do Now

The EU AI Act's obligations for high-risk systems apply from August 2026. Most companies underestimate what that concretely means – and overestimate what applies to them.

From Paper to Practice

The EU AI Act has been on the books since August 2024, but the operational obligations come into force in stages. February 2025 activated the prohibitions on unacceptable practices, August 2025 the obligations for general-purpose AI providers, and August 2026 brings the bulk of the requirements for high-risk systems along with the transparency rules. Which means the discussion has only now reached the engine room of most companies.

What's striking: the majority of inquiries we receive are based on misunderstandings. Some clients believe they're affected when they aren't; others feel safe even though they fall squarely into the high-risk category. Time to sort this out pragmatically.

Who Is Actually Affected

The AI Act distinguishes four risk classes, but for most companies only one question matters: are we building or operating a high-risk system? The answer doesn't depend on whether the AI is "powerful" or "autonomous", but on what it's used for.

High-risk applications include, among others:

  • AI in recruiting (applicant screening, CV analysis, interview evaluation)
  • AI in personnel assessment (promotion decisions, performance evaluation)
  • AI in critical infrastructure (energy, water, traffic management)
  • AI in education and assessment contexts
  • AI in credit scoring and insurance pricing
  • AI in law enforcement and border management

If any of these fields apply, extensive obligations kick in – risk management, data quality, documentation, human oversight, logging, conformity assessment. Anyone running a chatbot for internal knowledge search, generating images for a campaign, or refactoring code with Claude is generally outside the high-risk class.

The Obligation That Hits Almost Everyone

One obligation reaches further than many realise: transparency. If you offer AI-powered interactions, publish AI-generated or manipulated content, or create deepfakes, you must disclose it. That covers automated customer communication, support bots, AI avatars and synthetic voices – and AI-generated articles wherever no human takes editorial responsibility for them.

Concretely: websites with AI-generated articles, automated reply emails, support bots – all need a notice. Not buried in a privacy policy, but where the user can actually perceive it. The labelling obligation is the lowest bar of the AI Act, but the one with the broadest reach.
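
What a perceivable notice can look like in practice – a minimal sketch; the wording, function name and CSS class are our own illustration, since the Act requires a clear disclosure but prescribes no specific phrasing:

```python
# Minimal sketch: attaching a visible AI disclosure to outbound content.
# Wording and placement are illustrative, not mandated by the Act.

AI_NOTICE = "This message was generated with the help of an AI system."

def with_ai_disclosure(body: str, *, html: bool = False) -> str:
    """Append a perceivable AI notice to AI-generated text."""
    if html:
        return f'{body}\n<p class="ai-disclosure">{AI_NOTICE}</p>'
    return f"{body}\n\n--\n{AI_NOTICE}"

# Usage: every automated reply passes through this before it is sent.
print(with_ai_disclosure("Thanks for reaching out. We'll reply shortly."))
```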

What Foundation Model Providers Must Now Deliver

Since August 2025, new obligations apply to providers of so-called general-purpose AI models – OpenAI, Anthropic, Google, Mistral and anyone bringing similar models to the European market. They must, among other things:

  • Document training data and publish a summary of it (with focus on copyrighted material)
  • Disclose model specifications and capabilities to downstream providers
  • Conduct risk analyses and mitigation for particularly capable models with systemic risk
  • Keep technical documentation current for the supervisory authorities

For companies that only deploy these models, this has a practical effect: when picking a foundation model, compliance documentation becomes a selection criterion. Providers that meet these requirements cleanly will win enterprise deals. Providers that try to dodge the EU will lose them.

The Typical Implementation Mistakes

Three patterns repeat themselves:

Over-compliance from uncertainty. Companies treat every AI deployment like a high-risk system – with risk analysis, impact assessment, conformity evaluation. That's not just unnecessary, it slows AI initiatives so heavily they fizzle out. A clean risk classification at the start saves months.

Ignoring the transparency obligation. While everyone talks about the big high-risk requirements, many forget the simple labelling of AI content. This is exactly where the first wave of fines will land, because it's easy to verify.

Compliance without tooling. The requirements around logging, documentation and human oversight cannot be solved with a PDF. They need technical implementation – audit logs in AI workflows, documented model versions, traceable inputs and outputs. Building this only once an audit is announced is too late.
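
What that can look like in code: a minimal sketch of an audit-logged model call, assuming the model is reached through a callable you already have. The field names and the JSONL sink are our own illustration, not prescribed by the Act.

```python
import hashlib
import json
import time
import uuid
from typing import Callable

AUDIT_LOG = "ai_audit.jsonl"  # illustrative sink; any append-only store works

def _sha256(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def audited_call(model_id: str, prompt: str, call: Callable[[str], str]) -> str:
    """Run a model call and append a traceability record to the audit log."""
    response = call(prompt)
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,                  # documented model version
        "prompt_sha256": _sha256(prompt),      # traceable input
        "response_sha256": _sha256(response),  # traceable output
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response

# Usage with a stand-in model function:
answer = audited_call("demo-model-v1", "Summarise the Q3 report.", lambda p: "…")
```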

What to Do Concretely Now

For companies wanting to put their AI compliance in order, there's a pragmatic sequence:

Step 1: Inventory. Which AI systems do we use, in which areas, with which data? This list is missing in 80% of the companies we talk to.

Step 2: Classify. Which of these systems fall into a high-risk category? Which are pure productivity tools? Which generate content with external impact?
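
For teams that want steps 1 and 2 machine-readable from day one, here is a minimal sketch – the area names and fields are our own simplification of the Act's Annex III categories, not an exhaustive legal mapping:

```python
from dataclasses import dataclass
from enum import Enum

# Simplified stand-ins for the Act's Annex III areas, abbreviated for
# illustration; the legal text remains the reference.
HIGH_RISK_AREAS = {
    "recruiting", "personnel_assessment", "critical_infrastructure",
    "education", "credit_scoring", "insurance_pricing",
    "law_enforcement", "border_management",
}

class RiskClass(Enum):
    HIGH = "high-risk"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    name: str
    area: str                    # business area it is used in
    data_categories: list[str]   # which data it touches
    public_facing_output: bool   # does it generate content with external impact?

def classify(system: AISystem) -> RiskClass:
    if system.area in HIGH_RISK_AREAS:
        return RiskClass.HIGH
    if system.public_facing_output:
        return RiskClass.TRANSPARENCY
    return RiskClass.MINIMAL

# Usage: a support bot is not high-risk, but it does carry labelling duties.
bot = AISystem("support-bot", "customer_service", ["tickets"], True)
assert classify(bot) is RiskClass.TRANSPARENCY
```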

Step 3: Establish transparency. Label all AI-generated content and AI interactions. It's the fastest, cheapest and most visible compliance measure.

Step 4: Secure high-risk systems. Only where the classification demands it, invest in risk management, data quality processes and conformity assessment.

Step 5: Review contracts. Contracts with AI providers must contain compliance commitments, audit rights and data provenance clauses. Standard contracts from 2023 no longer suffice.

What This Means for AI Tool Selection

EU compliance requirements are becoming a filter in the tooling market. Providers offering transparent model documentation, EU-compliant data handling and traceable audit logs will be preferred in B2B procurement. Buyers introducing tools without these properties in 2026 are accumulating technical debt that will be expensive in two years.

At nh labs, we run a short compliance triage at the start of every AI project: risk class, transparency obligation, data flows, audit needs. It saves significant effort later – both in legal discussions and in architecture rework.

Conclusion

The EU AI Act is less terrifying than many believe – and at the same time less trivial. Most companies don't need to perform a conformity assessment, but nearly all need to label their AI content and re-evaluate their suppliers. Anyone who proceeds in a structured way now – inventory, classify, make transparent – is well positioned. Those who wait until the first fine hits their industry will have only weeks to catch up. And weeks aren't enough for clean AI governance.