
EU AI Act: What Every Business Must Do Now — Even If You Only Use ChatGPT

EU AI Act: First obligations have been in force since February 2025. AI literacy, transparency, high-risk rules — what businesses face and what the Digital Omnibus changes.

You use ChatGPT at work? Microsoft Copilot? An AI tool for screening job applicants? Then the EU AI Act applies to you. Not someday — now.

Regulation (EU) 2024/1689 — better known as the EU AI Act — has been in force since August 2024. First obligations have applied since February 2025. The main deadline was set for August 2026 but is likely to be partially postponed through the so-called Digital Omnibus. Still: those who don't act now risk fines of up to €35 million or 7% of global annual turnover.

This article explains what the AI Act requires of your business — step by step, in plain language.

What is the EU AI Act?

The AI Act is the world's first comprehensive law regulating artificial intelligence. It applies to every business that develops, distributes, or deploys AI systems — in the EU or with effect in the EU. As an EU Regulation, it applies directly in all member states without the need for national transposition.

The core principle: the higher the risk an AI system poses to fundamental rights, health, and safety, the stricter the requirements. The AI Act distinguishes four risk levels:

  • Unacceptable risk: Banned. Social scoring, manipulative AI, emotion recognition in the workplace.
  • High risk: Strict requirements. AI in recruitment, credit scoring, performance monitoring.
  • Limited risk: Transparency obligations. Chatbots, deepfakes, AI-generated content.
  • Minimal risk: No special requirements. Spam filters, search suggestions, AI in video games.

Who is affected? The roles under the AI Act

Most businesses don't develop AI — they use it. The AI Act calls them deployers. And the obligations for deployers are real.

The AI Act distinguishes:

  • Providers: Those who develop an AI system and place it on the market. They bear the heaviest obligations.
  • Deployers: Those who use an AI system professionally — meaning almost every company using ChatGPT, Copilot, or other AI tools.
  • Importers/Distributors: Those who bring AI systems from outside the EU into the single market.

Key point: Even if you only use AI "on the side" — for drafting emails, translations, or data analysis — you are a deployer under the AI Act.

What already applies — since February 2025

1. AI Literacy Obligation (Art. 4)

Since 2 February 2025, all businesses using AI must ensure that their staff has a sufficient level of AI competence. This applies to everyone working with AI systems — from the CEO to the administrative assistant.

What "sufficient" means depends on:

  • The person's role (an HR manager using AI for recruiting needs more knowledge than someone using ChatGPT for emails)
  • The risk level of the system in use
  • The groups of people affected

Practical tip: Document which AI systems are in use in your company and who works with them. Conduct training — and document it. In the event of an audit, you must be able to demonstrate that you have taken measures.

2. Prohibited Practices (Art. 5)

Also since February 2025, the following AI practices are banned:

  • Manipulative AI: Systems deploying subliminal or deceptive techniques to influence behaviour
  • Social scoring: Evaluating individuals based on social behaviour or personality traits
  • Emotion recognition in the workplace and education (exception: medical or safety reasons)
  • Untargeted facial recognition: Building facial databases through scraping
  • Biometric categorisation by sensitive attributes (race, religion, sexual orientation)

Watch out: The workplace emotion recognition ban is broad. If you use a tool that measures employee mood or satisfaction via AI — that may already be prohibited.

What comes from August 2026

3. Transparency Obligations (Art. 50)

From 2 August 2026, businesses must:

  • Disclose AI interaction: If customers interact with an AI system (e.g. a chatbot on your website), they must be informed.
  • Label deepfakes: AI-generated or manipulated images, videos, and audio must be declared as such.
  • AI-generated text: If you publish AI-generated text intended to inform the public, you must disclose this — unless a human has editorially reviewed the content and takes responsibility for it.

4. High-Risk Obligations

AI systems in particularly sensitive areas face the strictest requirements. The most relevant high-risk areas for businesses:

  • HR and recruitment: AI for job ads, application filtering, candidate assessment, performance monitoring, promotion and termination decisions
  • Credit scoring: AI-based creditworthiness assessment
  • Insurance: AI for risk assessment and pricing in life and health insurance
  • Education: AI for admission decisions and exam evaluation

If you deploy such a system, you must among other things:

  • Ensure human oversight by competent individuals
  • Use the system in accordance with the provider's instructions
  • Retain automatically generated logs for at least 6 months
  • Inform workers before deployment (works council!)
  • Immediately inform the provider and authorities if risks arise
  • Conduct a data protection impact assessment where applicable

Digital Omnibus: Are the deadlines shifting?

Yes — likely. In November 2025, the European Commission proposed the Digital Omnibus package, which would postpone several deadlines:

  • High-risk obligations (Annex III): Instead of August 2026, likely December 2027
  • High-risk in products (Annex I): Postponed to August 2028
  • AI content labelling (Art. 50(2)): For existing systems until February 2027
  • SME reliefs extended to "small midcaps" (up to 750 employees)

Important: The Digital Omnibus has not been adopted yet. On 18 March 2026, the parliamentary committees IMCO and LIBE voted; trilogues with the Council are ongoing. Until the legislative process is complete, the original August 2026 deadline applies.

The wisest strategy: Prepare as if the August 2026 deadline will hold, while treating December 2027 as the more likely enforcement date for high-risk obligations.

What does this mean for your business specifically?

You use ChatGPT or Copilot internally

→ The AI literacy obligation applies now. Train your staff and document it. From August 2026: transparency required when customers interact with AI.

You use AI in recruitment

→ This is a high-risk system. Human oversight, logs, worker information — full deployer obligations. Check whether your vendor meets provider obligations (CE marking, EU database registration).

You have a chatbot on your website

→ From August 2026: users must know they're interacting with AI. A simple notice is sufficient.

You use AI for pay analysis

→ Double issue: AI Act (potentially high-risk under Annex III) and EU Pay Transparency Directive (transposition deadline: June 2026). Two legal frameworks, one action needed.

The penalties

The AI Act mirrors the GDPR on penalties — with even higher ceilings in some cases. For most businesses, the higher of the two values applies:

  • Prohibited practices: Up to €35m or 7% of global annual turnover
  • High-risk and transparency obligations: Up to €15m or 3% of global annual turnover
  • False information to authorities: Up to €7.5m or 1% of global annual turnover

SME rule: For small and medium-sized enterprises, the lower of the two values applies. An SME with €5m turnover would face a maximum of €150,000 for a transparency violation (3% of €5m) — not €15 million.
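The cap logic — fixed amount versus percentage of turnover, higher for most firms, lower for SMEs — can be sketched in a few lines. This is an illustration only (the function name and structure are ours, not part of the Act), but it reproduces the €150,000 example above:

```python
def fine_cap(fixed_cap_eur: float, pct: float, turnover_eur: float,
             sme: bool = False) -> float:
    """Illustrative AI Act fine ceiling: the fixed cap or the percentage
    of global annual turnover. Most firms face the higher of the two;
    for SMEs the lower value applies."""
    turnover_based = pct * turnover_eur
    return min(fixed_cap_eur, turnover_based) if sme else max(fixed_cap_eur, turnover_based)

# Transparency violation (up to €15m or 3%) for an SME with €5m turnover:
print(fine_cap(15_000_000, 0.03, 5_000_000, sme=True))  # 150000.0

# The same violation for a large company with €5m turnover:
print(fine_cap(15_000_000, 0.03, 5_000_000, sme=False))  # 15000000
```

The SME carve-out is what makes the difference: the same violation that caps at €150,000 for a small business caps at the full €15 million for a large one.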

Five steps you can take now

  1. AI inventory: Which AI systems do you use? List them all — including those used "only" by individual employees.
  2. Clarify roles: Are you a deployer? In most cases: yes. For each system, identify who the provider is.
  3. Determine risk class: Does a system fall into a high-risk area (HR, credit, insurance, education)? If so, stricter obligations apply.
  4. Train AI competence: Train your staff. Document the training. This is not optional — it has been mandatory since February 2025.
  5. Set up processes: Who is responsible for AI governance in your company? Who monitors deployment? Who documents?
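Steps 1 through 3 and 5 boil down to maintaining a structured inventory. A minimal sketch of what such a record could look like — field names and example entries are purely illustrative, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of an AI inventory; fields mirror steps 1-3 and 5."""
    system: str       # step 1: the tool in use
    provider: str     # step 2: who placed it on the market
    our_role: str     # step 2: "deployer" in most cases
    risk_class: str   # step 3: "high", "limited", or "minimal"
    owner: str        # step 5: internal person responsible

# Hypothetical example inventory:
inventory = [
    AISystemRecord("ChatGPT", "OpenAI", "deployer", "minimal", "IT lead"),
    AISystemRecord("CV screening tool", "VendorX", "deployer", "high", "Head of HR"),
]

# Systems triggering the stricter high-risk deployer obligations:
high_risk = [r.system for r in inventory if r.risk_class == "high"]
print(high_risk)  # ['CV screening tool']
```

Whether you keep this in a spreadsheet or a governance tool matters less than keeping it current: every new AI tool an employee adopts should land in the inventory before it lands in a workflow.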

Conclusion: Don't wait, prepare

The EU AI Act is not a future topic — it's the present. The AI literacy obligation is already in force. The prohibitions are already in force. And even if the high-risk deadlines shift through the Digital Omnibus: the risk classification of your systems doesn't change. Knowing where you stand is the first step — and you can take it today.

Want to know what the AI Act means specifically for your business? We help with the stocktake — structured, practical, understandable. Book a free initial call.


Disclaimer: The contents of this article are for general information purposes only and do not constitute legal advice. For a binding assessment of your individual situation, please consult a qualified legal professional.

Jens Druckenmüller, LL.M.

Entrepreneur & Independent Advisor

20 years of experience in boardrooms, due diligence and advisory. Today as an independent advisor based in Luxembourg — the topics change, but the standards never do.
