
The EU AI Act Explained – What It Means for Businesses

by Jordy Hartendorp
Created on: October 9, 2025
Last updated on: November 5, 2025
The European Union is introducing the world’s first comprehensive law regulating artificial intelligence: the AI Act. This groundbreaking legislation defines how AI systems can be developed, used, and sold within the EU. Its goal is to encourage innovation while protecting people’s rights and safety.

The AI Act doesn’t just apply to tech giants; it affects every company that uses AI. Whether you run a small business using chatbots, an e-commerce platform with product recommendations, or an educational tool that analyses student data, these rules apply to you.

TL;DR

  • The AI Act is the EU’s new law to regulate artificial intelligence.
  • It classifies AI systems into four categories: unacceptable, high, limited, and minimal risk.
  • High-risk AI (like in education, healthcare, or HR) faces strict documentation and transparency rules.
  • Businesses must ensure human oversight, data accuracy, and clear communication when AI is used.
  • Non-compliance can lead to fines up to €35 million or 7% of global revenue.
  • Sectors most affected: construction, education, SMBs, e-commerce, and sales.
  • Kleritt helps businesses prepare for compliance while keeping innovation moving.

What Is the EU AI Act?

The EU Artificial Intelligence Act is the first law designed specifically to regulate AI systems. While the GDPR focuses on data protection, the AI Act focuses on how AI behaves, makes decisions, and affects people.

It’s built on a risk-based framework, meaning that the higher the potential risk an AI system poses to people, the stricter the rules become.

The Four AI Risk Categories

1. Unacceptable risk

AI systems that threaten safety or human rights are banned.

Examples: social scoring, predictive policing, or manipulative behavioural algorithms.

2. High risk

AI used in areas such as healthcare, recruitment, finance, education, or law enforcement.

These systems must meet strict standards for transparency, documentation, human oversight, and testing.

3. Limited risk

Systems that interact directly with people (like chatbots) must disclose that the user is talking to an AI.

4. Minimal risk

Low-impact applications such as simple automation or internal data analysis. No strict legal obligations, but ethical best practices are encouraged.
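The four tiers above can be thought of as a simple lookup from use case to obligations. The sketch below is purely illustrative: the example use cases and their tier assignments are drawn from this article, not from any official classification, and a real assessment always depends on context.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict documentation and oversight
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # no strict legal obligations

# Hypothetical examples taken from this article, not an official list.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "student grading": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "internal data analysis": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier for an example use case."""
    return EXAMPLE_USE_CASES[use_case]
```

In practice the tier is determined by the system’s actual purpose and context of use, so a mapping like this would only be a starting point for an internal inventory.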

What the AI Act Means for Businesses

The AI Act introduces new legal obligations for companies that develop or use AI. It requires organisations to ensure their AI systems are safe, transparent, and accountable before they are placed on the market.

Key business requirements:

  • Risk assessments and documentation: companies must record how their AI works, what data is used, and what safeguards are in place.
  • Human oversight: automated decisions that affect people must always include human review.
  • Transparency: users must be informed when they are interacting with AI.
  • Data governance: datasets must be accurate, representative, and free from bias.
  • Certification (CE marking): high-risk AI systems require EU compliance certification before they can be used commercially.

Non-compliance can result in fines up to €35 million or 7% of global annual turnover, depending on the severity of the breach.

Sector-by-Sector Impact

The AI Act’s impact will vary depending on the industry, but no sector is exempt. Below is an overview of how it affects key business areas.

1. Construction

AI is already being used for project management, safety monitoring, and risk prediction. Under the AI Act, construction companies must ensure that these tools are traceable, auditable, and explainable.

For instance, if an AI predicts project delays or safety risks, the decision logic must be documented and reviewable.

Impact:

  • Construction software providers must validate the accuracy of their algorithms.
  • Companies must monitor how AI uses sensor and drone data, especially when it involves personal information.

2. Education

AI in education, from student performance analytics to adaptive learning systems, falls into the high-risk category because it can directly influence educational opportunities.

Requirements:

  • Full transparency about how the AI evaluates or grades students.
  • Human intervention required in decision-making.
  • Student data must not be reused for unrelated purposes without explicit consent.

Impact:

Educational institutions and edtech providers will need to document every AI decision and ensure fairness in automated assessments.

3. Small and Medium-Sized Businesses (SMBs)

For SMBs, the AI Act brings new compliance responsibilities but also new opportunities. The EU provides guidance, financial support, and “regulatory sandboxes”: controlled environments to test AI safely.

Benefits:

  • Increased trust from customers through transparent AI use.
  • Easier access to European markets with certified, compliant products.

Challenges:

  • Additional documentation and testing requirements may strain smaller teams.
  • The need for legal and technical understanding of AI compliance.

4. E-commerce

AI powers most modern e-commerce, from recommendation engines to dynamic pricing and chatbots. These typically fall under limited risk, as long as they’re transparent.

Requirements:

  • Customers must be informed when they interact with an AI (e.g., chat support).
  • No discriminatory practices in AI-based pricing or recommendations.
  • All customer data must comply with GDPR standards.

Impact:

Online retailers will need to review and audit algorithms to ensure fair, explainable, and bias-free results.

5. Sales and Marketing

AI is transforming sales automation, lead scoring, and customer insights. The AI Act requires businesses to disclose when AI influences communication or decision-making.

Implications:

  • Transparency in AI-generated content (ads, emails, recommendations).
  • Restrictions on manipulative AI, like emotion recognition or behavioural targeting.
  • Clear policies on ethical data usage.

For marketing teams, this shift encourages trust-based automation, creating value without crossing ethical or privacy boundaries.

How Companies Can Prepare

To stay compliant and competitive under the AI Act, businesses should start preparing now.

Five practical steps:

  1. Map your AI systems: Identify where and how AI is used in your organisation.
  2. Assess data quality: Use clean, representative datasets.
  3. Document decision logic: Be able to explain how AI reaches conclusions.
  4. Perform risk assessments: Especially for high-risk AI applications.
  5. Work with trusted partners: Choose vendors and consultants that understand EU compliance.
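The first step, mapping your AI systems, can start as a simple inventory that also tracks steps 3 and 4. The sketch below is a hypothetical starting point; the entries, field names, and tier labels are illustrative, not a compliance tool.

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row in an AI inventory (step 1: map your AI systems)."""
    name: str
    purpose: str
    risk_tier: str               # "unacceptable" | "high" | "limited" | "minimal"
    documented: bool = False     # step 3: decision logic documented?
    risk_assessed: bool = False  # step 4: risk assessment performed?

def compliance_gaps(inventory: list[AISystemEntry]) -> list[str]:
    """Flag high-risk systems still missing documentation or a risk assessment."""
    return [
        entry.name
        for entry in inventory
        if entry.risk_tier == "high"
        and not (entry.documented and entry.risk_assessed)
    ]

# Hypothetical inventory for illustration.
inventory = [
    AISystemEntry("support-chatbot", "customer service", "limited", documented=True),
    AISystemEntry("lead-scoring", "sales prioritisation", "high"),
]
```

Even a lightweight register like this makes the remaining work visible: here, the hypothetical high-risk `lead-scoring` system is flagged until its documentation and risk assessment are complete.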

How Kleritt Helps Businesses Prepare for the AI Act

At Kleritt, we help companies, especially SMBs, navigate the AI Act and implement responsible AI strategies.

We offer:

  • AI Implementation: from concept to compliant execution.
  • Compliance audits: evaluation of AI systems under the AI Act and GDPR.
  • Risk management & documentation: ensuring your AI meets EU transparency standards.
  • Integration support: linking AI tools with your business workflows safely and efficiently.

The AI Act isn’t designed to limit innovation; it’s meant to build trust and ensure accountability. With the right preparation, your business can meet all legal requirements and stay ahead in the new era of responsible AI.