EU AI Act: what it means for Alsatian SMEs


Philippe Braun · 3 min read

The EU AI Act has been making the rounds in professional conversations. This European regulation on artificial intelligence, adopted in 2024, is now progressively taking effect — and it applies to every company that uses or develops AI tools on European soil.

For a business owner running an SME in Alsace, the first reaction is often: “Another regulation… does this actually apply to me?”

Short answer: probably less than you fear. But it’s worth checking.

What the AI Act regulates, in plain terms

The core principle is straightforward: the more likely an AI system is to cause serious harm, the stricter the requirements.

The AI Act classifies AI applications into four categories:

  • Unacceptable risk — Prohibited. This covers things like mass social scoring by public authorities, subliminal behavioral manipulation, and real-time facial recognition in public spaces (with limited exceptions). In practice, none of this concerns a typical SME.
  • High risk — Heavily regulated. This is where the most significant obligations apply.
  • Limited risk — Some transparency requirements, particularly for chatbots (users must know they’re talking to a machine).
  • Minimal risk — No specific obligations. The vast majority of everyday applications fall here.

What counts as “high risk”

This is the category to pay attention to. High-risk applications include, among others:

  • Recruitment and candidate selection tools (automated CV sorting, applicant scoring)
  • Employee evaluation systems (automated performance scoring without human oversight)
  • Access to certain essential services (credit, insurance, housing)
  • Some critical infrastructure and equipment

In practical terms, for an SME with 50 to 200 employees in Alsace: if you use an HR tool that automatically sorts applications or rates performance without human review, you may fall within the high-risk scope.

Common uses that aren’t a problem

The vast majority of AI applications in a typical SME fall into the “minimal risk” or “limited risk” categories:

  • A chatbot on your website — limited risk. The main obligation: inform users they’re interacting with an automated system. A clear mention at the start of the conversation is straightforward to implement.
  • A text generation tool (summaries, emails, marketing content) — minimal risk. No specific obligations.
  • An assistant for accounting or invoicing — minimal risk in most cases.
  • Commercial data analysis tools — minimal risk.

If you’re using ChatGPT to draft emails or Copilot to summarize meetings, the AI Act doesn’t impose anything new on you.

What to do right now

No need to panic, but a few useful habits:

  1. Take stock. List the AI tools you’re currently using — including those built into your CRM, ERP, or HR software. You may be surprised by what you’re already using without calling it “AI.”
  2. Check the risk category. For each tool, ask: does it make decisions that significantly affect people (hiring, credit, safety)? If so, look more closely.
  3. Make sure your chatbots are transparent. If you have a virtual assistant on your site, a clear disclosure is generally sufficient.
  4. Get specialist advice for borderline cases. If a tool is involved in automated HR decisions or operates in a regulated sector, professional guidance is worthwhile.

The takeaway

The AI Act is fundamentally good news: it establishes clear rules in a domain that badly needed them. And for most SMEs in Alsace, it doesn’t call into question the tools already in use.

The real task is simply to have a clear picture of what you’re using — and to make that choice consciously, not by default. That’s good practice regardless of regulation.