The AI Act: What It Actually Means for Mid-Market Companies
The EU AI Act entered into force on 1 August 2024. Its first hard obligations, the prohibitions, took effect in February 2025; most of the high-risk obligations apply from August 2026, with the remainder landing in 2027. If you run technology or operations at a company with 100 to 500 employees in France or Italy, this is not a regulation you can delegate to legal and forget about. It will affect what you can ship, when you can ship it, and what documentation you must hold before a system goes live.
This post covers what the Act actually requires, which of your AI systems are in scope, and the concrete steps a 200-person company needs to take this year.
What the AI Act Actually Is
The AI Act is a product-safety regulation. It follows the risk-based logic of CE marking: the higher the potential harm of your AI system, the more obligations you carry before deployment. The Act does not ban AI. It does not require you to stop building. It requires you to classify what you are building, document it properly, and in some cases run third-party conformity assessments before going live. Non-compliance carries fines of up to 35 million euros or 7% of global annual turnover, whichever is higher, for prohibited practices, with lower tiers (up to 15 million euros or 3%) for other violations.
Which AI Systems Are High-Risk and Which Are Not
The Act draws a line between high-risk and everything else.
Prohibited outright (as of February 2025):
- Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
- Social scoring, by public and private actors alike
- AI systems that manipulate behaviour through subliminal techniques
- Exploitation of vulnerabilities based on age, disability, or a specific social or economic situation
- Emotion recognition in workplaces and educational institutions (with narrow medical and safety exceptions)
Article 5 lists a few more narrow practices, such as untargeted scraping of facial images to build recognition databases. If none of your systems do any of these things, you are not in the prohibited category.
High-risk AI (stricter obligations): These include AI used in recruitment and HR decisions, credit scoring, education or training assessment, safety components of critical infrastructure, and AI tools used by law enforcement or migration authorities. The full list is in Annex III of the regulation.
For most mid-market companies in Lyon, Milan, or Paris, the practical question is: are you using AI to screen CVs, score candidates, assess employee performance, or make credit or insurance decisions? If yes, those systems are high-risk.
Limited-risk AI: Chatbots, content-generation tools, and other systems that interact directly with people or produce synthetic media. These carry transparency obligations only. You must tell users they are interacting with AI and label AI-generated content as such; a minimal code sketch follows below. That is generally achievable in a sprint.
Minimal-risk AI: Spam filters, recommendation engines, basic ML features in your product, AI-assisted analytics dashboards. No new obligations beyond good engineering practice.
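To make the limited-risk transparency duty concrete, here is a minimal sketch in Python of a chatbot disclosing itself on the first turn of a session. The function name, the session flag, and the disclosure wording are all illustrative assumptions; the Act requires that users be informed, not this particular implementation.

```python
# A minimal sketch of the limited-risk disclosure duty: tell users they
# are talking to an AI on the first turn. The wording, function name,
# and session flag are illustrative, not prescribed by the Act.

AI_DISCLOSURE = "You are chatting with an AI assistant."

def wrap_first_reply(reply: str, is_first_turn: bool) -> str:
    """Prepend the AI disclosure to the first reply of a session."""
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(wrap_first_reply("Hello! How can I help?", is_first_turn=True))
```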
Practical Checklist: What a 200-Person Company Must Do in 2026
This is not legal advice. It is what we have seen work during deployments. Get your lawyers to sign off on the specifics for your sector.
1. Take inventory. Map every AI system you use or build. Internal tools count. Third-party models you integrate via API count. Assign each one a risk tier using Annex III (a minimal sketch of such an inventory follows this list).
2. Check your GDPR posture first. The AI Act and GDPR overlap heavily. If you have a clean data governance structure and a lawful basis for processing, you are already halfway to AI Act readiness on the data side.
3. For high-risk systems, start the documentation now. The Act requires a technical file that covers the system's intended purpose, training data sources, accuracy metrics, known limitations, and human oversight mechanisms. This takes time to produce properly, and the high-risk obligations bite from August 2026. Starting in Q4 2026 is too late.
4. Establish a human oversight process. High-risk AI decisions cannot be fully automated. You need a documented process for human review of outputs, especially where the decision affects employees, customers, or financial eligibility.
5. Assign accountability. Someone in your organisation must own AI Act compliance. This is not a shared responsibility. Designate a person, give them scope and authority, and document it.
6. Review vendor contracts. If you use a SaaS product or API that embeds AI in a high-risk category, your supplier has obligations too. Check what your contracts say. If they are silent on AI Act compliance, update them.
7. Train the relevant teams. Developers, product managers, and HR teams using AI tools need to understand what oversight means in practice. The Act's AI-literacy obligation (Article 4) has applied since February 2025, so this is not optional, but one-hour training sessions suffice for most teams. This is not a deep learning curriculum.
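For step 1, the inventory does not need special tooling; a spreadsheet works, and so does a small structured record in code. Here is a minimal sketch in Python. The RiskTier values mirror the Act's categories; the systems, vendors, and owners are invented for illustration.

```python
# A minimal sketch of an AI-system inventory (step 1). The tiers mirror
# the Act's categories; everything else here is invented for illustration.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices
    HIGH = "high"               # Annex III use cases
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"         # no new obligations

@dataclass
class AISystem:
    name: str
    vendor: str              # "internal" for systems you build
    intended_purpose: str
    tier: RiskTier
    owner: str               # the accountable person from step 5

inventory = [
    AISystem("cv-screener", "ExampleHR SaaS",
             "Rank incoming CVs for recruiters", RiskTier.HIGH, "j.dupont"),
    AISystem("support-bot", "internal",
             "Answer customer FAQs", RiskTier.LIMITED, "m.rossi"),
    AISystem("spam-filter", "internal",
             "Filter inbound email", RiskTier.MINIMAL, "j.dupont"),
]

high_risk = [s for s in inventory if s.tier is RiskTier.HIGH]
print(f"{len(high_risk)} high-risk system(s) need a technical file")
```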
What We Have Seen During Karven Deployments in France and Italy
We have worked with operations and technology teams across Lyon, Paris, and Milan to deploy AI systems inside regulated environments. A few patterns stand out.
The companies that move fastest are the ones that already have a data governance function. When data ownership is clear and processing logs exist, plugging in an AI Act compliance layer takes weeks, not months. Where that governance is missing, you end up doing two projects at once.
The biggest friction we see is in HR tech. Several clients had deployed AI-assisted recruitment tools built on top of third-party models. Those tools sat in a grey zone: high-risk under the Act, but built and managed by a vendor with no clear compliance posture. Untangling that took more time than the original deployment.
The companies that blocked deployment entirely were the ones waiting for perfect clarity from regulators. The Act has ambiguities. Guidelines are still being published by the AI Office. But waiting for certainty is the wrong move. The framework is clear enough to act on. Companies that started their compliance work in 2025 are now deploying with confidence. Those that waited are now scrambling.
The efficiency gains are real. We have seen teams that went through a structured compliance process ship AI features that stuck, because they had documented the scope and limits of their systems from the start. Across those deployments, we tracked an average 35% efficiency gain in the targeted workflows. The companies that rushed past compliance often had to pull features back after deployment. That is a much higher cost.
The Compliance-First Path to Production: How to Move Fast and Stay Clean
The goal is not to treat compliance as a gate at the end of your delivery pipeline. That model breaks under the Act, because the documentation requirements have to be built in from the start.
The approach that works is this:
At the start of any new AI project, classify the system. Write down the intended purpose, the input data, and the decision the system will make or support. This takes an afternoon, not a week.
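A minimal sketch of that kickoff record, assuming you keep it as structured data next to the project brief; every value here is invented:

```python
# A hypothetical project-kickoff classification record. The fields map
# to the three questions above: purpose, inputs, and the decision the
# system supports. "high" follows from Annex III (HR assessment).
kickoff_record = {
    "system": "performance-review-assistant",
    "intended_purpose": "Draft performance summaries for manager review",
    "input_data": ["self-assessments", "peer feedback", "goal tracking"],
    "decision_supported": "Annual performance rating (a manager decides)",
    "risk_tier": "high",
}
```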
During development, document as you build. Model cards, data provenance, accuracy benchmarks. These are not novel artifacts for engineers building production systems. They are good engineering practice that now has a regulatory basis.
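As a sketch of what documenting as you build can look like, here is a hypothetical model card kept as structured data, so the technical file can be assembled from it rather than written by hand at the end. The field names are our own, not the Annex IV schema:

```python
import json

# A hypothetical model card for the CV screener from the inventory
# sketch above. Keeping it as data means the technical file stays
# current as the system changes.
model_card = {
    "system": "cv-screener",
    "intended_purpose": "Rank incoming CVs for recruiter review",
    "training_data": {
        "sources": ["internal ATS exports, 2019-2024"],
        "known_gaps": ["few samples for career changers"],
    },
    "metrics": {"precision_at_10": 0.81, "evaluated_on": "holdout-2024Q4"},
    "limitations": ["not validated on non-EU CV formats"],
    "human_oversight": "A recruiter reviews every ranked shortlist",
}

print(json.dumps(model_card, indent=2))
```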
Before deployment, run a review against the relevant obligations. If the system is high-risk, this review involves your compliance or legal function. If it is limited-risk, the review is an hour with your product and engineering leads.
After deployment, monitor. The Act requires post-market monitoring for high-risk systems. Build logging from day one. It is far cheaper than retrofitting.
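A minimal sketch of day-one logging, assuming one structured record per AI-assisted decision. The field names are assumptions. Two details matter: the log stores a reference to the input rather than raw personal data, which keeps the GDPR side clean, and it records who performed the human review from step 4.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

logger = logging.getLogger("ai_decisions")
logging.basicConfig(level=logging.INFO)

def log_decision(system: str, input_ref: str, output: str,
                 reviewed_by: Optional[str]) -> None:
    """Emit one structured record per AI-assisted decision."""
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input_ref": input_ref,      # a pointer, not raw personal data
        "output": output,
        "reviewed_by": reviewed_by,  # None means no human review yet
    }))

log_decision("cv-screener", "candidate:4812", "shortlisted", "j.dupont")
```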
This is not a slow path. Companies that follow this process deploy faster on the second, third, and fourth project because the team has the framework internalized. The first project takes longer. Every one after that goes faster.
Next Step: Book a 1-Hour Compliance Scoping Call
If you are a CTO, VP of Engineering, or operations leader in France or Italy and you are not sure where your systems sit under the Act, the fastest next step is a scoping call.
In one hour, we will map your current AI systems against the Act's risk tiers, identify the two or three things that need immediate attention, and give you a clear sequence of actions for 2026.
No slides. No generic framework. Specific to your stack and your sector.