
EU AI Act August 2026 Deadline: What Mid-Market Companies Need to Do Now

by Karven · 5 min read

On August 2, 2026, the full compliance framework for high-risk AI systems under the EU AI Act becomes enforceable. Fines reach €15 million or 3% of global annual turnover. Five months out, most mid-market companies aren't ready. Many don't know if they're in scope.

Here's the honest state of play.


What Actually Changes on August 2

The EU AI Act has been rolling out in phases since it entered into force in August 2024. The first wave, covering prohibited AI practices, became enforceable in February 2025. If your company uses AI for social scoring or real-time public biometric identification, those prohibitions are already live.

August 2, 2026 is when the obligations for high-risk AI systems take effect. This is where most mid-market companies sit.

High-risk AI systems, as defined in Annex III of the regulation, must meet a specific set of requirements by that date:

  • Conformity assessments completed and documented
  • Technical documentation available and maintained
  • Logging systems operational so the AI's decisions are traceable
  • Human oversight mechanisms in place and assigned to a responsible person
  • Registration in the EU's AI database for certain categories
  • Staff training on the systems they operate

If you're a deployer (meaning you use AI built by a third party), your obligations are somewhat narrower than a provider's, but still substantial. You're responsible for ensuring that the system is used according to the provider's instructions, that logs are maintained, that a responsible person is designated, and that deployment conditions meet the AI Act's requirements.

Transparency obligations for AI systems interacting with people also kick in fully on August 2. Customer-facing chatbots and AI-generated content tools need to identify themselves as AI.


Who's Actually at Risk

Any mid-market European company using AI for HR or finance decisions. That's the short answer.

Specifically, if your company is between 50 and 500 employees and uses AI in any of the following ways, you're in scope for high-risk classification:

HR and employment: CV screening, candidate ranking, employee performance monitoring, promotion recommendations, scheduling based on productivity data. The ATS platforms most mid-market companies use have been adding these features for two years. Many are enabled by default.

Credit and financial decisions: Creditworthiness scoring, insurance risk assessment, loan underwriting. If your company offers any form of credit or insurance, or uses AI to make decisions about customer accounts, this covers you.

Education and training: Exam evaluation tools, student assessment systems. Less common in mid-market, but worth checking if you run training programs.

Access to essential services: If you use AI to determine who gets access to a service, benefit, or opportunity, and that decision has material consequences for individuals, the AI Act likely classifies it as high-risk.

The tricky part is that many companies don't know they're using AI in these categories. A feature embedded in the HR platform they've been using since 2021 still triggers the classification question.

Geography matters here. The AI Act applies to AI systems used in the EU, regardless of where the company is headquartered. A UK company with EU customers using AI for credit decisions is in scope. A US company providing HR screening tools used by a French employer is in scope as the provider.


What Compliance Actually Requires

This is where people get lost in legal summaries. Let's be specific about what you actually need to do.

Step 1: Know what you have. Run an AI inventory. List every tool that uses machine learning or automated decision-making affecting people. Include the AI features embedded in platforms you already use. This step takes two to four days with the right people in the room. Don't skip it. Everything else depends on it.

Step 2: Classify each tool. For each item on your inventory, ask: does this appear in Annex III? If yes, treat it as high-risk. If you're not sure, treat it as high-risk until you can document why it isn't.
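The default-to-high-risk rule in Step 2 can be sketched as a simple check. This is an illustrative sketch only: the category labels below are shorthand assumptions for this article's examples, not Annex III's legal definitions, and no code replaces legal review.

```python
from dataclasses import dataclass
from typing import Optional

# Simplified stand-ins for the Annex III categories discussed above.
# These names are assumptions for illustration, not legal terms.
ANNEX_III_CATEGORIES = {
    "hr_employment",       # CV screening, ranking, performance monitoring
    "credit_insurance",    # creditworthiness scoring, insurance risk
    "education_training",  # exam evaluation, student assessment
    "essential_services",  # access decisions with material consequences
}

@dataclass
class InventoryItem:
    name: str
    category: Optional[str] = None  # None = classification not yet determined

def classify(item: InventoryItem) -> str:
    """Step 2's rule: Annex III categories are high-risk, and anything
    unclassified is treated as high-risk until documented otherwise."""
    if item.category is None or item.category in ANNEX_III_CATEGORIES:
        return "high-risk"
    return "not-high-risk (document the rationale)"

print(classify(InventoryItem("ATS CV screener", "hr_employment")))  # high-risk
print(classify(InventoryItem("Embedded chatbot")))                  # high-risk
```

The point of the sketch is the default: an item with no classification on record comes out high-risk, which is exactly the posture the step recommends.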

Step 3: Close the vendor gaps. For every high-risk tool, you're entitled to request technical documentation and conformity assessment evidence from the vendor. If they can't provide it, they're out of compliance as a provider, and you have a procurement decision to make. Also check that Data Processing Agreements are current and cover the specific AI processing activities happening today.

Step 4: Designate oversight. Assign a responsible person for each high-risk system. That person must understand the system's capabilities and limitations, have the authority to override or pause it, and be on record as the responsible operator.

Step 5: Document. Build an AI register, a central record of every AI system, its classification, the responsible person, and the status of compliance activities. It can be a spreadsheet. What matters is that it exists and is current.
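A minimal version of that register can be sketched as a CSV. The column names, vendor, and dates below are illustrative assumptions, not anything the AI Act prescribes; a shared spreadsheet with the same columns works equally well.

```python
import csv

# Illustrative register schema; adapt the columns to your own needs.
FIELDS = ["system", "vendor", "classification",
          "responsible_person", "conformity_evidence", "last_reviewed"]

rows = [
    {"system": "ATS CV screening",
     "vendor": "ExampleHR",  # hypothetical vendor
     "classification": "high-risk",
     "responsible_person": "Head of HR",
     "conformity_evidence": "requested 2026-03-01",
     "last_reviewed": "2026-03-10"},
]

with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

What matters is not the tooling but that each row names a system, its classification, a responsible person, and the current status of the evidence you've collected.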

Step 6: Train your team. Article 4 of the AI Act requires that deployers ensure staff operating AI systems have sufficient AI literacy. Run a documented training session. Keep records of who attended and when.

None of this requires a legal team or an enterprise compliance budget. It requires a few months of structured effort and someone with the authority to drive it.


What Happens If You Miss It

Some realism here: August 2, 2026 is not the day that enforcement actions start landing en masse. The national competent authorities that EU Member States were required to designate by August 2025 are newly operational, and enforcement bodies typically work through major cases before pursuing mid-market companies.

That said, the consequences of inaction are real and compound over time.

Enforcement risk exists and will grow. The GDPR story is instructive. Enforcement was slow in 2018-2020. By 2022-2024, major fines against mid-market companies were routine. The AI Act will follow the same trajectory. Companies that are caught unprepared in 2027 or 2028 will wish they'd acted in 2026.

Customer and procurement requirements are already tightening. Enterprise customers and public sector procurement processes are starting to require documented AI compliance as a condition of contract. If you sell to larger companies or government entities in Europe, this will affect you regardless of whether regulators knock.

Vendor pressure compounds the problem. Your vendors are scrambling to get their compliance in order. Some will start requiring deployers to sign AI Act-specific addenda. Others may withdraw high-risk features from markets they can't support compliantly.

The insurance question is emerging. Several professional liability insurers have started asking about AI governance in renewal questionnaires. Documented compliance programs are becoming a factor in coverage terms, not just regulatory risk.

The fine structure: up to €15 million or 3% of global annual turnover for violations of high-risk AI system obligations. For a company with €50 million in revenue, that's up to €1.5 million. The more serious cases involving prohibited practices carry up to €35 million or 7% of turnover.
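The arithmetic above can be checked directly. This is illustrative only, using the percentage caps as quoted in this article; it is not legal advice.

```python
def pct_of_turnover(turnover_eur: float, pct: float) -> float:
    """Fine exposure as a simple percentage of global annual turnover."""
    return turnover_eur * pct / 100

turnover = 50_000_000  # the €50M revenue example above

print(pct_of_turnover(turnover, 3))  # 1500000.0 -> €1.5M (high-risk obligations)
print(pct_of_turnover(turnover, 7))  # 3500000.0 -> €3.5M (prohibited practices)
```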


The Karven Compliance Sprint

Karven has a structured, time-boxed compliance sprint for mid-market companies that need to close the gap before August. It covers the AI inventory, risk classification, vendor gap assessment, and documentation package in about six weeks, with a fixed scope and a clear deliverable.

If you're starting from scratch, six weeks is enough to understand your exposure and have a documented compliance program in place. Some companies just need the inventory and a second opinion on what they find. Others need help from discovery through implementation.

Talk to us at karven.ai/contact. The calendar fills up fast as August approaches.

Five months sounds like plenty of time. Once you account for vendor response times, internal sign-off processes, procurement cycles, and the reality that the people who need to be in the room are already busy, it isn't.

Start the inventory now.
