
GDPR and EU AI Act Compliance Checklist for European Mid-Market Companies

by Karven · 8 min read

European companies have been navigating GDPR since 2018. Most have a process for it: a DPO, a set of policies, maybe an annual training. What many haven't done is revisit those foundations in light of AI. The EU AI Act changes the picture significantly, and 2026 is the year the compliance pressure becomes real. If you're an operations director or CEO at a company of 50 to 500 employees deploying AI tools, or even just evaluating them, this article is a working reference for what you need to check.


1. Why 2026 Is the Year This Matters

The EU AI Act entered into force on 1 August 2024, but its obligations apply in stages. The first tranche — prohibitions on unacceptable-risk AI systems — became enforceable on 2 February 2025. Penalties for those violations are already live: up to €35 million or 7% of global annual turnover, whichever is higher. If your company uses AI for social scoring, real-time biometric identification in public spaces, or manipulative subliminal techniques, you're already in scope.

The next major deadline is 2 August 2026, when the full compliance framework for high-risk AI systems kicks in. This covers a wide range of applications common in mid-market companies: AI-assisted recruitment, employee performance evaluation, credit and insurance scoring, and access to essential services. Companies that haven't completed their compliance groundwork by then face enforcement from newly designated national competent authorities, which Member States were required to nominate by August 2025.

Alongside the AI Act, GDPR enforcement on AI-related processing has been accelerating. In September 2024, the Dutch Data Protection Authority fined Clearview AI €30.5 million for processing biometric data without a lawful basis. The French CNIL had issued a €20 million fine against the same company two years earlier. In December 2024, the European Data Protection Board (EDPB) issued Opinion 28/2024, clarifying that AI models trained on personal data cannot, in most cases, be considered anonymous and therefore remain subject to the GDPR, a statement with direct implications for any company using or building AI systems that process employee, customer, or prospect data.

The convergence of these two regimes — a maturing GDPR enforcement posture and a new AI-specific regulation with teeth — means 2026 is the year mid-market companies need to move from awareness to action.


2. Two Frameworks, One Problem

It helps to be clear about what each law covers, because they're often conflated.

GDPR governs the processing of personal data. It applies whenever your AI system touches data about identified or identifiable individuals. That's most operational AI use cases: customer analytics, employee scheduling, chatbots that retain conversation history, fraud detection, sentiment analysis on customer feedback. GDPR requires a lawful basis for processing, transparency toward data subjects, purpose limitation, and appropriate security measures. When AI is involved, it adds specific obligations around automated decision-making (Article 22) and Data Protection Impact Assessments (Article 35).

The EU AI Act governs AI systems by risk level, regardless of whether personal data is involved. Its primary concern is the potential harm an AI system can cause — to individuals, to fundamental rights, to safety. It imposes requirements on providers (those who develop or place AI systems on the market) and on deployers (those who use AI systems in a professional context). If your company is buying an AI tool and deploying it internally — for HR, finance, customer service — you're a deployer, and you have obligations.

The overlap is substantial. Most real-world AI deployments in mid-market companies involve personal data and carry at least some risk level under the AI Act. A recruitment screening tool, for example, is high-risk under the AI Act (Annex III, employment category) and processes personal data, often including sensitive data, under GDPR. You can't address one without the other.

The CNIL's 2025 guidance on AI and GDPR compliance, building on EDPB Opinion 28/2024, makes clear that when deploying AI that processes personal data, companies must conduct a DPIA, define a lawful basis before training or deployment, and build data minimisation into the design. These aren't new obligations — but AI amplifies the stakes of getting them wrong.


3. AI Act Risk Tiers: A Plain-English Guide

The AI Act classifies AI systems into four risk tiers. Understanding which tier applies to your use case is the first practical step in any compliance project.

Unacceptable risk — prohibited. These are AI applications the EU has banned outright because the potential harm is considered incompatible with fundamental rights. They include AI systems that manipulate individuals subliminally, exploit vulnerabilities based on age or disability, perform real-time remote biometric identification in public spaces (with narrow exceptions), and social scoring. For most mid-market companies, none of these are in play, but it's worth confirming that your AI tools don't include any of these features in their broader product.

High risk — full compliance obligations. This is where most of the AI Act's substance lies, and where mid-market companies are most likely to find themselves in scope. High-risk systems include AI used in:

  • Recruitment and HR: CV screening tools, candidate ranking systems, performance monitoring, promotion recommendations
  • Credit and insurance: creditworthiness assessments, risk scoring, insurance underwriting
  • Education: exam evaluation, student assessment, dropout prediction
  • Access to essential services: eligibility screening for benefits, public services

If your company uses an AI tool for any of these purposes, the full requirements of the AI Act apply from 2 August 2026: conformity assessments, technical documentation, human oversight mechanisms, logging, and registration in the EU database.

Limited risk — transparency obligations only. AI systems that interact with humans — chatbots, AI-generated content, deepfake tools — must disclose that they're AI. This is primarily a transparency requirement. If you've deployed a customer-facing chatbot, you need to ensure it identifies itself as AI.

Minimal risk — no specific obligations. The vast majority of AI tools fall here: spam filters, recommendation engines, basic automation. No specific AI Act compliance steps are required, though GDPR still applies if personal data is processed.
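
To make that triage concrete, here's a minimal Python sketch of a first-pass classifier. The category labels are simplified paraphrases of Annex III, not the legal text, and the mapping is illustrative only; a real classification has to be made against the Regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "full compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Simplified paraphrases of Annex III categories -- not the legal text.
HIGH_RISK_USE_CASES = {
    "cv_screening", "candidate_ranking", "performance_monitoring",
    "credit_scoring", "insurance_underwriting",
    "exam_evaluation", "benefits_eligibility",
}

# Systems that interact directly with people carry transparency duties.
LIMITED_RISK_USE_CASES = {"customer_chatbot", "ai_generated_content"}

def triage(use_case: str) -> RiskTier:
    """First-pass triage of one AI use case; defaults to MINIMAL."""
    if use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USE_CASES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("cv_screening").value)   # full compliance obligations
print(triage("spam_filter").value)    # no specific obligations
```

The useful part isn't the code; it's the discipline of forcing every tool in your inventory through an explicit, documented mapping rather than an intuition.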


4. Practical Audit Checklist

Work through each item methodically. Not every one will apply to your company, but each should be consciously evaluated rather than assumed away.

  1. List every AI system in use. Include third-party tools embedded in software you already have: AI features inside your CRM, HR platform, email tool, or analytics stack. You can't manage what you can't see. This inventory step typically takes two to four days with the right stakeholders in the room. A sketch of what each inventory record might capture follows this list.
  2. Classify each system by AI Act risk tier. For each tool on your list, ask: does it appear in Annex III of the AI Act? The categories are employment, credit, education, essential services, biometrics, critical infrastructure, law enforcement, migration, and justice. If yes, treat it as high-risk until proven otherwise.
  3. Confirm your role: provider or deployer? Most mid-market companies are deployers — they use AI tools built by others. Your obligations as a deployer differ from a provider's, but for high-risk systems they're still substantial. Know the distinction before you start drafting policies.
  4. Check for a lawful basis under GDPR for each AI processing activity. Legitimate interest requires a balancing test. Consent must be specific, informed, and withdrawable. "It's in the contract" is not always sufficient. Where the basis is unclear, treat it as a gap.
  5. Update your Records of Processing Activities. Your RoPA should reflect AI-specific processing: categories of data involved, retention periods, third-party processors, and whether automated decisions are being made. Most RoPAs written before 2024 don't capture this adequately.
  6. Conduct a DPIA for high-risk or sensitive processing. Any AI processing that is likely to result in high risk to individuals requires a Data Protection Impact Assessment. For high-risk AI systems under the AI Act, a DPIA is effectively mandatory regardless of scale.
  7. Audit your vendor contracts. For each AI vendor processing personal data on your behalf, confirm a Data Processing Agreement is in place and review the sub-processor chain. For high-risk AI systems, request the technical documentation and conformity assessment evidence. If a vendor can't provide these, they're out of compliance as a provider — which is a risk for you as the deployer.
  8. Check employee transparency obligations. If AI is used in decisions that affect employees — performance evaluation, scheduling, monitoring — your Article 13/14 privacy notices need to say so. Several EU jurisdictions also require consultation with works councils before introducing AI tools that affect working conditions.
  9. Confirm human oversight capability. For every high-risk AI system, verify that someone in your organisation can pause, override, or stop the system's outputs. If the vendor doesn't provide that capability, that's a compliance gap.
  10. Document your classification decisions. If you've determined a system is not high-risk, write down why. Regulators will scrutinise this reasoning if an enforcement action ever arises. A one-page memo with the analysis is far better than nothing.
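
For item 1 (and the rationale in item 10), here is one way the inventory record could be structured, sketched as a Python dataclass. The field names are illustrative assumptions, not a standard schema; a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One row of the AI inventory. Field names are illustrative."""
    name: str                          # e.g. "CV screening module in HR suite"
    vendor: str
    role: str                          # "provider" or "deployer" (item 3)
    risk_tier: str                     # unacceptable / high / limited / minimal (item 2)
    annex_iii_category: Optional[str]  # employment, credit, education, ... or None
    lawful_basis: Optional[str]        # GDPR Art. 6 basis; None means a gap (item 4)
    dpia_done: bool                    # item 6
    dpa_in_place: bool                 # Data Processing Agreement with vendor (item 7)
    human_oversight: bool              # can someone pause or override it? (item 9)
    classification_rationale: str      # why this tier -- the item 10 memo

record = AISystemRecord(
    name="CV screening module",
    vendor="ExampleHR",                # hypothetical vendor
    role="deployer",
    risk_tier="high",
    annex_iii_category="employment",
    lawful_basis="legitimate interest (balancing test on file)",
    dpia_done=True,
    dpa_in_place=True,
    human_oversight=False,             # a gap to close before August 2026
    classification_rationale="Annex III employment category: candidate screening",
)
```

Whatever format you choose, the point is that every checklist item maps to a field someone can fill in, leave blank, or flag as a gap.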

5. Implementation Timeline: Working Back from 2 August 2026

Five months isn't much time if you're starting from scratch. Here's what a realistic schedule looks like, working backwards from the deadline.

Now through April 2026: Complete the AI inventory and risk classification. Run vendor due diligence on any system touching Annex III categories. Identify gaps in your GDPR documentation — outdated RoPAs, missing DPAs, privacy notices that predate your AI deployments. Commission any required DPIAs. This phase is foundation work. Everything else depends on it.

May 2026: For high-risk systems, begin implementing or confirming human oversight mechanisms. Work with vendors to obtain or verify conformity assessment evidence and technical documentation. If any vendor cannot provide what the AI Act requires from providers, you have roughly 90 days to find an alternative or document a remediation plan.

June 2026: Train the staff responsible for operating high-risk AI systems. Article 4 of the AI Act requires that deployers take reasonable steps to ensure AI literacy. Document who received training, on what systems, and when. This documentation is what you'll show regulators if asked.

July 2026: Complete your AI register — the central record of all AI systems in use, their classification, your deployer responsibilities, and compliance status. Confirm that logging is enabled on high-risk systems as required. Verify that any required EU database registrations are underway with your vendors.
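
As a sketch of what "compliance status" can mean in practice, here's an illustrative gap check over register entries, assuming each entry tracks the same fields as the inventory record above. The field names are assumptions, not a prescribed format.

```python
# Illustrative AI register entries (plain dicts; field names are assumptions).
register = [
    {"name": "CV screener", "risk_tier": "high",
     "dpia_done": True, "human_oversight": False, "logging_enabled": True},
    {"name": "Support chatbot", "risk_tier": "limited",
     "dpia_done": False, "human_oversight": True, "logging_enabled": False},
]

# Items a high-risk system must have closed out before 2 August 2026.
REQUIRED_FOR_HIGH_RISK = ("dpia_done", "human_oversight", "logging_enabled")

for entry in register:
    if entry["risk_tier"] != "high":
        continue  # limited/minimal systems don't carry these obligations
    gaps = [item for item in REQUIRED_FOR_HIGH_RISK if not entry[item]]
    if gaps:
        print(f"{entry['name']}: open before 2 August 2026 -> {gaps}")
# CV screener: open before 2 August 2026 -> ['human_oversight']
```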

2 August 2026: Full compliance obligations for high-risk AI systems are enforceable. Companies with documented, proportionate compliance programs are in a fundamentally different position from those that did nothing. You don't need perfection. You need evidence of genuine effort.


6. How Karven Helps

Karven works with European mid-market companies on AI strategy and implementation. In practice, that means helping you run the AI inventory and risk classification, identifying which current or planned deployments actually require action, and building the documentation practices that make a regulator conversation manageable. It also means helping you evaluate vendors before you sign — asking the right questions about conformity assessments, DPAs, and logging capabilities, rather than discovering the gaps six months into a deployment.

Start the conversation at karven.ai/contact if you want a second opinion on where you stand or help scoping the project before time runs short.


The companies that will have the smoothest August 2026 aren't the ones with the biggest legal budgets. They're the ones that started the inventory in March.


Sources Cited

  • EU AI Act — Official Text: Regulation (EU) 2024/1689, published in the Official Journal of the European Union, 12 July 2024
  • AI Act Implementation Timeline: artificialintelligenceact.eu/implementation-timeline/ (Future of Life Institute)
  • AI Act Annex III (High-Risk Systems): artificialintelligenceact.eu/annex/3/
  • EDPB Opinion 28/2024 on data protection aspects of AI models: edpb.europa.eu, 18 December 2024
  • CNIL guidance on AI and GDPR: cnil.fr/en/ai-cnil-finalises-its-recommendations-development-artificial-intelligence-systems (2025)
  • Dutch DPA fine against Clearview AI: €30.5 million, September 2024
  • CNIL fine against Clearview AI: €20 million, 2022
  • AI Act penalty structure: Articles 99-101, Regulation (EU) 2024/1689
