
How to Audit Your AI Systems for EU AI Act Compliance

by Karven · 6 min read

A logistics company in the Netherlands — 180 employees, solid ops team, genuinely thoughtful leadership — brought in Karven in late 2025 convinced they were in good shape. They'd read the AI Act summaries. They'd attended two webinars. Their head of HR had flagged the August 2026 deadline six months in advance. When we ran the inventory, we found a CV screening tool in their applicant tracking system they'd enabled as a "helpful feature" eighteen months earlier. Nobody had classified it. Nobody had a DPA with the ATS vendor covering that specific processing activity. Nobody had told candidates it existed. Under the AI Act, that's a high-risk AI system deployed without conformity evidence, logging, or human oversight documentation. One tool. Eighteen months. A gap that would have been a serious problem come August.

That's what an AI system audit actually finds. Not what you know you're running — what's running that you forgot about.

Karven has now done twelve of these audits for European mid-market companies. Here's what they actually cover.


What an AI System Audit Actually Covers

People imagine an AI audit as a technical assessment. Code review, model evaluation, security testing. Sometimes that's part of it. More often, what matters most is operational and documentary — not whether the model is good, but whether you can prove it's being used appropriately.

A proper audit covers five areas.

1. Discovery: finding everything that counts as AI

The first job is building the inventory. This sounds simple. It isn't. AI is embedded in dozens of tools companies already use: the "smart scheduling" feature in your calendar software, the sentiment analysis running on customer support tickets, the candidate scoring in your ATS, the fraud signals in your payment platform. None of these look like "AI projects" to most operations teams. All of them potentially trigger AI Act obligations.

We spend the first day of every audit in stakeholder interviews across HR, finance, operations, and IT. The goal is to surface every tool that uses machine learning or automated decision-making in a way that affects people. The inventory is the foundation — everything else depends on it.
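To make the inventory concrete, here is a minimal sketch of what one entry might look like, in Python. The field names and the vendor are our own illustration, not a standard schema; capture whatever your interviews actually surface.

```python
from dataclasses import dataclass

# A minimal sketch of one inventory entry. Field names are illustrative;
# "ExampleATS" is a hypothetical vendor, not a real product.
@dataclass
class AIToolRecord:
    name: str                     # e.g. "CV screening"
    vendor: str
    embedded_in: str              # the platform the feature ships inside
    affects: list[str]            # employees, candidates, customers
    data_processed: list[str]     # categories of personal data touched
    annex_iii_category: str | None = None  # filled in during classification
    dpa_covers_it: bool | None = None      # unknown until documentation review

inventory = [
    AIToolRecord(
        name="CV screening",
        vendor="ExampleATS",
        embedded_in="applicant tracking system",
        affects=["candidates"],
        data_processed=["CVs", "application history"],
    ),
]
```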

2. Classification: mapping tools to risk tiers

Once you know what you have, you classify it. The AI Act uses four tiers: prohibited, high-risk, limited-risk, and minimal-risk. For mid-market companies, the critical question is which tools land in the high-risk category under Annex III.

The Annex III list is specific: employment and HR systems, credit and insurance scoring, education assessment, access to essential services, biometric identification. If you're using AI for any of these purposes — even as a secondary feature of a broader platform — you're in the high-risk category and the full compliance framework applies.

Classification decisions need to be documented. If you determine a tool is not high-risk, write down why. Regulators who come knocking will want to see the reasoning, not just the conclusion.
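One way to keep the reasoning attached to the conclusion is to record both in the same place. A minimal sketch, using our own paraphrase of the Annex III purposes rather than the statutory text:

```python
# Paraphrased Annex III purpose labels: ours, not the Act's wording.
ANNEX_III_PURPOSES = {
    "employment",              # recruitment, screening, promotion, termination
    "credit_scoring",
    "insurance_scoring",
    "education_assessment",
    "essential_services",
    "biometric_identification",
}

def classify(tool: str, purposes: set[str], reasoning: str) -> dict:
    """Return a classification record that keeps the why next to the label."""
    matched = sorted(purposes & ANNEX_III_PURPOSES)
    return {
        "tool": tool,
        # Never auto-clear a tool: no match means "review", not "minimal-risk".
        "tier": "high-risk" if matched else "needs-review",
        "matched_purposes": matched,
        "reasoning": reasoning,
    }

decision = classify(
    "CV screening",
    {"employment"},
    "Scores candidates for interview selection; an Annex III employment use "
    "even though it ships as an ATS feature.",
)
```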

3. Documentation: what exists vs. what's required

High-risk AI systems require a specific set of documentation: technical documentation from the provider, instructions for use, logging capabilities, and evidence of a conformity assessment. As a deployer, you're entitled to request this from your vendor. If they can't provide it, they're out of compliance as a provider — which creates a risk for you.

On the GDPR side, documentation requirements overlap: Records of Processing Activities, Data Processing Agreements, Data Protection Impact Assessments, and privacy notices. The audit checks whether these exist, whether they're current, and whether they actually cover the AI processing in question. In our experience, about 60% of the DPAs we've reviewed in mid-market audits were written before the company's current AI toolset was in place. They don't cover it.
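That 60% figure comes from a check that is mechanical once you have dates on both sides. A minimal sketch, with hypothetical vendor names and dates:

```python
from datetime import date

dpas = {"ExampleATS": date(2021, 3, 1)}  # vendor -> DPA signature date
ai_features = [
    ("ExampleATS", "CV screening", date(2023, 6, 15)),  # vendor, feature, enabled on
]

# Flag any AI feature that was enabled after its governing DPA was signed.
for vendor, feature, enabled_on in ai_features:
    signed = dpas.get(vendor)
    if signed is None:
        print(f"{feature}: no DPA on file with {vendor}")
    elif enabled_on > signed:
        print(f"{feature}: DPA signed {signed}, feature enabled {enabled_on}; "
              "check whether the processing activity is actually covered")
```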

4. Human oversight: can you actually intervene?

The AI Act requires that deployers of high-risk systems maintain the ability to pause, override, or stop the system's outputs. It also requires that someone in the organisation understands the system's capabilities and limitations and is empowered to act.

This is often a gap in practice. Companies assume the vendor handles it. The vendor assumes the company has designated someone. In twelve audits, we've found at least one oversight gap in every single one. Sometimes the tool has no override capability exposed to the deployer. Sometimes the person nominally responsible has never actually been briefed on the system. The audit checks both — and produces a specific accountability mapping as an output.
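The mapping itself can be as simple as a record per high-risk tool: who is assigned, whether they've been briefed, whether an override path actually exists. A sketch with illustrative fields:

```python
# Illustrative accountability mapping for one high-risk tool. The values
# shown are the pattern we find most often, not a recommendation.
oversight = {
    "CV screening": {
        "assigned_to": None,           # nobody formally designated
        "briefed_on_limitations": False,
        "override_available": False,   # vendor exposes no pause/stop control
        "logs_reviewed": "never",
    },
}

for tool, checks in oversight.items():
    gaps = [k for k, v in checks.items() if v in (None, False, "never")]
    if gaps:
        print(f"{tool}: oversight gaps -> {', '.join(gaps)}")
```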

5. Vendor due diligence: what your contracts actually say

Your AI Act obligations don't disappear because the AI was built by someone else. If you're deploying a high-risk system, you're responsible for the conditions of deployment. That means your vendor agreements need to cover conformity assessment evidence, technical documentation, sub-processor chains, data residency, and the oversight capabilities you're entitled to.

We review every vendor contract touching an AI system in scope. In practice, most need updates. Some are missing the AI Act-specific clauses entirely. A few — particularly older SaaS agreements — don't even have adequate GDPR DPAs in place for the current processing activities.
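In practice the review reduces to a checklist diff per agreement. A minimal sketch; the clause labels are our shorthand for the items above, not statutory terms:

```python
# Clause labels are our shorthand, not legal terms of art.
REQUIRED_CLAUSES = {
    "conformity_assessment_evidence",
    "technical_documentation_access",
    "sub_processor_chain",
    "data_residency",
    "deployer_oversight_capabilities",
    "gdpr_dpa_current",
}

contracts = {
    "ExampleATS MSA (2021)": {"gdpr_dpa_current", "data_residency"},  # illustrative
}

for contract, present in contracts.items():
    missing = sorted(REQUIRED_CLAUSES - present)
    if missing:
        print(f"{contract}: missing clauses -> {', '.join(missing)}")
```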


The Gaps We Find Most Often

After twelve audits, the patterns are consistent. These are the things mid-market companies reliably miss.

The ATS problem. Applicant tracking systems almost universally include AI-powered candidate screening now. It's often enabled by default or added as an upgrade without anyone formally evaluating the compliance implications. The system is high-risk under Annex III. The company has no conformity evidence, no logging review process, and candidates haven't been informed. This gap turns up in roughly 8 of 12 audits.

The embedded-feature blind spot. HR platforms, CRMs, and finance tools have been quietly adding AI features for two years. "Performance insights," "deal scoring," "expense anomaly detection." Each one that touches employment decisions or financial decisions is potentially high-risk. Because nobody activated a standalone "AI project," nobody did the classification work.

DPAs that predate the AI deployment. A company signed a DPA with their CRM vendor in 2021. In 2024, they enabled the AI deal-scoring feature. The DPA doesn't cover it. Processing has been happening without a current legal basis for the specific activity.

No designated oversight person. The AI Act requires that someone be responsible for monitoring high-risk AI systems. In most mid-market companies, nobody has been formally assigned this role. IT knows the tools are running. Nobody has been briefed on their limitations, nobody has override authority in practice, and nobody is reviewing logs.

Missing candidate and employee notifications. Articles 13 and 14 of the GDPR require transparency about processing, including AI-assisted processing. Privacy notices written in 2020 don't mention the CV screening tool introduced in 2023. Works council consultation requirements in France, Germany, and the Netherlands add another layer that several companies haven't addressed.


DIY vs. Hiring Help

An honest assessment: a competent in-house team can do this. It takes longer, and it requires someone who is comfortable reading the AI Act's Annex III list, interpreting vendor documentation, and having credible conversations with legal and IT simultaneously. If you have a DPO and a technically literate compliance lead, you have the ingredients.

Where in-house efforts tend to fall short: the inventory phase. Internal teams underestimate what's in scope because they're focused on their known AI projects, not the embedded features in existing tools. They also tend to be more deferential to vendors when requesting documentation — and vendors of high-risk AI systems are legally required to provide that documentation upon request.

Hiring external help makes sense if your in-house team is already stretched, if you have AI tools in genuinely ambiguous territory (not clearly high-risk, not clearly not), or if you want the audit to produce documentation that would hold up to regulatory scrutiny rather than something that's internally credible but hasn't been stress-tested.

Cost range for a mid-market AI audit with external help: €8,000 to €25,000, depending on the number of tools in scope and the state of existing documentation. A company with 5 AI tools and reasonable GDPR foundations is at the low end. A company with 15 tools and a GDPR house that hasn't been touched since 2019 is at the high end.

The cost of breaching the Act's high-risk obligations: up to €15 million or 3% of global annual turnover (and a missing DPIA carries its own GDPR penalties on top of that). The math isn't complicated.
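For concreteness, the comparison works out as follows, assuming the Act's whichever-is-higher rule for most undertakings and the whichever-is-lower cap that applies to SMEs:

```python
# Fine exposure under the AI Act's caps for high-risk obligation breaches:
# EUR 15M or 3% of worldwide annual turnover, whichever is higher; for SMEs,
# whichever is lower. Turnover figures below are illustrative.
def max_fine(turnover_eur: float, sme: bool) -> float:
    fixed_cap = 15_000_000
    pct_cap = 0.03 * turnover_eur
    return min(fixed_cap, pct_cap) if sme else max(fixed_cap, pct_cap)

print(f"{max_fine(40_000_000, sme=True):,.0f}")    # 1,200,000
print(f"{max_fine(600_000_000, sme=False):,.0f}")  # 18,000,000
```

Either number sits orders of magnitude above the audit fee range.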


Where to Start

Run the inventory before anything else. Two days, the right stakeholders, a structured interview process. List every tool that uses AI or automated decision-making in any way that affects employees, customers, or financial decisions. For each one, note the vendor, the data it processes, and whether it touches any Annex III category.

That list is your compliance roadmap. Everything else — DPIAs, vendor outreach, oversight assignments, documentation — flows from knowing what you actually have.
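Turning the list into an ordered roadmap is then a sort: anything already touching an Annex III category goes to the front of the queue. A minimal sketch with illustrative entries:

```python
# Illustrative inventory entries; field names are ours.
inventory = [
    {"name": "CV screening", "vendor": "ExampleATS", "annex_iii": "employment"},
    {"name": "expense anomaly detection", "vendor": "ExampleERP", "annex_iii": None},
]

# Annex III matches first, then everything awaiting classification.
for tool in sorted(inventory, key=lambda t: t["annex_iii"] is None):
    if tool["annex_iii"]:
        plan = "full high-risk workstream: vendor docs, DPIA, oversight assignment"
    else:
        plan = "classify, and write down the reasoning either way"
    print(f'{tool["name"]} ({tool["vendor"]}): {plan}')
```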

If you want help running that process, or want Karven to benchmark what you find against what we've seen in similar companies, start here.

Ready to take the next step?

Describe your situation and we'll tell you honestly what AI can do for you.

Get in Touch