The AI Act [EU 2024] provides the first Europe-wide, legally binding framework for the use of artificial intelligence. For HR departments, this represents a paradigm shift: what was previously viewed as an innovation project now also becomes a compliance issue. Systems for candidate selection, performance evaluation, or workforce planning are classified as “high-risk AI” – with far-reaching consequences for governance, documentation, and control.

The regulatory clock is ticking: from August 2026, the AI Act applies to new HR systems. Existing systems must be retrofitted when substantially modified. Yet a gap exists between legal text and organizational reality: How can these requirements be implemented in heterogeneous tool landscapes, with SaaS providers, and involving HR staff and works councils? This article analyzes central implementation risks and demonstrates solution pathways.

Classification: What Qualifies as High-Risk AI?

The AI Act distinguishes AI systems by risk levels. Particularly relevant for HR is Annex III, which classifies systems for recruitment, selection, evaluation, promotion, and systems for termination of employment relationships as high-risk systems [EU 2024, Annex III]. The decisive factor is not the technology but its impact: even seemingly harmless tools like CV screening software or automated skill-matching systems fall under this category when they influence decisions.

Practical example: A company uses an AI-supported recruiting tool that pre-filters applications. If the recruiter uses this pre-selection as the basis for their decision, the system is high-risk – even if formally “a human decides.” The boundary between “decision support” and “high-risk influence” often remains unclear in practice and requires case-by-case assessment.

Where Regulation Meets Corporate Reality

The regulatory requirements increase implementation complexity:

Conformity Assessment: Every high-risk system must be tested before deployment – a task for which neither HR nor IT is typically equipped.

Human Oversight: The required oversight remains vague. How can this be practically designed for 500 automated CV screenings per day?

Transparency Obligation: Employees must be informed about AI use “in understandable language” – a balancing act between legal precision and comprehensibility.

Technical Documentation: With SaaS solutions using AI models, training data and model logic often remain proprietary. The deploying organization nevertheless remains accountable – with incomplete information.

These requirements are primarily organizationally challenging, not technically. The real difficulty lies in cross-functional coordination between HR, IT, data protection, legal, and works councils.

Vendor Management: Responsibility Remains with the Deployer

Most HR systems are not developed internally but are purchased as SaaS services. Responsibility nevertheless remains with the deploying organization – regardless of who developed the system. The following aspects particularly require attention:

Contract Design: Supply contracts must explicitly define who is responsible for conformity assessment, documentation, and ongoing monitoring. Standard contracts require careful review on these points.

Due Diligence: Before implementing a system, providers should deliver evidence: CE marking, technical documentation, bias testing procedures, and implemented risk management systems.

Ongoing Monitoring: Even after implementation, systems must be regularly reviewed – for instance, during model updates or changes in usage. This requires escalation pathways and audit cycles.

Works Councils and Co-Determination: The Underestimated Dimension

In Germany, the AI Act encounters an established system of workplace co-determination. Section 87(1) No. 6 of the German Works Constitution Act (BetrVG) grants works councils co-determination rights for technical systems capable of monitoring employee behavior or performance [BetrVG] – a description that applies to many HR AI systems. This German model of employee participation represents one of Europe’s strongest frameworks for workplace rights and offers important lessons for organizations navigating AI implementation across different jurisdictions.

The AI Act and BetrVG pursue different logics: even an AI Act-compliant system can fail due to lack of works council agreement.

The following approaches have proven effective:

Early Transparency: Works councils should be informed during the planning phase – not just before deployment. The earlier the involvement, the lower the conflict potential.

Comprehensible Communication: Technical documentation alone is insufficient. Works councils require understandable explanations of functionality, data usage, and potential impacts on employees.

Clear Responsibilities: Works agreements should define who serves as contact person for problems or complaints and how corrections are initiated.

Pilot Phases and Evaluation: Works councils are more willing to approve new systems when these are initially tested and evaluated on a limited scale.

Organizations that view works councils as partners rather than obstacles create legal certainty while simultaneously strengthening acceptance among the workforce – a critical factor for sustainable AI implementation.

An Implementation Approach in Four Steps

Step 1 – Inventory and Classification: Capture all AI tools deployed in HR, including SaaS solutions and in-house developments. Document provider, functionality, and decision depth. Then assess each system against the AI Act criteria using a risk matrix (automation level × impact on employees).
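The risk matrix from Step 1 can be sketched as a simple scored lookup. The scales, thresholds, tool names, and resulting labels below are illustrative assumptions for demonstration, not definitions from the AI Act:

```python
# Illustrative Step-1 risk matrix: automation level x impact on employees.
# Scales and thresholds are example assumptions, not AI Act definitions.
from dataclasses import dataclass

AUTOMATION = {"assistive": 1, "pre-filtering": 2, "fully automated": 3}
IMPACT = {"informational": 1, "influences decisions": 2, "determines decisions": 3}

@dataclass
class HRTool:
    name: str
    provider: str
    automation: str  # key into AUTOMATION
    impact: str      # key into IMPACT

def classify(tool: HRTool) -> str:
    """Derive a coarse risk class from the automation x impact score."""
    score = AUTOMATION[tool.automation] * IMPACT[tool.impact]
    if score >= 4:
        return "high-risk candidate (Annex III review required)"
    if score >= 2:
        return "review case-by-case"
    return "low risk"

inventory = [
    HRTool("CV screening", "SaaS vendor", "pre-filtering", "influences decisions"),
    HRTool("Org chart viewer", "in-house", "assistive", "informational"),
]

for tool in inventory:
    print(f"{tool.name}: {classify(tool)}")
```

Even such a minimal matrix makes the inventory auditable: every tool gets a documented rating on both axes, and borderline cases surface as explicit "review" items rather than being silently waved through.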

Step 2 – Gap Analysis: Compare regulatory requirements with current status. Where is documentation missing? Where is vendor information incomplete?

Step 3 – Roles and Process Model: Define clear responsibilities. Who conducts conformity assessments? Who monitors ongoing systems? Who communicates with works councils and data protection? A RACI model (Responsible, Accountable, Consulted, Informed) creates clarity.
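The RACI assignments from Step 3 can be captured in a small data structure so that responsibilities are explicit and checkable. The tasks and role assignments below are illustrative assumptions, not a prescribed allocation:

```python
# Illustrative RACI matrix for AI Act tasks in HR.
# Tasks and role assignments are example assumptions for one organization.
raci = {
    "conformity assessment": {"R": "IT", "A": "Legal",
                              "C": ["HR", "Data Protection"], "I": ["Works Council"]},
    "ongoing monitoring":    {"R": "IT", "A": "HR",
                              "C": ["Vendor"], "I": ["Works Council"]},
    "works council liaison": {"R": "HR", "A": "HR",
                              "C": ["Legal"], "I": ["IT"]},
}

def accountable(task: str) -> str:
    """Each task has exactly one Accountable role."""
    return raci[task]["A"]

def check_complete(matrix: dict) -> bool:
    """Verify every task defines all four RACI fields."""
    return all({"R", "A", "C", "I"} <= set(roles) for roles in matrix.values())

for task, roles in raci.items():
    print(f"{task}: R={roles['R']}, A={roles['A']}")
```

The point of encoding the matrix rather than keeping it in a slide deck is that completeness checks (one Accountable per task, no task without an owner) can be run automatically whenever the process model changes.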

Step 4 – Integration with Existing Structures: Connect to existing compliance structures (e.g., data protection management, IT risk management). Leverage established processes – for instance, data protection impact assessments and AI Act documentation can be partially merged.

Organizations that act early secure compliance and create competitive advantages. AI compliance is becoming increasingly relevant for tenders and ESG ratings.

Conclusion

With the AI Act, the deployment of AI systems in HR is comprehensively regulated. The challenge lies not in understanding the requirements but in their implementation – particularly at the interfaces between HR, IT, legal, and works councils. Key elements include vendor management that clearly assigns responsibility; active involvement of works councils that integrates regulatory with co-determination requirements; and a structured implementation roadmap that proceeds pragmatically and maintains documentation.

Organizations need not demonstrate perfect systems from day one – but they require a traceable path forward. In an increasingly regulated environment, AI compliance becomes a competitive advantage.

Sources

EU [2024]: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), Official Journal of the European Union L 2024/1689.

BetrVG [1972]: Betriebsverfassungsgesetz (German Works Constitution Act), as promulgated on 25 September 2001 (Federal Law Gazette I p. 2518), last amended by Article 6 of the Act of 20 May 2020 (Federal Law Gazette I p. 1044).

Author

Ihno Raab

Head of HR Consulting Services
d-fine GmbH, Frankfurt am Main