Using AI to screen resumes in the EU is legal. Using AI to read a candidate’s facial expressions during a video interview is not. That distinction became law on February 2, 2025, when the EU AI Act’s Article 5 prohibitions took effect. And it is just the beginning.
By August 2, 2026, every AI system used to filter applications, rank candidates, or evaluate job seekers must meet the requirements for high-risk AI systems under Annex III of the EU AI Act. That means mandatory risk assessments, decision logging, human oversight, and candidate notification. Companies that ignore this face fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher.
The rules are not vague. They are specific, and they draw clear lines between what is allowed, what is banned, and what requires compliance work. Here is the full breakdown.
What Is Banned: The Hard No-Go List
Three categories of recruiting AI are explicitly prohibited under Article 5 of the EU AI Act. These bans have been active since February 2, 2025, and carry the highest penalties: up to EUR 35 million or 7% of global turnover.
Emotion Recognition in Hiring
Article 5(1)(f) bans AI systems that infer emotions from biometric data in the workplace. The definition of “workplace” explicitly includes the recruitment process. Any tool that analyzes a candidate’s facial expressions, voice tone, micro-expressions, or body language during an interview to assess emotions, stress levels, or “truthfulness” is illegal.
This is not theoretical. Companies like HireVue previously offered AI-driven facial analysis of video interviews before dropping the feature in 2021 under pressure. Under the EU AI Act, using such a tool in the EU now carries the highest tier of penalties.
The only exception: AI emotion recognition for medical or safety purposes (e.g., monitoring fatigue in high-risk environments like construction or aviation). Recruitment does not qualify.
Social Scoring of Candidates
AI systems that evaluate or classify candidates based on social behavior or personality traits to determine their “trustworthiness” or “social standing” are banned. If a tool aggregates a candidate’s social media activity, purchasing behavior, or personal network to generate a “suitability score” unrelated to job qualifications, it falls under the social scoring prohibition.
Subliminal or Manipulative Techniques
AI that uses techniques below the threshold of a person’s consciousness to materially distort their behavior is prohibited. In recruiting, this could include AI-powered chatbots designed to manipulate candidates into accepting lower compensation or disclosing information they would not otherwise share.
What Is Legal but High-Risk: The Compliance Zone
Most AI recruiting tools fall into the “legal but heavily regulated” category. Annex III, Section 4 of the EU AI Act explicitly classifies these employment AI systems as high-risk:
- AI for placing targeted job advertisements to specific individuals
- AI for analyzing and filtering job applications (resume screening)
- AI for evaluating candidates in interviews or tests
- AI for making decisions on hiring, promotion, or termination
- AI for monitoring and evaluating employee performance
If you use tools like Greenhouse, Workable, Paradox, or any ATS with AI-powered candidate ranking, you are deploying a high-risk AI system. Using these tools remains legal, but only if you meet the requirements below by August 2, 2026.
The Six Requirements for High-Risk Recruiting AI
Articles 9 through 15 of the EU AI Act define the requirements every high-risk system must meet. Formally, most of these obligations sit with providers (the vendors), while deployers (that is you, if you use these tools) carry their own duties under Article 26, including operating the system as instructed, assigning competent human oversight, and monitoring its use. In practice, you need evidence that all six areas below are covered.
1. Risk management (Article 9). Maintain a documented, continuously updated risk management process. For recruiting AI, this means identifying how the system could produce biased outcomes, documenting mitigation steps, and re-evaluating after any system update. This is not a one-time audit. It is an ongoing process.
2. Data governance (Article 10). Training data must be relevant, representative, and free from systematic errors. If your AI resume screener was trained on historical hiring data that skewed toward male candidates, that is a compliance violation. Ask your vendor for documentation on training data composition and bias testing.
3. Technical documentation (Article 11). Complete documentation of how the system works, including architecture, training methodology, performance metrics, and known limitations. Your vendor should provide this, but you are responsible for having it available for regulators.
4. Logging and traceability (Article 12). The system must maintain logs that allow reconstruction of how any individual hiring decision was influenced. If a candidate asks “why was I screened out?”, you need to be able to answer with specific, documented reasoning.
5. Transparency (Article 13). Candidates must be informed that AI is being used in the hiring process. This is not optional. The notification must explain what the AI does and what role it plays in the decision. Article 86 adds that individuals have the right to request a clear explanation of the role the AI played in a decision that affects them.
6. Human oversight (Article 14). A qualified human must be able to understand, interpret, override, and interrupt the AI system’s outputs at any time. “Rubber-stamping” AI recommendations does not count. The human reviewer must have the competence to evaluate whether the AI’s ranking makes sense and the authority to overrule it.
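Requirements 4 and 6 are concrete enough to sketch in code. Here is a minimal, illustrative decision log that records each AI-influenced screening decision together with the human review. The field names and file location are assumptions, not anything mandated by the Act; the point is that every record can answer "why was I screened out?" with the model version, the inputs used, and the reviewer's documented reasoning.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("screening_decisions.jsonl")  # hypothetical location; use durable, access-controlled storage

def log_screening_decision(candidate_id, model_version, features_used,
                           ai_score, ai_recommendation,
                           reviewer, final_decision, rationale):
    """Append one record per screening decision (JSON Lines, append-only)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,        # which system version produced the score (Art. 12)
        "features_used": features_used,        # inputs that influenced the outcome
        "ai_score": ai_score,
        "ai_recommendation": ai_recommendation,
        "reviewer": reviewer,                  # qualified human sign-off (Art. 14)
        "final_decision": final_decision,
        "human_override": final_decision != ai_recommendation,
        "rationale": rationale,                # specific reasoning, ready for a candidate query
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only JSON Lines file is the simplest shape that supports reconstructing individual decisions; in production this would live in tamper-evident storage with retention aligned to your risk assessment.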
Beyond the EU: Other Laws That Apply
The EU AI Act is the most comprehensive, but it is not the only regulation affecting AI recruiting in 2026.
Colorado AI Act (Effective June 2026)
Colorado’s AI law, delayed from its original February 2026 start date, requires “reasonable care” to prevent algorithmic discrimination when AI makes or substantially influences employment decisions. Employers must conduct impact assessments, implement governance programs, and notify candidates before using AI in hiring. Penalties run up to $20,000 per violation under deceptive trade practices law.
Illinois AI Employment Law (Effective January 2026)
Illinois House Bill 3773 amends the Illinois Human Rights Act, extending AI rules well beyond the state's earlier AI Video Interview Act. The updated law covers all AI used in recruitment, hiring, and promotion (not just video interviews). Employers must notify candidates when AI is used and ensure it does not produce discriminatory outcomes based on protected characteristics.
New York City Local Law 144
Active since July 2023, NYC’s law requires annual bias audits for any automated employment decision tool (AEDT) used in hiring. Audit results must be published on the employer’s website. The law applies to any employer or staffing agency making hiring decisions for NYC-based positions.
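The impact ratio LL144 audits revolve around (the selection rate for each demographic category divided by the rate of the most selected category) is straightforward to compute. A minimal sketch, assuming you already have aggregated outcome counts per category:

```python
def impact_ratios(outcomes):
    """Compute LL144-style impact ratios.

    outcomes: dict mapping demographic category -> (selected_count, total_count).
    Returns dict mapping category -> selection rate / highest selection rate.
    """
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items() if total > 0}
    highest = max(rates.values())
    return {cat: rate / highest for cat, rate in rates.items()}

# Illustrative numbers: two categories screened by the same tool
ratios = impact_ratios({"category_a": (40, 100), "category_b": (20, 100)})
# category_a advances at 40%, category_b at 20% -> ratios of 1.0 and 0.5
```

A ratio well below 1.0 signals possible adverse impact worth investigating; the traditional four-fifths rule uses 0.8 as a benchmark, though LL144 itself requires publishing the ratios rather than setting a pass/fail threshold.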
Germany-Specific: DSGVO and Works Council Rights
For companies operating in Germany, two additional layers apply on top of the EU AI Act.
The DSGVO (GDPR) requires a legal basis for processing candidate data. AI that profiles applicants typically triggers Article 22 DSGVO, which gives individuals the right not to be subject to purely automated decisions with legal effects. Practically, this means a human must make the final hiring decision, not the algorithm.
German works councils (Betriebsrat) have mandatory co-determination rights under Section 87(1) No. 6 BetrVG for any technology that can monitor employee behavior or performance. Since recruiting AI falls into this category, employers must involve the works council before deploying any AI recruiting tool. Section 80(3) BetrVG even entitles the works council to hire an external AI expert at the employer’s expense.
No works council agreement, no AI tool. That is the legal reality in co-determined German companies.
A Practical Compliance Checklist for HR Teams
If your company uses AI in any part of the hiring process, here is what you should have in place by August 2, 2026.
Inventory your AI tools. List every AI-powered tool in your hiring workflow. Include ATS screening features, chatbots, scheduling assistants, video interview platforms, and assessment tools. Many companies discover they have more AI in recruiting than they realized.
Classify each tool. Determine whether each tool falls under high-risk (likely yes, if it influences candidate selection) or prohibited (emotion recognition, social scoring). If a tool offers “sentiment analysis” or “personality assessment” from video, check whether it crosses the prohibition line.
Audit your vendors. Request documentation from each vendor on training data governance, bias testing results, decision explainability, and EU AI Act compliance roadmap. If a vendor cannot provide this by mid-2026, consider switching.
Implement human oversight. Ensure that every AI-generated recommendation (screening, ranking, scoring) is reviewed by a qualified human before it becomes a hiring decision. Document the review process.
Set up candidate notification. Create a standard disclosure informing candidates that AI is used in your hiring process. Explain what the AI does and how candidates can request a human review.
Document everything. Maintain logs of AI system outputs, human review decisions, bias audits, and risk assessments. If a regulator or candidate challenges a decision, you need a paper trail.
Plan for ongoing monitoring. This is not a one-time certification. Set up quarterly reviews of AI system performance, bias metrics, and candidate feedback. Update your risk assessment after any system changes.
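The "implement human oversight" step above can be enforced in software rather than policy alone. A minimal, hypothetical gate that refuses to turn an AI recommendation into a hiring decision without a documented human review might look like this (the function and field names are illustrative, not from any specific ATS):

```python
class HumanReviewRequired(Exception):
    """Raised when an AI recommendation would become a decision without review."""

def finalize_decision(ai_recommendation, human_review=None):
    """Return a final hiring decision only after a documented human review.

    human_review: dict with 'reviewer', 'decision', and non-empty 'notes'.
    """
    if human_review is None:
        raise HumanReviewRequired("AI output cannot become a decision without a human review")
    if not human_review.get("notes"):
        raise HumanReviewRequired("Reviewer must record specific reasoning, not rubber-stamp")
    return {
        "decision": human_review["decision"],      # the human decision is authoritative
        "ai_recommendation": ai_recommendation,    # retained for the audit trail
        "overridden": human_review["decision"] != ai_recommendation,
        "reviewer": human_review["reviewer"],
    }
```

Requiring non-empty reviewer notes is a deliberate design choice: it makes silent rubber-stamping impossible to do accidentally, and it produces exactly the paper trail the "document everything" step calls for.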
Frequently Asked Questions
Is AI resume screening legal under the EU AI Act?
Yes, AI resume screening is legal under the EU AI Act, but it is classified as a high-risk AI system under Annex III. Starting August 2, 2026, companies using AI to screen resumes must implement risk management systems, ensure data governance, maintain decision logs, provide human oversight, and inform candidates that AI is being used. Non-compliance carries fines up to EUR 15 million or 3% of global turnover.
Is emotion recognition in job interviews banned?
Yes. Since February 2, 2025, the EU AI Act prohibits AI systems that infer emotions from biometric data in the workplace, which explicitly includes the recruitment process. Any AI tool that analyzes facial expressions, voice tone, or body language during job interviews to infer a candidate's emotions or emotional state is illegal in the EU, with fines up to EUR 35 million or 7% of global annual turnover.
What are the penalties for non-compliant AI recruiting tools?
Penalties depend on the violation type. Using prohibited AI practices (like emotion recognition in hiring) carries fines up to EUR 35 million or 7% of global annual turnover. Failing to comply with high-risk system requirements (like missing documentation or human oversight for resume screening AI) carries fines up to EUR 15 million or 3% of global turnover. These apply to any company whose AI affects EU-based candidates, regardless of where the company is headquartered.
Does the EU AI Act apply to non-EU companies that hire in Europe?
Yes. The EU AI Act has extraterritorial scope similar to GDPR. It applies to any company that places AI systems on the EU market or whose AI system outputs are used in the EU. If you use AI to screen, evaluate, or select candidates who are based in the EU, the AI Act applies to your hiring process regardless of your company’s location.
Do German companies need works council approval for AI recruiting tools?
Yes. Under Section 87(1) No. 6 of the German Works Constitution Act (BetrVG), the works council has mandatory co-determination rights over any technology capable of monitoring employee behavior or performance. AI recruiting tools fall under this provision. Employers must involve the works council before deployment, and the works council is entitled to hire an external AI expert at the employer’s expense under Section 80(3) BetrVG.
We track AI compliance developments and their impact on HR technology. Subscribe for practical guides every week.
