Five HR use cases for AI, each with the rule that applies, the 2024-2025 enforcement that shaped it, and the question to ask the vendor before you sign.
There are five places AI is showing up inside HR, and the EU AI Act drops all of them in the same Annex III bucket. Sourcing and ad targeting. CV screening and shortlisting. Video interviewing and trait inference. Performance scoring and task allocation. Decisions to terminate, discipline, or change someone's terms.
Annex III, point 4 names two categories: AI used in "recruitment or selection" (including targeted ads, application analysis, candidate evaluation), and AI used in "decisions affecting work-related relationships, the allocation of tasks based on individual behaviour or personal traits, or to monitor and evaluate the performance and behaviour" of workers. Five real-world uses, two regulatory categories, one high-risk classification. The deployer obligations under Article 26 take effect on 2 August 2026, and the FRIA obligation under Article 27 lands the same day. (The Digital Omnibus may push some Annex III obligations to December 2027; as of April 2026 the Commission proposal is on the table but not adopted, so the safe planning date is still August 2026.)
GDPR Article 22 has been the parallel constraint since 2018. The Italian Garante's Foodinho decision in July 2021 was the first full enforcement of Article 22 in an employment context. The fine was €2.6 million and the violation list ran from Article 22(3) (no human-review safeguards) through Articles 5, 13, 25, 30, 32, 35, 37 and Italian Privacy Code Article 88. No HR-AI enforcement action since, anywhere, has produced a cleaner picture of how these systems fail.
What follows is the use-case-by-use-case picture. For each one: the rule that applies, the live enforcement that shaped it, the question to ask the vendor before you sign.
The use is targeted job ads delivered through a platform's optimisation algorithm, plus AI-driven sourcing tools that scrape candidate profiles from LinkedIn, GitHub, or résumé databases.
The rule: ad targeting that uses profiling on protected characteristics is high-risk under Annex III as a "recruitment or selection" use, regardless of whether a human ever sees a candidate the algorithm chose to suppress. A 2018 ProPublica investigation showed Facebook ads being targeted in ways that excluded older workers from job-ad audiences, and the US settlement that followed reset the conversation. Under the EU AI Act, sourcing AI is squarely high-risk. Under GDPR, ad targeting that uses inferred protected attributes is special-category processing under Article 9, which requires an Article 9(2) basis you almost certainly don't have for hiring.
The question to ask the vendor: which protected attributes, or proxies for them, can the algorithm use as input or as a feature? If the answer is "we don't know," the answer is no.
The use is a model that reads applications and produces a ranking, score, or pass/fail flag. This is the most common AI in HR by volume, and the most enforced.
The rule on paper is simple. GDPR Article 22 prohibits decisions based solely on automated processing that produce legal effects or similarly significant effects, with narrow exceptions. A hiring decision is the textbook similarly significant effect. Recital 71 cites "e-recruiting practices without any human intervention" by name.
The rule in practice is harder. The CJEU's December 2023 SCHUFA judgment (C-634/21) expanded "decision based solely on automated processing" to cover situations where a third party's decision draws strongly on an automated score, even if a human formally signs off. The 27 February 2025 Dun & Bradstreet ruling (C-203/22) sharpened it further: the controller has to explain how the score was reached, and "trade secrets" do not override the data subject's right to a meaningful explanation. Both rulings were about credit, but the doctrine is portable. A recruiter who clicks "approve" on an AI-shortlisted top 10% is signing off on the same kind of decision SCHUFA describes.
The 90% problem is the structural pattern most teams miss. If your tool filters out 90% of candidates before any human looks at the pile, the rejected 90% were subject to a solely automated decision, no matter how careful the human review of the surviving 10% is. The UK ICO's audit of AI recruitment tools (eight months ending May 2024, nearly 300 recommendations) found that several tools were designed so recruiters never saw the rejected pool at all. Designing the rejected pool to be invisible is the version that loses at audit. Designing it to be reviewable on request, with the AI's reasoning attached, is the version that survives.
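If the rejected pool has to be reviewable on request, the minimum artifact is one record per rejected candidate that carries the model's score and stated reasons. A minimal sketch of what that could look like, assuming a simple append-only JSONL log; every field name here (candidate_id, top_features, review_outcome) is illustrative, not any vendor's schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class RejectionRecord:
    """One row per candidate the model screened out, kept so a human review can happen on request."""
    candidate_id: str
    requisition_id: str
    model_version: str
    score: float                       # the raw score the cut-off was applied to
    threshold: float                   # the cut-off in force at decision time
    top_features: list[str]            # the model's stated reasons, in plain language
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_reviewed: bool = False       # flipped when an Article 22(3) review actually happens
    review_outcome: str | None = None  # "upheld", "overturned", or None if never reviewed

def log_rejection(record: RejectionRecord, path: str = "rejected_pool.jsonl") -> None:
    # Append-only, so the rejected 90% stays visible and auditable rather than silently discarded.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```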
The Mobley v. Workday class action is what this looks like when it matures into litigation. Derek Mobley, a Black applicant over 40, claims he has applied to more than 100 jobs through Workday's screening platform since 2017 and has been rejected every time. On 16 May 2025, Judge Lin in the Northern District of California granted conditional certification of his Age Discrimination in Employment Act collective claim. On 29 July 2025, she expanded the scope to include applicants processed through Workday's HiredScore AI features. Workday was ordered to identify all HiredScore customers by 20 August 2025. The legal theory that matters most is the agent-liability ruling: AI vendors can be directly liable as "agents" of the employers using their tools, because the tool performs a traditional hiring function. That doctrine has not been tested in the EU yet, but the exposure for any vendor selling into both markets is now very different.
The question to ask the vendor: which candidates were rejected by your tool and how do we let them request a human review? If the workflow does not give you a way to surface the rejected pool, the workflow is the bug.
The use is a tool that records the candidate, transcribes the interview, scores the answers, and (until recently) inferred personality traits or emotions from facial expressions and voice.
The rule on emotion recognition is now flat. AI Act Article 5(1)(f) prohibits "the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions" with narrow exceptions for medical or safety reasons. The prohibition has been in force since 2 February 2025. There is no FRIA workaround, no DPIA workaround, no consent workaround. If an HR tool infers emotions from a candidate or employee in the EU, the tool is illegal in the EU.
The history is worth remembering. HireVue dropped facial analysis from its assessments in early 2020, four years before the AI Act prohibition existed, because internal research showed facial analysis contributed only 0.25% to job-performance prediction. The prohibition is not asking vendors to give up something that worked. It is closing a market for something that was not predictive of anything that mattered.
Voice and transcript analysis are not prohibited, but they remain high-risk under Annex III and Article 22 still applies. Accessibility is now a separate live exposure. Charges were filed with the US EEOC against Intuit and HireVue in March 2025 over the rejection of a deaf Indigenous applicant whose request for video captioning was denied. The accessibility theory will travel to the EU under the European Accessibility Act and through national equality law.
The question to ask the vendor: does the tool infer emotions or personality traits from facial expressions, voice tone, or word choice? If the answer is yes for any EU candidate, the tool is non-compliant from the day you switch it on.
The use is a model that decides which deliveries to route to which rider, which task to assign to which agent, which contractor gets the next gig, or which warehouse worker is "underperforming" by some composite metric.
This is the use Foodinho lost on. The Garante found that Foodinho was making decisions about its riders based "solely on automated decision-making" by analysing and predicting professional performance, behaviour, location and movement; that those decisions excluded riders from work opportunities; and that no real channel existed for riders to exercise their Article 22(3) rights to express their view, contest the decision, or get human review. The fine list spans Articles 5, 13, 22, 25, 30, 32, 35, 37(7), and 88, plus Italian Privacy Code Article 144. It is the fullest single-decision picture of how this kind of system fails under the regulator's reading of the law.
Annex III now codifies the same concern. Performance monitoring, task allocation based on individual behaviour or personal traits, and decisions affecting work-related relationships all sit in the high-risk bucket. Italian Law 132/2025 added a parallel criminal track for some uses that goes beyond GDPR fines, which is one of several signals that algorithmic management is moving from administrative enforcement into employment-law enforcement.
The question to ask the vendor: when the model assigns or denies work to a specific worker, is there a documented human review point that the worker can trigger, with a real role attached? Not "someone reviews the dashboard." A named role, a frequency, a sample size.
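One way to make that commitment concrete is to write the review point down as structured configuration rather than a sentence in a policy document. A sketch of what that could look like; the roles, response times, and sample sizes below are placeholders to be replaced with whatever your DPIA and FRIA actually commit to, not recommendations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HumanReviewPoint:
    """A documented, worker-triggerable review point for algorithmic allocation decisions."""
    trigger: str           # what lets the worker invoke the review
    reviewer_role: str     # a named role, not "someone reviews the dashboard"
    response_days: int     # how quickly the review must conclude
    sample_rate: float     # share of decisions spot-checked even without a complaint
    sample_frequency: str  # how often the spot-check runs

REVIEW_POINTS = [
    HumanReviewPoint(
        trigger="worker contests exclusion from a shift or gig",
        reviewer_role="Operations team lead (not the model owner)",
        response_days=5,
        sample_rate=0.02,
        sample_frequency="weekly",
    ),
]
```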
The use is a tool that recommends or makes decisions about firing, demotion, schedule changes, or disciplinary action.
This is the most consequential use and the one with the least room for error. Annex III explicitly covers it. Article 22 explicitly covers it. The Foodinho doctrine covers it. The SCHUFA and Dun & Bradstreet rulings explicitly cover it once a human is signing off on a heavily-AI-influenced recommendation.
I think this is the use case where the gap between what a system can do and what a system should do is the widest. The tooling will tell you which workers are most likely to leave, most likely to underperform, or most likely to be involved in a complaint. None of those scores justify acting against the worker without independent corroboration. The DPIA and FRIA you write for this use case have to engage with the question of whether the score is a decision input or a decision substitute, and the documentation has to land on "input." Anything else loses at the first contested termination.
The question to ask the vendor: can you show me a deployment where a recommendation from your tool was overridden by a human reviewer, and can you tell me how often that happens? If the override rate is zero or unmeasured, the human-review claim is rhetorical.
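The override rate is cheap to measure once recommendations and final outcomes are logged side by side. A minimal sketch, assuming each logged decision carries the AI recommendation and the human outcome under illustrative field names; a rate of zero over any meaningful volume is itself a finding.

```python
def override_rate(decisions: list[dict]) -> float:
    """Share of AI recommendations where the human reviewer reached a different outcome.

    Each decision dict is assumed to carry "ai_recommendation" and "final_decision"
    (illustrative names, not any vendor's schema). A measured 0.0 suggests the
    human-review step is rubber-stamping rather than reviewing.
    """
    if not decisions:
        return 0.0
    overridden = sum(
        1 for d in decisions if d["final_decision"] != d["ai_recommendation"]
    )
    return overridden / len(decisions)
```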
For any of the five use cases, the assessment work is now two documents that should be one. GDPR Article 35 requires a DPIA. AI Act Article 27 requires a Fundamental Rights Impact Assessment for deployers of high-risk systems in regulated areas, which includes private-sector deployers in employment. Article 27(4) lets the DPIA carry over: where a DPIA has already been conducted for the same processing, the FRIA "shall complement" it.
The practical implication is that if you write the DPIA for an HR AI feature today, you can extend it next year into a FRIA without starting over. The marginal cost is one annex section covering the fundamental rights beyond data protection: non-discrimination, dignity, access to justice, freedom of expression, effective remedy. The cost of writing the FRIA as a separate artifact in August 2026 is several days, not several hours.
For HR AI specifically, the fundamental-rights annex should name two extra dimensions on top of the ones in the standard FRIA: worker representation rights under EU Charter Article 27 (the right to information and consultation within the undertaking), and the equality directives' protection against indirect discrimination. Belgian Collective Bargaining Agreement no. 39 has been cited by Crowell & Moring as an example of a member-state social-dialogue rule that bites independently of the AI Act timeline. Member-state employment law is the part most cross-border deployers underestimate.
The DPIA has to name the four AI-specific risks the CNIL listed in its 2024 recommendations: model extraction, automated discrimination, hallucinated content about real people, and AI-specific attacks. For HR tools, automated discrimination is the load-bearing risk and the one a DPA reviewer will go straight to. Naming it generically is not enough. The DPIA has to say how disparate impact is being measured (which protected groups, which decision points, which sample size, which threshold) and what happens when the measurement shows a problem. See the DPIA practical check for the four-traps framing and a worked example.
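One way to make the measurement concrete is a selection-rate comparison per protected group at each decision point. The ~0.8 ratio used below is the US four-fifths heuristic, borrowed here only as an illustrative trigger for the "what happens when the measurement shows a problem" section, not an EU legal threshold. A sketch:

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_shortlisted) pairs at one decision point."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the best-performing group's rate.
    A ratio below ~0.8 is the point where the DPIA should say what happens next."""
    if not rates:
        return {}
    best = max(rates.values())
    return {g: (r / best if best else 0.0) for g, r in rates.items()}
```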
I am still not sure how strictly market surveillance authorities will read Article 27(4) once enforcement starts, particularly the phrase "shall complement." The most defensible reading is that a DPIA-plus-fundamental-rights-annex covers the FRIA obligation as long as the annex addresses every Charter right separately. Until the first enforcement decision lands, write the document so it would survive either reading.
The hardest question in HR AI is not "is this allowed" but "can a candidate or worker who was rejected, demoted, or fired by this tool request a real human review and get it." Build the workflow that answers yes. Document the override rate. Write the DPIA and the FRIA in one document. The other compliance work is tractable from there.
Three things, in this order. Pull out the vendor's most recent technical documentation and search it for the words "facial," "emotion," "personality," "tone," and "voice." If any of those describe a feature you can switch on for an EU candidate, you have a 2 February 2025 problem. Walk through the vendor demo with one specific question on a sticky note: where does the rejected pool live, who can see it, and how does a candidate request a review of their own rejection. Ask for the override rate from a customer reference call. Not the design intent. The measured rate.
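The keyword search in the first step needs nothing more than a script, assuming the vendor documentation can be exported to plain text first; the folder layout, file format, and term list below are assumptions, and a hit only means "read this page," not "this feature is on."

```python
import re
from pathlib import Path

# Terms that flag a potential Article 5(1)(f) problem worth reading in context.
FLAG_TERMS = ["facial", "emotion", "personality", "tone", "voice"]

def scan_vendor_docs(folder: str) -> dict[str, list[str]]:
    """Return {filename: [matched terms]} for every plain-text file under the folder.

    Assumes the documentation has been exported to .txt; PDFs need extraction first.
    """
    hits: dict[str, list[str]] = {}
    for path in Path(folder).glob("**/*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        matched = [t for t in FLAG_TERMS if re.search(rf"\b{t}\b", text)]
        if matched:
            hits[path.name] = matched
    return hits
```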
If you are already running an AI HR tool in production without a DPIA or with a DPIA that does not name automated discrimination by metric, treat the August 2026 deadline as a hard backstop and add the assessment work to the next sprint. The work is not a checkbox. It is what you will hand the regulator if a complaint lands.