Legal framework
The use of AI in recruitment in Bulgaria is governed by an interlocking set of European and national instruments. At its centre sits Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (AI Act), published in the Official Journal on 12 July 2024; it entered into force on 1 August 2024. As a regulation, it is directly applicable in all Member States and requires no transposition.
In parallel, the processing of candidates’ personal data remains governed by the General Data Protection Regulation (GDPR), and in particular Article 22 concerning automated individual decision-making and profiling. Where an AI tool processes personal data and generates decisions with legal effects, both regulations apply cumulatively.
At national level, the Bulgarian Labour Code, the Protection Against Discrimination Act, the Personal Data Protection Act and related secondary legislation on employment relations also apply. The Commission for Personal Data Protection (CPDP) is the competent authority under the GDPR, while the Commission for Protection Against Discrimination handles discrimination complaints. The Bulgarian AI Act supervisory authority has not yet been formally designated as of April 2026.
AI Act application timeline
The AI Act applies in phases. The table below summarises the key dates directly relevant to HR practices:
| Date | What applies |
|---|---|
| 02.02.2025 | Chapter I (general provisions) + prohibited practices (Art. 5) |
| 02.08.2025 | General-purpose AI (GPAI) rules (Chapter V), notifying authorities (Chapter III, Section 4), governance (Chapter VII) and penalties (Chapter XII, except Art. 101) |
| 02.08.2026 | Majority of provisions, including obligations for high-risk AI under Annex III |
| 02.08.2027 | Full application, including high-risk AI under Annex I |
IMPORTANT for employers: the high-risk provisions for recruitment become applicable from 02.08.2026. At the time of publication of this article, Bulgarian businesses have approximately 4 months before these obligations take effect. This is a relatively short window for conducting an internal audit, reviewing supplier contracts, completing DPIAs and implementing human-oversight procedures.
Recruitment as high-risk — Annex III, point 4
Article 6(2) of the AI Act designates as “high-risk” the AI systems listed in Annex III. Point 4 of the Annex covers employment, workers’ management and access to self-employment. This is a broadly framed category that captures virtually the entire HR lifecycle.
(a) Systems for recruitment or selection
Classified as high-risk are AI systems intended to be used for the recruitment or selection of natural persons, including:
- Publishing targeted job ads — algorithms that direct advertisements to specific demographic groups or platforms.
- Analysing and filtering applications — automated CV screening, CV parsing, ranking of candidates by position-fit.
- Assessing candidates — AI tests, game-based assessments, interview-response analysis, verbal and non-verbal markers.
(b) Systems affecting employment relationships
The second sub-category covers AI systems used to make decisions, or materially influence decisions, affecting the terms of employment relationships. This includes:
- Promotion and termination of employment contracts;
- Allocation of tasks based on individual behaviour or personal traits;
- Monitoring and evaluating the performance and behaviour of workers.
As a result, even a tool that merely “supports” a managerial decision — without fully automating it — falls within the high-risk regime if it significantly influences the outcome.
Prohibited practices (Art. 5 AI Act)
Regardless of the high-risk classification, Article 5 of the AI Act establishes a list of AI practices that are absolutely prohibited. These prohibitions have been in force since 2 February 2025 and carry the highest penalty under the regulation.
Art. 5(1)(f) — Emotion recognition in the workplace
The most directly relevant prohibition for recruitment is the ban on placing on the market, putting into service or using AI systems for inferring emotions in the workplace and in educational institutions, except where the use is intended for medical or safety reasons.
This provision has direct and wide-ranging impact: AI tools for video interviews that “analyse emotions, enthusiasm, engagement or personality traits” from facial expressions, voice tone or micro-expressions are UNLAWFUL in recruitment in the EU. Recital 44 of the preamble expressly emphasises the power imbalance between employer and candidate and the unreliability of such technologies.
Other relevant prohibitions
- Art. 5(1)(g) — Biometric categorisation to deduce or infer race, political opinions, trade-union membership, religious or philosophical beliefs, sex life or sexual orientation.
- Art. 5(1)(c) — Social scoring of natural persons based on social behaviour or personal characteristics leading to detrimental treatment.
- Art. 5(1)(a) — Subliminal, purposefully manipulative or deceptive techniques that materially distort a person’s behaviour and cause, or are reasonably likely to cause, significant harm.
A breach of any of the prohibitions in Article 5 is the most serious possible infringement of the AI Act and attracts the maximum penalty of EUR 35,000,000 or 7 % of total worldwide annual turnover, whichever is higher.
Obligations for deployers (employers) — Art. 26
When an employer uses a high-risk AI system for recruitment or HR management, it acts as a “deployer” within the meaning of Article 3(4) of the AI Act. Article 26 establishes the following obligations for deployers:
- Use in accordance with the instructions of the provider — the employer may not use the system outside the purpose for which it has been validated.
- Human oversight within the meaning of Art. 14 — performed by competent persons with appropriate training, authority and resources.
- Relevant and representative input data — where the deployer has control over the input, it must ensure the data are appropriate for the intended purpose.
- Monitoring of the operation and immediate notification of the provider where a risk or a serious incident arises.
- Retention of automatically generated logs for a minimum of 6 months (Art. 26(6)), unless a longer period is prescribed by other legislation.
- Informing workers and their representatives (trade unions, representatives under Art. 7 of the Labour Code) BEFORE putting a high-risk AI system into service in the workplace — Art. 26(7).
- Informing natural persons who are subject to decisions taken with or assisted by high-risk AI systems — Art. 26(11). In the recruitment context this means every candidate must be notified.
- Cooperation with competent authorities during investigations and audits.
- Data Protection Impact Assessment (DPIA) under Art. 35 GDPR — mandatory for systematic and large-scale automated decision-making in recruitment.
- FRIA (Fundamental Rights Impact Assessment) under Art. 27 of the AI Act — mandatory for public-sector bodies and certain categories of private deployers.
The weight of these obligations reflects the risk that high-risk AI systems pose to the fundamental rights of candidates and workers. Failure to comply with any of them can result in administrative penalties and in civil claims for damages.
GDPR Art. 22 — automated decision-making
Running in parallel with the AI Act, Article 22 GDPR governs the data subject’s right not to be subject to automated decision-making. The provision states:
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
The European Data Protection Board (EDPB) and national supervisory authorities consistently hold that recruitment decisions (rejection, shortlisting, job offer, automatic refusal) clearly fall within the category of decisions that significantly affect the data subject.
Exceptions under Art. 22(2)
- Necessary for entering into or performing a contract between the data subject and the controller;
- Authorised by Union or Member State law, with suitable safeguards in place;
- Based on the data subject’s explicit consent.
Even where an exception applies, the controller must ensure suitable safeguards under Art. 22(3), including the data subject’s right to:
- Human intervention on the part of the controller;
- Express his or her point of view;
- Contest the decision.
CJEU SCHUFA case (C-634/21)
In its judgment of 7 December 2023 in case C-634/21 (SCHUFA Holding), the Court of Justice of the EU adopted an expansive reading of Art. 22: even a probabilistic score generated by a third party can amount to a “decision” within the meaning of Art. 22 where a subsequent decision of another controller is based on it. This interpretation is directly relevant to recruitment scoring tools — for instance, AI platforms generating a “culture fit score” or “employability score” and supplying the result to an employer.
Discrimination risks
The most serious substantive legal risk when using AI in recruitment is the risk of discrimination — direct or indirect — on the basis of protected characteristics. AI systems are not neutral: they reflect the data on which they were trained and the statistical dependencies embedded within them.
| Risk | Description |
|---|---|
| Historical bias | Training on past discriminatory decisions (e.g. Amazon’s abandoned in-house tool that penalised CVs containing the word “women’s”) |
| Proxy discrimination | Facially neutral features correlating with protected characteristics — postcode, name, school attended, hobbies |
| Feedback loops | Biased outputs reinforce bias in input data across subsequent training iterations |
| Language and accent | LLMs and speech-analysis systems that systematically score non-native speakers or regional accents lower |
| Age | CV gaps (linked to maternity, caregiving, disability) interpreted as a negative signal |
| Sex, ethnicity, disability | Protected categories particularly exposed, because historical decision data frequently encodes bias against them |
The Protection Against Discrimination Act prohibits both direct and indirect discrimination on more than 19 protected grounds, including sex, race, nationality, ethnicity, citizenship, origin, religion, disability, age, sexual orientation, property status and others. The Commission for Protection Against Discrimination is the competent authority for complaint proceedings.
Reversed burden of proof: under Art. 9 of the Protection Against Discrimination Act, once the claimant establishes facts from which discrimination may be inferred, the burden shifts to the respondent (employer) to prove that the principle of equal treatment has not been breached. This makes AI systems particularly risky: in the absence of transparency and explainability, the employer may find itself unable to justify the model’s decisions.
Candidate rights
Candidates subject to AI-based selection enjoy a range of rights stemming from the AI Act, the GDPR and national legislation:
- Right to be informed that AI is being used — AI Act Art. 26(11) and GDPR Arts. 13–14 (information at or before data collection);
- Right to meaningful human review — GDPR Art. 22(3) and AI Act Art. 14 (human oversight);
- Right to object to processing based on legitimate interests — GDPR Art. 21;
- Right of access to personal data, including the logic involved, the significance and the envisaged consequences — GDPR Art. 15(1)(h);
- Right to rectification and erasure — GDPR Arts. 16 and 17;
- Right to lodge a complaint with the CPDP (GDPR), the Commission for Protection Against Discrimination, or the General Labour Inspectorate;
- Right to compensation for pecuniary and non-pecuniary damage under GDPR Art. 82 and general Bulgarian civil law.
Penalties under the AI Act (Art. 99)
Article 99 of the AI Act establishes a three-tier structure of administrative fines, with the maximum amount depending on the severity of the infringement:
| Infringement | Penalty |
|---|---|
| Prohibited practices (Art. 5) | Up to EUR 35,000,000 or 7 % of total worldwide annual turnover (whichever is higher) |
| Breach of high-risk AI obligations (incl. Art. 26 for deployers) | Up to EUR 15,000,000 or 3 % of total worldwide annual turnover |
| Supplying incorrect, incomplete or misleading information to authorities | Up to EUR 7,500,000 or 1 % of total worldwide annual turnover |

For SMEs and start-ups, each fine is capped at whichever of the two amounts is lower (Art. 99(6)).
GDPR penalties remain separate and apply cumulatively: up to EUR 20 million or 4 % of annual worldwide turnover for breaches of Art. 22 and other core provisions. An employer using a prohibited emotion-recognition tool in breach of both the AI Act and the GDPR may face penalties under both regimes.
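The "whichever is higher" logic, and its reversal for SMEs under Art. 99(6), can be sketched as follows. The tier amounts are those from the table above; the function itself is a hypothetical illustration, not an official calculation method.

```python
# Illustrative sketch of the AI Act Art. 99 fine ceilings.
# Tier amounts are (fixed ceiling in EUR, share of worldwide annual turnover).
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Art. 5 breaches
    "high_risk_obligation": (15_000_000, 0.03),   # incl. Art. 26 deployer duties
    "misleading_information": (7_500_000, 0.01),  # incorrect info to authorities
}

def fine_ceiling(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine: the higher of the fixed amount and the turnover-based
    amount, or the lower of the two for SMEs and start-ups (Art. 99(6))."""
    fixed, share = TIERS[tier]
    turnover_based = share * worldwide_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A company with EUR 1bn worldwide turnover using a prohibited tool:
print(fine_ceiling("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For a large undertaking the turnover-based amount quickly dominates the fixed ceiling, which is why the percentage figures, not the nominal sums, are the practically relevant exposure.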
Practical recommendations
Given the proximity of 2 August 2026, we recommend the following action plan for Bulgarian employers:
- Inventory all AI tools used in the HR process — job ad publishing, screening, interviews, assessments, onboarding.
- Classify each tool against the AI Act categories — prohibited / high-risk / limited risk / minimal risk.
- Immediately remove any tools for emotion recognition or personality-trait inference from video or audio — these have been prohibited since 2 February 2025.
- Review supplier contracts — require declarations of conformity with the AI Act, CE marking, technical documentation and cooperation during audits.
- Prepare a DPIA under Art. 35 GDPR for every high-risk processing of candidate data — mandatory.
- Prepare a transparency notice — clear information to candidates at the application stage that AI is used, what the logic is and what the consequences are.
- Ensure human-in-the-loop — no rejection should be purely automatic; every decision passes through a competent human.
- Conduct regular bias testing — independent audits, statistical parity tests, disparate-impact analyses.
- Consult workers and their representatives under Art. 26(7) of the AI Act — BEFORE deployment.
- Retain logs for a minimum of 6 months — verify that your infrastructure supports this.
- Prepare an internal policy on the use of AI in recruitment, including procedures, responsibilities and control points.
- Train the HR team — managers and recruiters must know the legal framework and its limits.
- Prepare an incident plan — response to bias, wrongful rejection, candidate complaints or CPDP/Discrimination Commission inspections.
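The bias-testing recommendation above can be made concrete with a minimal disparate-impact check. The sketch below applies the "four-fifths rule", a screening heuristic from US employment practice that is widely used in bias audits but is not itself an EU legal standard; the group names and counts are hypothetical.

```python
# Minimal disparate-impact check: compare each group's selection rate
# against the most-favoured group's rate.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants from a group who were shortlisted."""
    return selected / applicants

def disparate_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group rate.
    A ratio below 0.8 is a conventional red flag for adverse impact."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical screening outcomes from an AI shortlisting tool:
rates = {
    "group_a": selection_rate(45, 100),  # 0.45
    "group_b": selection_rate(30, 100),  # 0.30
}
ratios = disparate_impact_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)  # group_b at ~0.67 is flagged
```

A flagged ratio does not itself prove discrimination, but under the reversed burden of proof in Art. 9 of the Protection Against Discrimination Act, an employer unable to explain such a disparity is in a weak defensive position.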
Need a legal review of AI tools in recruitment?
The Innovires team can assist you with classifying your AI tools under the AI Act, preparing DPIAs and FRIAs, reviewing supplier contracts, drafting internal policies for the use of AI in HR and preparing for the entry into force of the obligations on 2 August 2026.