In this article you will learn
- How GDPR and the AI Act interact when processing personal data through AI
- Which GDPR principles apply to AI systems and what the “black box” problem means
- When a DPIA is mandatory and when an FRIA (Fundamental Rights Impact Assessment) is required
- How to choose the correct legal basis for processing data through AI
- What rights data subjects have regarding automated decision-making (Art. 22 GDPR)
- What sanctions apply for violations under both regulations
The Dual Regulatory Regime — GDPR + AI Act
GDPR (in force since 25 May 2018)
The General Data Protection Regulation (GDPR) governs the processing of personal data of natural persons across the entire EU. It ALWAYS applies when an AI system processes personal data — regardless of whether the system is classified as “high-risk” under the AI Act or not. For a detailed guide to GDPR compliance, see our separate article.
AI Act (phased entry into force)
The Artificial Intelligence Act introduces specific obligations depending on the risk level of the AI system:
| Date | What takes effect |
|---|---|
| 02.02.2025 | Prohibited AI practices (Art. 5) |
| 02.08.2025 | Obligations for general-purpose AI (GPAI) models (Chapter V) |
| 02.08.2026 | Majority of provisions — incl. high-risk AI under Annex III |
| 02.08.2027 | Full application (incl. Annex I) |
Key difference
- GDPR regulates the data — how it is collected, processed, stored and shared
- AI Act regulates the systems — how they are designed, deployed, used and monitored
Important: The AI Act explicitly states that it does NOT affect the application of GDPR (Art. 2(7)). The two regulations apply cumulatively — compliance with one does not exempt from compliance with the other.
In practice — for a company using an AI system to process personal data (e.g. HR scoring, credit assessment, medical diagnostics), both regimes apply simultaneously.
GDPR Principles When Deploying AI
AI systems must comply with all 7 GDPR principles (Art. 5). If you are planning to deploy AI in your business, also review our GDPR handbook for businesses.
1. Lawfulness, fairness and transparency
- AI decisions must have a valid legal basis (see the section below)
- Data subjects must be informed that their data is being processed by an AI system
- The “black box” problem — the lack of explainability of AI decisions directly conflicts with the transparency principle
2. Purpose limitation
- Data collected for one purpose cannot be reused for training an AI model without a new legal basis
- EDPB Opinion 28/2024: examines when and how AI models can be considered “anonymous” and therefore outside the scope of GDPR
3. Data minimisation
- AI systems by nature require large volumes of data — this is in direct tension with the minimisation principle
- Solutions: privacy-enhancing technologies (PETs), federated learning, synthetic data
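As a concrete illustration of minimisation combined with pseudonymisation (one of the simplest PETs), the sketch below drops fields the model does not need, coarsens others, and replaces a direct identifier with a keyed hash. The field names and the key are hypothetical. Note that keyed hashing is pseudonymisation, not anonymisation, so GDPR continues to apply to the resulting records.

```python
# Illustrative sketch: minimise and pseudonymise a record before it reaches
# an AI pipeline. Field names and the secret key are hypothetical; a real
# deployment would manage the key in a secrets store.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key

def pseudonymise(value: str) -> str:
    """Deterministic keyed hash: same input -> same token, but not reversible
    without the key (pseudonymisation, NOT anonymisation under GDPR)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def minimise(record: dict) -> dict:
    """Forward only what the model needs; replace the direct identifier."""
    return {
        "subject_token": pseudonymise(record["email"]),               # identifier replaced
        "age_band": "30-39" if 30 <= record["age"] < 40 else "other", # coarsened
        # 'name' and 'address' are simply not forwarded (minimisation)
    }

record = {"email": "ana@example.com", "name": "Ana", "address": "Sofia", "age": 34}
print(minimise(record))
```

The same pattern generalises: the fewer raw identifiers that cross into the AI system, the smaller the surface for both GDPR and AI Act obligations.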
4. Accuracy
- AI models can generate inaccurate or biased results
- GDPR requires data to be “accurate and, where necessary, kept up to date”
- Practical challenge: when training a model with historical data, past biases are reproduced
5. Storage limitation
- Training data — how long can it be retained?
- Data processed by the model in real time — different retention period
- EDPB Opinion 28/2024: if the model is genuinely anonymous (it passes the EDPB’s anonymity tests), GDPR does not apply to the model itself — but it still applies to the input data
6. Integrity and confidentiality (security)
- AI systems as a new attack surface: model poisoning, data extraction, adversarial attacks
- GDPR Art. 32 requires appropriate technical and organisational measures
7. Accountability
- The controller must be able to demonstrate compliance — documentation, audits, logs
- The AI Act adds: mandatory logs for a minimum of 6 months for high-risk systems
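Points 5 and 7 pull in opposite directions: the AI Act requires logs to be kept for at least six months, while GDPR storage limitation requires deletion once retention is no longer justified. A simple way to reconcile the two is a retention window with a statutory floor and a documented ceiling, sketched below. The 24-month maximum is a hypothetical policy choice, not a statutory figure.

```python
# Illustrative sketch of a log-retention rule reconciling the AI Act's
# six-month minimum with GDPR storage limitation.
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # AI Act: keep logs at least ~6 months
MAX_RETENTION = timedelta(days=730)  # hypothetical documented maximum (~24 months)

def may_delete(log_timestamp: datetime, now: datetime) -> bool:
    """An entry may be deleted only after the statutory minimum has passed."""
    return now - log_timestamp > MIN_RETENTION

def must_delete(log_timestamp: datetime, now: datetime) -> bool:
    """Past the documented maximum, storage limitation requires deletion."""
    return now - log_timestamp > MAX_RETENTION

now = datetime(2026, 8, 2, tzinfo=timezone.utc)
fresh = now - timedelta(days=30)
old = now - timedelta(days=800)
print(may_delete(fresh, now), must_delete(old, now))  # False True
```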
The Data Lifecycle in AI Systems
Phase 1: Collecting training data
- Legal basis for the collection?
- Informing data subjects about future use for AI training?
- If data was collected for another purpose → compatibility test under Art. 6(4) GDPR
Phase 2: Model training
- Processing personal data for training = an independent processing operation
- A legal basis is required (different from or identical to phase 1)
- If the model “memorises” personal data → the model contains personal data → GDPR continues to apply
Phase 3: Deployment and operation
- Input data → AI processing → output data (decision/prediction/recommendation)
- If input data is personal → GDPR
- If the output affects a specific natural person → GDPR (even if the input is aggregated)
Phase 4: Monitoring and improvement
- Feedback, re-training, fine-tuning
- New processing operations → new compliance assessments
Our GDPR & AI team can help
Our specialists at gdprbg.com have experience with over 300 clients in the field of data protection and AI compliance.
View services at gdprbg.com →
Legal Basis for Processing Data Through AI
The choice of legal basis (Art. 6 GDPR) is critical and must be made BEFORE deploying the AI system:
| Legal basis | Applicability for AI |
|---|---|
| Consent (Art. 6(1)(a)) | Difficult to apply — must be specific, informed and freely given; the “black box” complicates informing |
| Performance of a contract (Art. 6(1)(b)) | Possible if AI processing is necessary for the contract (e.g. insurance, credit) |
| Legal obligation (Art. 6(1)(c)) | Limited — when law requires the use of AI (e.g. AML screening) |
| Legitimate interest (Art. 6(1)(f)) | Most commonly used — requires a balancing test; EDPB Opinion 28/2024 provides guidance |
EDPB Opinion 28/2024 on legitimate interest in AI
- The controller must identify a specific, real and present interest
- The processing must be necessary (not merely convenient)
- In the balancing test, the controller’s interest must not be overridden by the interests and fundamental rights of data subjects, taking into account their reasonable expectations
- For unlawfully trained models: the EDPB acknowledges that it is not automatically necessary to destroy the entire model — assessed case-by-case
DPIA vs. FRIA — The Two Impact Assessments
DPIA (Data Protection Impact Assessment) — GDPR Art. 35
- Mandatory when processing is likely to result in a high risk to the rights and freedoms of natural persons
- Systematic and extensive profiling → DPIA mandatory
- AI-based decision-making with legal or similarly significant effects → DPIA mandatory
- CPDP (the Bulgarian DPA) has published a list of activities for which DPIA is mandatory
- Content: description of the processing, assessment of necessity and proportionality, risks, mitigation measures
FRIA (Fundamental Rights Impact Assessment) — AI Act Art. 27
- Mandatory for deployers of high-risk AI who are:
- Public bodies
- Private operators providing public services
- Deployers of certain Annex III systems (e.g. creditworthiness assessment, life and health insurance risk pricing)
- Scope: broader than DPIA — covers all fundamental rights (not only data protection)
- Deadline: before putting into operation
When are BOTH required?
If the AI system is high-risk under Annex III AND processes personal data → DPIA + FRIA simultaneously. Different scope, methodology and documentation, but they can share common elements.
| Criterion | DPIA (GDPR) | FRIA (AI Act) |
|---|---|---|
| Legal basis | Art. 35 GDPR | Art. 27 AI Act |
| Mandatory for | High-risk processing of personal data | Deployers of high-risk AI (mainly public sector) |
| Scope | Data protection | All fundamental rights |
| Deadline | Before processing | Before deployment |
| Supervisory authority | CPDP | National AI authority (not yet designated) |
Profiling and Automated Decision-Making — Art. 22 GDPR + AI Act
Art. 22 GDPR — the right not to be subject to automated decisions
The rule (Art. 22(1)): data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. Exceptions (Art. 22(2)): necessary for a contract, authorised by EU/Member State law, explicit consent.
CJEU case SCHUFA (C-634/21, 07.12.2023): even probabilistic scores used by third parties can constitute a “decision” under Art. 22 → directly relevant for AI scoring systems. If you use AI for recruitment and staff selection, familiarise yourself with the legal risks.
AI Act — additional rights (Art. 86)
The AI Act introduces a new individual right: any person affected by a decision made/assisted by a high-risk AI system has the right to a clear explanation. This GOES BEYOND the right under Art. 22 GDPR, which applies only to “solely automated” decisions.
Practical implications
- AI recommendation + human approval → Art. 22 GDPR may not apply (not “solely automated”)
- BUT AI Act Art. 86 → the right to explanation DOES APPLY (even with human-in-the-loop)
- Conclusion: human-in-the-loop is no longer sufficient protection — meaningful human oversight is required
The “Black Box” Problem — Transparency and Explainability
What is the “black box”?
Many AI models (especially deep learning) make decisions in a way that is practically impossible to explain to humans. This is in direct tension with:
- GDPR Art. 13-14 — the right to information about “the logic involved in the processing”
- GDPR Art. 15 — the right of access to “meaningful information about the logic involved”
- AI Act Art. 13 — transparency obligation for high-risk AI systems
- AI Act Art. 86 — right to explanation
Practical approaches
- Explainable AI (XAI) — techniques such as SHAP, LIME, attention maps
- Model cards — standardised model documentation
- Layered transparency — explanations at different levels for different audiences
- Algorithmic audits — independent audits of AI systems
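SHAP and LIME are established libraries; to keep the illustration dependency-free, the sketch below implements the core perturbation idea behind them: replace each feature with a baseline value and measure how the score changes. The linear scoring function is a hypothetical stand-in for an opaque model.

```python
# Minimal, dependency-free sketch of perturbation-based explanation (the idea
# behind LIME/SHAP-style attribution). The scoring function is a hypothetical
# stand-in for a deployed "black box" model.
def score(features: dict) -> float:
    # Hypothetical model: in reality this would be an opaque deployed system.
    return 0.6 * features["income"] - 0.3 * features["debt"] + 0.1 * features["tenure"]

def explain(features: dict, baseline: dict) -> dict:
    """Per-feature attribution: the score drop when that feature is set to
    its baseline value while all others are kept fixed."""
    full = score(features)
    return {
        name: round(full - score({**features, name: baseline[name]}), 4)
        for name in features
    }

applicant = {"income": 1.0, "debt": 0.5, "tenure": 0.2}
baseline = {"income": 0.0, "debt": 0.0, "tenure": 0.0}
print(explain(applicant, baseline))
# income contributes +0.6, debt -0.15, tenure +0.02 to the score
```

An explanation of this shape (“your debt level lowered the score by X”) is exactly the kind of “meaningful information about the logic involved” that Art. 15 GDPR and Art. 86 AI Act point towards.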
Sharing Data with AI Platforms (Third Parties)
When is the company a controller, when a processor?
| Scenario | Role of the company | Role of the AI provider |
|---|---|---|
| The AI platform processes data on our instructions | Controller | Processor (Art. 28 GDPR) |
| The AI platform determines purposes itself | Controller | Controller (joint) |
| Data uploaded for training the provider’s model | Controller | Independent controller (for its own training purposes) |
Obligations when sharing with AI providers
- Art. 28 GDPR agreement if the AI provider is a processor
- International transfer — if AI servers are outside the EU (e.g. OpenAI, Google AI) → Standard Contractual Clauses (SCCs) or adequacy
- Regular checks on what happens with the data: is it retained for training? is it shared?
- AI Act Art. 26(5) — contractual conditions for deployers of high-risk AI
For more information on EU data regulation, see our analysis of the EU Data Act and its application in Bulgaria.
Local vs. cloud processing
| Criterion | Local (on-premise) | Cloud |
|---|---|---|
| Control | Full | Limited |
| International transfer | None | Likely → SCC/adequacy |
| Security | Your responsibility | Shared |
| GDPR complexity | Lower | Higher |
| AI Act — logs | Internal | Via provider |
Sanctions — Double Accumulation
AI systems processing personal data can be sanctioned under BOTH regulations for the same conduct:
| Regulation | Maximum sanction (whichever is higher) |
|---|---|
| GDPR | EUR 20,000,000 or 4% of global annual turnover |
| AI Act — prohibited practices | EUR 35,000,000 or 7% |
| AI Act — high-risk violations | EUR 15,000,000 or 3% |
| AI Act — false information | EUR 7,500,000 or 1% |
Cumulative effect: a violation by an AI recruitment system could lead to a GDPR sanction (up to 4% of turnover) + AI Act sanction (up to 3%) = up to 7% of global turnover.
For information on EU digital services regulation, see also our article on the Digital Services Act in Bulgaria.
The Role of CPDP and National AI Supervision
CPDP (Commission for Personal Data Protection)
- Supervisory authority under GDPR and the Bulgarian Personal Data Protection Act
- The EDPB has clearly stated that DPAs should be designated as Market Surveillance Authorities under the AI Act in many cases
- CPDP is already participating in pan-European initiatives on AI and personal data (Joint Statement from 2025 on AI-generated images)
National AI supervision
- As of April 2026, Bulgaria has not yet designated a national competent authority for the AI Act
- Candidates under discussion: CPDP, CPC (competition authority), SEGA (e-governance agency)
- Practical significance: GDPR enforcement (through CPDP) will be the first line of enforcement for AI violations involving personal data
For more information on anonymisation and pseudonymisation under GDPR, see our analysis of EDPB guidelines.
10 Practical Steps for Compliance
- Inventory your AI systems — which ones process personal data?
- Classify under the AI Act — prohibited, high-risk, limited, minimal risk
- Determine the legal basis under GDPR for each AI operation
- Conduct a DPIA (GDPR Art. 35) for each AI system with high-risk processing
- Conduct an FRIA (AI Act Art. 27) if you are a deployer of high-risk AI in the public sector
- Update data subject information — notify them about the use of AI
- Review contracts with AI providers — Art. 28 GDPR + international transfer
- Ensure human-in-the-loop for decisions with legal effects (meaningful, not formal)
- Document everything — logs (min. 6 months under AI Act), DPIA, decisions
- Train your staff — HR, marketing, IT, management — everyone working with AI
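Steps 1–5 above can be sketched as a simple inventory structure: record each system’s AI Act risk class and GDPR status, then derive which assessments it needs. The risk classes follow the AI Act’s four tiers, but the DPIA/FRIA trigger logic below is deliberately simplified and is no substitute for legal analysis.

```python
# Simplified sketch of steps 1-5: inventory AI systems and derive which
# impact assessments each one needs. Trigger logic is intentionally
# simplified; real classification requires legal review.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    risk_class: str                # "prohibited" | "high" | "limited" | "minimal"
    processes_personal_data: bool
    public_sector_deployer: bool

def required_assessments(s: AISystem) -> list[str]:
    needed = []
    if s.processes_personal_data and s.risk_class == "high":
        needed.append("DPIA")      # GDPR Art. 35 (simplified trigger)
    if s.risk_class == "high" and s.public_sector_deployer:
        needed.append("FRIA")      # AI Act Art. 27 (simplified trigger)
    return needed

hr_scoring = AISystem("CV screening", "high", True, False)
welfare_bot = AISystem("Benefits triage", "high", True, True)
print(required_assessments(hr_scoring))   # ['DPIA']
print(required_assessments(welfare_bot))  # ['DPIA', 'FRIA']
```

Even a spreadsheet version of this inventory makes the accountability documentation (step 9) far easier to produce on request.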
Conclusion
The intersection between GDPR and the AI Act creates a new, more complex regulatory landscape for any company using artificial intelligence to process personal data. Key takeaways:
- The two regulations apply cumulatively — they do not exclude each other
- A DPIA is almost always mandatory for AI processing of personal data
- Transparency is critical — the “black box” is no excuse
- Sanctions accumulate — up to 7% of global turnover
- 02.08.2026 is the deadline for high-risk AI under Annex III
Need a GDPR audit of your AI systems?
Innovires Legal has a specialised GDPR and AI team with experience serving over 300 clients. For full support — visit our dedicated site gdprbg.com (site in Bulgarian).