Personal Data and Artificial Intelligence — GDPR and AI Act at the Intersection (2026)

Published: 14 April 2026 | Last updated: 14 April 2026

Artificial intelligence is transforming the way businesses process personal data — from automated recruitment and credit scoring to personalised marketing and medical diagnostics. With the majority of the AI Act (Regulation (EU) 2024/1689) taking effect on 2 August 2026, companies face a dual regulatory regime: simultaneous compliance with GDPR (Regulation (EU) 2016/679) and the AI Act. This article analyses the intersection points, practical challenges and concrete compliance steps.

In this article you will learn

  • How GDPR and the AI Act interact when processing personal data through AI
  • Which GDPR principles apply to AI systems and what the “black box” problem means
  • When a DPIA is mandatory and when an FRIA (Fundamental Rights Impact Assessment) is required
  • How to choose the correct legal basis for processing data through AI
  • What rights data subjects have regarding automated decision-making (Art. 22 GDPR)
  • What sanctions apply for violations under both regulations

The Dual Regulatory Regime — GDPR + AI Act

GDPR (in force since 25 May 2018)

The General Data Protection Regulation (GDPR) governs the processing of personal data of natural persons across the entire EU. It applies ALWAYS when an AI system processes personal data — regardless of whether the system is classified as “high-risk” under the AI Act or not. For a detailed guide to GDPR compliance, see our separate article.

AI Act (phased entry into force)

The Artificial Intelligence Act introduces specific obligations depending on the risk level of the AI system:

| Date | What takes effect |
| --- | --- |
| 02.02.2025 | Prohibited AI practices (Art. 5) |
| 02.08.2025 | Rules for GPAI models (Chapter V) |
| 02.08.2026 | Majority of provisions — incl. high-risk AI under Annex III |
| 02.08.2027 | Full application (incl. Annex I) |

Key difference

  • GDPR regulates the data — how it is collected, processed, stored and shared
  • AI Act regulates the systems — how they are designed, deployed, used and monitored

Important: The AI Act explicitly states that it does NOT affect the application of GDPR (Art. 2(7)). The two regulations apply cumulatively — compliance with one does not exempt from compliance with the other.

In practice — for a company using an AI system to process personal data (e.g. HR scoring, credit assessment, medical diagnostics), both regimes apply simultaneously.

For a detailed analysis of the interaction between GDPR and the AI Act, visit our dedicated article on gdprbg.com (site in Bulgarian).

GDPR Principles When Deploying AI

AI systems must comply with all 7 GDPR principles (Art. 5). If you are planning to deploy AI in your business, also review our GDPR handbook for businesses.

1. Lawfulness, fairness and transparency

  • AI decisions must have a valid legal basis (see the section below)
  • Data subjects must be informed that their data is being processed by an AI system
  • The “black box” problem — the lack of explainability of AI decisions directly conflicts with the transparency principle

2. Purpose limitation

  • Data collected for one purpose cannot be reused for training an AI model without a new legal basis
  • EDPB Opinion 28/2024: examines when and how AI models can be considered “anonymous” and therefore outside the scope of GDPR

3. Data minimisation

  • AI systems by nature require large volumes of data — this is in direct tension with the minimisation principle
  • Solutions: privacy-enhancing technologies (PETs), federated learning, synthetic data

4. Accuracy

  • AI models can generate inaccurate or biased results
  • GDPR requires data to be “accurate and, where necessary, kept up to date”
  • Practical challenge: when training a model with historical data, past biases are reproduced

5. Storage limitation

  • Training data — how long can it be retained?
  • Data processed by the model in real time — different retention period
  • EDPB Opinion 28/2024: if the model is truly anonymous (passes the 3 tests), GDPR does not apply to the model itself — but it continues to apply to the input data

6. Integrity and confidentiality (security)

  • AI systems as a new attack surface: model poisoning, data extraction, adversarial attacks
  • GDPR Art. 32 requires appropriate technical and organisational measures

7. Accountability

  • The controller must be able to demonstrate compliance — documentation, audits, logs
  • The AI Act adds: mandatory logs for a minimum of 6 months for high-risk systems

Tip from our practice: Maintaining complete documentation is the most common weakness in AI projects. Our team at gdprbg.com (site in Bulgarian) helps build a compliance framework from day 1.

The Data Lifecycle in AI Systems

Phase 1: Collecting training data

  • Legal basis for the collection?
  • Informing data subjects about future use for AI training?
  • If data was collected for another purpose → compatibility test under Art. 6(4) GDPR

Phase 2: Model training

  • Processing personal data for training = an independent processing operation
  • A legal basis is required (it may be the same basis as in phase 1 or a different one)
  • If the model “memorises” personal data → the model contains personal data → GDPR continues to apply

Phase 3: Deployment and operation

  • Input data → AI processing → output data (decision/prediction/recommendation)
  • If input data is personal → GDPR
  • If the output affects a specific natural person → GDPR (even if the input is aggregated)

Phase 4: Monitoring and improvement

  • Feedback, re-training, fine-tuning
  • New processing operations → new compliance assessments

Our GDPR & AI team can help

Our specialists at gdprbg.com have experience with over 300 clients in the field of data protection and AI compliance.

View services at gdprbg.com →

Legal Basis for Processing Data Through AI

The choice of legal basis (Art. 6 GDPR) is critical and must be made BEFORE deploying the AI system:

| Legal basis | Applicability for AI |
| --- | --- |
| Consent (Art. 6(1)(a)) | Difficult to apply — consent must be specific, informed and freely given; the “black box” complicates adequate information |
| Performance of a contract (Art. 6(1)(b)) | Possible if AI processing is necessary for the contract (e.g. insurance, credit) |
| Legal obligation (Art. 6(1)(c)) | Limited — when law requires the use of AI (e.g. AML screening) |
| Legitimate interest (Art. 6(1)(f)) | Most commonly used — requires a balancing test; EDPB Opinion 28/2024 provides guidance |

EDPB Opinion 28/2024 on legitimate interest in AI

  • The controller must identify a specific, real and present interest
  • The processing must be necessary (not merely convenient)
  • The balancing test must come out in the controller’s favour, taking into account the reasonable expectations of data subjects
  • For unlawfully trained models: the EDPB acknowledges that it is not automatically necessary to destroy the entire model — assessed case-by-case

DPIA vs. FRIA — The Two Impact Assessments

DPIA (Data Protection Impact Assessment) — GDPR Art. 35

  • Mandatory when processing is likely to result in a high risk to the rights and freedoms of natural persons
  • Systematic and extensive profiling → DPIA mandatory
  • AI-based decision-making → DPIA mandatory
  • CPDP (the Bulgarian DPA) has published a list of activities for which DPIA is mandatory
  • Content: description of the processing, assessment of necessity and proportionality, risks, mitigation measures

Our team at gdprbg.com (site in Bulgarian) conducts DPIAs for AI systems — with ready-made templates and real case studies.

FRIA (Fundamental Rights Impact Assessment) — AI Act Art. 27

  • Mandatory for deployers of high-risk AI who are:
    • Public bodies
    • Private operators providing public services
  • Scope: broader than DPIA — covers all fundamental rights (not only data protection)
  • Deadline: before putting into operation

When are BOTH required?

If the AI system is high-risk under Annex III AND processes personal data → DPIA + FRIA simultaneously. Different scope, methodology and documentation, but they can share common elements.

| Criterion | DPIA (GDPR) | FRIA (AI Act) |
| --- | --- | --- |
| Legal basis | Art. 35 GDPR | Art. 27 AI Act |
| Mandatory for | High-risk processing of personal data | Deployers of high-risk AI (public sector) |
| Scope | Data protection | All fundamental rights |
| Deadline | Before processing | Before deployment |
| Supervisory authority | CPDP | National AI authority (not yet designated) |
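The dual test described above can be sketched as a small helper. This is an illustrative sketch only; the class and field names are our own assumptions, not terms from either regulation.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative model of one AI deployment; all field names are our own."""
    processes_personal_data: bool        # any personal data in scope (GDPR)
    high_risk_processing: bool           # likely high risk to rights and freedoms (Art. 35 GDPR)
    high_risk_annex_iii: bool            # high-risk AI system under Annex III AI Act
    public_body_or_public_service: bool  # deployer within Art. 27 AI Act scope

def required_assessments(s: AISystem) -> list[str]:
    """Which of the two impact assessments apply, per the criteria above."""
    needed = []
    # DPIA (Art. 35 GDPR): high-risk processing of personal data
    if s.processes_personal_data and s.high_risk_processing:
        needed.append("DPIA")
    # FRIA (Art. 27 AI Act): public-sector deployer of high-risk AI
    if s.high_risk_annex_iii and s.public_body_or_public_service:
        needed.append("FRIA")
    return needed

# Example: a public hospital deploying an Annex III diagnostic system
print(required_assessments(AISystem(True, True, True, True)))  # ['DPIA', 'FRIA']
```

In real projects the two conditions are, of course, legal judgments rather than booleans, but encoding them this way makes the cumulative logic easy to audit.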

Profiling and Automated Decision-Making — Art. 22 GDPR + AI Act

Art. 22 GDPR — the right not to be subject to automated decisions

“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Exceptions (Art. 22(2)): necessary for a contract, authorised by EU/Member State law, explicit consent.

CJEU case SCHUFA (C-634/21, 07.12.2023): even probabilistic scores used by third parties can constitute a “decision” under Art. 22 → directly relevant for AI scoring systems. If you use AI for recruitment and staff selection, familiarise yourself with the legal risks.

AI Act — additional rights (Art. 86)

The AI Act introduces a new individual right: any person affected by a decision made/assisted by a high-risk AI system has the right to a clear explanation. This GOES BEYOND the right under Art. 22 GDPR, which applies only to “solely automated” decisions.

Practical implications

  • AI recommendation + human approval → Art. 22 GDPR may not apply (not “solely automated”)
  • BUT AI Act Art. 86 → the right to explanation DOES APPLY (even with human-in-the-loop)
  • Conclusion: human-in-the-loop is no longer sufficient protection — meaningful human oversight is required
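The difference in scope between the two rights can be captured in two predicates. This is an illustrative sketch, not statutory language; the function and parameter names are our own.

```python
def art22_gdpr_applies(solely_automated: bool, legal_or_significant_effect: bool) -> bool:
    # Art. 22 GDPR covers only decisions based SOLELY on automated processing
    # that produce legal or similarly significant effects
    return solely_automated and legal_or_significant_effect

def art86_ai_act_applies(high_risk_ai_in_decision: bool, person_affected: bool) -> bool:
    # Art. 86 AI Act grants an explanation right even when a human approves
    # the AI output (human-in-the-loop does not switch it off)
    return high_risk_ai_in_decision and person_affected

# AI recommendation followed by meaningful human approval:
print(art22_gdpr_applies(solely_automated=False, legal_or_significant_effect=True))  # False
print(art86_ai_act_applies(high_risk_ai_in_decision=True, person_affected=True))     # True
```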

The “Black Box” Problem — Transparency and Explainability

What is the “black box”?

Many AI models (especially deep learning) make decisions in a way that is practically impossible to explain to humans. This is in direct tension with:

  • GDPR Art. 13-14 — the right to information about “the logic involved in the processing”
  • GDPR Art. 15 — the right of access to “meaningful information about the logic involved”
  • AI Act Art. 13 — transparency obligation for high-risk AI systems
  • AI Act Art. 86 — right to explanation

Practical approaches

  1. Explainable AI (XAI) — techniques such as SHAP, LIME, attention maps
  2. Model cards — standardised model documentation
  3. Layered transparency — explanations at different levels for different audiences
  4. Algorithmic audits — independent audits of AI systems

Sharing Data with AI Platforms (Third Parties)

When is the company a controller, when a processor?

| Scenario | Role of the company | Role of the AI provider |
| --- | --- | --- |
| The AI platform processes data on our instructions | Controller | Processor (Art. 28 GDPR) |
| The AI platform determines purposes itself | Controller | Controller (joint) |
| Data uploaded for training the provider’s model | Controller | Independent controller (for the new purpose) |

Obligations when sharing with AI providers

  • Art. 28 GDPR agreement if the AI provider is a processor
  • International transfer — if AI servers are outside the EU (e.g. OpenAI, Google AI) → Standard Contractual Clauses (SCCs) or adequacy
  • Regular checks on what happens with the data: is it retained for training? is it shared?
  • AI Act Art. 26(5) — contractual conditions for deployers of high-risk AI

For more information on EU data regulation, see our analysis of the EU Data Act and its application in Bulgaria.

Local vs. cloud processing

| Criterion | Local (on-premise) | Cloud |
| --- | --- | --- |
| Control | Full | Limited |
| International transfer | None | Likely → SCCs/adequacy |
| Security | Your responsibility | Shared |
| GDPR complexity | Lower | Higher |
| AI Act — logs | Internal | Via provider |

Sanctions — Double Accumulation

AI systems processing personal data can be sanctioned under BOTH regulations for the same violation:

| Regulation | Maximum sanction |
| --- | --- |
| GDPR | EUR 20,000,000 or 4% of global annual turnover |
| AI Act — prohibited practices | EUR 35,000,000 or 7% |
| AI Act — high-risk violations | EUR 15,000,000 or 3% |
| AI Act — false information | EUR 7,500,000 or 1% |

Cumulative effect: a violation by an AI recruitment system could lead to a GDPR sanction (up to 4% of turnover) + AI Act sanction (up to 3%) = up to 7% of global turnover.
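The back-of-the-envelope arithmetic can be checked in a few lines. This is a sketch assuming the usual "whichever is higher" reading of both fine regimes; the function name is ours.

```python
def max_fine_eur(turnover_eur: float) -> dict[str, float]:
    """Ceilings from the sanctions listed above, taking the higher of the
    fixed cap and the turnover percentage (illustrative sketch only)."""
    gdpr = max(20_000_000, 0.04 * turnover_eur)
    ai_act_high_risk = max(15_000_000, 0.03 * turnover_eur)
    return {
        "GDPR": gdpr,
        "AI Act (high-risk)": ai_act_high_risk,
        "cumulative": gdpr + ai_act_high_risk,
    }

# A company with EUR 1 bn global annual turnover:
fines = max_fine_eur(1_000_000_000)
print(fines["cumulative"])  # 70000000.0 (7% of turnover)
```

For smaller companies the fixed caps dominate: below EUR 500 m turnover, the GDPR ceiling stays at EUR 20 m regardless of the percentage.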

For information on EU digital services regulation, see also our article on the Digital Services Act in Bulgaria.

The Role of CPDP and National AI Supervision

CPDP (Commission for Personal Data Protection)

  • Supervisory authority under GDPR and the Bulgarian Personal Data Protection Act
  • The EDPB has clearly stated that DPAs should be designated as Market Surveillance Authorities under the AI Act in many cases
  • CPDP is already participating in pan-European initiatives on AI and personal data (Joint Statement from 2025 on AI-generated images)

National AI supervision

  • As of April 2026: Bulgaria has not yet designated a national competent authority for the AI Act
  • Candidates under discussion: CPDP, CPC (competition authority), SEGA (e-governance agency)
  • Practical significance: GDPR enforcement (through CPDP) will be the first line of enforcement for AI violations involving personal data

For more information on anonymisation and pseudonymisation under GDPR, see our analysis of EDPB guidelines.

10 Practical Steps for Compliance

  1. Inventory your AI systems — which ones process personal data?
  2. Classify under the AI Act — prohibited, high-risk, limited, minimal risk
  3. Determine the legal basis under GDPR for each AI operation
  4. Conduct a DPIA (GDPR Art. 35) for each AI system with high-risk processing
  5. Conduct an FRIA (AI Act Art. 27) if you are a deployer of high-risk AI in the public sector
  6. Update data subject information — notify them about the use of AI
  7. Review contracts with AI providers — Art. 28 GDPR + international transfer
  8. Ensure human-in-the-loop for decisions with legal effects (meaningful, not formal)
  9. Document everything — logs (min. 6 months under AI Act), DPIA, decisions
  10. Train your staff — HR, marketing, IT, management — everyone working with AI

Need help with GDPR and AI compliance? Our specialised GDPR team at gdprbg.com (site in Bulgarian) offers a full package — audit, DPIA, DPO-as-a-Service, training and ongoing compliance. Request a free consultation.

Frequently Asked Questions

Is a DPIA required for every AI system?
Not automatically, but in practice — almost always. A DPIA is mandatory when processing is likely to result in a high risk to the rights of data subjects — and most AI systems processing personal data fall into this category. DPIA guide on gdprbg.com (site in Bulgarian).

GDPR or AI Act — which is more important?
Both are mandatory and apply in parallel. GDPR regulates data, the AI Act regulates systems. A violation can lead to sanctions under both regulations simultaneously — up to 7% of turnover cumulatively.

Can I use “legitimate interest” for training an AI model?
Yes, under certain conditions. EDPB Opinion 28/2024 provides guidance — a specific interest is required, along with a balancing test and documentation. It is not automatically applicable for all scenarios.

What is a “black box” and why is it a problem for GDPR?
A “black box” describes AI models whose internal logic is opaque. This conflicts with the right to information (Art. 13-14 GDPR) and the right to explanation (AI Act Art. 86). Solutions include Explainable AI techniques (SHAP, LIME, attention maps).

When do the high-risk obligations under the AI Act take effect?
On 2 August 2026 — for AI systems under Annex III (incl. HR, credit scoring, medicine, law enforcement).

Who is the competent authority in Bulgaria?
For GDPR — CPDP (Commission for Personal Data Protection). For the AI Act — as of April 2026, the national authority has not yet been designated. In practice, CPDP will be the first line for AI violations involving personal data.

How can I protect my business?
Start with a GDPR audit of your AI systems, determine the legal basis, conduct a DPIA and ensure meaningful human oversight. For a full package — contact gdprbg.com (site in Bulgarian).

Conclusion

The intersection between GDPR and the AI Act creates a new, more complex regulatory landscape for any company using artificial intelligence to process personal data. Key takeaways:

  • The two regulations apply cumulatively — they do not exclude each other
  • A DPIA is almost always mandatory for AI processing of personal data
  • Transparency is critical — the “black box” is no excuse
  • Sanctions accumulate — up to 7% of global turnover
  • 02.08.2026 is the deadline for high-risk AI under Annex III

Need a GDPR audit of your AI systems?

Innovires Legal has a specialised GDPR and AI team with experience serving over 300 clients. For full support — visit our dedicated site gdprbg.com (site in Bulgarian).

Free consultation at gdprbg.com →