
AI tools are charging into the business world, bringing promises of productivity gains and process improvements. But alongside the hype sits a hefty dose of risk – particularly around data protection. For in-house legal teams, especially those in regulated industries or public-facing sectors, it’s becoming essential to get on the front foot with how the GDPR interacts with AI.
Here’s a breakdown of what matters, what’s changing, and how you can stay ahead.
The regulatory backdrop: GDPR doesn’t act alone
Let’s start with the basics. The GDPR might be the centrepiece, but it’s part of a much broader European data protection landscape:
- Council of Europe: Article 8 of the ECHR, interpreted by the European Court of Human Rights.
- European Union: Articles 7 and 8 of the Charter of Fundamental Rights, interpreted by the CJEU.
- National law: Adds another layer of local complexity.
Together, these systems influence and reinforce each other. And guidance from bodies like the EDPB (and its predecessor, the Article 29 Working Party, or WP29) means legal teams need to track not just hard law, but also evolving “soft law”.
What does GDPR actually apply to?
The GDPR has a broad remit – and that’s before we even get into AI. For AI systems, two types of scope are key:
Material scope: What counts as personal data?
- If data relates to an identifiable person, it’s in scope – even if that identifiability is only “reasonably likely”.
- The concept of identifiability is dynamic, meaning it evolves with technology. A good example is the Breyer case, where a dynamic IP address was held to be personal data because the website operator had legal means of obtaining the extra information needed to identify the user.
- Pseudonymised data – like hashed identifiers – still falls under the GDPR, because the link back to an individual can be restored. Only full anonymisation puts you out of scope.
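To make that concrete, here’s a minimal Python sketch (the email address and dataset are invented for illustration). A hashed identifier looks anonymous, but anyone holding a list of candidate identifiers can hash them and match records back to individuals – which is exactly why hashing is pseudonymisation, not anonymisation.

```python
import hashlib

def pseudonymise(email: str) -> str:
    """Replace a direct identifier with a SHA-256 hash of itself."""
    return hashlib.sha256(email.lower().encode("utf-8")).hexdigest()

# A "pseudonymised" dataset: direct identifiers swapped for hashes.
records = {pseudonymise("alice@example.com"): {"visits": 3}}

# Re-identification: hash a candidate identifier and look it up.
# Because this remains possible, the data is still personal data.
print(pseudonymise("alice@example.com") in records)  # True - Alice found
```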
Personal scope: Who’s responsible?
- Controllers determine the why and how of processing.
- Processors act on their behalf.
- But AI complicates this. In many cases, several actors contribute – from model builders to platform hosts. The idea of joint controllership becomes important here, as shown in the Facebook Like button case (Fashion ID).
If your business builds or buys AI tools, it’s worth mapping out who’s responsible at each phase – from training to deployment. This helps avoid the classic “not my job” problem when something goes wrong.
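One lightweight way to start is a phase-based responsibility map. Here’s a purely illustrative sketch – the phases, party names (“Vendor”, “Acme Ltd”) and role allocations are all hypothetical, and the right allocation will always turn on the facts of your project.

```python
# Hypothetical responsibility map for a bought-in AI tool. Who counts as
# controller or processor at each phase is an assumption for illustration.
RESPONSIBILITY_MAP = {
    "data sourcing":   {"controller": "Vendor",   "processor": None},
    "model training":  {"controller": "Vendor",   "processor": "Cloud host"},
    "fine-tuning":     {"controller": "Acme Ltd", "processor": "Vendor"},
    "deployment":      {"controller": "Acme Ltd", "processor": "Vendor"},
    "output handling": {"controller": "Acme Ltd", "processor": None},
}

for phase, roles in RESPONSIBILITY_MAP.items():
    processor = roles["processor"] or "none"
    print(f"{phase}: controller = {roles['controller']}, processor = {processor}")
```

Even a table this simple forces the “who answers for this phase?” conversation to happen before something goes wrong, rather than after.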
Why AI and GDPR principles clash
AI systems often work in ways that rub up against GDPR fundamentals. Here are the hotspots:
1. Lawfulness and purpose limitation
- AI loves reusing data. GDPR doesn’t.
- Training a model using data collected for unrelated purposes risks “function creep”.
- The Clearview AI case – where internet-scraped photos were used to train facial recognition – is a warning sign.
- Tools like large language models (LLMs) face similar issues, prompting scrutiny (and even temporary bans) in places like Italy.
2. Data minimisation
- AI’s hunger for large datasets sits awkwardly with the need to process only what’s necessary.
- Fixing this means:
  - Limiting data collection upfront.
  - Using techniques like federated learning, on-device inference, and pseudonymisation (see the sketch after this list).
  - Thinking beyond the “collect first, justify later” approach.
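For a flavour of how federated learning squares large-scale training with minimisation, here’s a minimal sketch (the data, model, and learning rate are all invented, and real deployments add secure aggregation and much more). Each “device” improves the shared model on data that never leaves it; only model parameters reach the server.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "devices", each holding local data that never leaves the device.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)  # the shared model held by the coordinating server
for _ in range(100):
    local_ws = []
    for X, y in clients:
        grad = 2 * X.T @ (X @ w - y) / len(y)   # local gradient step
        local_ws.append(w - 0.05 * grad)
    w = np.mean(local_ws, axis=0)  # server sees averaged parameters only

print(w.round(2))  # close to [ 2. -1.] without centralising any raw data
```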
3. Transparency and explainability
- GDPR mandates clear explanations for how personal data is used – especially in automated decision-making.
- But AI systems (especially deep learning models) aren’t known for their simplicity.
- Still, the requirement is there: provide “meaningful information about the logic involved”, its significance, and consequences – in plain English.
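What might “meaningful information about the logic involved” look like? Here’s a deliberately simple illustration for a toy weighted-scoring decision (the factors, weights, and threshold are invented). Real AI systems need purpose-built explainability tooling, but the obligation is the same: translate the logic into plain language a data subject can follow.

```python
# Hypothetical automated decision: a weighted score over three factors.
WEIGHTS = {"income": 0.6, "existing_debt": -0.8, "years_at_address": 0.2}
THRESHOLD = 1.0

def decide_and_explain(applicant: dict) -> str:
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    outcome = "approved" if score >= THRESHOLD else "declined"
    # Rank factors by how strongly each one pushed the decision.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    reasons = ", ".join(f"{name} ({value:+.2f})" for name, value in ranked)
    return f"Application {outcome} (score {score:.2f}). Main factors: {reasons}."

print(decide_and_explain(
    {"income": 2.0, "existing_debt": 1.5, "years_at_address": 3.0}
))
```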
4. Accountability and risk management
- GDPR expects organisations to be proactive, not reactive.
- Data Protection Impact Assessments (DPIAs) are likely to be legally required for most AI use cases, which typically involve new technologies and higher-risk processing.
- Think holistically: cover the full lifecycle, from data sourcing to output deployment.
- Algorithmic Impact Assessments (AIAs) are also gaining traction – offering broader ethical oversight alongside data protection.
Key takeaways for in-house teams
- AI models can “leak” data – Techniques like model inversion or membership inference can extract personal data from trained models (a toy demonstration follows this list). This means training, sharing, or commercialising a model isn’t risk-free.
- Responsibility shifts over time – A model’s lifecycle often includes multiple controllers and processors. A phase-based responsibility map helps avoid accountability gaps.
- GDPR isn’t the enemy of innovation – Despite the myths, the GDPR allows for nuance. Supervisory authorities have leeway to interpret rules. The goal is to ensure necessity and proportionality – not to kill innovation.
- Do the groundwork early – Engage Legal at the project’s outset. Document lawful bases, assess risks, and build in safeguards. You’ll save yourself a future compliance headache.
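To see why model “leakage” is more than theoretical, here’s a toy membership-inference sketch (synthetic data and a deliberately overfit scikit-learn model – real attacks are more sophisticated, but the intuition holds). The model is near-certain about records it memorised during training, and an attacker can exploit that confidence gap.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy records standing in for personal data: half train the model
# ("members"), half are never seen by it ("non-members").
X = rng.normal(size=(400, 5))
y = (X[:, 0] + rng.normal(size=400) > 0).astype(int)
X_in, y_in, X_out, y_out = X[:200], y[:200], X[200:], y[200:]

# A deliberately overfit model memorises its training set.
model = DecisionTreeClassifier().fit(X_in, y_in)

def true_label_confidence(m, X, y):
    """Probability the model assigns to each record's actual label."""
    proba = m.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Threshold attack: records the model is near-certain about are guessed
# to have been part of the training data.
in_rate = (true_label_confidence(model, X_in, y_in) > 0.95).mean()
out_rate = (true_label_confidence(model, X_out, y_out) > 0.95).mean()
print(f"flagged as members: {in_rate:.0%} of training records, "
      f"{out_rate:.0%} of unseen records")
```

The gap between those two rates is the leak: it tells an attacker, record by record, who was probably in the training data.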
Final thought
In-house lawyers don’t need to be AI engineers. But you do need to be the voice of risk and rights – and a bridge between the developers, the business, and the regulator. Getting ahead of AI and GDPR issues now can mean the difference between innovation with confidence and a compliance crisis later.