The Right to Be Forgotten — But Can AI Forget?
Published 04/11/2025
Written by Olivia Rempe, Community Engagement Manager, CSA.
In today’s AI-powered world, the “Right to be Forgotten”—a principle enshrined in the EU’s General Data Protection Regulation (GDPR)—is facing one of its biggest tests yet.
While traditional databases and web platforms can delete or de-index personal data upon request, AI models, especially large language models (LLMs), present a more complex and troubling question:
Once personal data is used to train an AI model, is deletion even possible?
A Brief Look at the Law
Under Article 17 of the GDPR, individuals have the right to request the erasure of their personal data when:
- It is no longer necessary for the purpose it was collected.
- They withdraw consent.
- The data has been unlawfully processed.
Organizations are expected to honor these requests in a timely and effective manner.
AI’s “Memory” is Not Like a Database
Unlike a spreadsheet or a structured database, LLMs don’t store data in tidy, retrievable rows and columns. Training data is used to adjust billions of numerical parameters that encode statistical patterns. Once a model has learned from a dataset, that data’s influence is diffused throughout the model’s weights, not stored anywhere it can be traced or deleted.
For instance, if an individual’s name or story was part of a dataset used to train a foundational model, there’s no clear way to isolate and remove the “impression” that data made without retraining the model from scratch.
This is technically challenging, economically expensive, and environmentally wasteful.
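To make the contrast concrete, here is a toy sketch (assuming Python with NumPy and scikit-learn, purely for illustration): deleting a database row is a single, verifiable operation, while a trained model has no per-record entry to remove, so the only certain way to erase one example’s influence is to retrain without it.

```python
# Toy contrast (illustrative only; assumes numpy and scikit-learn are installed).
import sqlite3
import numpy as np
from sklearn.linear_model import LogisticRegression

# A database record can be erased with one targeted, verifiable operation.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob')")
db.execute("DELETE FROM users WHERE name = 'Alice'")  # Alice's row is gone

# A trained model has no 'Alice' row: every example nudges shared parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# The only guaranteed way to remove example 0's influence is to retrain
# without it; its contribution is smeared across all of the coefficients.
retrained = LogisticRegression().fit(X[1:], y[1:])
print(model.coef_ - retrained.coef_)  # small, diffuse differences everywhere
```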
Current Workarounds (And Their Limitations)
Researchers are exploring techniques like:
- Data redaction before training (see the sketch below)
- Differential privacy
- Machine unlearning
However, these techniques are still maturing. For models that have already been trained, there are no proven, scalable solutions to guarantee compliance with the right to erasure.
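As a small illustration of the first technique, pre-training redaction: the sketch below assumes that regex scrubbing of obvious identifiers such as emails and phone numbers is enough for demonstration. Real pipelines typically rely on dedicated PII-detection and named-entity tooling, since names and indirect identifiers are much harder to catch.

```python
import re

# Illustrative patterns only; production systems use dedicated PII detectors,
# since names, addresses, and indirect identifiers are not caught by regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

corpus = [
    "Contact Jane at jane.doe@example.com or +1 (555) 867-5309.",
    "The quarterly report is due on Friday.",
]
cleaned = [redact(doc) for doc in corpus]
print(cleaned)
# Note that "Jane" is untouched: removing names reliably requires NER tooling.
```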
CSA calls this “an open challenge” and warns that organizations may find themselves unintentionally non-compliant—even with the best intentions.
What This Means for AI Governance
This challenge raises key questions:
- Should foundational models be regulated differently?
- Is it feasible (or even desirable) to guarantee full erasure once a model is trained?
- How can we balance privacy rights with the practical limitations of current AI technology?
Governments, regulators, and AI developers will need to collaborate closely to answer these questions.
So What Should Organizations Do Today?
Until clearer guidance emerges, here are some practical steps organizations can take:
- Perform risk assessments before training with potentially sensitive data.
- Maintain strong documentation of training data sources and purposes (a minimal record-keeping sketch follows this list).
- Avoid including personal data in GenAI training sets whenever possible.
- Be transparent with users and regulators about the limitations of current AI architectures in meeting erasure requests.
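For the documentation point above, here is a minimal record-keeping sketch, assuming a simple in-house register is acceptable; the field names and values are illustrative, not a standard schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetRecord:
    name: str
    source: str               # where the data came from
    purpose: str              # why it is being used for training
    contains_personal_data: bool
    lawful_basis: str         # e.g. consent, legitimate interest
    risk_assessment_ref: str  # pointer to the pre-training risk assessment

# Hypothetical entry; all values are placeholders.
register = [
    DatasetRecord(
        name="support-tickets-2024",
        source="internal helpdesk export",
        purpose="fine-tuning a customer-support assistant",
        contains_personal_data=True,
        lawful_basis="legitimate interest",
        risk_assessment_ref="DPIA-2024-017",
    )
]
print(json.dumps([asdict(r) for r in register], indent=2))
```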
Final Thoughts
The right to be forgotten remains a vital part of modern data privacy. But in the world of GenAI, it may not be a right that’s easily honored.
As we continue to innovate, responsible AI development means acknowledging where the law and technology are out of sync—and actively working to close that gap.
CSA’s research publication, Responsible AI in a Dynamic Regulatory Environment, explores this issue in greater depth, alongside many others that will shape the future of AI governance.
Download the full white paper to learn more.