
"Confidential information may not be disclosed to third parties without prior written consent."
Sound familiar?
It's the kind of clause that's appeared in NDAs for years. It's tidy, familiar - and increasingly, not good enough.
Because in a world where generative AI tools can ingest, retain, and regurgitate your confidential data, the definition of "disclosure" is getting a lot murkier.
AI doesn't forget - and that's the problem
Imagine this: you're using ChatGPT to help optimise some source code. You paste in your employer's proprietary script and ask, "How can I make this more efficient?"
You get a useful answer. Job done.
But what if that code - or a close variant - reappears later in someone else's chat, served up by the same model?
Suddenly, you haven't just shared data with a helpful tool. You may have unwittingly disclosed confidential information to the entire internet - permanently.
That's because many large language models (LLMs) don't just process your data - unless you've opted out, they may also use it for training. And once a model has learned from your data, it's very hard to make it unlearn.
The legal risk is real - and under-addressed
Even if you haven't "disclosed" confidential information in the traditional sense, you may still be exposing it to risk - especially if that information:
- Is retained indefinitely inside a third-party system.
- Can't be deleted or isolated once it's ingested.
- Might resurface in response to future prompts.
And that's before you even get into subcontractors, open-source AI models, or unauthorised employee use.
So how do you make sure your NDAs and confidentiality clauses are AI-proof?
Here's where to start.
1. Ban retention by default
Most NDAs allow information to be shared with employees or "representatives" under confidentiality obligations. But AI tools don't follow orders - and they don't forget.
So go beyond banning unauthorised disclosure. Make sure your clauses:
- Prohibit any AI tool from storing or retaining your data.
- Require the recipient to use only tools that don't train on your inputs.
- Explicitly classify AI systems as third parties, unless otherwise agreed.
2. Prohibit use in AI training
This one's simple: make it crystal clear that your confidential data must not be used to train any AI models, LLMs, or algorithms, whether directly or indirectly.
A clear prohibition gives you a stronger contractual footing if anything goes wrong - and sets the tone for responsible data handling.
3. Demand visibility and enforcement rights
It's not enough to set the rules. You need the right to check they're being followed.
Include provisions that allow you to:
- Audit how confidential information is handled.
- Receive prompt notification of any breach.
- Demand reports on tool usage.
- Enforce through injunctive relief or financial remedies (e.g., indemnities).
4. Address subcontractors and downstream risk
What if your vendor follows your rules, but their subcontractor doesn't?
To close that loophole:
- Demand disclosure of all third-party tools or processors.
- Require equivalent obligations to flow down to subcontractors.
- Ban the use of any third-party AI tools without your written consent.
- Hold your vendor liable for any breaches by their supply chain.
5. Use data classification to limit exposure
Not all confidential information is equal. So build in a system that helps your business classify data appropriately - and restrict AI interaction for the most sensitive types (sketched in code after the list below).
For example:
- Personal data.
- Financial data.
- Trade secrets.
- Regulated or sector-specific information (e.g., healthcare or defence).
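To make the idea concrete, here's a minimal sketch of how a classification-to-policy mapping might look in code. The sensitivity levels, tool names, and allow-lists are illustrative assumptions - your organisation would define its own.

```python
# A minimal sketch, not a production policy engine: the levels and
# tool allow-lists below are illustrative assumptions.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    TRADE_SECRET = 4

# Hypothetical policy: which classes of AI tool may touch which data.
AI_POLICY = {
    Sensitivity.PUBLIC: {"public_llm", "enterprise_llm"},
    Sensitivity.INTERNAL: {"enterprise_llm"},
    Sensitivity.CONFIDENTIAL: set(),   # no AI interaction without sign-off
    Sensitivity.TRADE_SECRET: set(),   # never
}

def ai_use_permitted(data_class: Sensitivity, tool: str) -> bool:
    """Return True only if the tool is allow-listed for this data class."""
    return tool in AI_POLICY[data_class]

print(ai_use_permitted(Sensitivity.INTERNAL, "public_llm"))  # False
```

The point isn't the code itself - it's that a contract clause restricting AI use by data class only works if the business has a classification scheme it can actually apply.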
6. Set technical boundaries
Don't rely on policy alone. Build in requirements for technical safeguards (one is sketched in code after this list), such as:
- AI containment controls to prevent data persistence.
- Monitoring tools to flag potential data leaks or outputs.
- Access controls that limit who can use what tool (and how).
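As an illustration of the monitoring point, here's a minimal sketch of a pre-submission filter that screens outbound prompts before they reach an AI tool. The patterns and blocking behaviour are illustrative assumptions, not a complete data-loss-prevention system.

```python
# A minimal sketch of a pre-submission prompt filter; the patterns
# below are illustrative assumptions only.
import re

# Hypothetical markers of sensitive content in outbound prompts.
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    re.compile(r"\b\d{16}\b"),                # card-number-like digits
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
]

def screen_prompt(prompt: str) -> list[str]:
    """Return the patterns that matched, i.e. reasons to block or review."""
    return [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]

prompt = "Optimise this CONFIDENTIAL pricing script for j.doe@acme.com"
flags = screen_prompt(prompt)
if flags:
    print("Blocked before sending to the AI tool:", flags)
```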
It's about creating layers of protection - not just one clause buried in the annex.
7. Create standard AI confidentiality addenda
Updating every contract manually isn't scalable. So build a standard rider that covers AI-specific risks.
Your addendum might include:
- Use and ownership of AI-generated outputs.
- Accuracy and bias standards.
- Limits on open-source or unvetted tools.
- Prohibitions on using generative AI with customer or personal data.
Treat it like your AI seatbelt - it won't stop the journey, but it'll keep everyone safer.
Your NDAs were built for humans - but AI changes everything
When traditional NDAs were drafted, no one imagined a world where "disclosure" could happen through an algorithm. But now, it's not just possible - it's already happening.
By updating your templates and embedding AI-specific safeguards, you can protect your confidential information, stay compliant, and future-proof your contracts.
Because in an AI-driven world, the biggest risk isn't saying too much - it's assuming the old rules still apply.