
Imagine this: your CEO asks you to sense-check a tricky contract. You decide to experiment with ChatGPT, pasting in a few key clauses to see if it spots any red flags. Ten minutes later, the chatbot has given you a neat summary and some suggestions.
Convenient? Absolutely. But there is a catch - and it is a big one: legal privilege.
Recent headlines have thrown this issue into the spotlight. Parties are starting to ask in court whether using ChatGPT to draft or review legal documents undermines privilege. In one case, a judge questioned whether an internal memo prepared with AI assistance should still be protected. Even OpenAI's Sam Altman has warned that users should not assume any confidentiality when using public AI tools, whether for therapy or legal queries.
There is also growing concern that providers of AI tools could be required to disclose user chats to regulators, investigators or courts. If that happens, anything typed into ChatGPT could be retrieved and examined, with obvious risks for confidential legal advice.
Can using ChatGPT waive privilege?
Legal privilege protects confidential communications between a lawyer and their client, but it is fragile. Sharing legal advice with a third party - like an AI tool - risks breaking that confidentiality. Once privilege is lost, it cannot be clawed back.
In the UK, legal advice privilege applies only if:
- The communication is confidential.
- It is between a client and a qualified lawyer.
- It is for the purpose of giving or receiving legal advice.
AI tools are not part of that relationship. If you paste sensitive text into ChatGPT or a similar tool, you are disclosing it to a third party. This could mean that:
- You lose privilege on that document.
- Opposing parties could request disclosure in future litigation.
What are the risks for in-house lawyers?
In-house lawyers are under increasing pressure to do more with less. AI can feel like a lifeline - a way to draft faster, analyse documents and get quick answers. But the risks are real:
- Data security: Some AI providers store user prompts and use them to train models. This means your data could end up on external servers outside your control.
- Confidentiality: Even if a provider promises privacy, entering client advice into a public AI tool is, in effect, disclosing it to a third party.
- Privilege challenges: If a regulator, court or other party asks, you may have to disclose that you used AI to prepare your advice - and that could open the door to privilege disputes.
- Potential disclosure of chat history: Future legal or regulatory changes could mean that AI companies are compelled to hand over chat histories, creating an additional layer of risk.
How are legal teams responding?
Forward-thinking legal teams are tackling this head-on. We are seeing:
- Clear AI policies: Many companies are writing internal policies that ban or limit the use of generative AI tools for privileged or sensitive matters.
- Secure AI pilots: Some teams are trialling secure, enterprise versions of AI tools that promise not to store or share data.
- Training and awareness: Educating business colleagues about the privilege risks of casually dropping contracts into a chatbot.
Practical steps to protect privilege
Here are some simple steps you can take:
- Never paste privileged or confidential content into public AI tools.
- Treat AI tools as third parties for the purposes of privilege.
- Work with your IT and InfoSec teams to explore secure alternatives.
- If AI is used in preparing advice, document your process clearly.
The bottom line
AI tools like ChatGPT are powerful - but they are not confidential colleagues. For now, if you want to protect privilege, keep your sensitive documents and advice out of public AI tools. Instead, consider building internal guidance and guardrails so your team can balance efficiency with safety.