The EU’s AI Code of Practice: What in-house lawyers need to do before the rules kick in

AI is moving fast. Regulation is trying to keep up. And in the meantime, in-house lawyers are being asked to navigate the gap.

That’s where the EU Code of Practice for General-Purpose AI Models comes in. It’s not law – yet. But it’s the clearest signal so far of what future compliance under the AI Act will look like.

If your business builds, integrates, or procures AI tools in the EU, this “voluntary” code is a warning shot. Here’s what it means – and what legal teams should be doing now to stay ahead of the curve.

A blueprint for AI compliance – before it’s mandatory

The Code is an official, Commission-backed framework. It sets out how developers and deployers of general-purpose AI (GPAI) models should operate during the transition period before the AI Act becomes enforceable.

What makes it different?

  • It’s detailed: 51 pages of expectations covering transparency, risk management, incident response, IP safeguards and more.
  • It’s directional: The AI Office will use it to assess whether companies are acting in good faith ahead of full enforcement.
  • It’s non-negotiable in the long term: Once Article 53 of the AI Act takes effect, these principles will become obligations.

So, while it’s voluntary for now, the message is clear: adopt early or fall behind.

What’s in the Code – and what it means for your business

Here’s what in-house lawyers should have on their radar:

1. Model transparency is no longer optional

Providers of GPAI models – and organisations building tools on top of them – must produce extensive documentation covering model architecture, training procedures, capabilities, limitations, and intended uses.

This isn’t just a one-off compliance exercise. Providers are expected to:

  • Maintain documentation for 10 years.
  • Share information with downstream users.
  • Flag potential misuse and dual-use risks up front.

Action for Legal: Review your vendors’ documentation policies. Build contract clauses requiring them to maintain and share transparency records.

2. Risk management is a live, ongoing duty

The Code treats AI like a product with a life cycle. That means risks don’t stop at deployment – they evolve.

Providers of high-impact models are expected to:

  • Develop a Safety and Security Framework.
  • Monitor post-market performance and incident reports.
  • Implement trigger-based risk reviews for model updates or misuse.

Action for Legal: Make sure your internal governance teams understand these expectations – and that procurement contracts reflect post-deployment obligations.

3. Copyright isn’t the only IP concern

Much of the commentary so far has focused on the text and data mining (TDM) exemption and opt-out risks – but the Code goes further.

It places the burden on providers to:

  • Keep records of how data was gathered and the legal basis used.
  • Implement technical safeguards to reduce copyright exposure.
  • Proactively explain any limitations on training data transparency (e.g. trade secrets, NDAs).

Action for Legal: If you’re integrating third-party AI, assess whether you’d be comfortable defending their data practices in front of a regulator. If not, push for more information.

4. Governance is getting formal

A key theme is accountability. The Code recommends:

  • Clear internal ownership for compliance.
  • Senior-level oversight.
  • Written policies and audit trails to support regulatory engagement.

Action for Legal: AI governance shouldn’t be a side project. Make it part of your risk register. Collaborate with IT, data, security and compliance teams to formalise internal ownership.

So what now?

Here’s a practical to-do list for in-house legal teams:

  • Identify where GPAI is in use – whether standalone or embedded in third-party tools.
  • Map responsibilities – who’s accountable for AI governance, documentation, and vendor oversight?
  • Review contracts – especially IP warranties, audit rights, and incident reporting obligations.
  • Monitor regulatory timelines – enforcement of the AI Act is on the horizon, and the voluntary phase won’t last.

Final thought

The Code of Practice is more than a preview – it’s a pressure test. For legal teams, it’s a chance to shape how AI is used and governed before the hard rules arrive.

Getting ahead of it now could mean fewer surprises – and fewer firefights – later.

the plume press
