AI’s risk problem – and what legal leaders can do about it

If you’ve ever asked ChatGPT to draft a clause or watched your business adopt AI faster than you can draft a policy, you’re not alone.

AI is the shiny new toy – and it’s rolling out faster than most legal teams can keep up with. But alongside the excitement sits a swelling unease. Security. Bias. Hallucinations. Accountability. These aren’t theoretical risks – they’re happening now, and when they do, GCs are the ones on the hook.

So, what’s a legal leader to do?

Enter MIT’s draft AI Risk Mitigation Taxonomy – a structured look at the 831 mitigations proposed in recent frameworks and standards to help organisations reduce AI-related risk. It might sound academic, but for GCs managing risk without a blueprint, it’s a timely and surprisingly practical toolkit.

Here’s what in-house counsel need to know.

AI risk management is chaotic – this helps bring order

One of the big takeaways from the MIT report is that everyone’s winging it. Different frameworks take different approaches, and definitions vary wildly. What counts as “risk management” to one regulator might be “governance” or “oversight” to another.

MIT’s team combed through 13 global frameworks – from NIST to the EU AI Act – and extracted every concrete action they could find. Then they categorised them into four areas:

  1. Governance & oversight – policy structures, board accountability, safety frameworks.
  2. Technical & security controls – engineering safeguards, alignment checks, content filters.
  3. Operational process controls – audits, deployment protocols, incident response.
  4. Transparency & accountability – documentation, risk disclosures, user rights.

For time-poor in-house lawyers, it’s a sanity-saving way to map the chaos and spot blind spots.

Three areas where legal leaders can drive real impact

You don’t need a PhD in AI safety to use this. Many of the mitigations flagged are actions GCs already influence or lead. Here are three that stood out:

1. Testing & auditing (127 mitigations – the most common theme)

Think: red teaming, model audits, bug bounties. The taxonomy urges systematic internal and external evaluations to identify risks and check compliance.

GCs can lead here by:

  • Ensuring AI vendors and internal teams commit to pre- and post-deployment testing.
  • Demanding audit trails and logs – especially for decisions with legal or regulatory implications.
  • Advocating for red team exercises to uncover hidden risks.

2. Risk disclosure (44 mitigations)

Surprisingly few organisations are disclosing the risks of their AI systems. But for regulated industries, that’s a ticking time bomb.

Legal’s role:

  • Building disclosure into deployment checklists – e.g. notifying stakeholders before rolling out customer-facing AI.
  • Setting policies around transparency thresholds: what gets disclosed, when, and to whom.

3. Governance & oversight (248 mitigations)

This is legal’s bread and butter. MIT highlights the importance of clear board-level oversight and formal frameworks to guide safe AI use.

Steps to take:

  • Push for AI risks to be a standing board agenda item.
  • Help craft governance structures (e.g. risk committees or AI oversight boards).
  • Use familiar tools – like conflicts registers or whistleblowing policies – to build accountability into AI workflows.

Where gaps remain – and what to watch

Despite its breadth, the taxonomy flagged areas that are still being overlooked – notably:

  • Model alignment: how well AI systems reflect human values and intentions.
  • Environmental impact: very few frameworks address the carbon cost of AI.
  • Conflict of interest: especially for leadership in AI companies under pressure to scale quickly.

Legal teams should keep these on the radar – especially as ESG and AI regulations evolve.

Why this matters now

Generative AI is not going away. From customer service bots to internal decision engines, it’s becoming deeply embedded in operations. That means the risks – and responsibilities – are becoming embedded too.

For in-house lawyers, the taxonomy offers more than just a list – it’s a way to structure internal conversations, audit your readiness, and prioritise where to act first.

If you’re wondering how to make sense of AI risk without drowning in buzzwords, MIT’s approach offers a helpful path forward.
