Can AI think for itself? Why in-house lawyers need to engage with AI’s ethical grey zones

Imagine this: You’re reviewing a contract drafted entirely by an AI tool. It’s fast, accurate – even uses your team’s preferred clause language. But then your colleague asks: “What assumptions did it make? Can we trust it?”

Cue the mental pause.

AI isn’t just another tech tool. It’s a new kind of decision-maker – one that raises big, thorny questions about trust, fairness, and accountability. And for in-house lawyers, that means more than curiosity. It means responsibility.

Why this matters to in-house legal teams

You’re not being asked to build AI systems from scratch. But chances are, you’re being asked to green-light them – or at least advise the business on how to use them. Whether it’s marketing using a chatbot, HR screening candidates, or your ops team testing a forecasting tool, you’re the one expected to say: “Is this okay?”

To do that well, you need more than a risk register. You need to understand what AI is doing – and what it’s not.

Let’s unpack the philosophical and ethical undercurrents shaping how AI is built and used – and what they mean for in-house lawyers in 2025.

Intelligence or imitation? Why the AI vs AGI debate matters

AI today is largely narrow and task-specific – brilliant at pattern recognition, far weaker at independent reasoning. But some developers are chasing artificial general intelligence (AGI): machines that could reason flexibly across domains, much as humans do. That might sound like science fiction, but the implications are very real.

If we assume machines can “understand,” we might hand them responsibilities they’re not equipped to bear. Your business might treat an AI tool’s output as gospel – when in fact it’s a well-polished guess.

As the legal voice in the room, it helps to challenge that assumption: Is this tool truly intelligent – or just simulating it?

Consciousness, accountability, and the myth of the machine mind

Some argue that a sophisticated enough machine could become conscious – that it might genuinely “feel” or “choose.” But the science isn’t there yet. And more importantly for you, the law isn’t either.

Responsibility for AI-driven decisions still rests with humans – and that includes your company. If an AI system makes a biased hiring decision or mishandles customer data, it’s not the machine that ends up in front of regulators. It’s your organisation.

So while tech teams debate the “hard problem” of consciousness, legal teams need to stay focused on who’s accountable – and how to prove it.

Perception is action: What AI can (and can’t) do well

AI systems process information in ways that mimic perception – scanning, recognising, reacting. But unlike humans, they don’t understand context or consequences.

This becomes tricky when AI systems are used in high-stakes areas like fraud detection, recruitment, or healthcare triage. The danger isn’t just poor decisions – it’s opaque ones.

As a lawyer, that means asking sharp questions:

  • What data was used to train the system?
  • Are there feedback loops that could reinforce bias?
  • Can we explain this outcome if challenged?

These aren’t tech questions. They’re ethical, legal, and reputational ones.

Rational-ish: AI’s decision-making under pressure

In theory, an AI system should pick the option that maximises value – what decision theorists call “expected utility.” But in practice, AI operates with limits: incomplete data, time constraints, and imperfect models.

That’s where “bounded rationality” comes in – the idea that decision-makers, human or machine, settle for “good enough” choices under constraint and uncertainty rather than perfectly optimal ones. And that’s where legal risks often emerge.

For in-house teams, it’s worth digging into:

  • How much confidence does the system have in its outputs?
  • Does it offer options, or only one “right” answer?
  • Are there thresholds or flags for human review?

A little scepticism can go a long way.
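For readers who want to see the mechanics, here is a minimal sketch – purely illustrative, not any vendor’s actual product or API – of the two ideas above: “expected utility” is just probability-weighted value, and a simple confidence threshold can route uncertain outputs to a human reviewer. All names and numbers here are hypothetical.

```python
# Illustrative sketch only: probability-weighted value ("expected utility")
# and a confidence threshold that escalates uncertain outputs to a human.
# The function names, threshold, and figures are hypothetical examples.

def expected_utility(outcomes):
    """Sum of probability * value across possible outcomes."""
    return sum(probability * value for probability, value in outcomes)

def route_decision(prediction, confidence, review_threshold=0.85):
    """Flag low-confidence outputs for human review instead of acting on them."""
    if confidence < review_threshold:
        return f"Escalate to human review: {prediction} (confidence {confidence:.0%})"
    return f"Proceed: {prediction} (confidence {confidence:.0%})"

# A 70% chance of gaining 100 and a 30% chance of losing 50 nets out at 55.
print(expected_utility([(0.7, 100), (0.3, -50)]))   # 55.0

# A middling-confidence output gets flagged rather than acted on automatically.
print(route_decision("low risk", confidence=0.62))
```

The point for legal teams isn’t the code itself – it’s that thresholds and escalation rules like these are design choices someone in the business has made (or failed to make), and they can be asked about, documented, and audited.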

Ethics by design – or after the fact?

Many companies are now embracing “Design for Values” (DfV) – a framework for embedding ethics into the development of AI systems. That includes identifying stakeholder values (like fairness or transparency), translating them into design features, and assessing whether they’re actually realised in use.

This isn’t just nice to have. It’s increasingly expected by regulators – and essential for public trust.

If your business is deploying or buying AI tools, consider these checkpoints:

  • Were ethical principles built in, or bolted on?
  • Are the values of affected users (customers, employees, suppliers) reflected?
  • Is there a process for ongoing review – or just a one-off sign-off?

What all this means for you

AI ethics isn’t just a theoretical exercise. It’s the practical reality of making sure your business stays compliant, competitive, and credible.

In-house lawyers don’t need to solve AI’s biggest mysteries. But we do need to:

  • Ask good questions.
  • Spot hidden risks.
  • Push for explainability and accountability.
  • Help the business balance innovation with integrity.

You’re the voice that says, “Let’s pause and think this through” – before the headlines do it for you.
