
AI is everywhere. It’s the hot topic in every boardroom. For in-house lawyers, it’s not just a technology conversation – it’s about risk, governance, and how the business plans to use these tools responsibly.
According to MIT Technology Review, 95% of companies are experimenting with AI, but 76% have only rolled it out in one to three use cases. Few have managed to make the leap from pilot projects to real, organisation-wide value.
You can read the full MIT Technology Review Insights report here.
The report shows that a few companies are starting to see results. Klarna has used generative AI to replace the equivalent of 700 customer service agents, allowing its human team to focus on complex issues. Motorola has built detailed frameworks to track productivity gains, comparing how long tasks take with and without AI support. These examples show that when companies commit to good data and governance, AI can make a real difference.
So what’s holding businesses back? And what lessons can in-house lawyers take from this trend as their own companies dive into AI?
Why scaling AI is so hard
Rolling out AI across a whole organisation isn’t as simple as buying a tool. The report highlights three big blockers:
- Data quality – Half of respondents say poor data quality is the single biggest brake on AI progress. For large companies, legacy systems make this even worse.
- Governance and risk – 98% of executives say they’d rather be slow and safe than rush to be first. Security, privacy and regulation are slowing things down – and for good reason.
- Costs – From talent to tech, scaling AI isn’t cheap. Only the very biggest companies have been able to significantly grow AI budgets so far, but that’s changing fast.
These blockers often overlap. For example, a company with poor-quality data also faces extra costs when cleaning and migrating that data, and governance concerns multiply if the company doesn’t know where its data lives.
Sound familiar? Legal teams face exactly the same challenges when introducing new technology: messy data, risk concerns and tight budgets.
Practical steps: what business leaders are doing now
Here are some of the strategies companies are using to bridge the gap between ambition and action:
1. Fix the data first
You can’t do good AI with bad data. The smartest organisations are focusing on:
- cleaning up their data so it’s accurate and structured.
- modernising systems so data can flow across the business.
- documenting where data comes from and how it’s used.
As Matt McLarty, CTO at Boomi, says: “Companies that have good hygiene and rigour around their data are going to be way better positioned for the AI landscape”.
2. Be selective about use cases
Forget trying to do everything at once. The report shows that companies are focusing on AI projects that give a real competitive edge. That means:
- choosing use cases that solve business-specific problems (rather than generic experiments).
- looking for repetitive, high-volume tasks where AI can save time or improve consistency.
- setting clear goals so they can measure success – for example, reducing contract review times by a certain percentage or improving customer response times.
3. Partner wisely
Most companies aren’t building their own large language models. Instead, they’re fine-tuning existing tools and working with trusted vendors. This helps avoid reinventing the wheel while keeping control over data.
The trend is towards a “multi-AI” environment, where different tools are chosen for different problems – some for document review, some for customer support, others for analytics.
4. Balance speed with safety
With regulations evolving fast, businesses are deliberately taking a measured approach to AI. That means governance, transparency and human oversight are essential for every AI initiative.
A strong governance structure also helps with board confidence: executives are far more likely to approve AI projects if they see a clear process for managing risk.
What this means for in-house lawyers
If you’re an in-house lawyer, AI is coming for your business whether you’re ready or not. You’ll be pulled into conversations about risk, contracts, data, IP and employment sooner than you think.
This report is a timely reminder to:
- Get ahead of governance – Be proactive in shaping AI policies. The businesses in the survey that are most confident about AI are the ones with strong governance in place.
- Understand the data issues – Poor data quality isn’t just an IT headache. It’s also a compliance and risk issue.
- Focus on practical outcomes – AI projects that succeed are those that link clearly to commercial goals. Keep asking: what’s the real business value?
- Watch contract workflows – AI tools are already being used for document review and drafting, so you’ll need to understand how these fit into your risk frameworks.
These steps will also help legal teams think about their own tech investments. AI won’t remove the need for lawyers, but it will reshape what they spend time on.
The bottom line
AI isn’t a magic bullet. It’s a tool. And like any tool, it’s only as good as the preparation behind it and the person using it.
For in-house lawyers, this isn’t about becoming an AI engineer. It’s about helping your business build the right foundations so that when AI does scale, it scales safely and smartly.