It's not just another SaaS deal: How to approach AI vendor contracts

You've been handed yet another tech contract. At first glance, it looks like any other SaaS agreement. But dig a little deeper and you realise this one's powered by AI.

Welcome to the new frontier of commercial contracting.

From generative tools that auto-draft content to machine learning models that support decision-making, AI is now baked into a growing number of software solutions. But while the tech may be sophisticated, many of the contracts aren't. And that can leave you - the in-house lawyer - exposed to unexpected risk.

Here's your practical guide to drafting and negotiating AI vendor contracts. Built from plenty of in-the-trenches insight, this checklist is designed to help you spot issues fast and ask the right questions.

1. Is it really AI (and what kind)?

  • Confirm what technology is being used: is it machine learning, a large language model, or something else? Vendors may stretch the term "AI" to cover simple automation or decision trees.
  • Understand if the AI functionality is core to the product or an optional add-on. This affects how you prioritise clauses and assess value.
  • Check if you can switch off AI features or revert to non-AI alternatives, especially if performance declines or compliance concerns arise.

2. Who owns the inputs, outputs and underlying data?

  • Be clear on what the customer is providing and how it can be used. Does your data include personal, confidential or proprietary information?
  • Set boundaries on whether your data can be used to train the vendor's models. Even "anonymised" data may be re-identifiable when combined with other sources.
  • Define who owns the outputs generated by the AI - including documents, summaries, or analysis - and whether they're exclusive to you.
  • Watch out for terms that grant the vendor broad rights to reuse, adapt or commercialise your data or outputs. Push for limitations and opt-outs.

3. What accuracy or performance can you expect?

  • Ask the vendor to define the expected outputs, metrics or benchmarks - such as precision, recall, or latency.
  • Consider service levels for accuracy, speed and availability - especially if the AI is business-critical (e.g. used in compliance or finance).
  • Explore warranties or performance guarantees around output quality, update frequency, and retraining obligations.
  • Build in the right to audit model performance and request remediation if outputs fall below agreed thresholds.

4. Who carries the can if things go wrong?

  • Push for IP indemnities covering claims linked to AI outputs, including third-party content or scraped data used to train the model.
  • Clarify liability caps and whether there are carve-outs for specific risks such as data breaches, personal data misuse, or regulatory fines.
  • Consider how disputes over AI decisions will be handled - especially if decisions impact customers, employees or financial outcomes.
  • Ensure responsibilities are clearly divided between vendor and customer, including where AI outputs are used as a basis for action.
  • Acknowledge that your leverage may be limited, particularly with dominant vendors or standard-form contracts. In many cases, the negotiation may be less about securing gold-standard terms and more about understanding the risk you're taking on. If you can't move the clause, document your assessment and agree internal mitigation steps.

5. How secure - and compliant - is the system?

  • Ask about security certifications (e.g. SOC 2, ISO 27001) and review any security white papers or audits.
  • Ensure robust data protection terms are included in the data processing agreement (DPA). This includes data subject rights, international transfers, and subprocessor controls.
  • Check for safeguards against bias and discrimination. Ask how the vendor tests for fairness and mitigates known risks.
  • Require the vendor to notify you of any incidents, data breaches, or material changes to how the AI functions.

6. Can you actually get your data out?

  • Assess how easily you can migrate your data and models. Are outputs portable and in a usable format?
  • Confirm if APIs are open, well-documented, and maintained. This matters if you need to switch tools or run your own integrations.
  • Avoid vendor lock-in by negotiating exit assistance, data portability, and transition support clauses.
  • Include a clear exit strategy covering data return, deletion, and destruction.

7. Is the vendor aligned with regulatory frameworks?

  • Check if they comply with (or track) emerging frameworks like the EU AI Act, UK AI White Paper, or NIST AI Risk Management Framework.
  • Ask for transparency around how the model works - including training data sources, architecture, and update logs.
  • Ask for a model card. This should provide a plain-language summary of what the model is, how it was trained, what data it used, known limitations, and suitable use cases. If the vendor can't or won't provide one, that's a red flag.
  • Ensure human oversight is possible where outputs have material impact. Build this into internal governance too.
  • Consider how explainable the AI is - can your team understand and justify decisions made by the model?

8. Who supports and monitors the AI?

  • Identify your point of contact and understand how support is structured - including availability and escalation paths.
  • Define how often the model is retrained, updated or monitored. Who decides when an update is needed?
  • Agree a governance cadence - e.g. quarterly service reviews, risk assessments, or technical audits.
  • Ensure you can request performance reports, audit logs, or change documentation if concerns arise.

Make it part of your playbook

This isn't about reinventing the wheel every time. You can bake these questions into your procurement processes, triage tools or contract review playbooks. The aim is simple: to help you spot the red flags before your business commits to a tool it doesn't fully understand.

Because when it comes to AI, it's not just about clever code. It's about risk, responsibility, and staying in control.

the plume press

THE NEWSLETTER FOR IN-THE-KNOW IN-HOUSE LAWYERS
