Knowledge library

AI governance

An AI governance baseline procurement teams will accept

What procurement teams actually want to see: a policy, a classification rule, an audit log and a refusal path. Practical, not theatrical.

5 min read · Published 4 April 2026

What procurement teams ask for

Procurement teams in regulated and quasi-regulated sectors are no longer asking 'Do you use AI?' They are asking 'How do you control it?' The good news: the answer they want is short, specific and answerable in a one-page policy.

The baseline

  • Written policy that names which staff may use AI, on which data classifications, for which purposes.
  • Data classification rule — at minimum a Red / Amber / Green split, with a hard block on Red data reaching any AI service.
  • Audit log of every AI invocation — task, model, classification, redaction status, approver.
  • Disclosure path — when AI is used in a deliverable, the deliverable says so, with a human reviewer named.
  • Refusal path — staff have a clear escalation when a request hits a Red gate or an unapproved use case.
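The classification gate, audit log and refusal path above can be sketched in a few lines. This is a minimal illustrative sketch, not any particular product's implementation; the names (`invoke_ai`, `RedDataBlocked`, the `AuditEntry` fields) are assumptions chosen to mirror the bullet list.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class Classification(Enum):
    GREEN = "green"
    AMBER = "amber"
    RED = "red"


class RedDataBlocked(Exception):
    """Raised when Red data hits the gate: the staff-facing refusal path."""


@dataclass
class AuditEntry:
    # One row per AI invocation: task, model, classification,
    # redaction status, approver — the fields named in the baseline.
    timestamp: str
    task: str
    model: str
    classification: str
    redacted: bool
    approver: str


audit_log: list[AuditEntry] = []


def invoke_ai(task: str, model: str, classification: Classification,
              redacted: bool, approver: str) -> AuditEntry:
    # Hard block: Red-classified data never reaches any AI service.
    if classification is Classification.RED:
        raise RedDataBlocked(
            f"Refusal path: escalate task {task!r} to {approver}")
    entry = AuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        task=task,
        model=model,
        classification=classification.value,
        redacted=redacted,
        approver=approver,
    )
    audit_log.append(entry)
    return entry
```

A spreadsheet serving the same role is equally acceptable; the point is that every invocation produces a dated, attributable record and Red data is blocked before the call, not flagged after it.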

What it gets you

A baseline like this passes most enterprise procurement reviews on first read. It does not require enterprise tooling; a spreadsheet and a policy document are enough to start. What matters is that the rules are real: named owners, a dated policy, and a scheduled review.

Want this turned into a workflow?

BuiltAI installs the practice as a pack. Audit, pilot or retainer.

Book a discovery call
