Readiness Scorecard

Find the right starting point for your BuiltAI journey.

Eight categories, 24 short questions. You'll receive an indicative recommendation (audit, pilot, or readiness work), and we'll email you a copy.

  • 5–7 minutes
  • No sensitive data needed
  • Result emailed to you

About you

We need basic contact details so we can send your result and follow up with the recommended next step.

  • Business email preferred
  • Your role or title
  • The area you most want to improve (optional)

Readiness questions

24 questions across eight categories. Each takes a few seconds. There are no right answers — be candid.

Category 1 of 8

Commercial leakage

Notices, instructions, variations and evidence discipline.

Pain
We regularly miss contractual notices or send them late.
Our variation evidence is inconsistent or scattered across people and inboxes.
We lose recovery on changes because instruction and approval trails are weak.

Category 2 of 8

Tender efficiency

Tender intake, scope, clarifications and bid QA.

Pain
Tender submissions go through last-minute rework that hurts quality and price.
Clarifications, assumptions and exclusions are managed informally rather than in a register.
Scope skeletons and requirement traceability vary from tender to tender.

Category 3 of 8

RAMS consistency

Safety documentation, hazard libraries and approval gates.

Pain
Different teams produce RAMS in different formats, leading to rework.
We struggle to evidence competence, controls and reviews against task-based hazards.
RAMS approval is informal — there is no consistent gate before issue.

Category 4 of 8

Service desk / SLA control

Triage, ticket capture, escalation and SLA evidence.

Pain
Triage decisions vary by operator — priority and category are inconsistent.
Tickets are often incomplete, slowing dispatch and weakening SLA evidence.
Escalation triggers are unclear, so SLA breaches surface late.

Category 5 of 8

Reporting visibility

Pipeline, delivery, QA and finance reporting at board level.

Pain
We cannot quickly see pipeline, delivery, QA and margin in one place.
Monthly reporting is rebuilt from scratch rather than running off a controlled rhythm.
Reports describe the past but rarely assign owners or drive next-period actions.

Category 6 of 8

AI governance readiness

Acceptable use, classification, approvals, disclosure and audit trail.

Pain
We have no acceptable-use policy or DPIA process for AI tools.
Staff are unclear what data is safe to put into AI tools.
AI use is not logged or audit-trailed in a way procurement would accept.

Category 7 of 8

Data readiness

Whether documents and operational data are organised enough for an audit / pilot.

Readiness
Key contracts, variation registers, RAMS examples and reports are findable in known locations.
Operational records (tickets, variations, KPIs) are exportable in usable formats.
We can pull 3–6 months or more of history to review trends.

Category 8 of 8

Sponsor readiness

Whether decision-making and implementation capacity are in place.

Readiness
An accountable decision-maker is engaged and willing to sponsor an audit / pilot.
We have time available to participate in interviews and reviews.
There is budget and willingness to act on findings, not just receive a report.

Results display immediately. We will also email you a summary.