We promise 3–5 weeks from approved use case to governed production. Enterprises hear that number and either dismiss it as marketing or assume we're cutting corners on governance. Neither is right.
What's actually going on is a shift in how the work is packaged. Bespoke AI programs take 12–24 weeks because every component is designed from scratch. A structured, pre-built operating model compresses that to weeks because nothing novel is being designed; existing components are being configured.
What has to be pre-built, not designed
Policy templates per risk tier. Trace schemas and scorers. Cost ledger structure and chargeback logic. Evidence retention format agreed with Audit. Scorecard template with six KPIs. Runbooks for common incidents. Go-live checklist.
If any of those have to be designed from a blank page on your engagement, you're not going to hit 3–5 weeks, and anyone who claims otherwise is either very fast or misrepresenting the scope.
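To make "pre-built, not designed" concrete: the structure of each artefact ships fixed, and an engagement only fills in values. Here's a minimal sketch of what a policy template per risk tier might look like; every field name and tier is an illustrative assumption, not a published schema.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and tiers are assumptions, not a
# published schema. The point is that the shape ships pre-built; an
# engagement fills in values, it never designs the structure.

@dataclass
class PolicyTemplate:
    risk_tier: str                    # "low" / "medium" / "high"
    approval_required: bool           # human sign-off before actions?
    allowed_data_sources: list[str]   # left empty until configuration
    evidence_retention_days: int      # retention format pre-agreed with Audit
    incident_runbook: str             # reference to a pre-built runbook

# Pre-built defaults per tier; only the empty fields change per engagement.
TIER_DEFAULTS = {
    "low":    PolicyTemplate("low",    False, [], 90,  "runbooks/low.md"),
    "medium": PolicyTemplate("medium", True,  [], 365, "runbooks/medium.md"),
    "high":   PolicyTemplate("high",   True,  [], 730, "runbooks/high.md"),
}
```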
What has to be configured, not pre-built
Your data sources and access boundaries. Your risk tolerance and approval workflows. Your budget caps and cost allocation. Your named owners at each phase gate. Your use-case-specific quality thresholds.
These can't be pre-built because they're specific to your organisation. The 3–5 week timeline assumes the pre-built artefacts are loaded and the configuration work is limited to exactly these inputs.
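As a sketch of how narrow that configuration surface is, the org-specific inputs above could reduce to a single intake object like this. Names and types are hypothetical; the real intake format isn't published.

```python
from dataclasses import dataclass

# Hypothetical intake object: every field is org-specific by nature and
# cannot ship pre-built. Names are illustrative assumptions.

@dataclass
class EngagementConfig:
    data_sources: list[str]               # systems the use case may read
    access_boundaries: dict[str, str]     # data source -> approving owner
    risk_tier: str                        # selects a pre-built policy template
    approval_workflow: list[str]          # named approvers, in order
    monthly_budget_cap_usd: float         # enforced via the cost ledger
    cost_allocation_code: str             # chargeback target
    phase_gate_owners: dict[str, str]     # phase gate -> named owner
    quality_thresholds: dict[str, float]  # scorer -> minimum passing score
```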
The Activate timeline, honestly
Week 1: configuration session, integration access confirmed, pre-built methodology pack loaded for your use case type.
Week 2: Policy Hub and Trace Lens instrumented, quality scorers active, exec dashboard live.
Week 3: Cost Lens wired to your budget, Evidence Vault retention active, human-in-the-loop decisions captured.
Weeks 4–5: quality gate assessment, pilot staff briefed, go-live pack signed.
If one week slips because of access delays or an integration surprise, the whole timeline slips. The compression is real but it's tight — which is why Activate has hard entry criteria from Qualify.
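The week 4–5 quality gate is then mostly mechanical: the scorers have been running since week 2, so the assessment compares observed scores against the thresholds configured up front. A hedged sketch of that check follows; the scorer names and pass rule are assumptions, and the real gate presumably also covers cost and evidence criteria.

```python
def quality_gate(observed: dict[str, float],
                 thresholds: dict[str, float]) -> tuple[bool, list[str]]:
    """Pass only if every configured scorer meets its minimum score."""
    failures = [
        f"{scorer}: {observed.get(scorer, 0.0):.2f} < {minimum:.2f}"
        for scorer, minimum in thresholds.items()
        if observed.get(scorer, 0.0) < minimum
    ]
    return (not failures, failures)

# Example with hypothetical scorer names and thresholds:
passed, failures = quality_gate(
    observed={"groundedness": 0.93, "answer_quality": 0.88},
    thresholds={"groundedness": 0.90, "answer_quality": 0.85},
)
assert passed  # a False result blocks signing the go-live pack
```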
The failure modes
Three things reliably blow the timeline: unclear data ownership (who can approve access to the retrieval index?), no named business owner (who signs the go-live pack?), and scope creep (adding a second use case mid-engagement). The first two are fixable during Qualify. The third is a line that has to be held.
When engagements run to 8+ weeks, it's almost always one of these — not a failure of the methodology.
What 3–5 weeks does not mean
It doesn't mean a pilot. It means a use case running in governed production, with policy, traces, cost, and evidence instrumented, a named owner, and a scorecard delivering value and risk signals from go-live day.
It also doesn't mean finished. Control is where AI compounds; the Activate output is a baseline, not a destination. The 3–5 weeks gets you to the starting line of operated AI. The rest is the race.