The mid-market AI rollout playbook
Assess → Pilot → Expand → Operate. The four phases of a 174 rollout, expanded into a working playbook with artifacts, decision criteria, failure modes, and a sample 12-week timeline.
Most AI rollouts fail in one of two ways. They start too big — committee, charter, vendor evaluation, year-long timeline — and lose momentum before anyone touches the work. Or they start too small — one curious manager downloading ChatGPT for their team — and never add the structure that lets the wins survive a leadership change.
The 174 rollout is built to avoid both. Four phases. Each phase produces an artifact that justifies the next. No phase commits you to the next one. You can stop at any point and have something durable to show for it.
Phase 01 — Assess
What it is. A baseline you can put in front of leadership before any rollout work begins.
Who’s involved. Whoever is going to own the program — typically L&D, People Ops, Chief of Staff, or a Head of AI Enablement. Not IT (yet), not procurement (yet), not the line managers (yet).
Time. A morning, including the meeting where you read the report.
Artifacts produced.
- A leadership-shareable AI literacy assessment report (share the URL itself; the report is dated and stays at that URL).
- A short internal note — three bullets — from you to the program owner: what surprised you, what didn’t, and what you’d do first.
What success looks like at end-of-phase. You can name your overall maturity, the weakest dimension, and the recommended rollout shape without looking it up. Your CHRO has read the report.
Failure modes.
- Treating the assessment as homework that leadership will read someday. Run the report by leadership in week one, not week six.
- Picking aspirational answers because the score feels low. The point of the assessment is to see clearly. A 25 in Adoption is a clear instruction; a 50 you fudged into existence is misleading.
- Treating the assessment as a one-time event. The same URL will work in six months — measure lift, not just baseline.
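The score-and-bucket logic above can be sketched in a few lines. This is a minimal illustration, not 174's actual scoring: the 0–100 scale and the bucket names (Emerging, Developing, Mature) come from this playbook, but the thresholds and dimension names here are assumptions.

```python
# Hypothetical thresholds -- the real assessment defines its own.
BUCKETS = [
    (0, 40, "Emerging"),
    (40, 70, "Developing"),
    (70, 101, "Mature"),
]

def bucket(score: int) -> str:
    """Map a 0-100 dimension score to a maturity bucket."""
    for lo, hi, name in BUCKETS:
        if lo <= score < hi:
            return name
    raise ValueError(f"score out of range: {score}")

def weakest_dimension(scores: dict[str, int]) -> str:
    """Return the lowest-scoring dimension -- the one to address first."""
    return min(scores, key=scores.get)
```

On a baseline like `{"Adoption": 25, "Capability": 50, "Governance": 60}`, a 25 in Adoption lands in Emerging and Adoption is the dimension to fix first; no amount of fudging it to a 50 changes what the team actually does day to day.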
Decision criteria for moving to Pilot.
- The recommended rollout shape isn’t “do nothing” — i.e. you’re not already at full Mature in all dimensions.
- A program owner exists and has the capacity to run a 4-week pilot.
- One team has been identified that’s enthusiastic and ready to move.
If any of those is no, stay in Assess. Run the assessment with a wider set of stakeholders, or wait for the right team to volunteer.
Phase 02 — Pilot
What it is. A 4-week, 1–10 seat program in one team. Curriculum, live cohort access, and the assessment in one place. No procurement marathon, no annual contract.
Who’s involved. The program owner (you), one line manager who’s enthusiastic, and 1–10 of their direct reports. If the engagement is Concierge, a 174 enablement partner runs weekly check-ins with you; if it’s Self-paced, the program owner runs them.
Time. Four weeks of program time. One week of setup before. One week of synthesis after.
Artifacts produced.
- An internal Slack channel or working group where the cohort lives.
- 3–5 artifacts produced by cohort members during the program — actual prompts, working agents, reviewed outputs. These belong to the team, not to 174.
- An end-of-pilot measurement: re-run the assessment on the pilot team in isolation. Compare to the org-wide baseline from Phase 01.
What success looks like at end-of-phase. Three things are true:
- Adoption on the pilot team has shifted at least one bucket above the org baseline (e.g., from Emerging to Developing).
- At least one capability artifact has shipped into actual use — a prompt the team uses weekly, an agent in production, a workflow that replaced a manual step.
- The line manager wants to expand. (Not “is willing to expand” — wants to.)
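The three conditions above work as a gate: all must hold before Expand. A hedged sketch of that gate, assuming the bucket ordering from this playbook; the function and parameter names are illustrative, not part of any 174 tooling.

```python
# Maturity buckets in ascending order, as used in this playbook.
BUCKET_ORDER = ["Emerging", "Developing", "Mature"]

def bucket_lift(baseline: str, current: str) -> int:
    """How many buckets the pilot team moved relative to the org baseline."""
    return BUCKET_ORDER.index(current) - BUCKET_ORDER.index(baseline)

def pilot_passed(baseline_bucket: str, pilot_bucket: str,
                 artifacts_in_use: int, manager_wants_expand: bool) -> bool:
    """All three end-of-pilot success conditions must hold at once."""
    return (
        bucket_lift(baseline_bucket, pilot_bucket) >= 1  # at least one bucket above baseline
        and artifacts_in_use >= 1                        # something shipped into actual use
        and manager_wants_expand                         # wants to, not merely willing to
    )
```

Note the gate is a conjunction: a team that shipped three artifacts but didn't move a bucket, or moved a bucket under a lukewarm manager, doesn't pass.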
Failure modes.
- Picking the wrong team. The most common mistake is picking the team that needs AI literacy most rather than the team most ready to move. Pick the manager’s most enthusiastic team. Need will reveal itself elsewhere.
- Letting the pilot drift into “training as it happens.” Every week needs a working session, an artifact in flight, a checkpoint. Without that rhythm, the program becomes an opt-in webinar nobody opts into.
- Skipping the end-of-pilot reassessment. Without it, you can’t show leadership that anything actually changed.
Decision criteria for moving to Expand.
- All three success conditions met.
- A second team has been identified that’s ready.
- Leadership has seen the end-of-pilot numbers and signed off on expansion.
Phase 03 — Expand
What it is. Rolling the program from one team to a department, a function, or the whole company. The pilot is the proof; expansion is the rollout.
Who’s involved. Now you bring in IT, procurement, legal, and senior leadership. The program owner stays in the centre — but you’re no longer doing it alone.
Time. 2–6 months depending on the rollout shape and your security review timeline.
Artifacts produced.
- A written AI usage policy. (If you don’t have one yet, this is when it lands. If you have one, this is when it gets revised against what the pilot taught you.)
- A per-cohort dashboard your CFO will read: adoption percentile, capability lift, ROI estimate.
- Manager enablement. Line managers running cohorts in their teams need a 90-minute kickoff and a monthly check-in with the program owner.
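The dashboard numbers above reduce to simple arithmetic. A back-of-envelope sketch; the formulas and parameter names are this author's assumptions about how such a dashboard might be computed, not a 174 specification.

```python
def capability_lift(baseline: float, current: float) -> float:
    """Score lift over the Phase 01 baseline, in points."""
    return current - baseline

def roi_estimate(hours_saved_per_seat_week: float, seats: int,
                 weeks: int, hourly_cost: float, program_cost: float) -> float:
    """Back-of-envelope ROI: value of time saved vs. total program cost.

    Returns a ratio, e.g. 2.0 means the program returned 2x its cost
    on top of paying for itself.
    """
    value = hours_saved_per_seat_week * seats * weeks * hourly_cost
    return (value - program_cost) / program_cost
```

For example, 10 seats each saving 2 hours a week for 12 weeks at a $100/hour loaded cost is $24,000 of recovered time; against an $8,000 program cost, that's a 2.0x ROI. The point of putting this on a per-cohort dashboard is that a CFO can audit every input.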
What success looks like at end-of-phase. Three things are true:
- Adoption across the expanded scope has shifted at least one bucket from baseline.
- The written policy is published, not drafted, and the cohort knows it exists.
- At least one cross-team artifact exists — a prompt library, an agent everyone uses, a shared evaluation rubric. Capability is no longer team-local.
Failure modes.
- Trying to expand too fast. A pilot of 8 succeeded because the manager and the program owner could meet weekly. Expanding to 80 without adding capacity replicates none of what made the pilot work.
- Letting the policy lag. Pilots can run without one. Expansions can’t. If you’re at 100 seats with no policy, IT will pull the rip cord eventually.
- Treating expansion as a re-sell to leadership. If the pilot worked, expansion is a procurement formality. If you’re having the same conversation a second time, the pilot didn’t work.
Decision criteria for moving to Operate.
- All three success conditions met.
- The org has more than one cohort actively running concurrently.
- There’s a 12-month plan for the program — funded, staffed, with explicit success criteria.
Phase 04 — Operate
What it is. The program is running. Now you keep it durable.
Who’s involved. The program owner, line managers running cohorts, the exec sponsor, and 174 (in Concierge engagements) on a quarterly cadence. IT and security are routine partners, not gatekeepers.
Time. Ongoing. Quarterly recalibration, monthly internal program reviews.
Artifacts produced.
- Quarterly assessment re-runs. Year-over-year lift is the metric leadership cares about.
- A curriculum that evolves. New tools, new use cases, new failure modes — the curriculum is updated quarterly, not annually.
- Manager training on a continuous loop — every new manager joining the company gets enablement.
- A public (internal) prompt + agent library that anyone in the org can contribute to.
What success looks like. AI literacy is no longer a program. It’s the default expectation for new hires, the baseline for performance reviews, and the lens through which the org approaches new tools.
Failure modes.
- Letting the program become a museum. If the curriculum hasn’t been updated in 6 months, neither has anyone’s skill.
- Letting governance drift. The policy that worked at 100 seats may not survive 1,000 — review it as part of every quarterly recalibration.
- Confusing operating with running on autopilot. The program needs an owner, always.
A 12-week sample timeline
| Weeks | Phase | What’s happening |
|---|---|---|
| 0 | Assess | Take the assessment. Read the report with leadership. Identify pilot team. |
| 1 | Pilot setup | Kickoff workshop. Cohort lands. Slack channel opens. |
| 2–5 | Pilot | Curriculum runs. Weekly check-ins. Artifacts ship. |
| 6 | Pilot synthesis | Reassess pilot team. Compare to baseline. Decide on expansion. |
| 7 | Expand prep | Identify second cohort. Begin policy drafting. Loop in IT. |
| 8–11 | Expand | Second cohort runs. Policy lands. Manager enablement begins. |
| 12 | Operate | First quarterly cadence kicks off. Dashboards land. The 12-month plan is published. |
This is a sample, not a contract. Real timelines slip — usually because procurement or security review takes longer than expected, occasionally because a pilot reveals a deeper need that warrants a second pilot before expansion. The shape is the point: every phase produces an artifact, every phase has a decision criterion, every phase can stop the program without wasting prior work.
If you haven’t taken the assessment yet, that’s Phase 01.
Where does your org actually stand?
Ten minutes. Three dimensions. A leadership-shareable baseline.