— / Field note

Land-and-expand for AI literacy


Why a 1-seat pilot beats procurement marathons. How to size cohort one. What to measure at week 8 to greenlight expansion. The GTM logic of the 174 program.

The single most consequential decision in any AI literacy rollout is how it begins. Most rollouts begin with procurement. A vendor evaluation; a security review; a 24-month commitment; a charter. By the time the first cohort sits down, the program has spent six months on activities that produced no capability lift. The team is exhausted before the work begins.

There is a different shape. Start with one seat. Run a 4-week pilot in one team. Measure the actual lift. Earn the right to expand. The program never commits to its next phase before its current phase has worked. The entire model only earns its budget by demonstrating value, not by promising it.

This is the shape 174 is built for. The pricing supports it ($99 per seat per month, monthly billing, no minimum), the curriculum supports it (the program runs in cohorts of as few as one), and the methodology supports it (Assess → Pilot → Expand → Operate: four phases, each with its own decision criterion). This essay is the GTM logic that makes the model work.

Why a 1-seat pilot beats a procurement marathon

A traditional enterprise procurement cycle for an L&D program runs 4–9 months: vendor evaluation, security review, legal review, MSA negotiation, internal approval at multiple levels, kick-off planning, and finally the first cohort. The work that creates capability — running the program — happens at the end. Before that, the company has spent staff time and money on activities that produce no capability lift at all.

A 1-seat pilot inverts this. The first cohort happens immediately. Procurement and security review happen later, after the program has demonstrated something worth procuring. The conversation with legal goes faster because there’s evidence to point at; the conversation with the CFO goes faster because there’s a measured lift; the conversation with the team goes faster because someone in the room has already taken the program.

This is sometimes called the “consumer-led enterprise” pattern, popularized by Slack and Zoom in the 2010s. It works for AI literacy for the same reason it worked for collaboration tools: the cost of trying is low enough that a single curious person can validate the program before any committee touches it.

It also fails in predictable ways, and we’ve designed the 174 model to avoid them.

How to size cohort one

The most important rule of cohort one: pick the team most ready to move, not the team most in need.

This is counter to most L&D instinct. The natural impulse is to identify the team with the worst AI literacy and start there — the people who would benefit most. That impulse is wrong. The team with the worst literacy is also the team with the lowest energy for the program. They didn’t ask for it. Their manager didn’t ask for it. The program will be perceived as remediation, and remediation programs don’t propagate.

The team most ready to move is usually one of three:

  1. A team whose work has visibly changed because of AI in the last 12 months. Engineering teams adopting Cursor, marketing teams adopting AI for content production, ops teams automating recurring workflows. They’ve already started; they need structure for what they’ve started.

  2. A team with a manager who’s curious and visible. A manager who has personally been using AI, who reads the relevant essays, who has opinions about it. This manager will pull the program through the team — and the rest of the org will see them doing it.

  3. A team with a clear, measurable use case. A customer support team measuring response time and quality; a sales team measuring proposal turnaround; an engineering team measuring PR cycle time. The presence of a measurable outcome lets the pilot demonstrate lift unambiguously.

Cohort one should be small: five to ten people. Larger pilots dilute the manager's attention; smaller ones don't produce the kind of evidence that travels beyond the team.

What to measure at week 8

The pilot runs for four weeks. The synthesis happens in week 5. Week 8 is the point at which the program has had time to produce both immediate artifacts (during the pilot) and downstream effects (after the pilot). It’s the right window for the “should we expand?” decision.

Three measurements:

1. Adoption shift, on the pilot team only, against the org baseline. Re-run the assessment on the pilot team in isolation. Compare the team’s Adoption percentile to the org-wide percentile from the original baseline. A meaningful pilot moves the team at least one full bucket — typically Emerging to Developing, sometimes Developing to Mature. Less than that is a signal the pilot didn’t take.

2. Capability artifact lift, on a fixed rubric. During the pilot, the team produces three to five artifacts — prompts, agents, workflows, evaluation rubrics. Grade each one against a public rubric (Cluster 2 of the curriculum provides one). Fewer than 70% of artifacts hitting the rubric signals the depth isn't there yet; 70% or more is a green light.

3. Governance artifact: the AI usage policy. If the company didn’t have a written AI policy at week zero, did one get drafted by week 8? The drafting process is the proof. A pilot can run without a policy, but expansion can’t — and the pilot is the natural moment to ship it. The governance template is the starting point.

If all three pass, expansion is a procurement formality. If one or two pass, run a second pilot in another team before expanding — the asymmetry will tell you something. If none pass, the pilot didn’t work; figure out why before doing anything else.
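The three checks and the decision rule above can be sketched as a small function. This is an illustrative translation of the essay's criteria, not an official tool; the function name, parameters, and bucket encoding are assumptions.

```python
# Illustrative week-8 expansion decision, following the three checks above.
# All names and thresholds are assumptions drawn from the essay, not an API.

def week8_decision(bucket_shift: int, rubric_pass_rate: float,
                   policy_drafted: bool) -> str:
    """Return the recommended next move after a 4-week pilot.

    bucket_shift     -- adoption buckets moved vs. the org baseline
                        (e.g. 1 for Emerging -> Developing)
    rubric_pass_rate -- fraction of pilot artifacts meeting the public rubric
    policy_drafted   -- whether a written AI usage policy exists by week 8
    """
    checks = [
        bucket_shift >= 1,         # 1. adoption: at least one full bucket
        rubric_pass_rate >= 0.70,  # 2. capability: 70%+ of artifacts pass
        policy_drafted,            # 3. governance: policy shipped in the pilot
    ]
    passed = sum(checks)
    if passed == 3:
        return "expand"            # procurement formality
    if passed >= 1:
        return "second pilot"      # run another team; study the asymmetry
    return "diagnose"              # pilot didn't take; find out why first

print(week8_decision(bucket_shift=1, rubric_pass_rate=0.8, policy_drafted=True))
# -> expand
```

The point of writing it down this way is that the decision is mechanical once the three measurements exist; the judgment lives in the measuring, not in the greenlight.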

What expansion actually looks like

Expansion is the second-most-consequential moment in the rollout. It’s the point at which the program stops being one team’s enthusiasm and starts being a company-level commitment.

Three things happen at expansion:

Procurement gets involved. Now there’s an MSA, a multi-seat contract, a real security review. The good news is that all of those conversations are easier with a working pilot in the room. The bad news is that they still take time. The first 90 days of expansion typically include a one-time burst of procurement activity that the pilot didn’t have to deal with.

A second team enters the program. The natural choice is a different department from cohort one — if cohort one was customer support, cohort two might be marketing. Two teams running concurrently start to surface the cross-team dynamics: shared prompt libraries, evaluation rubric standardization, governance debates that pilot one didn’t trigger.

The program owner stops doing the program alone. Manager enablement begins. The 174 partner, if Concierge, runs a 90-minute kickoff with the line managers in the new cohorts. The program owner moves from running the program to running the program-runners.

Expansion is the moment most rollouts dilute. The pilot worked because the manager and the program owner could meet weekly. Expanding to four cohorts simultaneously without adding capacity replicates none of what made the pilot work. The 174 recommendation, baked into the rollout playbook, is to expand in waves of two to three cohorts at a time, three to four months apart, so the program operations can keep up.

Why this beats the alternative

The alternative is the procurement-first model. Sign the contract for 200 seats; spend the next year trying to operationalize the program; learn at scale rather than in a pilot.

The cost of getting it wrong at scale is enormous. Two hundred seats of a program that doesn’t land is two hundred people who now associate AI literacy programs with cringe. Internal credibility for the next program — there will be a next one — drops by an order of magnitude.

The cost of getting it wrong at one seat is one cohort that didn't take. The program owner adjusts and runs cohort two. The total cost is a month of one team's time. That's a survivable cost; the procurement-first model's often isn't.
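The asymmetry is easy to put in numbers. A back-of-envelope sketch, using the $99-per-seat price from this essay, the 200-seat contract from the paragraph above, and the 24-month commitment mentioned earlier; staff time, procurement hours, and credibility costs are deliberately excluded:

```python
# Back-of-envelope downside comparison. Seat counts and the 24-month term
# come from the essay; everything besides seat fees is excluded.

SEAT_PRICE = 99  # dollars per seat per month

pilot_downside = 1 * SEAT_PRICE * 1            # one seat, one month
procurement_downside = 200 * SEAT_PRICE * 24   # 200 seats, 24-month commitment

print(f"pilot-first worst case:       ${pilot_downside:,}")
print(f"procurement-first worst case: ${procurement_downside:,}")
print(f"ratio: {procurement_downside // pilot_downside:,}x")
```

Even before counting the human costs, the worst-case spend differs by a factor of several thousand, which is the whole argument for starting small.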

If you’re sizing your first AI literacy program, the smallest credible size is the right size. The assessment will tell you which team is ready to move; the rollout playbook will tell you how to run the pilot; the pricing will tell you exactly what the first month costs ($99, for one seat).

The whole point of the model is that you can stop after that first month and have something durable to show. Most companies don’t stop. But they could, and they know they could, and that knowledge is what makes the rollout actually work.

— / Next move

Where does your org actually stand?

Ten minutes. Three dimensions. A leadership-shareable baseline.