Big bang: "let's buy 100 licenses"
The big-bang license purchase is the most common first move in enterprise AI adoption, and the most reliable predictor of failure.
- AI tool licenses have been purchased but there is no structured rollout plan
- No adoption metrics are tracked
- At least some developers are experimenting with AI tools
- Organization has not banned AI tool usage outright
Evidence
- License purchase records without associated rollout plan
- No adoption tracking dashboard or reports
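If no adoption dashboard exists, a first measurement can be as simple as computing weekly active usage from the vendor's seat-activity export. The sketch below assumes a hypothetical export shaped as user → date of most recent tool use; the function name and data shape are illustrative, not any specific vendor's API.

```python
from datetime import date, timedelta

def weekly_active_rate(last_active_by_user, today, licenses):
    """Fraction of purchased licenses used in the past 7 days.

    last_active_by_user: dict of user -> date of most recent tool use
    (shape of a hypothetical seat-activity export).
    """
    cutoff = today - timedelta(days=7)
    active = sum(1 for d in last_active_by_user.values() if d >= cutoff)
    return active / licenses

# Illustrative data: 100 licenses, only two recent users.
seats = {
    "ana": date(2025, 3, 10),
    "ben": date(2025, 3, 9),
    "carl": date(2025, 1, 2),   # lapsed
}
rate = weekly_active_rate(seats, today=date(2025, 3, 11), licenses=100)
print(f"{rate:.0%} weekly active")  # 2%
```

Even this crude number, tracked weekly, surfaces the month-three usage cliff described below before the renewal conversation forces it.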
What It Is
Leadership sees competitors adopting AI, attends a vendor demo, and makes a decision: buy 100 GitHub Copilot or Cursor licenses, send an announcement email, and declare that the organization is now "doing AI." The logic seems sound - get everyone access, let adoption happen naturally, measure results in a quarter.
What actually happens is a predictable arc. The announcement generates real enthusiasm. Early adopters start experimenting. A few developers have genuinely good experiences and share them in Slack. Then, quietly, usage drops. Developers who tried it a few times and didn't get value stop using it. The enthusiasts keep going, but they're a small minority. By month three, 80% of licenses are unused. By month six, the program is effectively dead - but the licenses are still being paid for.
The failure mode is not the tool. Modern AI coding assistants are genuinely powerful. The failure mode is the assumption that access equals adoption. Buying licenses solves the access problem. It does nothing about the workflow integration problem, the skill development problem, the social proof problem, or the measurement problem. Organizations that succeed at AI adoption treat it as a change management initiative, not a procurement exercise.
The "100 licenses" pattern also tends to buy the wrong thing for the stage. Organizations at L1 often purchase enterprise-tier tools with features that require L3-L4 maturity to use effectively - agent orchestration, custom context injection, team-level configuration. These features sit unused because the team hasn't built the foundational practices that make them valuable. You don't need a Ferrari to learn to drive.
Why It Matters
- Wasted budget without foundation - license costs recur monthly while adoption flatlines; organizations spend $50-200K/year on tools that 20% of developers use sporadically
- Creates organizational cynicism - failed big-bang deployments make the next AI initiative harder; engineers who tried it and didn't get value become skeptics who slow future rollouts
- Misallocates the real cost - the tool license is 10% of the total cost of adoption; the other 90% is workflow integration, training, and organizational change - none of which a license purchase addresses
- Signals the wrong model to the organization - announcing "we bought AI tools" frames AI as a product rather than a capability, which shapes how engineers think about the investment required
- Delays the real work - every month spent on a stalled big-bang deployment is a month not spent building the structured adoption infrastructure that actually works
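The budget point above is easy to make concrete with back-of-envelope arithmetic. The figures here are illustrative (chosen to fall inside the ranges quoted above), not vendor pricing:

```python
licenses = 100
cost_per_seat_year = 1000   # illustrative per-seat cost, not a quoted price
active_fraction = 0.20      # the "20% of developers use it sporadically" case

total_cost = licenses * cost_per_seat_year
# Effective cost per developer who actually uses the tool:
effective_cost = total_cost / (licenses * active_fraction)
print(total_cost)      # 100000
print(effective_cost)  # 5000.0 per active developer, 5x the sticker price
```

At 20% adoption, every active seat silently carries the cost of four idle ones, which is the number that should anchor the renewal discussion.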
Getting Started
6 steps to get from here to the next level
Common Pitfalls
Mistakes teams actually make at this stage - and how to avoid them
How Different Roles See It
Bob is getting pressure from the CTO to "move faster on AI." Three of Bob's peers at other companies have announced company-wide Copilot deployments. The board is asking about AI strategy. Bob is tempted to just buy the licenses and declare victory - at least it signals momentum.
What Bob should do - role-specific action plan
Sarah has been asked to report on AI tool adoption after a big-bang license deployment six months ago. She pulls the data: 23% weekly active usage, concentrated in 4-5 developers. The other 75 license holders haven't logged in for two months. Leadership is asking whether to renew.
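One concrete artifact Sarah can bring to the renewal discussion is a list of lapsed seats. A minimal sketch, assuming a hypothetical per-user last-login export (names and threshold are illustrative):

```python
from datetime import date, timedelta

def lapsed_seats(last_login, today, idle_days=60):
    """Seats with no login in `idle_days` days - candidates to cut at renewal.

    last_login: dict of user -> most recent login date
    (hypothetical export; not a specific vendor's API).
    """
    cutoff = today - timedelta(days=idle_days)
    return sorted(u for u, d in last_login.items() if d < cutoff)

logins = {
    "dev1": date(2025, 6, 1),   # active
    "dev2": date(2025, 1, 15),  # lapsed
    "dev3": date(2025, 2, 1),   # lapsed
}
print(lapsed_seats(logins, today=date(2025, 6, 10)))  # ['dev2', 'dev3']
```

Reporting a named list of idle seats, rather than a single percentage, turns "should we renew?" into the more useful "how many seats do we actually need while we build a real rollout?"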
What Sarah should do - role-specific action plan
Victor is one of the 4-5 active users from the big-bang deployment. He's gotten real value from the tool - his PR throughput is up, he's using it for code review prep and test generation. But he's isolated. Nobody else on the team is using it consistently, there's no shared knowledge of what works, and he feels like he's doing something niche rather than something the org is investing in.
What Victor should do - role-specific action plan
Further Reading
4 resources worth reading - hand-picked, not scraped
From the Field
Recent releases, projects, and discussions relevant to this maturity level.