The AI Velocity Assessment: Finding Where Your Pipeline Is Really Stuck
By Dave O’Dell & Dan McAulay
Here’s a conversation we keep having with engineering leaders:
“We rolled out AI coding tools six months ago. Our developers are using them. But honestly? I can’t tell you we’re shipping any faster.”
This isn’t a failure of AI adoption. It’s a failure of diagnosis. These teams did the right thing — they invested in AI tools, got engineers using them, and expected velocity gains. The tools delivered on their promise: code generation is genuinely faster. But the overall pipeline didn’t speed up.
Why? Because nobody looked at the whole system first.
The Problem Nobody Diagnoses
When you give engineers a tool that writes code 2-3x faster, you’re accelerating roughly 20% of the journey from idea to production. The other 80% — builds, tests, code review, CI, staging, security checks, deployment — stays exactly the same speed.
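The arithmetic behind that claim is just Amdahl’s law. Here’s a back-of-the-envelope sketch, using the rough 20/80 split above:

```python
# Amdahl's law: overall speedup when only part of the pipeline
# gets faster. The 20/80 split is the rough estimate above.
def overall_speedup(accelerated_fraction: float, step_speedup: float) -> float:
    return 1 / ((1 - accelerated_fraction) + accelerated_fraction / step_speedup)

print(f"{overall_speedup(0.2, 2):.2f}x")  # ~1.11x end to end
print(f"{overall_speedup(0.2, 3):.2f}x")  # ~1.15x end to end
```

A 2-3x coding speedup nets out to roughly a 1.1-1.15x pipeline. Real, but small enough to disappear into week-to-week noise.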
What you’ve actually created is a traffic jam. More code hitting the same slow pipeline. More PRs waiting for review. More CI runs queuing up. More deploy windows filling up.
But here’s the thing: every organization’s traffic jam is different. Some teams are bottlenecked on CI that takes 45 minutes. Some are blocked by security review processes designed for a pre-AI world. Some have the technical infrastructure but no internal champions driving real adoption — engineers accepted the licenses but never changed how they work.
You can’t fix what you haven’t diagnosed. And most teams are guessing.
What a Real Assessment Looks Like
We built the AI Velocity Assessment because we kept seeing the same pattern: teams jumping straight from “buy AI tools” to “run a transformation program” without ever understanding where their specific bottlenecks actually are.
The assessment is four weeks. Not four months. Not a vague “discovery phase” that bleeds into a longer engagement. Four weeks with a clear deliverable at the end.
Week 1: Getting In the Door
Enterprise onboarding takes time. Getting access to repos, documentation, Slack channels, and deployment tooling doesn’t happen overnight — not because anyone is slow, but because that’s how organizations work. We learned the hard way that trying to squeeze this into the background while interviews are already underway is how engagements turn chaotic.
So Week 1 is dedicated to access and setup. We kick off with your engineering leader, define what success looks like, and get connected to the systems and people we need to talk to. No interviews yet. Just preparation.
Weeks 2-3: Talking to Your Engineers
This is where the real work happens. We sit down with 8-12 engineers across different roles, tenures, and levels of AI enthusiasm. Not just the early adopters — the skeptics too. Often the skeptics tell you more about what’s actually broken.
We’re listening for specific signals:
- How are they actually using AI? Daily? Occasionally? Never? And why?
- Where does their work wait? Not where they think it waits — where it actually waits (one way to pull that signal from real data is sketched after this list).
- Who’s sharing what they learn? The best engineers on your team are already experimenting. Are those insights spreading or dying in isolation?
- What does the whole SDLC look like? Not the diagram on Confluence — the real path from code to production, with all the human and technical bottlenecks.
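By “where it actually waits,” we mean timestamps, not impressions. As one illustration (not our assessment tooling; the repo name and token are placeholders), here’s a minimal Python sketch that pulls recently merged PRs from GitHub’s REST API and computes the median wait from PR opened to first review:

```python
# Minimal sketch: median hours from "PR opened" to "first review",
# via GitHub's REST API. Repo name and token are placeholders.
import os
import statistics
from datetime import datetime

import requests

API = "https://api.github.com"
REPO = "your-org/your-repo"  # hypothetical
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def ts(value: str) -> datetime:
    # GitHub timestamps are ISO 8601 with a trailing Z.
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

# Last 50 closed PRs; keep only the ones that actually merged.
prs = requests.get(
    f"{API}/repos/{REPO}/pulls",
    params={"state": "closed", "per_page": 50},
    headers=HEADERS,
).json()

hours_to_first_review = []
for pr in prs:
    if not pr.get("merged_at"):
        continue
    reviews = requests.get(
        f"{API}/repos/{REPO}/pulls/{pr['number']}/reviews",
        headers=HEADERS,
    ).json()
    if reviews:
        first = min(ts(r["submitted_at"]) for r in reviews)
        hours_to_first_review.append(
            (first - ts(pr["created_at"])).total_seconds() / 3600
        )

if hours_to_first_review:
    print(f"median hours to first review: {statistics.median(hours_to_first_review):.1f}")
```

The same arithmetic works for approval-to-merge and merge-to-deploy, and the longest queue is often not the step people name in interviews.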
We’re also identifying your internal champions — the engineers who have the energy and credibility to lead adoption from inside the team. No external consultant can drive lasting change alone. Champions are how transformation actually sticks.
Week 4: The Deliverable
We write the assessment. Not a 60-page slide deck — a clear, actionable report:
- Where you are on the AI adoption maturity arc
- What’s working — the momentum signals we found
- What’s blocking — technical, human, and process bottlenecks
- Prioritized recommendations — specific, scoped projects with clear outcomes
- Adoption readiness — an honest assessment of whether your team is ready for org-wide rollout, or what needs to happen first
We present this to your engineering leader. The report is designed to work as an internal briefing — something you can share with your CTO or board to show exactly what you found and what you’re proposing.
Why Not Just Start Fixing Things?
Because fixing the wrong thing is worse than fixing nothing.
We’ve seen teams spend six months on an AI adoption program that focused entirely on tool training — while the real bottleneck was a CI pipeline that took 45 minutes per run. We’ve seen teams invest in deployment automation when the actual problem was that three engineers controlled all the institutional knowledge and review took a week.
The assessment prevents that. Four weeks of diagnosis saves months of wasted effort.
And here’s the other thing: the assessment has a natural off-ramp. If we do the four weeks and you decide to execute the recommendations with your own team, you walk away with a prioritized roadmap. No lock-in. No open-ended commitment. The report pays for itself even if the engagement stops here.
Who This Is For
The AI Velocity Assessment is designed for engineering organizations of 30-100 people who have already tried adopting AI tools and aren’t seeing the velocity gains they expected.
You’ve already bought the tools. You’ve already gotten engineers using them. Now you need someone who’s seen this pattern across multiple organizations to tell you what’s actually going on — and what to do about it.
If that sounds familiar, let’s talk.
Dave O’Dell and Dan McAulay are the co-founders of App Vitals. They practice velocity engineering — accelerating the entire software development lifecycle, not just the code generation step. They’ve spent 20+ years building and scaling engineering platforms and now help teams ship 2-3x faster.
Want to accelerate your engineering team?
Book a 30-minute discovery call to discuss your team’s AI adoption strategy.
Get in Touch