The other day, a team I was working with had a bad sprint. Velocity dropped, stakeholders were upset, but guess who panicked the most? The CTO — who happened to join the sprint review that time. He doesn't come to every one. Next thing you know, the team is spending three hours in a meeting trying to explain it all.
And then it dawned on me — the question nobody was asking: was that velocity actually bad, or was it just a normal Tuesday?
The Real Problem: Reacting to Everything
The scenario I just described is a trap I see most organizations fall into. They react to everything — every spike, every dip, every data point that looks different from last week. And in doing that, they burn all this time explaining noise instead of fixing actual problems. The team is fatigued. We talked about change fatigue a couple of weeks ago, and this is part of it: they're worn out by corporate theater.
So this week I want to talk about metrics. Not adding more of them — actually cutting through them. Specifically, how to tell the difference between a signal that demands action versus the noise that just demands attention.
I want to cover signal versus noise, present five metrics I think most teams track that are actually wasting their time, give you five better replacements, and then end with a simple three-question test you can use on any metric in your organization right now.
Writing Fiction: Why Leaders Chase Normal Variation
Here's a story I've seen play out more times than I can count. A VP of engineering or a CTO walks into the weekly leadership review. The deployment frequency metric is down from last week — down by two, maybe three. Someone in the room says, "Well, what happened?" And just like that, the next 45 minutes disappear into investigation mode. Corporate theater.
Teams get asked for explanations. We play the telephone game. Explanations get written, documents get created. The VP feels like they've done something useful by challenging the team, but they really haven't.
I really enjoy what I've read from Mark Graban on this. He's a lean leadership coach, and he wrote a brilliant book called Measures of Success. He calls this pattern — and I believe he borrowed the phrase from a statistician named Donald Wheeler — "writing fiction." Because when you ask for an explanation of normal variation, you're not going to get root cause analysis. You're going to get theater. You're getting a story that somebody made up just to satisfy the question and move on. It's a proximate cause at best.
The numbers went down because of the system, not necessarily because of any specific decision anybody made. The truth is that most of the movement you see in those weekly metrics is not a signal. It's just noise — the natural variation of a complex adaptive system doing what complex adaptive systems do. And every hour you spend chasing that noise is an hour you're not spending on actual improvements or building the product.
Signal vs. Noise: The Concept That Makes Everything Click
Don't get me wrong, metrics are important. But we need to desensitize our reactions to them, especially as leaders. Our job is not to react to every change in a metric. Our job is to know which signals actually mean something.
So what's the difference? A signal is a data point that indicates something real has changed in the system. It's not just movement — it's meaningful movement. The kind of change where if you investigated it, you'd actually find a root cause. Noise, on the other hand, is the variation that exists because the system is imperfect and complex. It doesn't have a root cause you can identify and fix. It's merely the cost of complexity.
Mark describes a simple way to think about this in Measures of Success. In a stable, predictable system, data points will naturally cluster between some upper and lower control limits. When a data point goes outside of those limits, that's more of a signal. When it stays inside of them, even if it goes up or down, that's noise.
Most teams have never drawn those upper and lower limits. They're just staring at the numbers week over week, reacting to the arrows and their direction. Up equals good. Down equals "explain yourself in the next 45-minute meeting." That is exhausting, and it's a useless way to manage anything complex.
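If you've never drawn those limits, here's a minimal sketch of one common way to do it, assuming Donald Wheeler's XmR (process behavior) chart math: the limits are the average plus or minus 2.66 times the average moving range. The weekly deployment counts below are made up for illustration.

```python
def process_limits(values):
    """Return (lower, average, upper) natural process limits (XmR chart)."""
    if len(values) < 2:
        raise ValueError("need at least two data points")
    avg = sum(values) / len(values)
    # Average moving range: mean absolute difference between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # Wheeler's 2.66 constant converts the average moving range into limits.
    return avg - 2.66 * avg_mr, avg, avg + 2.66 * avg_mr

def classify(value, lower, upper):
    """A point outside the limits is a signal; inside, treat it as noise."""
    return "signal" if (value < lower or value > upper) else "noise"

# Twelve weeks of deployments per week (illustrative data)
weekly_deploys = [14, 11, 15, 12, 13, 10, 14, 12, 16, 11, 13, 12]
lo, avg, hi = process_limits(weekly_deploys)
print(f"limits: {lo:.1f} .. {hi:.1f}, average {avg:.1f}")
print(classify(9, lo, hi))   # prints "noise": a dip of a few deploys is normal
```

Notice what this buys you: the "down by two, maybe three" from the leadership-review story lands comfortably inside the limits, so nobody owes anyone a 45-minute explanation.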
Why DORA Metrics Get It Right
This is where it gets really practical for product and engineering leaders. The DORA metrics — short for DevOps Research and Assessment — come out of what I believe is the most credible, research-backed measurement program in our industry.
The 2024 report continues to focus on four core metrics that predict a software team's delivery performance: deployment frequency, lead time for changes, change failure rate, and time to restore a service.
What I love about these four is that they're balanced. They're not just asking how fast are you going — which is often all we worry about as leaders. They ask how fast, how safely, and how well do you recover when things break. That balance is true agility. I have a blog post called "Make Haste Slowly" because that's what I think agile really is. Go carefully, and the more efficient you get, the sooner you deliver.
The practical implication is this: if you're only tracking deployment frequency and lead time, you might be rewarding speed while quietly degrading stability. The DORA metrics were designed to be watched together. One metric without the others tells you a story, but it might be the wrong one.
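To make "watched together" concrete, here's a rough sketch of deriving all four numbers from a single deployment log. The `Deploy` record shape (commit time, deploy time, an incident flag, a restore timestamp) is my assumption for illustration, not anything DORA prescribes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Deploy:
    committed: datetime                   # when the change was committed
    deployed: datetime                    # when it reached production
    caused_incident: bool = False
    restored: Optional[datetime] = None   # when service came back, if it broke

def dora_summary(deploys, weeks):
    """The four DORA metrics over a window of `weeks` weeks."""
    n = len(deploys)
    freq_per_week = n / weeks
    avg_lead = sum((d.deployed - d.committed for d in deploys), timedelta()) / n
    failures = [d for d in deploys if d.caused_incident]
    change_failure_rate = len(failures) / n
    restore_times = [d.restored - d.deployed for d in failures if d.restored]
    avg_restore = (sum(restore_times, timedelta()) / len(restore_times)
                   if restore_times else timedelta(0))
    return freq_per_week, avg_lead, change_failure_rate, avg_restore
```

The point of returning all four from one function is the balance argument: you can't quote a great deployment frequency without the change failure rate sitting right next to it.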
5 Vanity Metrics to Stop Chasing (and What to Track Instead)
What often gets glossed over is the problem of vanity metrics. A vanity metric gives you the rosiest possible picture without actually telling you whether your product or system is healthy. I was watching a video the other day on Warren Buffett and Charlie Munger, and to them, this would be like EBITDA — the most ridiculous metric they've ever seen in a business. For product teams, vanity metrics look like website visits, number of features shipped, story points completed, or number of certifications your teams hold. Those numbers might make us feel like things are happening. They're measurable, they go up and down, and we can push them up and to the right if we try hard enough. But they don't drive decisions. And if a metric doesn't drive a decision, it's not really a metric — it's decoration.
1. Story Points Completed → Cycle Time
Story points completed per sprint sounds useful, and it is at the team level. But it almost never changes a decision at the organizational level. Replace it with cycle time — how long does it take for a piece of work to move from started to shipped? Cycle time tells you about overall system health, not just your team's busyness.
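One practical note on reporting cycle time, borrowed from common Kanban practice rather than anything above: use a percentile instead of an average, because cycle-time distributions are skewed by a few stuck items. A minimal sketch, with made-up data:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the smallest value such that at least
    pct percent of the items are at or below it."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

# Days from "started" to "shipped" for the last ten items the team finished.
# A couple of stuck items drag the mean up, which is why a percentile is
# the more honest headline number.
cycle_times_days = [2, 3, 3, 4, 5, 5, 6, 8, 21, 30]
print(f"average: {sum(cycle_times_days) / len(cycle_times_days):.1f} days")
print(f"85th percentile: {percentile(cycle_times_days, 85)} days")
```

"85% of items ship within N days" is also a sentence a stakeholder can actually use, which is the whole test we're applying in this post.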
2. Number of Features Shipped → Adoption Rate
This one's everywhere. It rewards output over outcome, which is the opposite of what agile product development is about. Replace it with adoption rate — what percent of your users are actually using whatever you shipped? If nobody's using the feature, the shipping was just motion.
3. Test Coverage Percentage → Change Failure Rate
I've seen teams game this one terribly — high coverage built from low-quality tests, and the software is still fragile. Replace it with what DORA calls the change failure rate: how often do your deployments cause incidents? That tells you about your team's actual quality.
4. Velocity → Lead Time for Changes
Velocity is practically sacred in some teams, but it's really more internal to the team. It's not meaningful to the business, and it fluctuates by design. Replace it with lead time for changes — how long does it take to go from a commit to code in production? That's a real number your stakeholders can actually use.
5. Lines of Code / Commits / PRs → Deployment Frequency
Lines of code written, number of commits, pull requests opened — these are activity metrics masquerading as productivity metrics. Replace them with deployment frequency: how often are you safely delivering working software to your users? That's the actual output that matters.
The Three-Question Decision Test
So what do you do with all of this? I'd like to share what I call the decision test. It's three questions you can run on any metric, whether someone's proposing a new one or you're reviewing an existing dashboard.
Question 1: What decision does this metric change in the next 30 days?
Not eventually. Not in theory. Specifically, what will someone do differently based on this number? If you can't answer that concretely, the metric is probably decoration.
Question 2: Can this metric go bad before it's too late to respond?
Some metrics are just lagging indicators. Revenue is a great example — it tells you how healthy your last quarter was. You can use it to predict the next quarter, but by the time revenue is trending the wrong way, the problems that caused it happened months ago. You want metrics that warn you earlier, not ones that simply confirm things are bad.
Question 3: Does this metric measure behavior or activity?
Activity is what people do. Behavior is what your users or your systems do as a result. Activity metrics are easy to game. Behavior metrics are a lot more honest.
How to Run This With Your Team
Here's how I use this in real leadership team sessions. I put every metric we track on the whiteboard, and for each one, I ask those three questions. Then the team votes: keep, investigate, or cut.
Most teams discover they're tracking somewhere between 20 and 40 metrics. After this exercise, we typically end up with six to eight, sometimes twelve. And the metrics that survive actually get used.
The goal isn't a smaller dashboard for the sake of a smaller dashboard. The goal is a dashboard that a busy leader can review in under three minutes and know what to do next.
Set Decision Thresholds
One more piece that I think is underused: decision thresholds. Most teams track metrics but never define when a metric actually triggers action. They just look at it, have a conversation, and wait until next week.
A decision threshold is a rule you and your leadership team set in advance. It's not a target — it's a rule. Something like, "If our change failure rate goes above 10% for two consecutive weeks, we pause new deployments and investigate." That's a decision threshold.
The reason it matters is that it takes the politics out of the conversation. You're not debating whether something is a problem — you defined that in advance. When it happens, you simply act. It's like a working agreement for your metrics.
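To show how mechanical this can be, here's a minimal sketch of that exact rule. The 10% limit and two-week window come from the example above; the weekly-rate list is made-up data.

```python
def threshold_breached(weekly_failure_rates, limit=0.10, consecutive=2):
    """True if the change failure rate exceeded `limit` for `consecutive`
    weeks in a row - the trigger the leadership team agreed on in advance."""
    run = 0
    for rate in weekly_failure_rates:
        run = run + 1 if rate > limit else 0
        if run >= consecutive:
            return True
    return False

# Last six weeks of change failure rate (illustrative)
rates = [0.04, 0.07, 0.12, 0.06, 0.13, 0.15]
if threshold_breached(rates):
    print("Threshold breached: pause new deployments and investigate.")
```

Because the rule was written down before the data came in, the week-three blip (one bad week, then back under the limit) triggers nothing, and the two-week run at the end triggers action without a debate.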
Try This Right Now
Pull up whatever dashboard or reporting system you use and pick the three metrics you and your teams review most often. For each one, finish this sentence in writing:
The last time this metric changed a decision I made was ___.
If you can fill in a specific example, that metric is earning its spot on your dashboard. If you're staring at a blank, that metric is likely costing you attention and giving you nothing back — or worse, creating problems in the organization.
Don't delete them yet. Just flag them. The conversation that comes next — about why you're tracking something with no clear decision link — is the conversation that actually improves your measurement culture.
Then bring those flagged metrics to your next team or leadership review and ask the group: "What decision would we make differently if this number went up by 20%? What if it went down?" If nobody can answer, you have your answer.
Metrics Are for Deciding, Not Just Knowing
Here's what I want you to leave with: metrics are not about knowing things. They're about deciding things. Leadership is decision-making. Product management is the same way. A metric that informs a decision is a valuable metric. A metric that generates a meeting that generates a document that answers a question nobody will remember asking? That's noise — and you created it inside your own organization.
Stop managing the metric. We've always been told you can't manage what you can't measure. That's management dogma. Manage the system that the metrics describe. Let the metrics be the language. And when a metric spikes, ask yourself first: is this a signal, or is it just Tuesday?
If your team is struggling to cut through the noise and focus on the metrics that actually matter, that's exactly the kind of thing we work on in our training and coaching. Check out our upcoming classes and workshops — we'd love to help you build a measurement culture that drives real decisions.