Boosting Success with DORA Metrics: A Key to High-Performing Software Teams

When I work with teams in any capacity (well, when assisting them in becoming more "agile"), I often find that the bottleneck with the best "low-hanging fruit" is the delivery pipeline. Yes, we need to improve collaboration, communication, and shared understanding within the team. But when we start designing our processes around low-friction delivery, the branching/merging/deployment process always has room for improvement. I often remind teams that "we will only be as agile as our delivery pipeline."

Software teams have a unique problem: even if we get really good at understanding the "requirements," writing code, testing code, merging code, and validating the feature, we inevitably have to find some way to get it in front of the actual end user without shipping them our local machine or our dev server branch. While it doesn't have to be in "production" for the user to validate it, getting it to an environment that closely mimics production is in our best interests, and, better yet, it should then be only a small hop to actually get it to production.

In the past, however, it has been difficult to assess the level of friction in our software delivery and the effect of our changes once they reach the live environment (did our change help or hurt the current baseline of the system?). In the last few years, I have watched many great tools emerge that help teams assess and address this issue, and I want to spend some time in this post talking about DORA metrics.

DORA (DevOps Research and Assessment)

DORA metrics have established themselves as the gold standard for evaluating the performance of software delivery teams. These metrics—Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Mean Time to Recovery (MTTR)—offer actionable insight into a team's operational efficiency and the stability of its software delivery processes. When integrated with the Scrum framework and supplemented by tools such as Jellyfish, DORA metrics can significantly enhance an organization's capacity to deliver value promptly and reliably, because they make it possible to compare desired outcomes against reality and pinpoint where continuous improvement efforts should focus.

Nicole Forsgren, Jez Humble, and Gene Kim, reputable figures in the DevOps and Agile domains, introduced DORA metrics. These metrics gained widespread recognition through the annual State of DevOps Reports, which commenced in 2014 and were collaboratively authored by the DORA research team.

Nicole Forsgren, a lead researcher and co-founder of DORA, played a crucial role in the empirical studies that underlie these metrics. The research team conducted extensive surveys and data analyses across global organizations to identify the predictors and essential indicators of high-performing software delivery teams.

Their findings, detailed in the book Accelerate: The Science of Lean Software and DevOps, emphasize four core metrics—Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Mean Time to Recovery—as fundamental to understanding and enhancing DevOps performance. Consequently, these metrics have become industry benchmarks for assessing the effectiveness of software delivery processes.

Definition of DORA Metrics

  • Deployment Frequency: This metric measures how often a team releases code to production. A higher frequency indicates a consistent and effective delivery pipeline.
  • Lead Time for Changes: This metric tracks the duration from code commit to deployment. Shorter lead times indicate an efficient development pipeline.
  • Change Failure Rate: This refers to the percentage of deployments that result in a failure in production, such as a service outage. A lower rate suggests greater stability.
  • Mean Time to Recovery (MTTR): This metric indicates the average time required to recover from a production failure. A reduced MTTR reflects a team's resilience.

These metrics are consistent with agile approaches, which advocate for the delivery of small, incremental changes at frequent intervals and the capacity to adapt swiftly to feedback.
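
To make the definitions above a bit more concrete, here is a minimal sketch of how the four metrics might be computed from exported deployment and incident records. This assumes you can pull commit, deploy, and incident timestamps out of your own CI/CD and incident-tracking systems; the record shapes and field names below are purely illustrative, not any particular tool's API.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical records: in practice these would be exported from your CI/CD
# and incident-tracking systems. The field names are illustrative assumptions.
@dataclass
class Deployment:
    committed_at: datetime   # when the change was committed
    deployed_at: datetime    # when it reached production
    caused_failure: bool     # did this deployment degrade service in production?

@dataclass
class Incident:
    started_at: datetime     # when the production failure began
    restored_at: datetime    # when service was restored

def dora_summary(deployments: list[Deployment], incidents: list[Incident],
                 window_days: int = 30) -> dict:
    """Summarize the four DORA metrics over a reporting window of `window_days` days."""
    if not deployments:
        return {}
    lead_times_hours = [
        (d.deployed_at - d.committed_at).total_seconds() / 3600 for d in deployments
    ]
    recovery_hours = [
        (i.restored_at - i.started_at).total_seconds() / 3600 for i in incidents
    ]
    return {
        "deployment_frequency_per_day": round(len(deployments) / window_days, 2),
        "lead_time_hours": round(mean(lead_times_hours), 1),
        "change_failure_rate": round(
            sum(d.caused_failure for d in deployments) / len(deployments), 2
        ),
        "mttr_hours": round(mean(recovery_hours), 1) if recovery_hours else 0.0,
    }
```

For example, 20 deployments over a 30-day window, 2 of which caused failures, works out to roughly 0.67 deployments per day and a 10% Change Failure Rate.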

DORA Metrics in the Context of Scrum

Some will likely argue that DORA and Scrum are two different things. They are, but I believe Scrum provides a structured framework that makes it easy to integrate DORA metrics, especially for teams hoping to use them to analyze friction points and the outcomes of their actual delivery.

For instance (just a few short examples):

  • Sprint Planning: Teams can forecast work that improves the deployment pipeline, such as refactoring code or addressing technical debt, by bringing in backlog items that align with DORA objectives. Tools like Jellyfish can help visualize these metrics in real time. Product Owners typically don't have insight into these issues, so let's bring them to the backlog.
  • Daily Scrums: The team should surface (hopefully daily) any blockers impacting Deployment Frequency or Lead Time for Changes, monitor incidents affecting MTTR, and brainstorm immediate solutions together.
  • Sprint Reviews: Progress on DORA metrics should be communicated to stakeholders. Teams should demonstrate improvements in both stability and the speed of deployment cycles through functional software. These outcomes are typically not very visible to stakeholders, and the work we do to minimize deployment issues takes a tremendous amount of effort; let's celebrate it!
  • Sprint Retrospectives: DORA metrics should be used to identify bottlenecks. For instance, a high Change Failure Rate may prompt discussions on enhancing continuous integration and continuous deployment (CI/CD) practices or improving testing procedures, focusing more on our capability to deliver software than on our processes. Remember, in agile, we want to focus on processes AND tools.
  • Impediment Backlog: Issues affecting DORA metrics should be incorporated into our team's backlog, with priority given to those that have a direct impact on delivery speed or stability (the sketch after this list shows one way candidate items might be surfaced). Who is going to do that prioritization? The Developers! Yes, we have to gain consensus with our Product Owner and stakeholders, but that conversation should focus more on reserving capacity in every cycle (sprint, quarterly, roadmap planning, etc.) to allow the team to address them.
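
As a rough sketch of how metric trends can feed the retrospective and the impediment backlog, the snippet below compares a sprint's DORA summary (for example, the output of the dora_summary sketch above) against a rolling baseline and suggests candidate impediment items. The 25% thresholds are illustrative assumptions, not recommendations; a team would tune them to its own context.

```python
# Compare this sprint's DORA summary against a rolling baseline and emit
# candidate impediment-backlog items for the retrospective. Dictionary keys
# match the dora_summary sketch above; thresholds are illustrative only.
def impediment_candidates(current: dict, baseline: dict) -> list[str]:
    candidates = []
    if current["change_failure_rate"] > baseline["change_failure_rate"] * 1.25:
        candidates.append("Change Failure Rate is trending up: review CI/CD checks and test coverage.")
    if current["lead_time_hours"] > baseline["lead_time_hours"] * 1.25:
        candidates.append("Lead Time for Changes is growing: look for review or merge bottlenecks.")
    if current["deployment_frequency_per_day"] < baseline["deployment_frequency_per_day"] * 0.75:
        candidates.append("Deployment Frequency dropped: check for batching or pipeline friction.")
    if current["mttr_hours"] > baseline["mttr_hours"] * 1.25:
        candidates.append("MTTR is rising: revisit alerting, rollback, and on-call practices.")
    return candidates
```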

Enhancing DORA Metrics with Jellyfish

Jellyfish is not the only tool available, but it is the one I want to focus on for this article. Jellyfish, a tool designed for engineering management, provides advanced analytics and visibility into team performance. When integrated with Scrum, Jellyfish can visualize trends in DORA metrics across multiple sprints, helping Scrum teams align their practices with measurable outcomes.

It also allows teams or milestones to be compared, making it easier to identify high-performing areas and opportunities for improvement. In addition, Jellyfish can use predictive analytics to forecast how changes in team structure or workflows may influence the metrics, something we have always longed for on our teams.

Considerations for the Implementation of DORA Metrics

Careful consideration of any metric is vital to ensure your team focuses on the right "thing." Thoughtful adoption prevents the trap of being data-rich but information-poor by focusing on actionable insights rather than just collecting numbers.

  • Emphasis on Outcomes Over Numbers: Organizations should strive to improve delivery speed and reliability rather than fixate solely on numerical targets.
  • Holistic Approach: DORA metrics should be analyzed in conjunction with team health indicators, such as the Happiness Metric referenced in "Scrumming the Scrum," to ensure sustainable productivity.
  • Avoiding Team Overburdening: Pursuing a high Deployment Frequency without considering the team's capacity may result in burnout and other unpleasant consequences.
  • Contextual Relevance: A high Change Failure Rate may be acceptable during a significant overhaul but concerning during routine updates.
  • Utilization of AI and Automation: Adoption of AI tools can automate routine tasks and enhance metrics such as MTTR.

Scrum's iterative cadence provides a stable rhythm for gathering these metrics. By leveraging the analytical capabilities of tools like Jellyfish, organizations can optimize their software delivery lifecycle, or at least start down a pathway of improvement. I will post in the future about how to use tools like Jira or ADO to track these metrics (that's another ball of wax).

If used correctly, metrics enhance productivity and foster a resilient, adaptable, and high-performing agile organization. Metrics should be viewed as tools for continuous improvement rather than rigid benchmarks, ensuring they promote collaboration, innovation, and an ongoing commitment to getting better. I wrote a post a long time ago (in a galaxy far, far away) about metrics. I love metrics when they are used correctly.

Join us in a public workshop to learn more!

Interested in learning more about how these tools and others can help your teams? Join Lance in an upcoming public workshop or schedule a customized private training for your teams. We can also just spend a bit of time coaching on specific issues if that is more your style. Let us know!

Register