Scrum Teams Must Set Goals for Their Agile Metrics

Knowing what you want to find before you start looking seems like common sense. Yet many agile and scrum teams blindly gather metrics without really knowing what they want to learn from the data.

Erika | Target | Flickr

The way that most scrum teams approach agile metrics reminds me of the exchange between Alice and the Cheshire Cat in Lewis Carroll’s classic Alice’s Adventures in Wonderland.

In this scene, Alice has come to a fork in the road and is unsure which way to go:

Alice: Would you tell me, please, which way I ought to go from here?
The Cat: That depends a good deal on where you want to get to.
Alice: I don’t much care where.
The Cat: Then it doesn’t much matter which way you go.

Alice did not have a goal in mind while she was traveling along the road, and neither do most scrum teams when they set out to capture agile metrics.

Scrum teams can quickly become enamored with the amount of data that can be collected during a sprint. Story points, velocity, cost per story, cost per story point, throughput, number of stories per sprint, escaped defects, team health, and customer satisfaction are just a few examples off the top of my head.

Given the wide range and complexity of agile metrics that we could capture, we should accept a simple truth about these calculations:

The agile metrics that scrum teams collect are meaningless unless they are driven by a goal.

Why do you track velocity sprint after sprint? Because a training class said to? Or because you read about it in a blog post? No way!

There should be a problem that the scrum team is seeking to either solve or understand with every agile metric that they are collecting. With velocity, a scrum team could be trying to gauge their sustainable pace. Or perhaps they want to provide a means to help the product owner with release planning.

The actual goal is not as important as making sure that the team has one in the first place.

During a particularly difficult sprint retrospective meeting, one of my scrum teams noticed that they were struggling to deliver all of the stories that they committed to during consecutive sprints.

After some discussion, the team decided to track estimated story points per sprint and velocity. The difference between the two numbers would be called “found work”.

The scrum team decided that the goal of tracking these agile metrics – and particularly “found work” – was to measure changes in effort over the course of a sprint. The team felt that effort increased over time, which caused them to miss their sprint goals.
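The bookkeeping behind “found work” is simple enough to sketch in a few lines. The sprint figures below are invented for illustration: “estimated” stands for the story points the team committed to at sprint planning, and “velocity” for the points actually delivered.

```python
# A minimal sketch of the "found work" calculation described above.
# All sprint figures are hypothetical, purely for illustration.
sprints = [
    {"name": "Sprint 1", "estimated": 30, "velocity": 24},
    {"name": "Sprint 2", "estimated": 32, "velocity": 23},
    {"name": "Sprint 3", "estimated": 31, "velocity": 20},
]

for sprint in sprints:
    # "Found work" is the gap between what was committed at planning
    # and what was actually completed by the end of the sprint.
    found_work = sprint["estimated"] - sprint["velocity"]
    print(f'{sprint["name"]}: found work = {found_work} points')
```

Tracked sprint over sprint, a widening gap like the one in this made-up data is the signal the team was looking for; the number itself says nothing about the cause.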

After a few sprints, the scrum team learned that the estimated effort was indeed increasing significantly. But the numbers alone could not tell them where the problem actually was. The root cause could have been one of many things:

  • Perhaps the scrum team was not properly grooming their product backlog items, which led to “found work” during the sprint.
  • Maybe the scrum team’s definition of done was too stringent for their level of engineering practice, which increased the effort required to deliver the stories in each sprint.
  • It’s even possible that, after digging deeper, the scrum team would have learned that the relationship between the product owner and the stakeholders was strained, leading to poor communication and incomplete stories.
  • Progressive elaboration could also have been taking place. The scrum team could simply have been learning throughout the sprint, and everything could be fine.

With the goal of their agile metric work in mind, the scrum team focused on the problem during their next sprint retrospective. They realized that the high “found work” number meant that too many epics were being added to the sprint backlog.

By not grooming their stories and slicing them into smaller pieces, the team had hidden much of the complexity they were agreeing to take on sprint after sprint.

This particular team added a “3 touch rule” for stories: before a story was eligible to be added to a sprint backlog, it had to be groomed and estimated at least three times by the team. This change helped the scrum team hit their sprint goals far more consistently.

Metrics are alluring, but they can also be deceptive and, more importantly, abused. Scrum teams should be intentional about how they collect agile metrics, what goal the data serves, and what decisions will ultimately be made from it. Otherwise, why go to all the trouble of collecting the data in the first place?

Question: Does your scrum team collect metrics that do not have clear goals assigned to them? How are these agile metrics used?


  • devblock

    So I think we should definitely understand the metric before we bother collecting it. Understand what it is we are measuring, what good might look like, and what changes in trends may indicate. However, I’m a bit hesitant to start setting goals on metrics, as that is when I feel teams start trying to game the metric. I usually track lots of metrics. We often look at the trends in retrospectives, or I will look at them on my own. I use the trends to see patterns and/or changes and ask questions. “Hmm…that’s interesting, why did our velocity do that?” These can drive some good discussions with the team. However, as soon as I put a goal on that number, I feel the team starting to game it. “Our goal is X, we need to make sure we hit it”. By definition, not hitting a goal is “bad” and teams don’t want to be “bad”. Not to mention that as soon as some manager sees a goal on a number, you can bet it’s going to end up on an annual review form. Watching trends without having a goal helps increase transparency. As soon as a team starts gaming a metric it becomes useless, so I try to avoid that at all costs.

    That being said, I have put goals on metrics before. It has worked, but it took quite a bit of coaching with the team to keep them from gaming the numbers and it took a lot of work on finding the right goal (hint, my goals actually encouraged “failure” on occasion).

    • Hi Matt, I 100% agree with your comments about gaming the metrics. I’m not advocating that teams manage to a metric. The “goal” in this post refers to an observation or outcome that the team wants to test out. The metric adds information, but is only one part of a larger experiment.

      Thanks for the feedback, I appreciate it!

  • Bob Galen

    Nice post Ryan. There’s an “old” metrics definition/creation approach called GQM, or Goal – Question – Metric. You start with the goal. Then you craft questions you’d like to answer in support of the goal. Then, at the end, you design the metric to answer the questions in support of the goal. I’m not sure who originally came up with it, but it’s been around for a while. Nonetheless, far too often I see us “diving into” metrics without understanding or establishing goals first. Thanks for reminding us of that!

    • Zach Bonaker

      Well done, Ryan! PDCA at the core of the article and I agree 100%!

      • Glad you liked the article Zach. PDCA is interesting and is something that I’m looking into further. PDCA and GQM both seem very logical, which makes it even more baffling that teams ignore them and do odd things with their metrics.
        Thanks for the comment!

    • Hi Bob – thank you for the feedback and for the GQM pointer. I’ll check it out and see what else I can mine from this “old” approach. Thanks again!