Knowing what you want to find before you start looking seems like common sense. Yet many agile and scrum teams blindly gather metrics without really knowing what they want to learn from the data.


The way that most scrum teams approach agile metrics reminds me of the exchange between Alice and the Cheshire Cat in Lewis Carroll’s classic Alice in Wonderland.

In this scene, Alice has come to a fork in the road and is unsure which way to go:

Alice: Would you tell me, please, which way I ought to go from here?
The Cat: That depends a good deal on where you want to get to.
Alice: I don’t much care where.
The Cat: Then it doesn’t much matter which way you go.

Alice did not have a goal in mind while she was traveling along the road, and neither do most scrum teams when they set out to capture agile metrics.

Scrum teams can quickly become enamored with the amount of data that can be collected during a sprint. Story points, velocity, cost per story, cost per story point, throughput, number of stories per sprint, escaped defects, team health, and customer satisfaction are just a few examples off the top of my head.

Given the wide range and complexity of agile metrics that we could capture, we should accept a simple truth about these calculations:

The agile metrics that scrum teams collect are meaningless unless they are driven by a goal.

Why do you track velocity sprint after sprint? Because a training class said to? Or because you read about it in a blog post? No way!

Every agile metric a scrum team collects should be tied to a problem the team is trying to solve or understand. With velocity, a scrum team could be trying to gauge their sustainable pace. Or perhaps they want to give the product owner a basis for release planning.
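To make that second goal concrete, here is a minimal sketch of the kind of release-planning arithmetic velocity enables. The numbers and variable names are invented for illustration; this is not a calculation from the team in this story.

```python
# Hypothetical release-planning math: divide the remaining backlog by the
# team's average velocity to estimate how many sprints are left.
recent_velocities = [21, 24, 22]      # points completed in the last few sprints (made up)
remaining_backlog_points = 180        # points left in the release backlog (made up)

average_velocity = sum(recent_velocities) / len(recent_velocities)
sprints_remaining = remaining_backlog_points / average_velocity

print(f"Average velocity: {average_velocity:.1f} points per sprint")
print(f"Roughly {sprints_remaining:.1f} sprints to finish the release")
```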

The actual goal is not as important as making sure that the team has one in the first place.

During a particularly difficult sprint retrospective meeting, one of my scrum teams noticed that they had failed to deliver all of the stories they committed to in several consecutive sprints.

After some discussion, the team decided to track estimated story points per sprint and velocity. The difference between the two numbers would be called “found work”.

The scrum team decided that the goal of tracking these agile metrics – and particularly “found work” – was to measure changes in effort over the course of a sprint. The team felt that effort increased over time, which caused them to miss their sprint goals.
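As a rough illustration of the bookkeeping, and one possible reading of the team’s calculation, “found work” is simply the gap between the two numbers they tracked. The figures and data layout below are made up, not the team’s actual records.

```python
# Hypothetical sprint history: points estimated at sprint planning versus
# points actually completed (velocity). All values are invented.
sprints = [
    {"sprint": 1, "estimated": 30, "velocity": 26},
    {"sprint": 2, "estimated": 34, "velocity": 25},
    {"sprint": 3, "estimated": 38, "velocity": 24},
]

for s in sprints:
    # "Found work" is the difference between the two numbers -- the effort
    # that appeared over the course of the sprint.
    found_work = s["estimated"] - s["velocity"]
    print(f"Sprint {s['sprint']}: found work = {found_work} points")
```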

After a few sprints, the scrum team learned that the estimated effort was indeed increasing significantly. But the numbers alone could not tell them where the problem actually was; the root cause could have been any one of many things.

With the goal of their agile metric work achieved, the scrum team focused on the problem during their next sprint retrospective. They realized that the high “found work” number meant too many epics were being added to the sprint backlog.

By not grooming their stories and slicing them into smaller pieces, the team was hiding much of the complexity that they were agreeing to take on sprint after sprint.

This particular team added a “3 touch rule” for stories to their definition of ready. In order for a story to be eligible to be added to a sprint backlog, it had to be groomed and estimated at least 3 times by the team. This change helped the scrum team hit their sprint goals much more consistently.
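As a purely illustrative sketch (the team’s actual rule lived in conversation and their working agreements, not in code), a “3 touch rule” check might look something like this:

```python
# Illustrative only: a story is eligible for the sprint backlog once it has
# been groomed and estimated by the team at least three times.
def is_ready_for_sprint(story: dict, required_touches: int = 3) -> bool:
    touched_enough = story.get("grooming_sessions", 0) >= required_touches
    has_estimate = story.get("estimate") is not None
    return touched_enough and has_estimate

story = {"title": "Export report as CSV", "grooming_sessions": 2, "estimate": 5}
print(is_ready_for_sprint(story))  # False -- only groomed twice so far
```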

Metrics are alluring, but they can also be deceptive and, more importantly, abused. Scrum teams should be intentional about which agile metrics they collect, what goal the data serves, and what decisions will ultimately be made from it. Otherwise, why go to all the trouble of collecting the data in the first place?

[reminder]Does your scrum team collect metrics that do not have clear goals assigned to them? How are these agile metrics used?[/reminder]