Actually measuring improvement in practice

Loren Anderson, who took part in a couple of Coaching Conversations, asked the following question in a Facebook coaching group.

Do competitive cauldrons or other forms of practice stats measure meaningful improvement?

If you’re not familiar with the competitive cauldron concept, basically it involves taking a bunch of stats in practice so players can be ranked. The idea has been around for a while. I attended a session at the 2019 AVCA Convention on the topic.

Loren’s core question is whether we’re measuring things that actually indicate improvement. The snap response a lot of people probably have is yes, but is that really true? I would argue – and I suspect Loren would as well – that it isn’t.

There are a couple of big issues with stats when it comes to measuring improvement.

Big focus on outcomes

Most of the common stats we track in volleyball are outcomes. That was a 2-pass. The attack was a kill. The serve missed the target. You lost the rally or game. I think you get the idea.

Outcomes, though, often don’t do a great job of capturing improvement. Sometimes you can do everything right and it doesn’t work out. Other times you can do everything totally wrong and get lucky with the result.

Admittedly, sometimes the outcome is exactly how you can measure improvement. Passing accuracy is something that probably immediately comes to mind. But now we bring in the second issue.

Multiple factors involved

A confounding aspect of taking practice stats (and match stats too) is that usually there are multiple moving parts involved. In other words, you very rarely get to look at something in strict isolation, particularly where the ball is involved.

In the passing accuracy example, there’s also a serve involved. Presumably, you’re working on your servers getting better just like you are with your passers. What happens if both your servers and passers develop at the same rate? Most likely, you won’t see any change in the passers’ ratings because tougher serves offset their improved skill.

This is something I’ve actually had to address with my teams. The passers would be worried they weren’t seeing improvement in their numbers in practice. When I asked them if our serving was getting tougher, they would admit as much. They could then see that passing at the same accuracy against harder serves equals improvement.
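
To make that concrete, here’s a minimal sketch in Python of what controlling for the serve could look like. It assumes a standard 0-3 pass rating and a hypothetical serve-difficulty label recorded with each rep – the data and labels are made up purely for illustration.

```python
from collections import defaultdict

# Hypothetical practice data: (serve difficulty, 0-3 pass rating)
reps_week1 = [("easy", 3), ("easy", 2), ("medium", 2), ("medium", 2), ("hard", 1)]
reps_week8 = [("medium", 2), ("medium", 3), ("hard", 2), ("hard", 2), ("hard", 1)]

def averages(reps):
    """Return the raw average rating plus a per-difficulty breakdown."""
    raw = sum(rating for _, rating in reps) / len(reps)
    by_difficulty = defaultdict(list)
    for difficulty, rating in reps:
        by_difficulty[difficulty].append(rating)
    breakdown = {d: sum(v) / len(v) for d, v in by_difficulty.items()}
    return raw, breakdown

for label, reps in [("week 1", reps_week1), ("week 8", reps_week8)]:
    raw, breakdown = averages(reps)
    print(f"{label}: raw average {raw:.2f}, by difficulty {breakdown}")
```

Both weeks produce the same raw average (2.00), but the breakdown shows the passer went from 2.00 to 2.50 against medium serves and from 1.00 to 1.67 against hard ones. That’s exactly the improvement a single unadjusted number hides.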

What can we do?

So we’re left with a question. How can we actually measure improvement?

Answering it requires two follow-up questions. First, what is the improvement (or development) we’re after? This could be some aspect of a skill that the player is working on, or the whole skill. It could even be something collective.

Second, how can we capture that? In terms of aspects of a skill, this may simply be a question of counting good executions of that element. For example, how many times in a row can a server give themselves a good toss? If it’s the whole skill, however, now you have to find a way to control for the input (e.g. the quality of the serve when looking at passing).

When looking at collective action, there are again aspects that can be measured fairly simply. Cooperative drills like various peppers or the hard drill can use counts, as an example. When you start getting to the level of game tactics, however, you again have to somehow control for the input (e.g. how you initiate the ball into a drill working on an offensive play).

And sometimes it’s about the outcome

At a certain point you do have to focus on outcomes because at the end of the day that’s what we’re after. Just be aware, though, that when judging outcomes you have to account for influencing factors just as you do with the input.

For example, a hitter is likely to have very different numbers when going against a single block than against a well-formed double block. You don’t really learn much about their development with respect to outcomes if you’re putting them in two very different situations.

The bottom line

The bottom line in all this is that if you want to use stats to measure improvement, you need three things, sketched in code after the list:

  1. A clear indicator of the improvement you’re looking to measure
  2. A specific way to measure that indicator
  3. A consistent set of controls by which you capture that measure so the stats are comparable across evaluation samples
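
As a rough illustration (not a prescription), here’s how those three requirements could translate into a simple record in Python. All the field names and values are hypothetical; the key point is that each sample carries its own controls, so two samples only get compared when the conditions match.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvaluationSample:
    indicator: str   # 1. the improvement being measured
    measure: float   # 2. the specific number captured
    controls: tuple  # 3. the conditions held constant when capturing it
    date: str

def comparable(a: EvaluationSample, b: EvaluationSample) -> bool:
    """Only compare samples that measure the same indicator
    under the same controls."""
    return a.indicator == b.indicator and a.controls == b.controls

week1 = EvaluationSample(
    indicator="serve-receive accuracy",
    measure=2.0,
    controls=("hard float serves", "zones 1 and 5"),
    date="week 1",
)
week8 = EvaluationSample(
    indicator="serve-receive accuracy",
    measure=2.3,
    controls=("hard float serves", "zones 1 and 5"),
    date="week 8",
)

print(comparable(week1, week8))  # True, so the 2.0 -> 2.3 change reflects development
```

If the controls differ – say, week 8 used easy standing serves – the comparison is rejected, which is just the hitter-versus-single-block-or-double-block problem from above expressed in data terms.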

I’d love to hear examples of how you’re doing this with your players/team.

You may also find What should I stat for my young volleyball team? useful in this consideration.


John Forman

John is currently the Strategic Manager for Talent (oversees the national teams) and Indoor Performance Director for Volleyball England. His 20+ years of volleyball coaching experience includes all three NCAA divisions, plus Junior College, in the US; university and club teams in the UK; professional coaching in Sweden; and both coaching and club management at the Juniors level. He's also been a visiting coach at national team, professional club, and juniors programs in several countries.

One Response

  1. Thanks for giving me the shoutout John…

    I would add that for stats to have any practical application, there must be enough data points to actually produce meaningful information. I don’t think one season provides enough data points to elicit any kind of meaningful information from the stats.
    As you know, I agree completely with Mark’s belief that the fundamentals of the game are in the interactions. Therefore, I believe that any stats that don’t incorporate those interactions are fairly worthless.

