Loren Anderson, who took part in a couple of Coaching Conversations, asked the following question in a Facebook coaching group.
Do competitive cauldrons or other forms of practice stats measure meaningful improvement?
If you’re not familiar with the competitive cauldron concept, basically it involves taking a bunch of stats in practice so players can be ranked. The idea has been around for a while. I attended a session at the 2019 AVCA Convention on the topic.
Loren’s core question is whether we’re measuring things that actually indicate improvement. The snap response a lot of people probably have is yes, but is that really true? I would argue – and I suspect Loren would as well – that it isn’t.
There are a couple of big issues with stats when it comes to measuring improvement.
Big focus on outcomes
Most of the common stats we track in volleyball are outcomes. That was a 2-pass. The attack was a kill. The serve missed the target. You lost the rally or game. I think you get the idea.
Outcomes, though, often don’t do a great job of capturing improvement. Sometimes you can do everything right and it doesn’t work out. Other times you can do everything totally wrong and get lucky with the result.
Admittedly, sometimes the outcome is exactly how you can measure improvement. Passing accuracy is something that probably immediately comes to mind. But now we bring in the second issue.
Multiple factors involved
A confounding aspect of taking practice stats is that usually there are multiple moving parts involved. In other words, you very rarely get to look at something in strict isolation, particularly where the ball is involved.
In the example of passing accuracy, there’s also a serve involved. Presumably, you’re working on your servers getting better just like you are with your passers. What happens if both your servers and passers develop at the same rate? Most likely, you won’t see any change in the passers’ ratings because tougher serves offset their improved skill.
This is something I’ve actually had to address with my teams. The passers would be worried they weren’t seeing improvement in their numbers in practice. When I asked them if our serving was getting tougher, they would admit as much. They could then see that passing at the same accuracy against harder serves equals improvement.
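One way to make that conversation concrete is to stratify the passing numbers by serve difficulty instead of looking only at the raw average. Here’s a minimal sketch in Python – all the ratings and the easy/tough labels are made-up illustration data, not real team stats – showing how a flat overall average can hide improvement within each serve category:

```python
# Hypothetical pass ratings (0-3 scale) tagged with serve difficulty.
# All numbers are invented to illustrate the point: the raw averages
# look identical, but passing improved against BOTH serve types.

def average(ratings):
    return sum(ratings) / len(ratings)

# Each entry is (serve_difficulty, pass_rating).
# Week 1 saw mostly easy serves; week 6 mostly tough ones.
week1 = [("easy", 2.4), ("easy", 2.2), ("easy", 2.6), ("easy", 2.3), ("tough", 1.0)]
week6 = [("easy", 2.9), ("tough", 1.9), ("tough", 1.8), ("tough", 2.0), ("tough", 1.9)]

def stratified(samples):
    """Average pass rating per serve-difficulty category."""
    buckets = {}
    for difficulty, rating in samples:
        buckets.setdefault(difficulty, []).append(rating)
    return {d: round(average(r), 2) for d, r in buckets.items()}

print(round(average([r for _, r in week1]), 2))  # raw average looks flat...
print(round(average([r for _, r in week6]), 2))
print(stratified(week1))  # ...but within each serve type, passing got better
print(stratified(week6))
```

The raw averages come out identical, yet the per-category numbers show the passers got better against both easy and tough serves – the same point the players in the story eventually saw.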
What can we do?
So we’re left with a question. How can we actually measure improvement?
This then requires two follow-up questions. First, what is the improvement (or development) we’re after? This could be some aspect of a skill that the player is working on, or the whole skill. It could even be something collective.
Second, how can we capture that? In terms of aspects of a skill, this may simply be a question of counting good executions of that element. For example, how many times in a row can a server give themselves a good toss? If it’s the whole skill, however, now you have to find a way to control for the input (e.g. the quality of the serve when looking at passing).
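For the "count good executions" idea, the stat can be as simple as the longest run of successful reps in a session. A quick sketch, with hypothetical 1/0 outcomes standing in for good/bad tosses:

```python
# Minimal "count good executions" stat: the longest consecutive run of
# successes (e.g., good serve tosses in a row). The 1/0 data is hypothetical.

def longest_streak(outcomes):
    """Return the longest consecutive run of successes (1s)."""
    best = current = 0
    for ok in outcomes:
        current = current + 1 if ok else 0
        best = max(best, current)
    return best

tosses = [1, 1, 0, 1, 1, 1, 1, 0, 1]  # 1 = good toss, 0 = bad toss
print(longest_streak(tosses))  # → 4
```

Tracking that number session to session gives you an improvement measure for the isolated element without any confounding inputs.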
When looking at collective action, there are again aspects that can be measured fairly simply. Cooperative drills like various peppers or the hard drill can use counts, as an example. When you start getting to the level of game tactics, however, you again have to somehow control for the input (e.g. how you initiate the ball into a drill working on an offensive play).
And sometimes it’s about the outcome
At a certain point you do have to focus on outcomes because at the end of the day that’s what we’re after. Just be aware, though, that when judging outcomes you have to account for influencing factors just as you do with the input.
For example, a hitter is likely to have very different numbers when going against a single block than against a well-formed double block. You don’t really learn much about their development with respect to outcomes if you’re putting them in two very different situations.
The bottom line
The bottom line in all this is that if you want to use stats to measure improvement, you need three things:
- A clear indicator of the improvement you’re looking to measure
- A specific way to measure that indicator
- A consistent set of controls by which you capture that measure so the stats are comparable across evaluation samples
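Those three requirements can even be encoded directly into how you record the stats. Here’s one possible sketch – the field names and example values are mine, purely for illustration – where each sample records its indicator and the conditions it was captured under, and two samples are only compared when those controls match:

```python
# Each stat sample records WHAT was measured (indicator), the measurement
# itself (value), and the conditions held constant (controls). Samples are
# only comparable when indicator and controls match. Names are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class StatSample:
    indicator: str   # e.g. "pass accuracy"
    value: float     # the measurement itself
    controls: tuple  # conditions held constant, e.g. serve type and target zone

def comparable(a, b):
    """True only if two samples measure the same thing under the same conditions."""
    return a.indicator == b.indicator and a.controls == b.controls

week1 = StatSample("pass accuracy", 2.1, ("jump-float serve", "zone 5"))
week6 = StatSample("pass accuracy", 2.4, ("jump-float serve", "zone 5"))
other = StatSample("pass accuracy", 2.6, ("standing-float serve", "zone 1"))

print(comparable(week1, week6))  # → True: same indicator, same controls
print(comparable(week1, other))  # → False: conditions differ
```

The point of the structure isn’t the code itself – a spreadsheet column for "conditions" does the same job. It’s that a comparison across evaluation samples is only meaningful when the controls line up.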
I’d love to hear examples of how you’re doing this with your players/team.