A while back I came upon a 3-part series of blog posts. It conceptualizes a unified way of looking at volleyball statistics across all categories. Its creator dubbed it the Volleyball Player Efficiency Rating. The three installments are here, here, and here respectively. The second refers to a paper on setter-specific ratings that I address separately in another post. A number of folks have come up with variations on the player efficiency theme in recent years. They are interesting conceptual exercises. At least the stat geek side of me thinks so!
When I coached at Brown I even developed one myself, which I dubbed the Point Contribution Ratio. I took the standard match stats for kills, blocks, digs, and assists, then added in the 0-3 rating we used for serve reception and the 0-5 score we used for serving. Each stat was weighted by how directly it contributed to points scored or conceded. The calculation looked something like this:
PCR = Kills + Blocks + Aces + 0.5 x Assists + 0.5 x 3-passes + 0.25 x 2-passes + 0.5 x 4-serves + 0.25 x 3-serves – Hitting Errors – Service Errors – Ball-Handling Errors – Block Errors
That’s not exactly it, but you probably get the idea. Comparisons were made on a positional basis because different positions scored differently. Setters, for example, had the highest PCRs because of their assists.
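To make the arithmetic concrete, here is a small sketch of the PCR formula above as a function. The stat names and the sample numbers are mine, purely for illustration, and the weights follow the approximate formula as written, not a tested final version.

```python
def point_contribution_ratio(stats):
    """Compute a PCR-style score from a dict of per-match counting stats.

    The keys are hypothetical names mirroring the formula in the post:
    positive contributions at full or partial weight, errors subtracted.
    Missing stats default to zero.
    """
    s = stats.get
    return (
        s("kills", 0)
        + s("blocks", 0)
        + s("aces", 0)
        + 0.5 * s("assists", 0)
        + 0.5 * s("passes_3", 0)    # serve-receive passes rated 3
        + 0.25 * s("passes_2", 0)   # serve-receive passes rated 2
        + 0.5 * s("serves_4", 0)    # serves scored 4 on the 0-5 scale
        + 0.25 * s("serves_3", 0)   # serves scored 3
        - s("hitting_errors", 0)
        - s("service_errors", 0)
        - s("ball_handling_errors", 0)
        - s("block_errors", 0)
    )

# A made-up outside hitter's match line
oh = {"kills": 12, "blocks": 2, "aces": 1, "passes_3": 8,
      "passes_2": 5, "hitting_errors": 4, "service_errors": 2}
print(point_contribution_ratio(oh))  # prints 14.25
```

Because setters pile up assists at a 0.5 weight, a function like this naturally produces much higher scores for them, which is why any comparison has to stay within a position.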
I never did actually test the PCR statistically, though, so I can’t tell you how useful it might have been with the right weightings. Therein lies the problem with many of these volleyball statistical measures: we don’t know whether they are meaningful when it comes to winning and losing points and matches. Jim Coleman actually did the statistical work on passing, showing that how a team passed on the 0-3 scale related to its probability of scoring points (see his chapter in The Volleyball Coaching Bible). Those who propose new statting methods must do the same. Those of us who use statistics to evaluate teams and players need to know that they have a measurable relationship to what we’re using them for. They can’t just sound good. Otherwise, we’re just spinning our wheels to no real purpose.
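The validation step described above can be sketched very simply: pair a team’s metric total for each set with whether the set was won, and check the correlation. The numbers below are made up purely to show the mechanics; a real test would use actual per-set or per-match data and a proper sample size.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical team PCR totals per set, paired with set outcome (1 = won)
pcr_per_set = [18.5, 12.0, 21.25, 9.5, 16.0, 22.75]
set_won = [1, 0, 1, 0, 1, 1]

r = pearson_r(pcr_per_set, set_won)
```

A strong positive correlation would suggest the weightings capture something about winning; a weak one would mean the formula needs rework, no matter how sensible it sounds.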