Mark Lebedew posed a question on Facebook about expected performance relative to actual performance. Here's how he presented it:

Thought experiment about reception.
Player A and Player B have equal quality reception (by expected SO%)
Player A has actual SO% after his reception 7% higher than Player B.
Can someone propose an explanation?

Expected SO%

Let me start by explaining Expected Sideout % (Expected SO%). This is a serve receive passing metric often seen at the top end of the sport. It looks to address a problem with the standard 3-2-1-0 rating system that I talked about in a previous post (one which brings up Expected SO%, after a fashion). Namely, averaging the ratings of a bunch of passes presents a skewed sense of performance. For example, is a 3-pass actually 50% better than a 2-pass? Is a 2-pass twice as good as a 1-pass? It might be 5x as good!

For Expected SO% you work out the sideout rate for each pass quality. You do this by looking at all the passes of a given rating (e.g. all 3s) and seeing the percentage of time the team gets a sideout. This is done across a very large sample. In Mark’s case, he’s using league-wide figures. You might end up with something like this:

3 = 75%
2 = 60%
1 = 25%

I just made those figures up, so don’t rely on them. You need to use data appropriate to your level of play.
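To make that first step concrete, here's a minimal sketch in Python – assuming each reception is recorded simply as a pass rating plus whether or not the team went on to side out (the data and layout are just for illustration):

```python
from collections import defaultdict

# Each reception: (pass rating, did the team side out on that possession?)
receptions = [
    (3, True), (3, True), (3, False),
    (2, True), (2, False),
    (1, False), (1, True), (1, False), (1, False),
]

# Tally sideouts and attempts for each pass rating
tallies = defaultdict(lambda: [0, 0])  # rating -> [sideouts, attempts]
for rating, sided_out in receptions:
    tallies[rating][0] += int(sided_out)
    tallies[rating][1] += 1

# Sideout rate per rating - in practice you'd compute this over a very
# large sample at your level of play, not a toy list like this one
sideout_rates = {r: so / n for r, (so, n) in tallies.items()}
```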

Then, once you have those percentages worked out, you substitute them for the ratings when calculating a passer's average. Let's say we have the following passes for a receiver: 3, 1, 2, 3, 2. Normally, we'd average those out and come up with 2.2. For Expected SO%, we instead replace each rating with its average sideout rate from above – 75%, 25%, 60%, 75%, and 60% respectively – and average those to come up with 59%.
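In code form, that substitution step might look like this (using the made-up rates from above):

```python
# Made-up sideout rates per pass rating - use figures for your own level
sideout_rates = {3: 0.75, 2: 0.60, 1: 0.25}

passes = [3, 1, 2, 3, 2]

avg_rating = sum(passes) / len(passes)                             # 2.2
expected_so = sum(sideout_rates[p] for p in passes) / len(passes)  # 0.59

print(f"Average pass rating: {avg_rating:.1f}")  # 2.2
print(f"Expected SO%: {expected_so:.0%}")        # 59%
```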

If you really want to take an in-depth look into all of this, Chad Gordon at VolleyDork has a deep dive.

Back to Mark’s question

Now that we have the basics out of the way, let’s turn back to Mark’s scenario. To clarify, Players A and B are on the same team. Imagine them being the two OHs for that squad. Thus, they have the same setter and the same other players around them (keeping in mind that Mark’s point of reference is teams operating under FIVB substitution rules).

So what Mark is presenting is a situation where two players have the same Expected SO%. You may find it easier to think of them having the same average pass rating (e.g. both are 2.2), though that may not actually be the case. Regardless, the point is that because the two players have the same Expected SO%, we'd expect the team to side out at the same rate (actual SO%) when either of them passes. In Mark's scenario, however, the team actually sides out better when Player A passes than when Player B does. His question is why that would be.

The simple answer

It’s important to remember that the figures used to derive Expected SO% are from a large sample. They are the average of all kinds of different situations. We know, however, that there’s variability – even when you’re talking about one team. All you need to do is look at the stats by rotation.

Mark once posted some figures for different leagues. Since those are league-wide aggregates, they tend to smooth things out, but you can still see that there are different rates. No team has the same SO% across all rotations. That means unless our two passers have exactly the same distribution of passes across the rotations – which is highly unlikely – one of them has the advantage of passing more often in the better rotation(s) and/or less often in the worse one(s).

Thus, we should expect our two players to have different actual team SO% when they pass.
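To see the mechanism with some toy numbers (entirely made up), suppose the team sides out at 70% in one rotation and 50% in another, and the two passers simply take different shares of the serves in each:

```python
# Hypothetical team sideout rates in two rotations
so_rate = {"R1": 0.70, "R2": 0.50}

# Share of each player's receptions falling in each rotation
mix_a = {"R1": 0.60, "R2": 0.40}  # Player A passes more often in R1
mix_b = {"R1": 0.40, "R2": 0.60}  # Player B passes more often in R2

actual_a = sum(mix_a[r] * so_rate[r] for r in so_rate)  # 0.62
actual_b = sum(mix_b[r] * so_rate[r] for r in so_rate)  # 0.58
```

Identical pass quality, but a 4-point gap in actual SO% purely because of where the receptions happened to fall.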

What do we do with that?

There might be a tendency to wonder why we bother with Expected SO% if the actual SO% is different – or, as in the case here, if two players with the same Expected SO% exhibit different actual SO%. Keep in mind, though, that Expected SO% takes out the variable of what happens after the pass. As such, it is a good metric for evaluating serve receive. We can use it much as we would the old ratings (e.g. 2.2), with the advantages we've already discussed.

If we have reason to want to do a comparison based on actual SO% we’d want to control for known sources of variation – like rotation. This is something I talked about a bit in this post.
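As a rough sketch of what controlling for rotation might look like, you could split each passer's receptions by rotation and compare within each rotation rather than overall (again, the data layout here is just an assumption for illustration):

```python
from collections import defaultdict

# Each reception: (passer, rotation, did the team side out?)
receptions = [
    ("A", "R1", True), ("A", "R1", True), ("A", "R2", False),
    ("B", "R1", True), ("B", "R2", False), ("B", "R2", True),
]

tallies = defaultdict(lambda: [0, 0])  # (passer, rotation) -> [sideouts, attempts]
for passer, rotation, sided_out in receptions:
    tallies[(passer, rotation)][0] += int(sided_out)
    tallies[(passer, rotation)][1] += 1

# Actual SO% for each passer within each rotation - apples to apples
for (passer, rotation), (so, n) in sorted(tallies.items()):
    print(f"{passer} in {rotation}: {so / n:.0%} on {n} receptions")
```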

