Tag Archive for volleyball statistics

Looking at scoring streaks, odds, and service strategy

I like to engage with other coaching bloggers when I can. It’s a form of networking, which is never a bad idea. It also helps to foster communication and the exchange of ideas in our coaching community.

To that end, I want to address a post I came across. The author is Jim Dietz, though his name isn’t actually anywhere on the site (at least as I write this). Jim coaches at the junior college and club levels in the US. Like me, he’s also a book author.

In Jim’s post he takes on the subject of scoring points in streaks in volleyball, having looked at some numbers about bunch scoring in baseball. The conclusion that the post eventually leads to is that once you’ve scored two points in a row there’s an argument to be made for focusing on getting the next serve in to sustain the streak.

Baseball first

A quick thought on the baseball comparison Jim makes before shifting to volleyball. My immediate question about teams scoring in bunches being more successful is whether it’s a function of getting more players on base. I don’t know the statistics, but that’s the first thing I’d want to look at. If that’s the case, it means they are giving themselves more opportunities to score in general. As such, it’s not a case of winning because of scoring in bunches, but rather scoring in bunches because that’s what happens when you get a lot of runners on base, which produces more runs generally.

Scoring streaks required

Now, let me address the volleyball side of scoring streaks.

First, you simply cannot win a set in volleyball without scoring points in a row. This is the no-brainer aspect of Jim’s analysis. At the end of the day, points scored when you serve are what decides the game. In order to score a service point – aside from when you have the first serve of the set – you must first side out. That’s two points in a row – a mini streak.

If teams could only score at most one point from serve, then the team that had the greater number of mini streaks would win the set. Of course, it doesn’t work like that. Teams can run off multiple points when they have serve.

But with regards to Jim’s analysis, there’s a major causality question – as there could be in the case of the baseball stats. Does a team win because they score in bunches? Or does the better team just tend to score in bunches?

Looking at the odds

The results of Jim’s analysis of randomly generated scores are actually very predictable. Even with something like a coin toss you will get streaks. Given enough tosses, it’s guaranteed. And when a streak does happen, the odds aren’t in favor of a comeback, so to speak.

Let me drill down on that. Say it’s 15-15 and one side has a 3-point streak to make it 18-15. If the odds of siding out are 50% for both teams, then you’d expect them to score basically the same number of service points over the rest of the set. That would mean something like a 25-22 score line. Is there a chance the losing team gets a streak, or even more than one? Yes, but it’s just as likely the leading team does so. That means the odds of the leading team winning are quite high.
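To put a rough number on the 18-15 scenario, here’s a small Monte Carlo sketch. It treats every rally as an independent event with a fixed 50% sideout rate for both teams, which is exactly the fixed-odds simplification in question, so take the result as illustrative only.

```python
import random

def leader_win_prob(lead=18, trail=15, sideout=0.5, target=25, trials=100_000):
    """Estimate how often the leading team wins from a given score,
    treating every rally as independent, with the receiving team
    siding out with probability `sideout` (leader serving first)."""
    wins = 0
    for _ in range(trials):
        a, b = lead, trail   # a = leading team, b = trailing team
        a_serving = True     # the leader just finished its streak
        while True:
            if random.random() < sideout:
                # receiving team sides out: it gets the point and the serve
                if a_serving:
                    b += 1
                else:
                    a += 1
                a_serving = not a_serving
            else:
                # serving team scores and keeps serving
                if a_serving:
                    a += 1
                else:
                    b += 1
            if a >= target and a - b >= 2:
                wins += 1
                break
            if b >= target and b - a >= 2:
                break
    return wins / trials

print(f"{leader_win_prob():.2f}")
```

In my runs this lands in the high 0.70s, so even a 3-point edge at 15-15 is a big advantage under fixed 50/50 odds.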

The above provides a good basis for going along with what Jim says about trying to go for streaks. The problem, though, is that this is all based on the idea that point scoring is entirely a random process with fixed odds. If that’s the case, then nothing we do really changes things. But do we really believe the odds are fixed? Maybe when looking at large numbers of observations. At the set level, though, there are a lot of factors that can alter the odds.

Impact on serving strategy

And that brings up the big point I want to make.

Jim talks about not being as aggressive on your 2nd serve as on your 1st to increase the odds of getting that 3-point streak (counting the original side out). There’s a major flaw in Jim’s thinking, though. In theory, if you dial down the aggressiveness on your serve you lower your chances of scoring. Jim speaks as if doing so increases the odds of scoring. If that were the case, wouldn’t you just use that less aggressive serve all the time?
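To see why, consider the expected value of a serve. The numbers below are entirely hypothetical; the point is only the structure of the calculation. An ace wins the rally outright, a service error loses it outright, and anything in between hands the rally over to the opponent’s sideout rate.

```python
def serve_win_pct(ace, error, opp_sideout):
    """Probability the serving team wins the rally, given the chances
    of an ace, a service error, and the opponent's sideout rate on
    balls that land in play. All inputs here are hypothetical."""
    in_play = 1.0 - ace - error
    return ace + in_play * (1.0 - opp_sideout)

# A tougher serve risks more errors but makes the opponent's sideout
# harder; a safe serve does the opposite. Made-up numbers throughout.
aggressive = serve_win_pct(ace=0.08, error=0.15, opp_sideout=0.55)
safe = serve_win_pct(ace=0.02, error=0.03, opp_sideout=0.65)

print(f"aggressive serve wins the rally {aggressive:.1%} of the time")
print(f"safe serve wins the rally {safe:.1%} of the time")
```

With these made-up inputs the aggressive serve actually scores more often, which is the heart of the objection: if the safe serve genuinely scored more, you would serve that way every time.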

That said, I do think it can be the case that you want to focus on getting your serve in in certain circumstances. These, though, are situations where you believe the odds have shifted more in your favor – going back to the earlier point of odds changing. For example, the other team is looking error prone and you don’t want to let them off the hook by making your own. Again, though, if you believe that’s the case already then your 1st service strategy should already reflect this view.

Looking at offensive performance by set and pass quality

In this previous post I shared some offensive and defensive numbers for the 2017 Midwestern State season. Part of what we shared with the team was the table below. It breaks our offensive performance down by set location/type and pass quality.

For the sake of clarity, let me explain the table.

The column labelled “Set to OH” includes any sets to the OH, including high balls, go’s, and 3s. For visual reference, those are the 4, hut, and 3/Rip on this set diagram (relatively few of the latter). They are broken down by pass or dig quality using the 3-point grading system. The first line for each group is Hitting Efficiency, which is (Kills – Errors) / Total Attempts. The second and third lines are Kill % and Error % respectively. Basically, that just breaks the Hitting Efficiency number into its component parts.
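For anyone who wants to replicate the table, the three numbers are straightforward to compute. A minimal sketch:

```python
def attack_line(kills, errors, attempts):
    """The three rows in the table for one set-type / pass-quality
    group: hitting efficiency, kill %, and error %."""
    return {
        "efficiency": (kills - errors) / attempts,
        "kill_pct": kills / attempts,
        "error_pct": errors / attempts,
    }

# e.g. a group with 15 kills and 5 errors on 40 total swings
line = attack_line(15, 5, 40)
print(line)  # {'efficiency': 0.25, 'kill_pct': 0.375, 'error_pct': 0.125}
```

Note that efficiency is just Kill % minus Error %, which is why the table can show the decomposition so cleanly.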

I followed the same process for the right side attacks (“Set to RS”). For the middle and back row attacks, however, I did not use the pass rating splits. In the latter case there just weren’t enough observations to make it matter, especially since we did not have a focused back row attack. They were mainly out-of-system swings, which is probably pretty easy to guess from the poor numbers.

Collecting this information is relatively simple, if you have someone dedicated to doing so. I did it myself on the bench during matches using pen and paper. It would have been much more efficient and easier to manage if we had something like DataVolley, but we didn’t. So we made do.

Analysis

As a staff we were quite surprised by some of the numbers above. For example, we would not have predicted that slides were our most efficient middle hitter attack. It was something that really ran hot and cold. It would go from unstoppable to you can’t buy a kill. No doubt the latter skewed our perceptions. That’s a real risk, which is why stats are so important for good analysis.

The other surprising thing was the effectiveness of our in-system outside sets, especially compared to our middle attacks. You expect middles to generally go for a higher Kill % than pin hitters. That was clearly not the case for us. Even worse, our middles had a higher Error % in places. One reason for the high OH effectiveness on good passes and digs is that we did a good job running the Shoot-Go combination. That gave our OHs some really good, open swings.

The anemic performance of our RS attack came as no surprise. We had a tall player in that position who was a blocking force, but just didn’t generate the power in her swings she needed to be really effective in our conference. Plus, her confidence wasn’t very high, so she wasn’t as aggressive as she really needed to be.

Putting it to use

The MSU attack just didn’t produce kills at a high enough rate last season. The table above lets us identify the specific areas where improvement is needed. The right side is near the top of the list. We will likely continue to struggle if we can’t get at least to a Kill % comparable to that of our OHs.

The second big thing is the middle attack. The error rates for Slides and Shoots were too high, which is likely a combination of poor attacking and poor sets. The error rate for 1s was more reasonable, but the Kill % was relatively low. That should be north of 40% for a team like ours.

Address those two parts of our game, while keeping our OHs performing at about the same level, and we would be a very competitive team in our conference.

Looking at jump count

In 2014 when I spent three weeks with a pair of German professional teams, I had a conversation with one coach about player jump counts. He was starting to use the VERT device to track jumps in training. It gave him a guideline as to when to shut things down. I had a similar conversation during one of the Volleyball Coaching Wizards interviews. It became the basis for a podcast.

All of this came after Volleywood posted something which suggested what I saw as a ridiculously high average player jump count. They said, “Most volleyball players jump about 300 times a match.” With no supporting evidence, I should note. I posted a comment contesting that idea. As this article shows, however, that idea somehow spread.

So what’s the truth?

The folks at VERT published a set of figures based on NCAA women’s volleyball. The following comes from an email they sent out.

So setters jump the most, followed by middles, then outside hitters (probably including right sides). Notice none of them are anywhere close to 300. Yes, these are averages, but I’m hard-pressed to imagine any player in even the longest match getting to 300. Maybe, maybe hitters got that high back in the sideout scoring days when matches could go very long. Even then, that would be on the very high end, not the norm.

And according to the article I linked above, research indicates the average is significantly lower for beach players than indoor ones. Though for them you have to factor in playing multiple matches per day.

Training implications

So what does this mean for us as coaches?

It means it doesn’t make a whole lot of sense to have players do 150 or 200 jumps a day in practice when they will do far fewer in matches. If we do, then we are likely over-training, which puts us at risk of injury as a result of either fatigue or overuse. And we shouldn’t just think about jumps in practice here. We also have to consider jumps from strength training as well. It all adds up.

Is offense or defense more important in volleyball?

Which do you think is more important to the success of a volleyball team – offense or defense?

Generally speaking, the answer could very well depend on whether you’re from the men’s or the women’s side of the sport. In my experience, women’s coaches tend to prioritize defense more than men’s coaches.

I probably would have been one of those women’s coaches who said “defense” once upon a time. I had a very clear demonstration of the limitations of that thinking, though.

The Exeter experience

My first year coaching the Exeter women we had a pretty good defense. This was demonstrated in our playoff match against Loughborough, which was a good team at that time. We got into some really long rallies with them and constantly foiled their attackers. The problem was we couldn’t get kills going back the other way. We just didn’t have the firepower. We were good enough to compete, but not good enough to win.

That changed the following year. We had a major offensive upgrade. Now we could win those rallies we couldn’t the year before. The result was a trip to the national semifinal.

A more detailed example

In a moment I will share some figures with you. Let me first set the stage, though.

In the 2016 season, the Midwestern State (MSU) team I coached finished 8th in the Lone Star Conference, out of 11 teams. It was a meaningful improvement over the performance the year before (winless in conference). Our defense was really poor, though. We ranked 9th in opponent hitting efficiency (.221) and 10th in blocks/set (1.27).

Naturally, we made defense a big focus for improvement in the off-season. It paid off. In 2017 we moved up to 6th in opponent hitting efficiency (.183) and jumped all the way to 4th in blocks/set (2.20). That means we moved up the standings, right?

Nope. In fact, we dropped a spot and finished 9th.

Were we more competitive? Absolutely! We took sets off teams in 2017 – including nationally ranked opponents – we didn’t get close to in 2016. We even had one more match win in conference.

What did not improve was our offense, and that made all the difference.

Here’s a look at the conference statistical rankings for key offensive and defensive areas.

Offense

We’ll start with the attacking side of things. Take note of how closely each team’s final standing matches its rank in hitting efficiency. Only in the case of Kingsville and West Texas is there a variation. The two were reversed relative to their offensive ranks, though really they effectively tied. Kingsville had just one more match victory than West Texas.

Of course hitting efficiency is calculated as (Kills – Errors)/Total Attempts. Thus, we can break it down and look at Kill % and Error % separately. Compare the Kill% and Error % ranks in the table above and you’ll notice something interesting.

Tarleton is clearly well ahead of everyone else with a Kill % of about 39%. Commerce and Angelo are very close in the 2 and 3 spots in the 37s. Then the next four teams are tightly bunched in the 34s. After that there is a steady progression lower as you move down the ranks. Overall, there is about an 11 percentage point difference between top and bottom.

Things aren’t nearly so orderly when it comes to Error %. First of all, the spread from best (Commerce) to worst (Permian) is only about 4%. Most teams are in the 14%-15% range. The 10th worst team in terms of errors actually finished 7th in the standings.

When you see this it seems pretty clear that the kill side of things weighs more heavily on performance than the error side. That’s not to say errors don’t matter. Obviously, they do. But kills seem to matter more when it comes to winning and losing, and there’s a lot more variation.

Defense

Now let’s look at how teams did stopping their opposition from scoring. The opponent hitting efficiency gives us a general measure of that. The top three teams in the standings were also the top three teams in terms of defending. No doubt the strength of their offense is a factor there. After all, if your hitting is strong, it makes for more difficult transition opportunities coming back the other way when you don’t get a kill.

Below the top three the rankings and the final standings position deviate quite a bit. MSU is a prime example. We had the 6th best opponent efficiency, but only finished 9th. Western New Mexico was four places below us in the defensive rankings, but won two more matches than we did.

Now compare the opponent Kill % rankings to the opponent hitting efficiency ones. They are almost identical. That tells us that opponent hitting errors don’t really matter much. This really bears out when we look at the Block % figures. That’s the percentage of opposing attacks each team blocked. They are all over the place! The bottom two blocking teams finished right in the middle of the standings, while the second best blocking side ended up 10th.

The edge to the offense

Based on the figures in the table above, it looks like offense correlates more closely to final league standings than defense. This, of course, is a narrow study. It features teams from one of the stronger conferences in NCAA Division II volleyball for just a single season. As such, it might not be fully representative. Even so, it at least gives us something to think about.

Here’s some further analysis along these lines.

Looking back on the 2017 season

The NCAA women’s volleyball season is officially over. Champions at all levels have been crowned. It seems like a good time to look back on the 2017 season with respect to MSU Volleyball to see how we did.

You can look back to my last in-season log entry to see how we ended the year in the Lone Star Conference (LSC). In this post I’ll take a look at things in more detailed fashion, and also look at the historical context of our performance.

The Rankings

We finished 16th in the NCAA Division II South Central Region’s RPI rankings, out of 33 teams. That’s up from 20th in 2016. On the Pablo ranking (available at Rick Kern), we ended the year at 115 out of 297 in Division II, a 12 spot improvement. In case you’re interested, we came in at 469 out of 1297 in the Pablo composite NCAA/NAIA all divisions ranking. We landed at 98 in the Massey Ratings, up from 143 in 2016.

2017 Team Statistical Performance

Let’s first look at how MSU compared to the rest of the LSC statistically. Here are the final team conference-only stats for 2017.

Our offensive performance lines up really well with where we finished in the league. We simply did not score enough in attack. We were a solid team on defense, and quite good when it came to serving and blocking. Unfortunately, that only gets you so far. At the end of the day, you have to put the ball away when you get the opportunity.

The biggest issue there was our low kill rate at just 31.5%. Could we have made fewer errors? Sure, but at 15.4% our error rate was not particularly high. It was within 1% of most of the teams above us, and better than some. By comparison, the Kill % for Tarleton was 39.4, Angelo and Kingsville were in the 37s, and everyone else other than Western NM was in the 34s. As you can see from our standing in terms of Opponent Digs and Opponent Blocks, we simply hit the ball at their defenders too often.

Year-over-Year Comparison

Offensively, we were basically at the same level in 2017 as we were in 2016 when our Hitting Efficiency was .163. Our 9th in that category this year is the same as it was last year, though we did move up one place in Kills/Set.

Looking at our offensive positions, it’s a mixed bag. We definitely got more production out of our middles – 3.7 k/s as compared to 2.9 k/s – and they hit for a little better efficiency. Our pins were less productive, however. The OHs might have had a slightly higher hitting percentage, but were down a fraction in kills/set. The big drop was in the OPP position. We went from 2.35 kills and .174 efficiency to 1.03 and .069.

Our defense was where we really got better. We massively improved in Opponent Hitting Efficiency, going from .221 to .183. Our block was a huge factor there, as we increased our Blocks/Set by nearly 1 whole block. We jumped from 10th to 4th in that category. We also were better in digs, improving to 16.17 from 13.76 and moved up to 7th from 9th.

At the individual level, the first thing that really jumps out is the production at our libero position. In 2016 we didn’t have anyone above 2.63 digs/set. This season our libero finished at 4.81. Not surprisingly, there are also some dramatic improvements in blocking. In the OPP position we went from 0.48 to 1.02. Our MBs in 2016 were at 0.61 and 0.54. This year it was 1.21 and 0.86.

Historical perspective

The program still has a way to go to become what we all think it can be, and this season fell short of expectations in some ways. Still, it produced some good things with respect to the history of MSU Volleyball.

  • First ever foreign trip.
  • First time beating West Texas after more than 30 failed attempts.
  • Most overall and conference wins since 2013.
  • The 4-match win streak we had early in the season was the longest since 2013, and the longest away from home since 2011.
  • This was the first season since at least 2008, when national rankings started to be noted on the schedule, that our only non-conference losses were to ranked teams.
  • The set we took off of Central Oklahoma was the first we’d taken from a ranked team since 2014 and the first against a non-conference ranked team since 2011.
  • Season Blocks/Set were 6th highest on record, Total Blocks the 8th highest, and our 2.20 Blocks/Set in the LSC were the most since 2010.
  • Our 2nd place position in the LSC in Aces/Set was our best position since 2007.
  • The 4th place our top OH held in the LSC Kills/Set ranking was highest for an MSU attacker on record (2004 the first available).
  • Our setter’s 3rd place in conference Assists/Set was the best ranking for an MSU player since 2008.
  • Our freshman MB’s 1.21 Blocks/Set in the LSC was the most for an MSU player since 2005.

We can add in the fact that our combined total of 27 wins over the last two seasons is the most since the 2010 and 2011 campaigns. We need 14 wins in 2018 for the best 3-year total since 2008 to 2010.

Thoughts on the season – big picture

Generally speaking, I am satisfied with the season. Was it disappointing to miss out on the conference tournament? Of course. The fact that we did so is a good lesson in how things you have no control over can decide your fate. We had more wins this year than last, but finished one place lower in the standings.

At the same time, though, it’s also a lesson in how you need to perform every time out. Had we won a couple of those matches we lost early in the season due to really poor performances, our season could have ended very differently.

I think one of the issues we had early in the conference season is that we were too focused on outcomes. In particular, I think there was too much pressure to win. That may sound a bit odd on the face of it, but stick with me.

The idea of reaching the NCAA tournament had taken hold in a lot of minds. It’s something the program hasn’t done since 2007, so obviously it’s a major goal. The problem, though, is only 3 or 4 teams from the conference make the NCAA tournament. We were a team that barely made it into the top 8 of the LSC in 2016. It’s not such an easy thing in a competitive conference to move up 4-5 spots from one year to the next.

So there was all this internal pressure to win at the start of the LSC season. This was in a group of players with no history of being in that kind of situation, and thus no real tools to handle it. It’s something we’re working on, but it takes time and experience. On top of that, the players are sick of losing – especially in conference. That can lead to playing not to lose rather than playing to win. I think we definitely had issues with that over the course of the season.

The combination of those two things made for some notable ups and downs in mentality. This wasn’t helped at all by the death of an MSU football player early in the season. That threw everyone for an emotional loop. These are young people who haven’t had to deal much with that sort of thing yet in their lives.

All in all, though, I think the season represented pretty good progress. We finished #16 in the NCAA South Central Region rankings, out of 33 teams. That’s up from #20 in 2016, and #25 in 2015. Importantly, we kept improving – and wanting to improve – right up to the end. That was definitely not the case in 2016 where we basically just survived the last couple of weeks of the season.

Thoughts on the season – the details

In any season there are areas which go well and those that don’t. The 2017 season was no different in that regard.

From a playing perspective, the major objective we had coming out of the 2016 season was better defense. Our block was poor and we didn’t dig nearly as many balls as we felt we should. We made defense the top priority for our off-season development. We definitely were much better in that arena this year. The one area we persistently struggled in, though, was defending against the right side attack.

The offense for me was a disappointment. We just never could get that going the way we wanted. Part of it was a decided lack of any real right side threat. We might have been able to get more there with a personnel change, but it would have meant significantly reducing our blocking presence. In any case, that’s a change we really couldn’t have made until later in the season given who was available and the progression of player development.

The other trouble area was the second OH position. The two players who took turns there struggled with their consistency and made far too many errors in attack. We were not helped by losing our freshman OH early on to a knee injury. She would have at least challenged for playing time.

One thing I like a lot is that our senior players went out on their best season at MSU. I mean that both in terms of team and personal performance. Our attacking players had their most kills and their best hitting percentages this year. Our defensive players had their most digs this year. And our setter had her most assists (and digs) this season. You expect that to be the case, but it doesn’t always work that way.

Looking forward

It will be an interesting situation for the program moving forward. Next season we will only have two players with more than a single season’s experience at MSU. Everyone else will either be 2017 or 2018 freshmen or transfers. On the one hand, that means little in the way of experience at our level of competition. On the other hand, it also means none of the baggage left over from the teams that finished last in the LSC in 2014 and 2015. In a way, now is when the real future for MSU Volleyball is shaped. That’s pretty exciting.

Considerations in serve reception ratings

In the article Scoring Serving and Passing Effectiveness I talk about the common usage of a 0-3 type of scale for rating serve reception. In this post, fellow volleyball blogger Hai-Binh Ly discusses how his definitions of these ratings have progressed. Basically, he’s reached the point of using very defined zones to judge a pass’s rating. These are the zones defined within the commonly used DataVolley statistical program. Ly outlines them in his post.

I have my concerns with rigid definitions. Ly mentions some of them with respect to grey areas, but I would focus more on the fact that they fail to account for setter athleticism. Simply stated, a pass that might only be a 1 for a given setter might be a 2 for a quicker one. It could even be a 3. Think about a tight pass that a short setter cannot handle, but a taller one has no problem with.

The thing we have to keep in mind is the underlying idea behind these pass ratings.

The intention was to speak to the probability of earning the sideout. This is what Dr. Jim Coleman had in mind when he developed the rating system. The premise is that a 3-pass results in a sideout some percentage of the time. A 2-pass, on average, sees a team sideout at some other frequency – most likely lower. And so on down the line. From this perspective, a team’s average pass rating indicates its approximate sideout rate.
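Coleman’s premise is easy to express numerically. The per-grade sideout rates below are invented for illustration; any real team would substitute its own figures.

```python
# Hypothetical sideout rates by pass grade on the 0-3 scale.
# Real values vary by team and level; these are made up.
SIDEOUT_BY_GRADE = {3: 0.68, 2: 0.58, 1: 0.45, 0: 0.00}

def implied_sideout_rate(grades):
    """Expected sideout rate implied by a sequence of pass grades."""
    return sum(SIDEOUT_BY_GRADE[g] for g in grades) / len(grades)

grades = [3, 2, 3, 1, 0, 2, 3, 2]
avg = sum(grades) / len(grades)
print(f"average pass rating: {avg:.2f}")
print(f"implied sideout rate: {implied_sideout_rate(grades):.1%}")
```

A discretionary system changes which grade gets assigned; a rigid system fixes the grades but leaves the per-grade sideout rates to vary from team to team.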

If pass ratings are going to approximate sideout success rates, then it makes sense to use a more discretionary rating approach. By that I mean rating passes based on the circumstances of the team in question. In other words, what can your setter do with the ball? Rigid definitions for each pass rating do not make sense in that context.

If, however, we want to compare serve reception across teams, or between players, then a more fixed system is more appropriate. In that case, we need a common system of measurement. That removes setter variability from the equation.

So which is best?

As a coach, it depends on your setters. Are they of similar quality? If so, you can use the more discretionary approach. If they are noticeably different, though, you probably have to go with a more rigid system. This is especially true if your passers do not work with each setter basically the same amount of time. It’s the only fair way to compare them.

Statistical analysis in volleyball recruiting

An article about Daryl Morey, General Manager of the NBA’s Houston Rockets, got me thinking about Moneyball for Volleyball. Should I trademark that phrase?

Using statistics in player evaluation

For those who don’t know, the “Moneyball” concept is where a sports organization uses statistical metrics to evaluate potential signings. This is in contrast to the old school eyeball analysis of scouts. The term comes from the Michael Lewis book of that title about how baseball’s Oakland A’s used statistical methods to evaluate players and built a highly competitive roster with limited resources. There is also a movie based on the book starring Brad Pitt. I recommend the book. It provides a bit more insight.

Before going too far, I should say the Morey article got my attention because of its link to behavioral economics. My PhD work was in a closely related field. The article’s focus is largely on the interview process teams use. It’s a long one, so give yourself a block of time to read it.

Anyway, back to the Moneyball idea. Statistics have long been part of volleyball. In recent years it’s gotten a lot more focus thanks to improved applications and data. Joe Trinsey, who worked with the USA women’s team, has been one of the leaders in that regard. Have a listen to the Coach Your Brains Out podcast he’s on (Part 1, Part 2) for a bit of what he’s looked at.

That stuff is all about analyzing our players and teams. And there’s also the scouting element. How are we most effective? What is the other team’s weakness? That sort of stuff.

Stats in volleyball recruiting

What we don’t see much, if anything, about is using stats in the recruiting process. I have no doubt they get used by professional coaches. When I evaluated American players to sign for Svedala I definitely looked at their college stats, though I don’t know how far others take it. One day maybe I will get Mark to talk about it on a Volleyball Coaching Wizards podcast.

But what about college recruiting?

How many college coaches evaluate recruit statistics? My guess is few, if any. I say that in part because of how much time they spend watching video and attending Juniors tournaments. That’s basically the definition of old school scouting as described in Moneyball. The question, though, is whether they could actually go with analytics. I think most will argue that they can’t.

Why? Lack of useful data.

Issues with statistical data in volleyball recruiting

Yes, it is true that lots of high school teams keep stats these days. And much of that information is public. Juniors clubs, though, don’t really publish that information. That’s assuming they even collect it in the first place. My guess is most don’t in any comprehensive fashion. Though a few probably do.

Even if a high school or Juniors team does collect and publish stats, there is the question of reliability. Who is recording the stats and do they know what they’re doing? Even at the college and professional level there are issues regarding the quality and accuracy of the stats we get. Imagine a bunch of junior varsity kids taking them!

Finally, there is the question of comparability. What can you ascertain from a given player’s high school stats? What do they really say about that player? We want to gauge how a player will do at our level. I think, however, most college coaches don’t know how high school and/or Juniors stats translate. Juniors stats are probably a bit better, as college coaches very often understand the levels of play across the clubs. It can be a lot harder with high school stats. Unless you recruit in a very small area, you struggle to know the caliber of the schools your recruits play against, and more importantly how that compares to a recruit from a different part of the country.

One exception

The exception to the above is transfer prospects. Since those are college players, it is easier to draw a comparison. True, at the junior college level you often have the same statistics issues as you have in high school in terms of quality. It is easier there, though, to know the relative level of play the stats come from. And of course a player transferring within your own level of four-year school play is even more straightforward.

I would say the junior college to four-year college transfer process is most akin to the college-to-professional evaluation process. It provides an opportunity to make better use of statistics.

Are we doing enough?

Those are, I suspect, the reasons college coaches would put forward as to why they don’t use stats in recruiting. Are they valid reasons, though? Should high school and/or Juniors stats get more use? Or should we perhaps base things most heavily on something like the VPI developed by the AVCA?

I am not suggesting we shift completely to an analytic approach. I think most, if not all, of us agree that there is a personality element which must be considered. After all, we’re talking about a sport where one individual’s success is highly dependent on the performance of their teammates. Still, it does seem like some work on what statistics are predictive of success at the next level is worth doing.

Tracking block and defense improvement

During the 2016 season, one of the things we focused on with the Midwestern State team as the Lone Star Conference season progressed was improvement in our block and defense. Our block timing was poor. That meant not only few blocks, but also few digs. We also needed improvement in defensive positioning and actual digging. We were bottom of the league standings in both categories at one point, I believe.

Per set figures

Toward the end of October I ran some numbers to gauge our progress. I first started with blocks/set and digs/set. Those are the commonly reported figures, so it made sense.

Through the first round of conference matches (10 total), we averaged 1.17 blocks and 11.16 digs per set. Over the course of the first five matches of the second half of the season we averaged 1.57 and 15.47 respectively. That’s pretty good.

Percentages

A coaching friend suggested I look instead at block and dig percentages. Basically, that divides those figures by the opponent’s total number of non-error attacks (with blocked balls excluded from the error count). Since attack numbers can vary from match to match – and five set matches always mess with per set averages – the percentage approach is the better way to go.

For the first half of the season our block percentage was 4.5%. Our dig percentage was 42.1%. That adds up to a total “stop” percentage of 46.6%. For the first five matches of the second half the comparable percentages were 4.9%, 48.3%, and 53.2%. Again, gains across the board.
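The calculation described above can be sketched as a small function. The numbers in the example are hypothetical match totals, not the actual Midwestern State figures:

```python
def stop_percentages(blocks, digs, opp_attacks, opp_errors):
    """Blocks and digs as a share of opponent non-error attacks.

    Blocked balls are excluded from the error count, so a blocked
    attack still counts in the denominator.
    """
    non_error_attacks = opp_attacks - opp_errors
    block_pct = blocks / non_error_attacks
    dig_pct = digs / non_error_attacks
    # "Stop" percentage is simply the two combined.
    return block_pct, dig_pct, block_pct + dig_pct

# Hypothetical single-match totals for illustration.
b, d, stop = stop_percentages(blocks=6, digs=52, opp_attacks=130, opp_errors=10)
print(f"block {b:.1%}, dig {d:.1%}, stop {stop:.1%}")
```

Summing percentages over a half-season the same way (season totals in, one division out) avoids the distortion that five-set matches introduce into per-set averages.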

In all but one of the second half matches our block percentage was higher than against that same team the first time around. The same was true of the dig percentage (a different match). Similarly, when looking at the total figure, only one match was worse the second time than the first.

Limitations

While these comparisons tell us the team was more effective in defense for the first five matches of the second half of the conference season, there is a limit as to how far you can take the analysis. What happens on the other side of the net leading to an attack matters. If you do a better job putting a team in difficulty through tough serves and/or good attacks, you will likely find it easier to block or dig their attacks.

Also, ultimately what you want from your defense is for it to generate point scoring. That means it’s worth extending the analysis of something like dig percentage to see how many swings you get from those digs and how efficiently they convert into points.
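One way that extension might look, assuming you tag transition swings and kills back to the digs that created them (the counts below are made up for illustration):

```python
def dig_conversion(digs, transition_swings, transition_kills):
    """How often digs turn into swings, and swings into points.

    Assumes you can attribute transition swings and kills to digs
    when taking stats, which not every stat sheet supports.
    """
    swings_per_dig = transition_swings / digs
    kill_rate = transition_kills / transition_swings
    points_per_dig = transition_kills / digs
    return swings_per_dig, kill_rate, points_per_dig

# Hypothetical match: 50 digs led to 38 transition swings and 16 kills.
spd, kr, ppd = dig_conversion(digs=50, transition_swings=38, transition_kills=16)
print(f"{spd:.2f} swings/dig, {kr:.1%} kill rate, {ppd:.2f} points/dig")
```

A team could dig a high percentage of balls but convert poorly, which this kind of breakdown would expose.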

Is a block a hitting error?

A reader asked me the following question about hitting and blocking statistics.

Is a won block counted as a hitting error for the corresponding hitter?

In U.S. volleyball the answer to that question is usually “Yes.” Elsewhere in the world, I think the answer is “No.”

I say that based on my experience as a coach in Sweden, and also from statistics in European leagues. The common practice there is to break out actual hitting errors from blocked balls. This might just be a function of DataVolley reporting, though.

Which is the right way? That is up to the statistics user.

From the perspective of reporting, the trend is to take a positive view. By that I mean they want to report players earning points rather than players giving up points. In that mindset a block is a positive thing for the defensive player. It is a negative for the hitter.

As coaches, however, we must decide which way to count them. It is about which approach provides the best information for us in the context of our own teams. There is definitely value in splitting errors and blocked balls, which standard NCAA box score reporting does not do.

Personally, I like including blocked balls for hitting efficiency [ (kills-errors)/total attempts ]. There is value in more granular reporting, though.
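To make the difference concrete, here is the efficiency formula scored both ways for the same hypothetical attack line (the totals are invented for the example):

```python
def hitting_efficiency(kills, errors, attempts):
    """Standard hitting efficiency: (kills - errors) / total attempts."""
    return (kills - errors) / attempts

# Hypothetical line: 12 kills, 3 unforced errors, 2 blocked, 30 attempts.
kills, unforced_errors, blocked, attempts = 12, 3, 2, 30

# US-style: blocked balls count as hitter errors.
us_style = hitting_efficiency(kills, unforced_errors + blocked, attempts)
# Split-out style: blocked balls excluded from the error count.
split_style = hitting_efficiency(kills, unforced_errors, attempts)

print(f"blocks as errors: {us_style:.3f}, blocks split out: {split_style:.3f}")
```

The same performance produces two noticeably different efficiency figures, which is why it matters to know which convention a stat sheet uses before comparing players across leagues.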