I’m always interested in the things that coaches can control. Unfortunately, there are 100 times more things that they do not control than things that they do. But by looking at what they can control, we can get a sense of a team’s strategy and how well the coaching staff puts their players in position to succeed. The problem is that it’s devilishly difficult to separate conscious coaching decisions from the bounce-by-bounce randomness of a game. One area where I think we do have some clarity is roster depth.
Most teams have roughly the same number of available players. How much a player sees the field is mostly driven by coaching decisions (with the rest down to injury and fatigue). So by looking at the distribution of playing time, we have a rare chance to examine coaching strategy.
If you can’t measure it, it doesn’t count
To that end, since last season, we’ve calculated a roster depth metric that I’ve termed “Player Contributions.” The basic idea is to identify how many players on a roster show up in the play-by-play and adjust that number based on how evenly the contributions are spread. The result is a list of each team, sorted by how “deep” their roster is.
For example, let’s say that two teams (A and B) each have 100 play events in the play-by-play data. If Team A had 10 players each make 10 plays, and Team B had 9 players each make 11, then we would say Team A has a deeper roster, and they would have a higher Player Contributions score. In the simplest terms, higher scores equal deeper rosters; put another way, a higher score means a team’s reserves have gotten more experience than those of lower-scoring teams. I think of the actual value as a count of “productive” roster spots (as opposed to players who contribute mostly via practice).
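The post doesn’t spell out the exact evenness adjustment, but one standard way to turn a distribution of play counts into an “effective number of contributors” is the exponential of the Shannon entropy of each player’s share of plays. A minimal sketch under that assumption (the actual Player Contributions formula may differ):

```python
import math

def effective_contributors(plays_per_player):
    """Effective number of contributors: exp of the Shannon entropy
    of each player's share of total plays. Perfectly even shares
    return the raw head count; uneven shares return a smaller number."""
    total = sum(plays_per_player)
    shares = [p / total for p in plays_per_player if p > 0]
    entropy = -sum(s * math.log(s) for s in shares)
    return math.exp(entropy)

team_a = [10] * 10   # 10 players, 10 plays each
team_b = [11] * 9    # 9 players, 11 plays each
print(round(effective_contributors(team_a), 1))  # → 10.0
print(round(effective_contributors(team_b), 1))  # → 9.0

# An uneven roster gets discounted: 4 players with 40/30/20/10
# plays count as roughly 3.6 "productive" spots, not 4.
uneven = [40, 30, 20, 10]
```

Because shares are normalized, the metric is comparable across teams with different total play counts.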
Economists, rise up!
There is an interesting strain of portfolio theory and resource allocation here (those in my audience who happen to be economists just got really excited). On one hand, a deeper roster has several benefits: it mitigates injury risk, keeps the top players fresher, and helps ensure that reserves are ready to step up in the biggest moments.
On the other hand, in any given time/score situation, you maximize your chance of winning by playing your top players. That means that the goals of A) winning a given game and B) developing a deep roster are fundamentally at odds. An aggressive strategy to absolutely maximize your chances in a given game will inevitably lead to a diminishing of roster depth.
Fortunately, the goal is not either A or B, but C: winning as many games as possible, especially in the post-season. Hence the resource allocation problem. Assuming you are a team that is likely to make the post-season, does sacrificing some win probability here or there in the pursuit of a deeper roster increase your chances of winning in May?
An end in mind, but it’s fuzzy
I was not sure what I would find going into this analysis.
On the one hand, maybe it doesn’t matter. Maybe the gaps between games allow enough rest time that fatigue to the top line is not a huge deal come May. Maybe the effect is small enough to be hidden by the random bounces that can often determine a game. Maybe reserve players who get more reps don’t benefit from the extra experience in a meaningful way.
On the other hand, fresh legs could be such a big deal that saving an extra 15-20 minutes of game action from your top line in Feb/Mar pays dividends in the warmer post-season. Maybe having that end-of-the-bench guy more prepared when he is thrust into the spotlight is the difference more often than not. It really comes down to headwinds and tailwinds. At the end of the day, does the investment in the end of the bench increase your tailwinds enough that it is worth the potential loss of a win here or there earlier in the season?
The best we can hope for is some sort of discernible pattern. You can make assertions with a discernible pattern.
A discernible pattern emerges!
And that is more or less what I found when working through this. The thesis statement is: all things being equal, a team that has greater roster depth will win more games than expected based solely on the difficulty of the schedule.
And so it is…
The chart above contains every game from the past 5 seasons (2,608 to be exact). The x-axis (Depth Diff) represents the difference in final roster depth between the two teams. The y-axis represents the win percentage for the team with greater roster depth. For example, there were 279 games where one team had approximately 2 more productive roster spots than their opponent; the deeper team won approximately 63% of those games. Conversely, when there was virtually no difference between the two, each side won roughly half the time.
As you can see from the chart, teams with more productive roster spots have won more games, on average, over the past 5 seasons.
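For concreteness, the bucketing behind a chart like this can be sketched as follows. The data here is hypothetical, and the site’s actual pipeline isn’t shown; this is just the shape of the computation:

```python
from collections import defaultdict

def win_rate_by_depth_diff(games, bucket_width=1.0):
    """Bucket games by the depth difference between the two teams and
    compute the win rate of the deeper team within each bucket.
    `games` is a list of (depth_diff, deeper_team_won) pairs."""
    wins = defaultdict(int)
    counts = defaultdict(int)
    for diff, deeper_won in games:
        bucket = round(diff / bucket_width) * bucket_width
        counts[bucket] += 1
        wins[bucket] += int(deeper_won)
    return {b: wins[b] / counts[b] for b in sorted(counts)}

# Hypothetical records: (depth difference, did the deeper team win?)
games = [(0.1, True), (0.2, False), (2.1, True), (1.9, True), (2.0, False)]
print(win_rate_by_depth_diff(games))
```

Each bucket’s win rate becomes one point on the chart, with the bucket midpoint on the x-axis.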
Doesn’t appear to be a selection issue
My first thought was, ok, well then teams with more productive roster spots must be better, right? That would certainly explain the increasing win rates. But that doesn’t appear to be the case.
I plotted every game in our dataset, with the difference in depth between the two teams on the x-axis and ELO rating on the y-axis. If better teams had deeper rosters, we should see a fairly strong correlation here (it would have to be strong, given the correlation in the chart above). Anecdotally, when we look at the list of top depth teams, there are some good ones and some not-so-good ones.
Instead, we see a very weak positive correlation. So weak as to say that there is almost no relationship between being a strong team and having a deep roster. (Note: since our ELO ratings are still stabilizing, I checked what happened if I used only the last 3 years of games; the correlation got even weaker.)
Still, even though the evidence suggests that ELO strength and roster depth aren’t related, I wanted to check my results by removing team strength entirely. In theory, if I could adjust each game’s outcome to strip out team strength, I’d isolate the roster-depth variable more cleanly. Doing that should make it more obvious whether this is a depth effect or some other factor at play.
Fortunately, ELO ratings can be converted into pre-game win probabilities, which means we can calculate an “excess wins” value for each game. If a team’s ELO rating suggests they should win a game 40% of the time, a win would score them 0.6 excess wins. In doing this, we take the pre-game team strengths out of the equation. Given the (weak) positive correlation between ELO and team depth, we would expect the relationship between team depth and win rate to weaken after this normalization. The key question is: by how much?
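The conversion uses the standard Elo expected-score formula (the site’s exact rating scale may differ; 400 is the conventional divisor):

```python
def elo_win_prob(rating_a, rating_b):
    """Standard Elo expected score for team A against team B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def excess_wins(rating_a, rating_b, a_won):
    """Actual result minus pre-game win probability. A win as a
    40% favorite scores +0.6; a loss would score -0.4."""
    return (1.0 if a_won else 0.0) - elo_win_prob(rating_a, rating_b)

# A 70-point Elo deficit is roughly a 40% pre-game win probability.
print(round(elo_win_prob(1500, 1570), 2))             # → 0.4
print(round(excess_wins(1500, 1570, a_won=True), 2))  # → 0.6
```

Summing excess wins over a season tells you how far a team outran (or underran) what its ratings alone predicted.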
Short answer: it’s still a pretty strong positive correlation. The greater your roster depth advantage, the more likely you are to win, even after accounting for team strength.
Editor’s Note: the y-axis represents excess wins over the course of a season rather than per game. Most teams play 14-15 games in a season. A team with 4 more “productive” roster spots than their opponents would be expected to earn an extra win above expectations over the course of a season. Also, I puzzled a bit over the -1.25 value for the no-difference bucket (I expected it to sit at roughly zero excess wins). What happened was that teams with a negligible edge in depth won 52% of their games, but based on the actual ELO values, they should have won 60%. I had been conflating no difference in roster depth with the teams being comparably strong, which doesn’t have to be the case.
To add some context, Virginia led the nation in roster depth (again) with a total of 19.9 “productive” roster spots. The median team, Boston U, had 15.2. That difference of 4.7 productive roster spots works out to something like an extra 2 wins per season (because the zero-difference bucket has underperformed by about 1.2 wins per season). Granted, we don’t think about wins in lacrosse the same way you might in baseball or basketball, but imagine a Cavaliers team with two fewer wins this year. That hypothetical team would almost certainly not be preparing for a first-round NCAA game right now.
Of course, the question becomes, did Virginia lose any games that they might have won if they had played fewer reserves? And of course, that is unanswerable, but given that Virginia is always near the top of the player contributions list, Lars Tiffany must feel that the benefits outweigh the risks (his up-tempo offense probably plays a role too).
Is this a fatigue or an experience thing?
This is where things started to get confusing. And maybe you just chalk it up to randomness that comes from splitting the sample in two, but I looked at the Feb/Mar games vs the Apr/May games separately. My thought was this: if a team having a deeper roster means that fatigue is less of an issue, then you’d expect the effect of roster depth to show up more in the latter stages of the season.
Not so much…
In fact, we see exactly the opposite. The advantage that comes from roster depth is much stronger in the first half of the season than in the second. Indeed, there isn’t much of a relationship at all between roster depth and excess wins in Apr/May.
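The split itself is simple to sketch. Assuming each game record carries a month field and the deeper team’s excess-wins value (a hypothetical structure; the actual dataset schema isn’t shown here), the comparison could look like:

```python
def mean_excess_wins_by_half(games):
    """Compare the depth effect early vs late in the season.
    `games` is a list of dicts with 'month' (1-12) and
    'deeper_team_excess' (excess wins scored by the deeper team).
    Returns the mean excess wins for Feb/Mar and Apr/May separately."""
    halves = {"Feb/Mar": (2, 3), "Apr/May": (4, 5)}
    result = {}
    for label, months in halves.items():
        vals = [g["deeper_team_excess"] for g in games if g["month"] in months]
        result[label] = sum(vals) / len(vals) if vals else None
    return result

# Hypothetical records
games = [
    {"month": 2, "deeper_team_excess": 0.4},
    {"month": 3, "deeper_team_excess": 0.2},
    {"month": 4, "deeper_team_excess": -0.1},
    {"month": 5, "deeper_team_excess": 0.1},
]
print(mean_excess_wins_by_half(games))
```

A large Feb/Mar mean alongside an Apr/May mean near zero is the pattern described above.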
If you are thinking about roster depth as a way to tip the scales come tournament time, you are probably barking up the wrong tree. This would seem to disprove the idea that building roster depth is an antidote to fatigue or some way to build confidence in your 2nd lines. I get that maybe the “confident reserve” myth is overstated, but fatigue should be lower for teams with greater depth; the fact that this doesn’t seem to have an effect on winning percentage is somewhat surprising.
Applying the economic theory is always the hardest part
Ok, so the benefit isn’t higher win rates in the second half of the season. But seeing it show up early makes me think I misjudged the concept of depth altogether. This suggests that building roster depth is less an investment made in the early part of the season, and more a strategy to start the season strong in the first place. There are two possibilities for why we could see the first-half vs second-half split.
The first (and I think more likely) is that early-season lacrosse, when teams are still getting into a rhythm and “finding themselves”, is a time when this advantage is more meaningful. Maybe fatigue is more of an issue early in the year because players are not yet in midseason-form? This would mean that playing reserves allows teams to spell their better players at a time when fatigue is more of an issue than it is later in the season. (Would love to hear from someone with some experience on the topic…)
The second possibility is that playing more players is always better, but as teams get into crunch time, they start to rely more heavily on their top lines. This would mean that teams are essentially wasting a potential advantage that they gave themselves by playing a deeper roster early in the year. This is certainly the more intriguing possibility. But it would mean that a team is better off giving their reserves playing time in May instead of riding their stars; that is a hard sell.
Who stands to gain?
With this theoretical exercise more or less complete, let’s turn our attention to the teams on the field. Even though the May effects of roster depth don’t seem especially strong (so maybe nobody gains from here on out), it’s worth knowing which teams have built up their depth most effectively. And among the top 10, we see 6 teams with games left on the schedule:
| Team | Weighted # of Contributors / Gm |
| --- | --- |
| Mount St Marys | 18.1 |
After going through this analysis, it seems that Virginia, Albany, Yale, Syracuse, Duke, and Loyola may not benefit from their roster depth going forward. But it seems clear that the successful seasons that got them to this position were driven at least in part by the investments they made in depth. The Terps are the counter-example: they have the 8th thinnest roster of any team in D1, but they managed to snag the #1 overall seed.
One shining moment
So all that is left is to shrug and chalk it up to another analysis that doesn’t fit cleanly into any single narrative. Certainly some more research could be done to flesh out exactly why the first-half/second-half split looks the way it does. I worry that looking at specific games might be too narrow, but you could split teams into two buckets: those with the same or more player contributions in the second half as in the first, and those with fewer. That might help distinguish whether teams that continue to leverage their depth gain an advantage later in the year.
Put it on the off-season list of to-dos.
Regardless, whatever the reason for the effect on win rates, it’s clear that there is a benefit to playing more players and cultivating a deeper roster. Coaches, you can control this; don’t let the opportunity pass you by.