Notre Dame has been riding its stars so far
Lacrosse is tiring. I don’t think I’d surprise anyone by saying that. So teams do what they can to prepare their players, primarily through conditioning, so that they don’t wear down, get injured, or become ineffective over the course of the season. But there is a second lever at coaches’ disposal that can be equally effective at reducing fatigue, though it may not be used as often (or as enthusiastically) as good old-fashioned running: resting players. Pro sports are generally more cognizant of the need for rest, probably because their players are by definition older. You see it especially in baseball (super long season) and basketball (super condensed season that is really hard on the knees).
The challenge with rest is that the players you need the most down the stretch are usually the ones you need the most in order to get to “the stretch.” This puts coaches in a bind: do you rest your stars when games are in doubt, at the risk of losing position, or do you ride your horses and grab the best postseason slot? No easy answer, to be sure. If we knew where we’d end up at the end of a game or at the end of the season, playing time decisions in March would be infinitely easier. The other side of the coin is that resting stars means more playing time for role players. This has a double benefit: it builds their capabilities should they be needed later, and it motivates them to grind out practices during the week. Each situation is different, but all of these factors must be balanced.
Calculating effort by looking at play logs
We aren’t going to dive into optimal rest strategies (in this post, anyway). But we did want to share some stats for this young 2017 campaign to show which teams have been better at taking pressure off their stars. Since we do not have actual minutes played, we’ve used plays as a proxy. In general, we’d expect a midfielder logging 40 minutes to be involved in more plays than one logging 20. And since we do have play-by-play game logs (tagged with player names) for 119 games so far this season, we have the data to do it. An added benefit of using play contributions rather than minutes is that it accounts for the fact that attackmen and defensemen are on the field, doing absolutely nothing, for quite a bit of time.
To put this into a simple example, let’s look at the first few minutes of an imaginary game:
- Player A from Team 1 picks up a ground ball off the faceoff
- Player B from Team 2 strips Player C from Team 1 and picks up the ground ball
- Player D from Team 2 shoots wide but it’s backed up
- Player D from Team 2 shoots, but it’s saved by Player E from Team 1
In that stretch, there were 7 attributable plays (attributable meaning plays that are commonly tagged with player names in game logs; untagged events, like the backup of a wide shot, are usually lost): 2 shots, 1 save, 1 turnover allowed, 1 turnover forced, and 2 ground balls. That’s 3 for Team 1 and 4 for Team 2. Player B from Team 2 was credited with two plays (a forced turnover and a ground ball), and since Team 2 has 4 plays total, we would say that Player B contributed 50% of their plays (Player D has the other 50%). So Team 2 had two contributors, while Team 1 had three. While this is just a snippet of a game, we would say that Team 1 spread the effort more evenly. It’s not a perfect proxy for effort or intensity, but given the data we’ve got, it provides a reasonable estimate.
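The counting logic above can be sketched in a few lines. This is a minimal illustration using the imaginary sequence from the example (the team and player names are from that example, not real log data):

```python
from collections import Counter

# Attributable plays from the example sequence, tagged (team, player).
# Player B's strip yields two plays for Team 2 (forced turnover + ground
# ball) and one for Team 1 (turnover allowed by Player C). Player D's
# backed-up shot still counts as a shot, but the backup itself is
# untagged in game logs and therefore lost.
plays = [
    ("Team 1", "A"),  # ground ball off the faceoff
    ("Team 1", "C"),  # turnover allowed
    ("Team 2", "B"),  # forced turnover
    ("Team 2", "B"),  # ground ball
    ("Team 2", "D"),  # shot (wide, backed up)
    ("Team 2", "D"),  # shot (saved)
    ("Team 1", "E"),  # save
]

def contribution_shares(plays):
    """Return {team: {player: share of that team's plays}}."""
    team_totals = Counter(team for team, _ in plays)
    player_counts = Counter(plays)
    shares = {}
    for (team, player), n in player_counts.items():
        shares.setdefault(team, {})[player] = n / team_totals[team]
    return shares

shares = contribution_shares(plays)
print(shares["Team 2"]["B"])                    # 0.5 (2 of Team 2's 4 plays)
print({t: len(p) for t, p in shares.items()})   # contributors per team
```

The contributor count per team is simply the number of players with at least one attributable play, which is the metric used in the tables below.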
Spreading the wealth vs Riding your horses
So for the 119 games so far this season, which teams are spreading it around the most?
Team | Avg Margin | Goal Differential | Avg Contributors |
---|---|---|---|
Marquette | 16.0 | 16.0 | 31 |
Maryland | 7.7 | 7.7 | 25 |
St. John’s | 9.0 | -9.0 | 25 |
Syracuse | 5.0 | 4.3 | 25 |
Boston U | 4.4 | 4.4 | 24 |
Duke | 7.8 | 5.4 | 24 |
Denver | 9.5 | 9.5 | 24 |
Robert Morris | 6.2 | 4.2 | 24 |
Johns Hopkins | 6.5 | 6.5 | 24 |
Quinnipiac | 8.5 | -8.5 | 24 |
Interestingly, we have some powerhouses and some also-rans. There is a positive correlation between goal differential and the number of contributors, so it’s a bit surprising to see a team like Drexel (0-3 on the year) towards the top end of the full list. Perhaps you interpret that as the coaching staff making a commitment to get the whole team experience, even if it means losing a winnable game early: an investment in the future, of sorts. For a team like Maryland, there is a more nuanced story. Their average number of contributors is high, but that masks the game against Yale, where only 18 players were attributed with at least one play. In the games they win by 8+ goals, their number of contributors is more like 29. It’s a classic example of getting the end of the bench involved in blowouts while riding the stars in close games. Of course, it comes with a cost: your role players gain experience, but only in non-pressure situations. How will they react if shoved into the spotlight in May?
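To make the Maryland-style pattern concrete, here’s a small sketch of how we can split contributor counts by game margin. The per-game numbers below are hypothetical, chosen only to mimic the pattern described, not Maryland’s actual game logs:

```python
# Hypothetical per-game records for one team:
# (goal margin, number of players credited with at least one play).
games = [(12, 30), (9, 29), (8, 28), (3, 22), (1, 18)]

def avg_contributors(games, min_margin=None, max_margin=None):
    """Average contributor count over games in a margin band."""
    sel = [c for m, c in games
           if (min_margin is None or m >= min_margin)
           and (max_margin is None or m < max_margin)]
    return sum(sel) / len(sel)

print(avg_contributors(games))                # 25.4 overall
print(avg_contributors(games, min_margin=8))  # 29.0 in blowouts (8+ goals)
print(avg_contributors(games, max_margin=8))  # 20.0 in closer games
```

A healthy overall average can hide exactly this split: deep rotations in blowouts, a short bench when the game is in doubt.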
And the teams that have ridden their stars the most?
Team | Avg Margin | Goal Differential | Avg Contributors |
---|---|---|---|
NJIT | 10.2 | -10.2 | 15 |
Binghamton | 3.0 | 0.0 | 16 |
Navy | 4.7 | -0.7 | 16 |
Georgetown | 4.3 | -4.3 | 17 |
Manhattan | 4.8 | -3.8 | 17 |
Stony Brook | 4.5 | 4.5 | 18 |
Hobart and William Smith | 5.0 | 1.0 | 18 |
Sacred Heart | 3.2 | 2.2 | 18 |
Yale | 3.0 | 2.0 | 18 |
VMI | 8.0 | -7.0 | 18 |
At the other end, we see teams like Stony Brook, Notre Dame (53rd out of 70), and Harvard (59th out of 70), a combined 8-0 with a combined 5.3 goal differential, who have averaged about 18 contributing players per game. Granted, there may be injuries or other factors at play here, but seeing these teams ride such a small number of players this early might raise concerns about fatigue later in the season. It will be very interesting to see whether these trends continue.
Investing in the future vs short-termism
Short-termism is a common bogeyman in economic circles: companies chasing EPS ahead of the next earnings call instead of investing in the future of the business. We can apply the same principle here to see which teams are making an effort to spread it around, even in situations where a game could be on the line. To do that, we looked at the average goal differential vs the average number of contributors. Not surprisingly, the teams with the highest and lowest goal differentials are generally the ones with the highest number of contributors. When the game isn’t close, the cost of getting to the end of the bench is nil. When games are close, the value of the experience for role players goes up, but so does the risk; this is where the most interesting examples are.
And in fact, when we plot each team by the average margin of victory (x-axis) and the average number of contributors (y-axis), some teams do stand out. Syracuse, Providence, and Hopkins (to name just a few) seem to be doing a good job of spreading the effort around, more than we’d expect given the average margin of their games. If we had to call out a few teams for short-termism, it would be Notre Dame, Navy, and NJIT. Plotting the teams this way also lets us see who might be justified in riding their stars given the closeness of their games: Penn stands out as a great example of this. (A full listing can be found below.)
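One simple way to surface those stand-outs is to fit contributors against average margin and look at the residuals: teams above the fit line use more players than their typical margin would predict, teams below use fewer. The sketch below does this for a handful of teams using numbers from the appendix table; the linear fit itself is our illustrative choice, not something from the original analysis:

```python
import numpy as np

# A few (avg margin, avg contributors) pairs from the appendix table.
teams = {
    "Marquette": (16.0, 31), "Syracuse": (5.0, 25),
    "Johns Hopkins": (6.5, 24), "Providence": (1.3, 21),
    "Notre Dame": (8.5, 19), "Penn": (1.0, 19),
    "Navy": (4.7, 16), "NJIT": (10.2, 15),
}
x = np.array([m for m, _ in teams.values()])
y = np.array([c for _, c in teams.values()])

# Fit contributors ~ margin; the residual shows whether a team spreads
# the effort more (+) or less (-) than its game margins would predict.
slope, intercept = np.polyfit(x, y, 1)
residuals = {t: c - (slope * m + intercept) for t, (m, c) in teams.items()}
for team, r in sorted(residuals.items(), key=lambda kv: -kv[1]):
    print(f"{team:14s} {r:+.1f}")
```

On this subset, Syracuse comes out well above the line while Notre Dame and NJIT sit below it, matching the eyeball read of the scatter plot. Penn lands close to the line: few contributors, but its games are also the closest.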
What does it mean?
To be fair, it’s super early; these numbers will undoubtedly change as the season progresses. But it does give us something to keep an eye on. Will Notre Dame start to spread the effort around more as the season progresses? Will the experience being gained by the Providence bench pay off down the road?
We will also be able to look at specific outcomes as the season progresses. For example, does a team with fewer contributors lose effectiveness as a unit in the 4th quarter as we get into April and May? Can we see fatigue show up in ground ball stats? At the very least, this type of analysis gives us a peek into the mindset of coaching staffs around the country.
Appendix
The table below shows each team along with the average margin of their games, their own goal differential, and the average number of contributors per game. Note: these figures may not match full team statistics because the analysis was performed only on games with available game logs.
Team | Avg Margin | Goal Differential | Avg Contributors |
---|---|---|---|
Marquette | 16.0 | 16.0 | 31 |
Maryland | 7.7 | 7.7 | 25 |
St. John’s | 9.0 | -9.0 | 25 |
Syracuse | 5.0 | 4.3 | 25 |
Boston U | 4.4 | 4.4 | 24 |
Duke | 7.8 | 5.4 | 24 |
Denver | 9.5 | 9.5 | 24 |
Robert Morris | 6.2 | 4.2 | 24 |
Johns Hopkins | 6.5 | 6.5 | 24 |
Quinnipiac | 8.5 | -8.5 | 24 |
Virginia | 4.5 | 4.0 | 24 |
Drexel | 5.7 | -5.7 | 23 |
Army | 6.2 | 5.2 | 23 |
Loyola MD | 6.0 | 5.3 | 23 |
Brown | 12.0 | 4.0 | 23 |
North Carolina | 7.5 | 3.5 | 23 |
Princeton | 6.7 | 5.3 | 22 |
Penn State | 6.2 | 6.2 | 22 |
High Point | 8.0 | -2.0 | 22 |
Mercer | 4.7 | -3.3 | 22 |
Michigan | 8.0 | 3.6 | 21 |
Albany | 7.0 | 6.3 | 21 |
Cornell | 9.0 | -9.0 | 21 |
Massachusetts-Lowell | 5.2 | -2.8 | 21 |
Saint Joseph’s | 10.5 | -10.5 | 21 |
Bellarmine | 3.7 | -3.0 | 21 |
Providence | 1.3 | 0.7 | 21 |
Lehigh | 6.0 | -0.5 | 21 |
Monmouth | 1.8 | 0.2 | 21 |
Siena | 6.5 | -6.5 | 21 |
Massachusetts | 5.2 | -5.2 | 20 |
Ohio State | 5.2 | 5.2 | 20 |
Richmond | 6.7 | 6.7 | 20 |
Fairfield | 4.8 | -3.8 | 20 |
Detroit | 4.6 | -4.2 | 20 |
Air Force | 7.2 | 0.2 | 20 |
Canisius | 8.5 | -3.5 | 20 |
Hofstra | 6.7 | 6.7 | 20 |
Lafayette | 7.2 | -7.2 | 20 |
Rutgers | 4.7 | 4.7 | 20 |
Wagner | 2.8 | -2.2 | 19 |
Bryant | 3.6 | 1.6 | 19 |
Bucknell | 2.0 | -1.0 | 19 |
Mount St. Mary's | 7.8 | -7.8 | 19 |
Vermont | 2.4 | 0.8 | 19 |
Dartmouth | 4.7 | -3.3 | 19 |
Holy Cross | 5.0 | -1.0 | 19 |
Jacksonville | 6.5 | -6.5 | 19 |
UMBC | 9.2 | -3.8 | 19 |
Villanova | 3.5 | -2.5 | 19 |
Delaware | 5.2 | 1.8 | 19 |
Hartford | 4.7 | -4.7 | 19 |
Notre Dame | 8.5 | 8.5 | 19 |
Penn | 1.0 | 1.0 | 19 |
Towson | 4.3 | 1.7 | 19 |
Cleveland State | 11.8 | -11.8 | 18 |
Marist | 4.2 | 2.2 | 18 |
Colgate | 4.0 | -0.7 | 18 |
Harvard | 4.0 | 4.0 | 18 |
Furman | 3.8 | -0.6 | 18 |
VMI | 8.0 | -7.0 | 18 |
Yale | 3.0 | 2.0 | 18 |
Sacred Heart | 3.2 | 2.2 | 18 |
Hobart and William Smith | 5.0 | 1.0 | 18 |
Stony Brook | 4.5 | 4.5 | 18 |
Manhattan | 4.8 | -3.8 | 17 |
Georgetown | 4.3 | -4.3 | 17 |
Navy | 4.7 | -0.7 | 16 |
Binghamton | 3.0 | 0.0 | 16 |
NJIT | 10.2 | -10.2 | 15 |
March 3, 2017 @ 4:50 pm
I don’t mean to be critical, but, rather, provide constructive criticism. I like what you guys are doing, but there are severe statistical problems with this analysis. I get it – the season starts, so people want content, like rankings and stats, but an analysis needs to have some meat behind it to mean something. You said it yourself – very few observations. Disparity in opponents is going to be a huge influence, as well. Specifically, about your scatter plot of Avg Contributors/game v Scoring Margin, Marquette is an obvious outlier (maybe NJIT, and ABS(Margin)?), which influences the trendline. My bet is that without Marquette, it’s pretty flat. Is it significant (p)? If it is, how strong of a relationship (r-sq)?
To be a total armchair QB, lol, look at last year’s data (is it significant?), then compare to what teams are doing this year. Particularly, has there been a change of philosophy that has given different results? Navy underperforming – are they using fewer players than last year? PSU, BU over-achieving – are they using more players? Is this something that normally changes throughout the season?
Right now, all you really have is this sentence:
“At the very least, this type of analysis gives us a peek into the mindset of the coaching staffs around the country.”
Yes, I’m bored today, lol. I think you guys are on the right track, but don’t cheapen your product with crappy analyses.
March 3, 2017 @ 5:32 pm
Fantastic points, I really appreciate the comments. (I saw your thoughts on Twitter last night too, but had to get to bed.) I 100% agree with your comments about statistical rigor and that we’ve been a bit lax in adhering to the rules of quality analysis to date. Guilty; my college professors would fail me.
The plan is to update these analyses in two ways: 1) which you suggested, is to look at last year’s data and see how it looks for a full year and 2) update it for 2017 as more game data becomes available. I’m especially interested to see if teams that get contributions from more players hold up better over the course of a season. Does that effect change depending on the leverage of the situations when the role players are making their contributions? I’d love to have a way to compare coaches and really try to isolate their philosophies on this topic. But your main point still stands, the conclusions drawn from any analysis can be actively detrimental if the analysis is flawed or not rigorous enough.
Now that we’ve introduced the analysis and the underlying metrics, I hope you will continue to call us out for lax statistical rigor as future iterations of the analysis are published.