tziller posted 9/18/2007
Our last baptism of per-minute eroticism surfaced some valid concerns regarding causation: we studied year-to-year jumps in minutes, which shrouded our evidence that players don't get worse when they get more minutes by failing to control for situational changes and summertime improvement. Fair enough. Free Darko's Silverbird5000 offered a suggestion for a more lucid study:
One way to do this would be to look at cases where a starter gets injured mid-season and see what happens to the PER of the player who replaces him. A kind of natural experiment for TOIH.
While the reagents bubbled on my own TOIH experiments, Silverbird went another direction to prove his theorem. In his post, he lays out further theory: player performance suffers against better opponents, and more minutes mean a greater percentage of time played against better opponents... thus more minutes mean worse performance. The findings are significant (statistically and in terms of our debate), but enough questions remain to leave the cloud of skepticism hovering over both TOIH and its 'opposing' philosophy, what I'll now refer to as The Paul Millsap Doctrine.
No one would question the impact of opponent strength on player performance; of course Kobe Bryant and Ronny Turiaf would rather play the Kings than the Spurs. This is the reason TOIH was rather immediately accepted and the burden of proof placed on us in the 'per-minute camp.' (That Silverbird found an innovative way as well as a motive to formalize this is commendable.) Realistically, however, his study found relatively small production drops. Bench players never play strictly against bench players or vice versa, so the difference in average opponent PER is unlikely to be quite as dramatic as Silverbird hypothesizes (a jump from an average opponent PER of 10 to 20, which would cause the player in question to lose, on average, two points of normalized Win Score).
To satisfy Silverbird's previously requested natural experiment, Kevin Pelton and I (but mostly Kevin Pelton -- he watches way too much basketball) compiled a list of players who got stretches of starter-level playing time due to injuries to starting teammates. We came up with 17 such cases for 2006-07. Note that we were very wary of expected criticisms, and thus left out possibly viable players like Tyrus Thomas and Matt Carroll (players who would have reinforced our findings). The problem with Tyrus and Carroll (and Walter Herrmann and Sasha Pavlovic): they clearly earned further extra minutes by performing so well when needed. We wanted to erase (as much as possible) causation problems stemming from increased production resulting in increased minutes, as detailed in a criticism from Brian M. (Knickerblogger addresses those concerns along a parallel but distinct avenue in this post.) The majority of our players saw their big-minute stint bookended by low-minute stints; these are not cases where minutes come from production -- the minutes come from necessity. All the players included had starting teammates who were injured or suspended; hence the increased minutes. Games were not cherry-picked -- we took the entire span of games from the stretch of necessity, not just the games in which the player got a lot of minutes. We've worked to isolate the issue of changes in playing time/role versus player efficiency. While the sample size is understandably small and no case is perfect, I think we've done a pretty good job of controlling for the stated factors of concern. Across the 17 players, the MPG increases ranged from 9.5 to 21. A full list of the names and situations is in this appendix.
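Before the results, a quick note on the bookkeeping. Here's a minimal sketch (in Python) of the rate stats we compared between each player's low-minute and big-minute stretches; the box-score totals below are made-up placeholders, not actual 2006-07 data, but the eFG%, TS% and per-40 formulas are the standard ones.

```python
# Minimal sketch of the rate stats used in the comparison. The box-score
# totals below are hypothetical placeholders, not actual 2006-07 data.

def per_40(stat_total, minutes):
    """Scale a counting stat to a per-40-minute rate."""
    return 40.0 * stat_total / minutes

def efg_pct(fgm, fg3m, fga):
    """Effective field goal percentage: a three counts as 1.5 field goals."""
    return (fgm + 0.5 * fg3m) / fga

def ts_pct(pts, fga, fta):
    """True shooting percentage: points per two 'true' shooting attempts."""
    return pts / (2.0 * (fga + 0.44 * fta))

# Hypothetical splits: regular bench stints vs. the injury-driven stretch.
stints = {
    "low-minute": {"min": 310, "pts": 160, "fgm": 62, "fg3m": 8, "fga": 135, "fta": 40},
    "big-minute": {"min": 520, "pts": 290, "fgm": 110, "fg3m": 15, "fga": 235, "fta": 75},
}

for label, s in stints.items():
    print("%s -- Pts/40: %.2f, eFG%%: %.3f, TS%%: %.3f" % (
        label,
        per_40(s["pts"], s["min"]),
        efg_pct(s["fgm"], s["fg3m"], s["fga"]),
        ts_pct(s["pts"], s["fga"], s["fta"])))
```

Now, to the results: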
Shooting percentage: On average, effective field goal percentage and True Shooting percentage both increased by .003. Eight players saw their shooting percentages improve with the extra minutes; nine saw declines.
Points per 40 minutes: +1.82 on average; 15 improved, 2 declined.
Rebounds per 40 minutes: -0.01 on average; 11 improved, 6 declined.
Assists per 40 minutes: +0.48 on average; 10 improved, 7 declined.
Steals + Blocks per 40 minutes: -0.11 on average; 4 improved, 13 declined.
Turnovers per 40 minutes: -0.10 on average; 9 improved, 8 declined. (Of course, improved means lower turnovers here.)
Fouls per 40 minutes: -0.92 on average; 13 improved, 4 declined.
And finally... PER: +2.38 on average; 15 improved, 2 declined.
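If you want to replicate the tallies above from the appendix data, the summary bookkeeping is trivial. A sketch, with made-up deltas standing in for the 17 real cases:

```python
# Sketch of the summary bookkeeping: average change plus improved/declined
# counts for one stat. The deltas below are made-up stand-ins, not our data.

def summarize(deltas, lower_is_better=False):
    """Return (average change, number improved, number declined)."""
    sign = -1.0 if lower_is_better else 1.0
    avg = sum(deltas) / len(deltas)
    improved = sum(1 for d in deltas if sign * d > 0)
    declined = sum(1 for d in deltas if sign * d < 0)
    return avg, improved, declined

# Hypothetical PER changes (big-minute stretch minus low-minute stretches).
per_deltas = [2.1, -0.5, 3.4, 1.0, 4.2, -1.1, 2.8, 0.9, 3.0,
              1.5, 2.2, 5.1, 0.4, 2.6, 1.9, 3.3, 2.0]

avg, up, down = summarize(per_deltas)
print("PER: %+.2f on average; %d improved, %d declined" % (avg, up, down))

# For turnovers and fouls, pass lower_is_better=True so that a drop in the
# per-40 rate counts as an improvement.
```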
Let's take a closer look at what we have here. Shooting percentages and rebounding tend to stay equal regardless of minutes. It's actually remarkable how level the shooting percentages stayed -- all but two of the cases stayed within 10 points of the low-minute rates. Rebounds had a bit more variance -- Alonzo Mourning got a big boost from the starting role while Mikki Moore's boarding suffered. Steals and blocks took a hit from more minutes, which makes sense: starters are better ball-handlers on the whole, and steals/blocks are so few already that they'll be more susceptible to wild variation. A common thread among the anti-Millsap population is the argument that players like him -- so-called 'energy guys' -- would foul out if they played 30-40 minutes. Well, when minutes increase, foul rate goes down substantially. Scoring rate surprisingly (based on TOIH) jumps, as does PER. The majority of players perform better when they get their minutes in bigger chunks... despite the evidence that they should perform worse given the apparent but possibly overstated disparity in opposition skill level. To me, that says per-minute statistics are a good indication of a player's true talent level and decidedly not nonsense.
And before we shake hands and break souvlaki, I would offer the data of one beauteous case in our study here: Paul Millsap himself. Carlos Boozer was injured from late January through the All-Star break. During that span, Millsap's minutes went from 16.3/game to 28.5/game. His PER in the combined 71 games before and after Boozer's injury was 16.1. His PER during the Boozer injury was 18.4, a 14% increase. Yes, Paul Millsap helps resolve his own quandary, earning him naming rights on the doctrine.
Players perform better in most measurable categories when they get more minutes. The possible causes? They can get into a flow with extended stints (versus chopped-up minutes here and there); roles are more stable and defined in the starting lineup; they aren't looking over their shoulder worried about making a mistake (Deron Williams, 2005-06) or pressing too hard to impress (Francisco Garcia, forever). This logic all makes as much sense as Silverbird's equally sensical findings.
In fact, the two theories -- the Theorem of Intertemporal Heterogeneity (not all minutes are created equal) and The Paul Millsap Doctrine (the production of low-minute players typically improves or stays the same when they're given substantially more minutes) -- can coexist, if we realize and admit how contextually based and situationally complex this game is. (And for those who call 'bullsh*t' on our ability to aptly dissect this game through statistical methods -- imagine combining Silverbird's 'quality of minutes' data with per-minute numbers to correctly identify the inflated and to make tighter projections of expanded worth for the misused.) Only by continuing to explore the intricate relationships between players, their teammates, their minutes, their opponents and their performance can we hope to unveil the skeleton which ties Tom VanArsdale and Nick Van Exel together.
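Before the credits roll, a quick aside for the spreadsheet set: here's one hedged sketch of what that marriage might look like -- deflate (or inflate) a player's per-minute rating by the quality of opposition he actually faced. The adjustment slope here is invented purely for illustration, not a fitted value from either study; the only non-hypothetical constant is that PER is built so the league average sits at 15.

```python
# Hypothetical sketch: adjust a per-minute rating for quality of minutes.
# The SLOPE constant is an invented illustration, not fit to real data.

LEAGUE_AVG_OPP_PER = 15.0  # PER is constructed so the league average is 15
SLOPE = 0.2  # hypothetical: rating points credited per point of opponent PER above average

def adjusted_rating(per_minute_rating, avg_opponent_per):
    """Credit production earned against tougher-than-average opposition."""
    return per_minute_rating + SLOPE * (avg_opponent_per - LEAGUE_AVG_OPP_PER)

# A bench player feasting on weak opposition vs. a starter facing starters.
print(adjusted_rating(18.0, avg_opponent_per=12.0))  # 17.4 -- discounted
print(adjusted_rating(16.0, avg_opponent_per=17.0))  # 16.4 -- boosted
```

Put per-minute rates on one axis and opponent quality on the other, and the inflated and the misused fall out of the same table.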
A million thanks to Kevin Pelton for help on this study and post.