by Kevin Pelton
» Visit the author's site: SuperSonics.com
In 1900, a mathematician named David Hilbert created a list of 23 issues the field of mathematics should address in the coming century. This list, which has come to be known, predictably, as “the Hilbert problems,” was aped at the start of this century by the sabermetricians at Baseball Prospectus, who created their own version for sabermetrics. Football Outsiders’ Aaron Schatz created a list of his own a year ago for football statistical analysis.
To date, no one has created an APBRmetrics version of the Hilbert problems. While that would be an interesting column in its own right, creating such a list is not my aim here. Instead, I want to ask a more focused question: where, on such a list, would you rank “the development of more accurate methods of rating basketball players”?
I ask because it seems that, for the past couple of months, discussion about the APBRmetrics community — and, to a lesser extent, the discussion within it — has been dominated by a couple of topics. One is Wages of Wins and the Wages blog, primarily written by co-author David Berri. The other has been John Hollinger’s player rating, PER.
Wages came out in the early summer and has been a hot topic ever since, in large part because the book and its player rating, Wins Produced, have drawn heavy attention from the mainstream media.
Just before the start of the season, ESPN.com published Hollinger’s projected PERs for almost every player in the NBA, along with the online version of the player comments he used to write for his annual edition of the Pro Basketball Forecast. ESPN did Hollinger something of a disservice by presenting his projected ratings as if they were his rankings of every player in the NBA and as if he honestly believes that Chuck Hayes is a better player than Dwight Howard.
Weeks after PER was panned by many fans for not conforming closely enough to their sense of reality, it came under fire from Berri for being developed largely to conform with conventional wisdom. The gods of irony rolled with laughter.
I’m not interested in getting into the specific critiques regarding Wins Produced and PER. Both sides have valid arguments. My question is, “What’s the point?”
Here’s Berri’s answer to that question: “As I noted at the end of my comment on PERs, we are simply debating how people perform in a game derived from ‘Duck on a Rock.’ And when you put it that way, the whole discussion seems kind of silly. I would note, though, that the worker productivity data generated in sports does allow economists and other social scientists to investigate such issues as labor market discrimination and how people process information. For such studies it is useful to have a measure of worker productivity that connects what the workers do to what the workers are trying to accomplish.”
In other words, Berri needs a player rating so that he can perform studies. I don’t see how anyone can disagree with that; I wrote something similar a couple of years ago. It is also true that which rating you use in a study can affect the conclusions you draw.
For example, I was thinking over the summer about the concept of peak age in basketball. A couple of rigorous studies have shown that the peak age in the NBA is about 27, similar to what it is in baseball. However, these studies were all conducted using linear weights ratings like PER and Wins Produced, and the results from Dan Rosenbaum’s work with adjusted plus-minus have suggested that young players tend to be overrated by their box score stats. Is it possible that, if we look through the unbiased prism of adjusted plus-minus, peak age is truly older than we thought it was? That has significant implications for the APBRmetrics community.
That all said, in practice player ratings are used more frequently to, well, rate players rather than conduct studies. PER is now featured on ESPN Insider, while many of Berri’s entries on the Wages blog have centered around how Wins Produced rates various players and what the implications are of these ratings.
Hollinger has said in the past that he intends PER as a jumping-off point to start a discussion about a player, as a summary of the other stats we track for players. That’s difficult, however. Intellectual laziness makes it easy to look at Chuck Hayes posting a higher PER than Jason Kidd and say that implies Hayes is better than Kidd. In Basketball on Paper, Dean Oliver paraphrases Bill James to say, “reducing quality to one number has a tendency to end a discussion, rather than open up a world of insight.” I tend to agree.
Take a second and think about your favorite pieces of writing by Hollinger or Berri or any other APBRmetrically-inclined writer. Think about something that challenged your perceptions or made you think. Now think about this — did that writing center around a player rating? I’m willing to bet it didn’t.
Evaluating or comparing players based on their box-score stats is just one of any number of potentially interesting topics that can be analyzed using stats. These range from studying the impact of the new synthetic NBA basketball; to finding creative ways to study how a player’s role in his team’s offense affects his efficiency (which would aid in the discussion of PER and Wins Produced); to looking at whether a team’s surprising fast start is likely to continue; to studying whether to foul when leading by three points late in a game; to tracking defensive stats that the NBA doesn’t count.
Even in situations where evaluating players is clearly the objective, like trade analysis or after the draft, player rating systems alone can’t do the job. How a player’s skills fit in with those of his teammates is crucial to such analysis. Sometimes, in the case of young players, it’s more useful to project their future by using similarity scores and finding comparable players. It’s never as simple as looking at which player has the better PER or more Wins Produced or whatever rating you prefer.
The debate over player rating systems does matter, both because a better system means better studies and because these discussions, when they’re conducted in an open-minded, back-and-forth fashion, can open up our understanding of the game. In the big picture, however, player ratings are just a part of the APBRmetrics community. There’s a lot more to analyzing basketball than rating basketball players.
Kevin Pelton serves as beat writer for SUPERSONICS.COM and provides commentary for 82games.com. He will contribute occasional columns to CourtsideTimes.net.
Published on Tuesday, November 28th, 2006 at 1:01 pm
Problem number one: Finding a way to remove the author of “Blink” from the equation.
Roland ratings tried to combine individual and team performance stats. That is the right direction, but the method has been too simplistic to date. Combining adjusted +/- and role-adjusted counterpart matchup statistics would get much closer.
Blend, you and Kevin are right, PER or Wins Produced are really just starting points. But that is different from how they are being treated in the world and I’m not sure how you change that.
I like and use PER; it’s easy to explain, and I write for a more mass audience, but I look at it to see a snapshot. Like (to choose an easy example) if I see Luke Walton jumped from a PER of 11.6 last season to 17.2 this season, PER piques my interest and makes me want to know why. That’s when I start looking at other stats, in this case three-point shooting percentage.
Hollinger has long been a proponent of that, as Kevin mentions; I don’t know about Berri. The question, to me, is how to get that message out: that these player ratings are 10% of what we want stats to do, of what they can do in the right hands. Player ratings strike me as a bit of a blunt instrument, and while that is handy, more often we need a scalpel.
I guess one way you start to change it is articles like this one.
Great column, Kevin. I think finding a great player rating system is really important, but there are so many other little things that need to be addressed and fixed before we can attempt to get there that the debate isn’t close to issue #1 for me.
While we’re discussing basketball’s Hilbert Problem, though:
- Potential assists, or whatever you want to call them.
- Forced turnovers (both offense and defense), and unforced turnovers.
- A personal pet issue is defensive rebounds off free throws.
- There are a million defensive issues that should be looked into. As far as I’m concerned, every single APBRmetrician could focus solely and entirely on defense for two years and we wouldn’t be halfway there. There’s so much left to understand in a statistical sense, as Kevin Broom showed with his Zards charting.
- Aging patterns.
- How coaching decisions affect player performance and team performance. (Bill Simmons might help fund this if Doc lasts much longer.)
The current hubbub is about an accurate, single, all-inclusive number to represent a player’s current or past performance. From the economics standpoint, this is necessary. Does Player X warrant Salary Y based on his performance? However, a single number that represents all offensive and defensive player performance is not absolutely necessary to determine economic worth. We merely need accurate parts.
The Holy Grail would be accurate predictive numbers for player/team performance. In the context of basketball, this would include predictive power over interchangeable lineups. For example, how would a starting lineup of Shaq, AI, KG, Bobby Jackson and Raef LaFrentz perform? How about Shaq, AI, Dwight Howard, Bobby Jackson and Raef?
Following up on Mr. Ziller’s thoughts…
- I find it hard to believe that we can’t keep track of assists in terms of points created right now.
- I’d be curious about the difference in damage between turnovers where the ball never goes out of play and those that give the opposition the ball out of bounds.
- A win probability version of +/-.
- As for defense, I think any significant breakthrough would most likely come from someone employed by a team, both to have the time to compile all the necessary information and to provide a consistent understanding and application of relative blame or credit to individual players.
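The win probability version of +/- mentioned above can be sketched quickly. Everything here is an assumption for illustration: the possession-dictionary format and the logistic win probability model (its coefficients are made up, not fitted to NBA data) are stand-ins for whatever a real implementation would use.

```python
import math

def win_prob(margin, seconds_left):
    """Toy logistic model: the home team's chance of winning given the
    current scoring margin and the time remaining. The scaling here is
    illustrative only, not fitted to real NBA play-by-play data."""
    if seconds_left <= 0:
        return 1.0 if margin > 0 else 0.0
    # A given margin matters more as time runs out.
    z = margin / (0.5 * math.sqrt(seconds_left))
    return 1.0 / (1.0 + math.exp(-z))

def win_prob_plus_minus(possessions):
    """Credit each on-court player with the change in win probability
    across every possession, instead of the raw point differential."""
    credit = {}
    for p in possessions:
        before = win_prob(p["margin_before"], p["secs_before"])
        after = win_prob(p["margin_after"], p["secs_after"])
        delta = after - before
        for player in p["home_lineup"]:
            credit[player] = credit.get(player, 0.0) + delta
        for player in p["away_lineup"]:
            credit[player] = credit.get(player, 0.0) - delta
    return credit
```

The appeal is that the same two-point basket counts for more when it swings a close game late than when it pads a first-quarter margin, a distinction raw +/- ignores entirely.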
I appreciate your effort to put the problems currently generating controversy into perspective but I think you underestimate the value of a simple rating system.
For example, one of the most important questions is how to come up with a model for player development — both for players within the league and players entering the league from the draft or overseas.
It’s also useful for identifying areas to pay attention to during a season, like rookie performance, for example, or the post on the Clippers blog about how Livingston’s performance compares to that of other high school players taken in the same draft.
I think having some basic summary statistic based on box score stats is incredibly valuable. I think +/- stats are ultimately more important, but they are harder for the amateur to get hold of and play with than box score stats, and I think APBRmetrics gets a lot of contributions from people just playing with numbers in a spreadsheet.
Also, for the record, I think it’s worth giving John Hollinger credit for the fact that some of the most interesting stats articles of the past five years are his mini-studies in his first prospectus. His lengthy write-up comparing rebounders from different eras is a model of conscientious writing about the problems of comparing players across eras. His study on the relationship between 3PA and offensive rebounding added to my sense of “conventional wisdom,” etc.
First of all, great article. I agree that there can’t possibly be one stat that rules all others. Life isn’t as simple as that. However, there are times when you need a summary stat to make things easier. When I do my studies on the draft, I base them all on PER. In the long run it’s easy to use and generally right about things.
The problem is the box score. To make conclusions, statisticians need data. “Traditional” stats are the ones easily available in the box score.
Baseball had a similar problem. There were many traditional stats available, but only so much could be done using them (and baseball is much more accurately described by a box score than basketball or football). Eventually, companies like STATS Inc. and Baseball Info Solutions developed play-by-play data, which has now opened up another level of data for analysis.
From my understanding, 82games and others are on their way to developing play-by-play data. The box score is useful to an extent, but as has been described here, many box score stats are useless or misleading (the assist comes to mind).
There are limits to what any amount of data can tell you. In the case of basketball, the traditional stats can tell you very little about the game itself. This means that any analysis (PER and the like) based on those stats is inherently limited.
It might be interesting to take the top 20 players by PER and Wins Produced and compare the lists against a blend of the two and see which of the three seems more “right” to most folks.
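That comparison is easy enough to prototype. A rough sketch: convert each rating to z-scores so PER’s scale and Wins Produced’s much smaller per-minute scale are comparable, average them, and pull the top 20 from each list. The rating dictionaries and player names here are placeholders, not real data.

```python
def zscores(ratings):
    """Standardize a {player: rating} dict to mean 0, standard deviation 1."""
    vals = list(ratings.values())
    mean = sum(vals) / len(vals)
    sd = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
    return {p: (v - mean) / sd for p, v in ratings.items()}

def blend(per, wins_produced, weight=0.5):
    """Average the z-scores of two rating systems so that their
    different raw scales don't let one system dominate the blend."""
    zp, zw = zscores(per), zscores(wins_produced)
    return {p: weight * zp[p] + (1 - weight) * zw[p] for p in per}

def top_n(ratings, n=20):
    """Return the n highest-rated players, best first."""
    return [p for p, _ in sorted(ratings.items(), key=lambda kv: -kv[1])][:n]
```

With real season data loaded into the two dictionaries, comparing `top_n(per)`, `top_n(wins_produced)`, and `top_n(blend(per, wins_produced))` side by side would show exactly where the systems disagree and whether the blend splits the difference sensibly.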
Ideally, I’d like to see player rankings based on PER differential or Wins Produced differential by matchup to capture the full defensive aspect.
To get at “role-adjusted counterpart matchup statistics” even more fairly, it would be great if someone compared opponent scoring to the opponents’ season averages, from the box score or from video, as Dougstats publishes on a limited basis, to produce a defensive Tendex. Maybe someone in the greater world of fantasy stat providers could add that to the list of things they track.
And for those players whose role keeps them from shooting much, which hurts their PER value, maybe their PER could be adjusted upward somewhat, adding dummy value for the shots forgone and ceded by role assignment to other teammates, perhaps to a degree that would close the gap between their actual shot attempts and league-average shot attempts by half the distance. Would that be a useful compromise hack?
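The “half the distance” compromise above is simple to prototype, at least in raw points terms rather than full PER terms (PER’s actual formula is considerably more involved). The league-average constants below are assumed round numbers for illustration, not real league figures.

```python
# Assumed league averages, per 36 minutes — illustrative values only.
LEAGUE_AVG_FGA_PER36 = 13.0   # assumed league-average shot attempts
LEAGUE_AVG_PTS_PER_FGA = 1.2  # assumed league-average points per attempt

def usage_adjusted_points(player_fga, player_pts):
    """Apply the 'half the distance' compromise: credit a low-usage
    player with phantom attempts covering half the gap between his
    actual attempts and the league average, valued at league-average
    efficiency. High-usage players are left unadjusted."""
    if player_fga >= LEAGUE_AVG_FGA_PER36:
        return player_pts
    phantom_fga = 0.5 * (LEAGUE_AVG_FGA_PER36 - player_fga)
    return player_pts + phantom_fga * LEAGUE_AVG_PTS_PER_FGA

# A hypothetical low-usage player: 9.0 FGA and 11.0 points per 36 minutes.
# Phantom attempts: 0.5 * (13.0 - 9.0) = 2.0, worth 2.0 * 1.2 = 2.4 points.
usage_adjusted_points(9.0, 11.0)  # → 13.4
```

Whether crediting forgone shots at league-average efficiency is fair is exactly the open question: a role player asked to shoot more might convert at well below that rate.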
This recent Pistons team is intriguing for statisticians. Due to Detroit’s defensive success and the lack of significant stats on the defensive end, it was hard to pinpoint the Pistons’ MVP the last two seasons. Billups had the team’s best PER, Hamilton was the leading scorer (per game and per minute), and Wallace got most of the hardware (All-Star, DPOY, All-NBA, etc.). I’m sure you could make the case for Tayshaun (stellar defense, the Miller block, etc.) or Rasheed Wallace (his arrival made them champions) as well. With Ben Wallace’s departure, I was hoping to get a clearer picture on this issue.
This year, Wallace’s Bulls have been mediocre, while the Pistons have become one of the league’s best scoring teams (ranked #1 at this writing). The easy conclusion is that Ben Wallace is overrated, one I’m not comfortable agreeing with.
The more complex answer is that basketball is not a linear game. A team like Detroit is deep enough, and its players have enough combined athleticism and skill, to change from a defensive powerhouse to an offensive powerhouse. Meanwhile, the Bulls’ acquisition of Wallace might be a lesson in the NBA’s diminishing returns. This kind of dilemma might shine a light on the PER/WOW/plus-minus debate. Maybe a single number will be able to tell us what a player *was* worth, but when considering a player changing teams or schemes, a single number won’t be able to tell us how much that player will be worth.
Oftentimes in baseball people will talk about how valuable a lineup of Mike Piazzas would be, but we can’t do that in basketball. A lineup of five Nashes or five Ben Wallaces wouldn’t be as valuable as a lineup with two Nashes and three Ben Wallaces. But we have yet to devise a statistical way to show this.
Interesting thoughts, though I have some disagreement on your characterization of the Pistons.
Detroit went from a mediocre offensive team to a great offensive team not when Ben left but when Saunders arrived. The comparison of the 04-05 Pistons under Brown to the 05-06 Pistons under Saunders shows how much a coach’s system can affect a team. Those two teams had basically all the same players yet became much better offensively in Saunders’ system.
And the biggest change with Ben leaving has been on the defensive end. Detroit’s defensive efficiency has plummeted, though it may bounce back some as the season progresses.
So I don’t really see Ben as a guy who necessarily kills your offense or doesn’t make a big difference on defense. I think you’re right on the mark with your diminishing returns theory as far as Ben and Chicago go.
This year’s Ben Wallace is not last year’s Ben Wallace. He’s been in steady (not yet dramatic) decline for a few years.
His rebound rate peaked in ‘03 and has dropped every year since. Blocks peaked in ‘02, downhill every year since.
How to look at a chart of numbers and guess when a player will ‘fall off the table’? I think I’d have to meet the guy, talk to him, and shoot around with him. There has to be enthusiasm. If your skills are slipping, you have to learn some new ones.
Guys who are content with their current package of skills (and/or their current contract) will only go downhill; it’s just a question of how fast. Did anyone know Zach Randolph had the wherewithal to crank his game up this year? Was it in his numbers?
I’m not so sure you couldn’t do something akin to the baseball lineups thing. When they talk about “nine Barry Bondses” or “nine Doug Mientkiewiczes,” they’re pretty much talking about offense only, because defensive metrics are more difficult to come up with there. They’re easier in basketball, though, so looking at “five Ben Wallaces” vs. “five Steve Nashes” isn’t necessarily all that far-fetched.