Saturday, August 6, 2022

How Should We Understand Basketball?

Basketball is a hard sport to make sense of. It lacks the statistical rigor of baseball, but some of its fans nonetheless attempt to understand it through the lens of sabermetric-style statistics. These fans are trying to bootstrap a Bill Jamesian statistical revolution in basketball despite lacking the deep understanding of the game that enabled James and co. to determine which metrics actually reflected what was happening on the field. So-called "advanced statistics" in basketball -- spoiler alert -- often do not.

Other fans view basketball through the "eye test," in which their opinion is formed through what they are able to see and pick up from watching actual games. This is, of course, the default approach to any sport, and the way nearly everyone watched nearly every sport right up until Billy Beane and Michael Lewis changed sports forever with the Moneyball A's and the accompanying book (and yes, the book itself was a huge part of the sabermetric revolution).

Of course, now that we are in an era in which sabermetrics have somehow become relatively mainstream (if still poorly understood -- a shock, for an increasingly complex statistical model of a sport fewer and fewer people watch), most serious fans of any sport tend to understand that the eye test, if still a reasonable way to enjoy the sport, is not always the best way to understand it.

The majority of fans -- of any sport, I suspect, but especially of basketball -- tend to draw their opinions from the opinions of those around them. More broadly, we probably all do this with our opinions of everything; in terms of Bayesian inference, we form priors based on our past experiences, which we update based on how well they predict our observations of the world around us. If everyone we know says Some Fact about basketball, we tend to assume that that fact has a reasonably high probability of being true until and unless we see contradicting evidence. And for basketball fans, contradicting evidence is complicated to define and very hard to recognize when you see it.
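The Bayesian framing above can be made concrete with a toy calculation (all numbers here are invented for illustration): a strong prior, formed by hearing everyone around you repeat Some Fact, barely budges when you observe something that only weakly contradicts it.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability a claim is true, given one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Everyone you know repeats Some Fact, so your prior is high.
prior = 0.90
# You then watch a game that seems to cut against it -- but ambiguous
# evidence is nearly as likely whether the claim is true or false:
posterior = bayes_update(prior, p_evidence_if_true=0.6, p_evidence_if_false=0.9)
print(round(posterior, 3))  # 0.857 -- the claim still looks very likely
```

Notice that even "contradicting" evidence leaves the belief mostly intact when the evidence is hard to interpret, which is exactly the situation basketball fans are in.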

In this article, I will discuss each of these methods of attempting to understand basketball and explain why none of them, as far as I can tell, is a good way to understand basketball... and why that makes it so painful to engage in the sphere of basketball discourse.


Part I: Basketball "Sabermetrics"... Or Why John Hollinger Is Not Bill James

What Is Sabermetrics?

Bill James is a statistician who, in the 1970s, started explaining to anyone who would listen that the way we understood, thought about, and talked about baseball was entirely wrong.

For decades, no one really listened. There was a small circle of baseball/statistics nerds who paid attention, and gradually that grew into a somewhat larger, but still profoundly marginalized, society of nerds. But there still remained a distinct separation between these people, who thought they understood baseball better than anyone else (and did!), and the actual people responsible for playing baseball, coaching baseball, evaluating baseball players, and signing, drafting, trading, cutting, promoting, demoting, and paying baseball players. The Jamesians had no influence whatsoever on the sport they felt they might be able to revolutionize.

That changed with the advent of the Moneyball Oakland A's, and their success under GM Billy Beane. Beane and the rest of the Athletics' front office used a sabermetrically-informed approach to identify players who were systematically undervalued by every other front office in baseball, and as a result were able to perform astonishingly well despite having one of the lowest payrolls in baseball, a sport where payroll has historically strongly correlated with wins (and still does).

Despite strong resistance to this innovation by virtually every front office in baseball (for more information on this you really should read the book, or at least watch the movie), eventually sabermetrics became a necessary component of winning at baseball (especially after the "Moneyball" Red Sox, who combined sabermetric analysis with the second-highest payroll in baseball and won four titles in fifteen years), and now every team in the league has a sabermetric analytics department.

I could wax eloquent on precisely what it was that Bill James noticed about baseball that so dramatically changed the game, but Michael Lewis already did and I'm not following that act. Instead, simply realize just how ripe every other sport now was for a statistical revolution...


Who Is John Hollinger?

John Hollinger is a sportswriter and "analyst" who maneuvered his way into an honest-to-goodness high-up front office gig at an actual NBA team by inventing a fake statistical metric that didn't do anything and convincing an entire generation of basketball fans that it was the single best way to compare performance that had ever been invented.

His "metric" is called PER, or Player Efficiency Rating, and its formula looks like this:


If you're thinking that that looks like an obvious attempt to obfuscate what's really going into this formula, you'd be correct. If you're thinking that it looks like a very legitimate-seeming metric and that it probably measures something meaningful, you should keep reading.

Here is a simple fact about statistics. You can add up, multiply, divide, and subtract all kinds of numbers, and formulate arbitrarily complex arithmetic expressions that technically measure something. Some of these metrics might measure something profound and meaningful, for instance baseball's wRC+, possibly the best offensive metric in the game, and one which, while not simplistic, is far simpler to define than Hollinger's Monster. Others of these metrics might have no real meaning at all, or might be far inferior.

How exactly are we supposed to determine which metrics are good, and which are bad? It's simple: We determine what we want to measure -- usually, how predictive a metric is of future performance -- and then we test how well that metric predicts the future.
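A minimal sketch of what that test looks like in practice, with made-up numbers: take each player's metric in year N and his measured contribution in year N+1, and check how strongly they correlate. (The data and the "impact" measure here are hypothetical; the point is the procedure, not the values.)

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical: each player's metric this season vs. his measured
# contribution (however you define it) the following season.
metric_year_n  = [22.1, 18.4, 25.0, 15.2, 19.9]
impact_year_n1 = [4.1, 2.0, 5.5, 1.1, 3.0]
print(pearson(metric_year_n, impact_year_n1))
```

A metric that genuinely captures player quality should produce a strong correlation here season after season; a metric that is just arithmetic soup will not.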

In Ben Morris's stunning and legendary-among-sports-stats-nerds article The Case for Dennis Rodman, he devotes an entire section to explaining why PER is a terrible stat that effectively measures nothing and does so badly.

If you're not interested in digging into the evidence for why PER is a terrible stat, just take my word for it. It's a bad metric invented whole-cloth by someone who doesn't understand what makes basketball players good and is substituting artificial rigor for good statistical analysis... and with the exception of Morris and people like him, it went (and goes) largely unquestioned by the majority of basketball fans. For a full decade you'd see NBA fan types with sabermetric pretensions throw around PER like it was the be-all, end-all of the basketball debate. (It's recently fallen somewhat out of vogue, as far as I can tell, although its replacements -- as I will explain shortly -- are hardly better.)

This is an inauspicious start to our journey into basketball "advanced statistics." It gets slightly better from here -- but only slightly.


The Three Point Revolution

There has, in fact, been a revolution in basketball akin to the rise of sabermetrics in baseball. I'm talking, of course, about the rise of the 3-point shot.

Here is a chart (that I handmade for you in Excel because I love you) of 3-point shots attempted per game, season by season from the shot's introduction in the NBA in 1979 to the present day:


(If you're wondering what the odd spike from 1994-97 is, it's when they temporarily shortened the line from 23'9" at the top of the arc to 22' all the way around.)

What you're seeing is a gradual revolution, a change in the tides of professional basketball that grew out of a single profound revelation: that 3-point shots are worth more than 2-point shots.

Here's the math on that: The general baseline for a very good two-point attempt is one that goes in 50% of the time. This varies by where the shot is taken -- for a midrange shot, 50% would be insanely high; for an open dunk, 50% would be atrocious. But in general 50% is the number to shoot for -- and indeed, teams tend to shoot at just around that number from inside the arc, depending on the season. (2pt% has gone up over time for reasons that I'll get to shortly.)

A 50% shot from 2-point range has an expected value of .5*2 = 1 point per shot attempt. Easy math. Here's the thing: in order to get the same expected value from a 3-point attempt, you only need to make the shot 33% of the time! (1/3*3 = 1, of course.) When the 3-point line was initially introduced, you had very small numbers of attempts and fairly low percentages -- league-wide 3pt% didn't cross the 33% barrier until 1989-90, and it fell just below 33% for the last time the following season. Yet as you see above, the 3-point revolution had barely even begun in 1990, and it really exploded in the 2010s and 2020s, over which span 3pt attempts per game doubled.

This revolution has utterly changed the game. The focus of the league has become efficient scoring focusing on opening up 3-point shooters and making easy layups or dunks in the paint. This is why 2pt% has gone up, by the way; they haven't gotten better at 2's, but their shot selection has gotten pickier. No one (well, almost no one) takes contested midrange jumpers anymore, because the data say that the best shots are 3-pointers and layups.

Last season, the NBA at large made 35.4% of its 3-point attempts and 53.3% of its 2-pointers. That yields an expected value of 1.066 points per attempt for a two and 1.062 for a three -- that looks a lot like equilibrium.
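The expected-value arithmetic from the last few paragraphs, gathered in one place (the 53.3% and 35.4% figures are the league-wide numbers quoted above):

```python
def points_per_attempt(make_pct, shot_value):
    """Expected points from a single shot attempt."""
    return make_pct * shot_value

# A good two-point shot vs. the break-even three:
print(points_per_attempt(0.50, 2))              # 1.0
print(round(points_per_attempt(1 / 3, 3), 3))   # 1.0 -- same expected value

# Last season's league-wide numbers:
print(round(points_per_attempt(0.533, 2), 3))   # 1.066 for twos
print(round(points_per_attempt(0.354, 3), 3))   # 1.062 for threes
```

When those last two numbers converge, as they have, there is no more easy value left in simply taking extra threes: that is the equilibrium.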


(A chart, just for fun, of this equilibrium -- notice how 3-point points per attempt starts low, with the low percentages of the early '80s, but then passes 2pt points per attempt in 1991-92, and only now has 2pt% caught up.)

The NBA, as a whole, is finally taking enough three-pointers.

But individual players are not.

Here's a fun hypothetical: Say you are a basketball player whose contract offers a bonus for finishing the season with a 3-point% of 40% or better (commonly viewed as an elite 3pt%). Say that, in the final game of the year, you receive the ball at the top of the arc for a mildly contested three-point chance. You are currently sitting at 40.01% exactly, and if you miss this shot, you're out $500,000. You figure you have a slightly below-average chance of making the shot, say a 38% chance. To your right is a teammate, a substantially worse shooter with a worse look who could make his shot about 30% of the time. You can take this shot, and risk your percentage dropping, or you can pass the ball, reducing your team's expected value from this possession in order to maximize the financial benefit to yourself. What do you do?
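We can put numbers on that hypothetical. The shot probabilities and bonus come from the scenario above; the season totals are invented so that the shooter sits just over 40% going into the play.

```python
def shooter_pct(made, attempts):
    return made / attempts

# Invented season totals leaving our shooter just above the 40% line:
made, attempts = 245, 612
print(f"before the shot: {shooter_pct(made, attempts):.2%}")     # 40.03%
print(f"if he misses:    {shooter_pct(made, attempts + 1):.2%}") # 39.97% -- bonus gone

# Team expected points this possession:
ev_shoot = round(0.38 * 3, 2)   # he takes the mildly contested three
ev_pass = round(0.30 * 3, 2)    # he passes to the worse shooter
print(ev_shoot, ev_pass)        # 1.14 vs 0.9 -- shooting is better for the team

# His personal expected payout, if the bonus is all he cares about:
bonus = 500_000
print(round(0.38 * bonus))      # 190000 expected if he shoots...
print(bonus)                    # ...vs 500000 guaranteed if he passes
```

The team's expected value says shoot; his wallet says pass. That conflict is the whole point of the hypothetical.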

This is obviously a contrived scenario, but the rise of efficiency as the Statistical King of basketball has a genuine impact on how players make decisions. There were 25 players in basketball last season with a 3pt% of 40% or higher, led by Luke Kennard, with 6.0 3PA per game and a 44.9% make rate. Surely Luke Kennard could have taken another couple 3-pointers every game without too big a hit to his percentages. Remember, the league-average points per attempt is around 1.06, so any three he jacks up with a greater than 35.4% chance of going in is good for his team. (Kind of. The more accurate way to phrase it is: any shot he takes that has a higher expected value than the shot his team would settle for if he DIDN'T take it is good for his team... but that's well beyond what publicly-available analytics can do; maybe some NBA front office has those numbers but I sure don't.)

But it's not necessarily good for Luke (whom I'm not trying to pick on). And basketball players, now more than ever, prioritize their own self-interest above their teams'. Especially now that everyone on social media thinks they understand Advanced Basketball Sabermetric Analytics because they use True Shooting Percentage (a metric which favors exactly two kinds of players -- high FT% guards and big men who take all their shots within 3 feet of the basket -- to the exclusion of all others) instead of field goal and three point percentage. If you're an NBA player, you're not just trying to take the best shot available, you're trying to take the shot that will work out the best for yourself. You're trying to take shots that let you raise volume stats without compromising efficiency.
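For reference, true shooting percentage as usually published (the 0.44 free-throw coefficient is the standard convention, a rough estimate of how many free throws end a possession). The two stat lines below are hypothetical, chosen to show how the formula rewards trips to the line:

```python
def true_shooting(points, fga, fta):
    """TS% = PTS / (2 * (FGA + 0.44 * FTA)) -- the standard published form."""
    return points / (2 * (fga + 0.44 * fta))

# Two hypothetical players, both shooting exactly 50% from the field:
# A: 8/16 FG plus 4/4 from the line -> 20 points
# B: 10/20 FG, never fouled         -> 20 points
print(f"{true_shooting(20, fga=16, fta=4):.3f}")  # ~0.563
print(f"{true_shooting(20, fga=20, fta=0):.3f}")  # 0.500
```

Same field-goal percentage, same points, but the player who gets to the line looks meaningfully "more efficient" -- which is exactly the kind of distortion described above.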

This is obviously a much more nuanced topic than the PER example, and one that I may yet expand on in future posts. But please notice the pattern: no one (including me) has actually done the analysis to determine whether true shooting percentage is a good metric. People just accept that it's the best way to determine efficiency (it's not), and that efficiency is the most important thing a player can strive for (it isn't), and draw conclusions based on those assumptions. There is no real rigor in basketball analysis. Even I don't have it in me to do a multiple regression to figure out what TS% actually measures, largely because I can't figure out what dataset I should do it on. (When I figure this out I might write another post specifically focusing on why NBA fans and analysts are wrong about shooting.)

This is the point: the 3pt revolution in basketball has already taken place, and has presumably reached an equilibrium -- that is to say, barring a change in the overall skill level of the league, we shouldn't expect to see that many more threes taken per game from here on out. But this revolution took decades to arrive, despite the math being obvious for over 30 years, and fans still misunderstand the right way to evaluate the decisions made by shooters. The NBA as a whole has largely optimized for the highest-value shots (this is why the points-per-attempt chart has met in the middle), but basketball analytics still celebrate players who shoot 45% from three, rather than acknowledging that such a high 3pt% means they should almost certainly be shooting far more. An exorbitantly high 3-point percentage, more than anything, thus becomes a measure of selfishness, prioritizing one's own statistics and contract above what's good for one's team.

Within the context of this article, the takeaway here is that even when basketball arrives at a correct decision (from a strategic perspective, if not an entertainment one), it does so slowly, over decades, and without truly understanding the constituent elements involved. I'd honestly be surprised if most NBA teams even realize we've hit a point-per-attempt equilibrium -- I'd bet that next year we'll see more three point attempts than we saw this year, even though statistically we probably shouldn't. (Actually, we should, but those three point attempts should come from high-percentage three-point shooters, which would come with a global increase in 3pt points per attempt, which we haven't seen in about 20 years... but I digress.) And fans, more than anyone, fail to understand what is good for basketball players to do, why it's good, and how to tell the difference between a "good metric" and a "bad metric." In basketball, that distinction doesn't even exist.


Part II: The Eye Test

What Is the Eye Test?

The Eye Test means watching a sport with your eyes (your deeplydeeplydeeply flawed human eyes) and drawing conclusions about that sport by relating what you see with your deep understanding of the sport that you have from your years of playing, coaching, scouting, and managing it professionally--

Oh, you don't have years of playing, coaching, scouting and managing the sport? Well, me neither. Hopefully that won't be a problem.


What Is the Eye Test Good For?

There are a handful of people in this world who are so smart, so knowledgeable about their sport, and so uniquely gifted at not succumbing to cognitive biases, that they can actually use direct observation of athletes to determine how they will perform in the future.

For instance, there's a football scout named Scot McCloughan who successfully scouted a ridiculous list of talent for the franchises lucky enough to sign him. (I talk about this here, in the paragraph starting "Here's a quick list of players you might find interesting.") I'll extend this to Patriots coach Bill Belichick as well -- anyone who can maintain a level of team dominance over 20 years is doing something right, scouting-wise.

And there's Jerry West. As I wrote about here (and in far more detail here), West has contributed to 30 Finals appearances and 14 championships for the teams he's been affiliated with (it's now actually 31 appearances and 15 championships, since the Warriors core he helped assemble from 2011-17 just won another title this year), far and away the most impressive resume in the history of basketball. West, and to a lesser extent a few other major front office figures (Red Auerbach and Gregg Popovich, primarily), are probably also uniquely gifted at recognizing and recruiting elite talent.

You could likely extend this to the elite college football coaches, whose job is largely recruitment-based (Nick Saban, the greatest college football coach of all time, had an NFL coaching record of 15-17, or .469). Then again, it's at least as likely that all college football coaches are working from more or less the same information and Saban just happens to have made by far the best recruitment infrastructure. That's a different skill entirely.

Besides them, there are probably a couple other figures in sports who are genuinely elite at using the eye test to recognize talent. Those are just the only ones I've noticed in my 18 years of watching pro sports semi-obsessively (have you seen my blog??).


What Is the Eye Test NOT Good For?

Everything else.

A dirty little secret of professional sports management is that most people, even at the very highest level, are really bad at their job. You've probably seen pro sports front offices make decisions that, even to you, look terrible, and as often as not you're right. That isn't because you're better at sports management than them, it's because almost everyone is bad at sports management. When there are exceptions, they are blindingly obvious to everyone (well, the Popoviches and the Belichicks are obvious to everyone; not as many people notice McCloughan and West) and they tend to dominate their sport for decades at a time.

Let's return, briefly, to Moneyball. There's a famous (and wonderful) scene from the movie where a group of old-school scouts are discussing the parameters they traditionally use to evaluate baseball players. There are the classics -- run, throw, field, hit, and hit with power, the famous Five Tools -- and then there are the other things. His girlfriend's ugly, but he's got an Attitude, he's good-looking, he's ready to play the part...

The scene is an exaggeration (I assume), but hardly much of one. This is the result of a century and a half of big-league scouts using the same eye-test parameters to evaluate players. This is why Moneyball was such a huge deal in baseball, and why it was so controversial.

But using statistical analysis to supplement or inform the Eye Test is hardly new. Henry Chadwick established the Box Score only one year (possibly two) after the first organized baseball association sprang up, and later devised such measures as batting average and ERA. Even in the 1850s and '60s, Chadwick and others recognized that statistics offered an objective element to baseball analytics that had been absent before. Baseball front offices took 140 years to catch up.

In basketball, things are not quite so bad. There is a general understanding that there is value in at least basic statistics and those advanced statistics that exist (although we've seen why these range from nonsensical to insufficient). But basketball fans nevertheless tend to use the Eye Test far more than e.g. baseball fans.


How Do People Use the Eye Test?

Basketball fans act as though the eye test is a major factor in how we should understand players. I'm going to get more into this in the next section, concerning how we draw opinions from those around us, but for now let me give you an example.

Very often, you will see basketball fans say something like "the statistics don't tell the whole story." You have to have SEEN a player do such and such a thing, seen the ease with which he controlled the court and manipulated the defense, seen the raw power and athleticism and speed and so forth. You cannot, they say, possibly get the full measure by reading a list of statistics; you must have seen the player play or you can't understand just how good they actually were.

This is how the eye test ruins sports discourse. Now, there is truth in the idea that you can't get the full measure of things from statistics; that is certainly the case, and not even the most sabermetrically-informed front office would ever attempt to scout players without ever seeing them in person. (For a quick example of why not: some pitchers have a tendency to tip their pitches, which minor league batters might not notice but major league batters will notice, take advantage of, and use to annihilate the poor pitcher. This is something that a pro scout should notice by watching the player with their trained, professional eyes, but that wouldn't necessarily show up in the statistics until opposing teams caught wind of it.)

But there are problems looming in any claim that you simply can't understand why a player was so good without having seen them. The assumption here is that by actually watching them, you'll notice things that the player did that contributed to their team's success that don't show up in the statistics. There are two issues with this assumption. The first is that if they're doing something that makes their team win, it should show up in the statistics! There should be some statistic, somewhere, that illustrates the thing the player did that was so valuable to his team. Otherwise there's no way to tell that they did anything at all. But I will admit that, basketball statistics being as limited as they are, only a relatively narrow range of all potential player impact is covered by available metrics. (E.g., as far as I know there are simply no really good individual defensive metrics in basketball. None.)

The second, far worse issue is that by watching a player with your eyes, you are far more likely to be impressed by things that have nothing to do with winning than you are to pick up secret contributions that the statistics miss. Say you watch Michael Jordan juke a defender, bowl over another, and rise above the 7-foot center for a HUGE dunk. Wow! The athleticism! Amazing! And now suppose you watch the 6'1 point guard Derek Fisher dribble off a screen and get an easy layup. Okay, whatever. But both of those plays are worth two points. Both constitute identical contributions to the team's winning efforts. Jordan's dunk is no more valuable to his team's success than Fisher's layup, but there's no doubt it will have a far greater effect on your opinion of those players. The Eye Test actually makes you worse at evaluating players, because it leads you to overvalue things that don't actually matter at the expense of the things that do.


Part III: Common Knowledge

The Genesis of Bad Takes

Now we have come to the Dark Place. This is the underbelly of sports "analysis." This is where almost every bad basketball opinion is germinated, gestates, and comes to fruition. In the past two sections I have illustrated (exhaustively...) why statistics and the eye test are insufficient ways to understand basketball, although one is clearly a better try than the other. In this section, we will finally come to the heart of the beast. Because, my friends, no one actually draws conclusions about players from watching them or from reading statistics. All basketball fans draw their opinions of the game from the same source. The Dark Hivemind of Basketball Bad Takes.

This is the anatomy of a bad take:

1. A basketball fan hears a claim, from a friend, a fan, or a professional "analyst".

2. They hear that claim again and again and again from other people of the same level of credibility.

3. They come to believe the claim.

4. The claim is challenged.

5. Either they dismiss the challenge entirely, or...

6. They draw "evidence" from statistics, from the eye test, or from the ethereal void, which they use to argue against the challenge.

7. Thus reassured, they rest... and in their minds they reinforce the claim as true.

8. They continue to repeat the claim, thus becoming part of the miasmatic Bad Take vortex that infected them with the claim in the first place.

This is not me being elitist about sports understanding. This is simply how human cognition works. And the only way around it is to become aware of it and actively work to recognize and compensate for these biases.

Needless to say, sports fans rarely do so. And sports media and analysts all tend to operate within the same set of assumptions about basketball, which leads to certain claims never being meaningfully challenged. Consequently, basketball has developed an intense dogma, in which certain claims are taken for granted among virtually all fans, and are not just unchallenged, but are impossible to challenge.

Take for instance Michael Jordan. The consensus among NBA fans, players, analysts, and just about everyone else on the planet is that Jordan is the GOAT, the Greatest basketball player Of All Time. I am not here to dispute that (or at least, not today...); I make no claim either way about its truth value. My question is this: why do people consider Jordan to be the GOAT?

There are many arguments, every single one of them falling into step #6 of our list up there; that is, all the arguments for why Jordan is the GOAT come from people trying to justify their pre-existing belief through evidence. These claims -- his MVPs, his DPOY, his scoring titles, his rings, his perfect Finals record, his Finals MVPs, his competitiveness, his athleticism, his sociopathic and violent leadership style, etc. -- are mostly reasonable, but they aren't part of any coherent methodology for evaluating players. They're post hoc rationalizations people come up with to justify why they believe the thing they already believe.

So why do most people believe that Jordan is the GOAT? Well, one possible answer is that those arguments are simply right. That if you figure out what the contributing factors of a GOAT ought to be and combine them in a methodical fashion, Jordan would come out on top of the list. But, spoiler alert, this isn't why. (I once wrote a very tongue-in-cheek article attempting to reverse engineer a set of parameters that actually would produce the top 10 list that most NBA fans tend to adhere to; needless to say, the resultant metric is borderline incoherent.)

But there is a better explanation for why Jordan is the consensus GOAT, and it has very little to do with basketball.

Jordan was the flashy, athletic, charismatic, wildly popular, high-scoring face of a league experiencing the first real international boom of its existence. If Magic and Bird saved the NBA in the '80s, Jordan turned it into a commercial juggernaut. His brand basically invented modern sneaker culture and turned Nike from a niche corporation to the iconic sportswear company it is today. This is also why Jordan, despite making less money in his career than such luminaries as Jalen Rose and Mike Bibby, and finishing at just over a fifth of LeBron James's record career earnings, was the first billionaire NBA athlete.

So let's return one last time to that model of how bad takes come about. Where, exactly, does the claim that Jordan is the GOAT come from? Quite simply, it comes from the PR wet dream that was Jordan in the '90s, and the NBA's incessant pushing of Jordan as the face of the league and the sport worldwide.

But let's move beyond Jordan. What else -- and who else -- spawned the NBA's shockingly unassailable dogma?


Bill F*cking Simmons

In 2009, Bill Simmons wrote a book called The Book of Basketball. In it, he provides a stunningly thorough retrospective on the entire history of the league, from its inception to the arrival of LeBron James and co. in the latter half of the '00s. Please note that the word I used there was "retrospective" and not, say, "critical analysis."

Bill Simmons is a fan. He got famous as a sportswriter because he wrote from the perspective of a fan; namely, a hardcore Boston sports fan. Celtics, Patriots, Red Sox, and presumably Bruins. This bias is omnipresent in his writing; a while back, as part of a series of articles I haven't actually finished, I analyzed Simmons's choices for an all-time "Wine Cellar Team" and explained some of the ways in which the biases I've identified in this article play into his thought processes.

But the level of basketball discourse is such that NBA fans do not apply a critical analysis to Simmons's admittedly impressive grasp of NBA history and dynamics before they internalize his opinions as fact. Rather, (imagine me tapping the model from the last part), they hear his opinions and uncritically accept them as Gospel Truth about the NBA. Or, more likely, considering we're a decade or so on from Simmons's heyday, they probably hear other NBA fans restating Simmons's opinions and internalize those.

Make no mistake, Simmons's grubby little fingerprints are omnipresent throughout about half of all NBA bad takes. Among the ideas that Simmons's biased fan-brain has concocted and spread like a virus through basketball discourse are:

1. A virtually uncontested top-ten all-time list of players. The big difference is that Simmons, despite his bias, includes Kobe at #8 (later #9) in his top-ten list, while most modern NBA fans, apparently far more addled by bias than Simmons himself, drop Kobe from the list.

2. A myth that Wilt Chamberlain was a selfish basketball player who cost his team championships, in contrast to Bill Russell, who won championships through unselfishness. This, of course, fails to recognize that Russell played with, no exaggeration, TWELVE Hall-of-Fame teammates.

3. Some weird unfounded grudge against Kareem Abdul-Jabbar, which is probably part of the reason that no one makes much of an argument that he was better than Jordan even though he's one of the few players with an obvious case.

4. A general bias for Celtics players and against Lakers players that likely contributes to the utter loathing of Lakers players and fans in NBA spaces.

And likely many more. But I don't want this to turn into an evisceration of Simmons personally. I think he's wrong about a few things, but he's right about a bunch of others, and he's one of the more thoughtful and knowledgeable analysts in basketball.

So let's talk about...


OTHER Analysts in Basketball

So football analysis is pretty mediocre, largely because football is an immensely complex game that is far more difficult to parse into discrete roles and responsibilities than other sports. The only people who understand football on a deep level are high-level players and coaches. Occasionally you'll see guys like that in commentary roles, and it's amazing (go watch some of Bill Belichick's more thoughtful press conferences to see what actual football understanding looks like). And there are a few places that attempt to do (generally pretty good) higher-level analyses of football performance, such as Pro Football Focus and Football Outsiders.

Baseball analysis is on another level. The rise and popularization of sabermetrics have led baseball to become the best-understood sport around, and analysis is commensurately high-quality. You can get great analysis in all kinds of places: statistical analyses, strategic breakdowns, genuinely good retrospectives. It's not universal, but it's far better than the other sports we're discussing here.

Basketball analysis... Well. There exist sources that attempt to break it down play-by-play, and some of them are even kind of good. (Many more are not, and most people, of course, lack the understanding to distinguish between them.) There are a few people out there trying to build far more rigorous statistical analyses of basketball, and I've seen a couple of pretty charts that try to break down, e.g., player shooting ability from various spots on the floor, with defenders in various locations... Perhaps these analyses will one day answer the question I raised way back in the 3pt% part of this article, about the expected value of an attempted shot versus a pass.

Indeed, there does seem to be some awareness, on the margins of basketball analytics, of the right questions to ask. This article, for instance, attempts a preliminary shot-selection metric with a few major flaws (e.g. it doesn't account for defender positioning relative to the shooter, a major factor in shot outcomes). And this article posits a simplified mathematical model of shot selection. If these analysts aren't yet finding the answers, they are at least beginning to ask the right questions.
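To make the shoot-versus-pass question concrete, here's a toy expected-value sketch. Every probability in it is a made-up illustrative number, not measured data; a real model would have to estimate these from tracking data, per shooter and per defensive context:

```python
# Toy expected-value comparison for the shoot-vs-pass decision.
# All probabilities below are illustrative assumptions, not real data.

def shot_ev(make_prob: float, points: int) -> float:
    """Expected points from attempting a shot."""
    return make_prob * points

# Option 1: the ball-handler takes a contested three.
contested_three = shot_ev(make_prob=0.30, points=3)

# Option 2: pass to an open corner shooter. The pass itself can fail
# (turnover, deflection), so discount by a completion probability.
pass_completion = 0.95
pass_and_shoot = pass_completion * shot_ev(make_prob=0.40, points=3)

print(f"Contested three EV: {contested_three:.2f}")  # 0.90 expected points
print(f"Pass-and-shoot EV:  {pass_and_shoot:.2f}")   # 1.14 expected points
```

Even this crude version shows why the question matters: with plausible-looking inputs, the pass is worth roughly a quarter of a point more per possession, which is enormous over a full game.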

Mainstream basketball analysis, however, is a nightmare. I hardly watch basketball anymore, in part because I find the modern state of the sport generally unpleasant to watch, but also because NBA analysts are among the worst in any sport. This incompetence seeps down into the mainstream and colors basketball discourse in a way that simply doesn't happen in other sports.

There's a question I keep wondering about but can't find a convincing answer to: Why, exactly, is basketball so dogmatic? I've blamed a lot of people and things for this, but in truth every sport has its bad analysts, its mediocre stats, and its old-school eye-test adherents. It's just at an extreme in basketball, and I've never understood why. I tend to assume it's something to do with the way basketball discourse is structured and performed, but I honestly don't have a clear answer.

There is one more thing I want to cover, and then, perhaps, we can be done.


Conclusion: On Having Been There

I didn't watch Michael Jordan play. Or Magic Johnson, Wilt Chamberlain, Bill Russell, Larry Bird, or even Shaquille O'Neal in his prime. I've seen games here and there (more from Magic and Bird, fewer from Russell and Wilt), but I didn't start seriously watching basketball until 2008, which gives me 15 years of intermittent experience with the sport, but also means I've missed the first 80% of the NBA's history.

Here is one reason people listen to Bill Simmons: all else aside, he's been watching basketball since the '70s and has a deep well of knowledge about the sport. He saw the Havlicek Celtics of the '70s and the Magic-Bird duels of the '80s. He watched the NBA grow from a niche league on the verge of collapse into the third-biggest sport in the United States and a worldwide draw. Few can rival that store of knowledge and experience, and he is uniquely good at expressing those memories, ideas, and vibes in a way that's genuinely fun and entertaining to read. (Do not let my incessant criticism fool you; I've read a huge amount of Simmons's writing, and as a fan-oriented sportswriter I think he's pretty much unrivaled... Or at least he was, back when he actually did sportswriting.)

Of course, from an analytic perspective, this only has value if you think the Eye Test is a particularly important part of player analysis, which I don't. But there's a certain romanticism to it, a certain authority, and even a respect that the Old Heads of basketball merit.

There is a common argument in any context, but especially in sports and doubly so in basketball, that goes like this: if you weren't there, if you didn't see it, you can't possibly understand what it was like. No mix of stats and highlights, or even going back and watching games, can approach the feeling of having seen people play who were unlike anyone before or since. Beyond his commercial and PR appeal, the reason Michael Jordan was so special to so many fans was the way he made them feel. Everything else -- the awards, the stats, the achievements -- is secondary. Nothing measures up to having watched him play and having felt that magic in the air.

Of course, I didn't watch him play. But then, neither did 40%+ of NBA fans, and a much higher percentage of those who engage in Basketball Discourse. (I did the math for these numbers; the sources I used are here and here.) About 60% of NBA fans never saw Magic play, and well over 90% never saw Wilt. The eye test discourse is gatekept by the relentless march of the decades.
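For the curious, the back-of-envelope math looks roughly like this. The age distribution below is a hypothetical stand-in, not the survey data I actually used, so the outputs are illustrative only (retirement years used: Jordan's final retirement in 2003, Magic in 1991, Wilt in 1973):

```python
# Back-of-envelope: what share of today's fans were too young to have
# watched a given player? The age distribution is a made-up stand-in,
# NOT real survey data.

# (fan's age today, share of NBA fans at roughly that age)
fan_age_dist = [(18, 0.35), (30, 0.30), (45, 0.20), (60, 0.15)]

def share_too_young(retirement_year, current_year=2022, viewing_age=10):
    """Share of fans who hadn't reached viewing_age when the player retired."""
    years_since = current_year - retirement_year
    # A fan aged `age` today was (age - years_since) at retirement.
    return sum(share for age, share in fan_age_dist
               if age - years_since < viewing_age)

for name, year in [("Jordan", 2003), ("Magic", 1991), ("Wilt", 1973)]:
    print(f"{name}: {share_too_young(year):.0%} of fans never saw him play")
```

Swap in a real fan-age distribution and you get numbers in the same neighborhood as the figures quoted above; the point is that the calculation itself is trivial once you have the age data.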

But the notion that it should be, that we should leave the task of basketball analysis only to the boomers and xoomers, is absurd. Yet it is with the tools I have mentioned, and those alone, that we strive to understand basketball. And it is through the limitations of those tools that we experience the profound frustration of failing to do so.