

The Yankees 2018 projections shouldn’t be considered predictions


Expected value tables don’t always translate into real world results

Photo: League Championship Series Game Seven, New York Yankees v Houston Astros (Elsa/Getty Images)

Imagine a standard die. That’s a six-sided cube, no Dungeons and Dragons version here. If you cast it a single time, what value would you expect to roll?

Simple probability tells you that each side has a 1/6 chance of landing face up, and you can find the expected value by multiplying each of the six possible values by its probability and adding those products: (1 x 0.1667) + (2 x 0.1667), and so on. The final sum is 3.5; in other words, the expected value of rolling a fair standard die one time is 3.5.
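That arithmetic is simple enough to check yourself. Here’s a quick sketch in Python:

```python
# Expected value of one roll of a fair six-sided die:
# sum each face value times its probability (1/6).
faces = range(1, 7)
expected_value = sum(face * (1 / 6) for face in faces)
print(expected_value)  # 3.5
```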

Of course, you can’t actually roll a 3.5. Any single roll will result in a value different from the expected value, whether by a small amount or a large one. That doesn’t change the expected value of a single roll; it just acknowledges that real-world events don’t fit snugly into mathematical models.
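A quick simulation, a sketch using Python’s standard `random` module, shows the same idea: no individual roll lands on 3.5, but the average of many rolls drifts toward it.

```python
import random

random.seed(0)

# Ten individual rolls: none of them will ever be 3.5.
rolls = [random.randint(1, 6) for _ in range(10)]
print(rolls)

# But the average of many rolls converges toward the expected value.
many_rolls = [random.randint(1, 6) for _ in range(100_000)]
average = sum(many_rolls) / len(many_rolls)
print(round(average, 3))  # close to 3.5
```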

One of the most exciting parts of the baseball offseason is the release of various projections for the upcoming campaign. Matt Provenzano already broke down the recent ZiPS projections for the Yankees, and I’ve used the FanGraphs projected standings to evaluate strengths of schedule. With projection season in full swing, it’s useful to remember that projections are different from predictions, and essentially act as expected value calculations.

ZiPS, for example, works with a variety of variables, including player service time, past performance, recent changes or adaptations (weighing the most recent season most heavily), and aging curves. A single value isn’t entered in any of the categories; rather, a range of values is run. This generates a list of potential values, and as we saw above, multiplying each potential value by its probability (i.e. the share of times the value appears in the set) generates our expected value, in this case the totals you see on a ZiPS projection.
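To illustrate the idea with made-up numbers (these are invented for illustration, not actual ZiPS internals), here’s how an expected value falls out of a set of simulated outcomes:

```python
from collections import Counter

# A hypothetical set of simulated wRC+ outcomes for one player.
# (Illustrative numbers only, not real ZiPS output.)
outcomes = [110, 120, 120, 130, 130, 130, 140, 140, 150, 160]

counts = Counter(outcomes)
n = len(outcomes)

# Each value's probability is the share of times it appears in the set.
expected = sum(value * count / n for value, count in counts.items())
print(expected)  # 133.0
```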

Other projection systems differ slightly, by grouping players into classes based on historical comps, or aggregating “process” data like exit velocity. In the end, though, they all follow the same basic formula in generating expected values for various statistical categories.

So, when we look at Aaron Judge’s 2018 projection on FanGraphs, specifically Depth Charts (a combination of ZiPS and Steamer), we see a projected four-win season, with 38 home runs and a 133 wRC+. Sterling numbers to be sure, but after Judge was arguably the best player in baseball in 2017, many Yankees fans are quick to declare the projection systems faulty. They’re just not, though.

You can compare Judge to historical comps, and almost all rookies see a decline in their second full season. You can look at his past two seasons, one outstanding and one terrible. The more recent, outstanding season is weighted more heavily, true, but that’s why his numbers are so strong to begin with. Expected values by nature represent roughly the median of possible outcomes, so anyone looking at Judge’s 133 wRC+ projection should note that about 50% of possible outcomes are HIGHER than that, which would still line him up for an extraordinary season.
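To make that concrete, here’s a sketch with a hypothetical outcome distribution (the numbers are invented for illustration, not pulled from any projection system): the headline projection sits at the median, so half the simulated seasons beat it.

```python
import statistics

# Hypothetical distribution of simulated wRC+ seasons for a player.
# (Numbers invented for illustration, not from any projection system.)
simulated = [100, 110, 120, 125, 130, 136, 140, 150, 160, 175]

projection = statistics.median(simulated)  # the "headline" number
above = sum(1 for season in simulated if season > projection)

print(projection)              # 133.0
print(above / len(simulated))  # 0.5 -- half the outcomes beat the projection
```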

Gary Sanchez is in a similar boat, projected for a 118 wRC+ and 3.9 wins. Again, maybe you think his true talent level is higher than that, but that doesn’t mean his projection is wrong. Projections weigh recent trends, and Sanchez did decline in his second season in the majors. He still had an excellent year, and a certain amount of regression was inevitable, but trends are what they are. And, like Judge, roughly 50% of Sanchez’s possible outcomes sit higher than his official projection.

Projections aren’t an exact science, of course. They will be off to some degree or another, and they aren’t particularly great at anticipating things like injuries or breakouts. The math is solid, though, and it should be, since the models are refined every single season. Just as with a die, none of that changes the expected value of a single roll; it just acknowledges that real-world events don’t fit snugly into mathematical models.