When I redesigned my S&P+ ratings last year, I did it with a future redesign in mind. The new offensive and defensive ratings included aspects of field position and finishing drives, but those aspects were pretty broad and clearly included special teams components.
So from the day I unveiled the ratings last year, I professed the further need to begin stripping special teams from offense and defense. In a roundabout way, a good punter was improving the offensive ratings (since it meant the offense created better field position for the defense) and a good return game was improving the defensive ratings.
Between writing a book and undergoing the sixth year of offseason previews, I didn't exactly leave myself a lot of time for exploration here. But over the past couple of weeks, I've been able to revisit an idea I had back in January.
Using the success rates discussed here for punts, kicks, punt returns, and kick returns, and using the net value method for field goals, we can begin to piece together what might eventually become a Special Teams S&P+ of sorts.
Since not every special teams category carries the same amount of weight -- there are usually more punts than kickoffs, and you don't have a return on every punt or kickoff -- we will attempt to weight each piece of special teams according to its importance.
I started with the weights I referenced in this piece and started to tinker with them based on what made the ratings correlate more strongly with the overall S&P+ ratings (since that's where I'll be taking this, and since, again, pieces of special teams are already in the ratings).
I ended up with the following weights: 44% place-kicking (that surprised me), 24% punting, 14% kickoffs, 14% kick returns, 3% punt returns. That's what ended up with the best correlations, but that might not be where I end up. At first glance, that feels way too high for field goals and kick returns. For now, though, it works.
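For illustration, here's a minimal sketch of how those weights would combine component ratings into a single number. The component names and the normalization step are my own illustrative assumptions, not the production code; since the listed weights sum to 99% after rounding, the sketch normalizes by their total:

```python
# Illustrative sketch only: combine per-component special teams ratings
# (assumed to already be on a common points-per-game scale) using the
# weights from the text. Component names are hypothetical.
WEIGHTS = {
    "place_kicking": 0.44,
    "punting": 0.24,
    "kickoffs": 0.14,
    "kick_returns": 0.14,
    "punt_returns": 0.03,
}

def special_teams_rating(components):
    """Weighted average of component ratings, normalized because the
    published weights sum to 0.99 after rounding."""
    total = sum(WEIGHTS.values())
    return sum(w * components.get(name, 0.0)
               for name, w in WEIGHTS.items()) / total
```

A team that is exactly average (0.0) in every component comes out at 0.0 overall; a team one point above average everywhere comes out at 1.0.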
Looking at kicks and returns as simple yes/no successes seemed to ignore an aspect of explosiveness, but I was wary of going too far down the explosiveness road because we're already taking a small-sample data set and making it minuscule. Last week, I tinkered a little bit with ways to incorporate the magnitude of successful returns, but it didn't make the numbers any more predictive.
So for now, we're sticking with an efficiency-only model. And until a) I collect more special teams data and b) it tells me to make a change, I’m sticking with the weights above regarding the different pieces of special teams.
So basically, the ‘beta’ version of Special Teams S&P+ that I explored back in January is still the best one I’ve been able to come up with. What happens if we incorporate this figure into the overall S&P+?
To explore this, I first took the year-end special teams ratings, gave them about 10% weight in the overall S&P+ equation, and looked at the results. It was amazingly explanatory. And while this is massive retrofitting, it raised S&P+'s retrospective performance against the spread (taking the year-end S&P+ ratings and applying them to 'predict' the games that had already happened) from 55% all the way to 59%.
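A minimal sketch of that blend, assuming a straight linear combination (the actual equation may combine the pieces differently):

```python
def overall_sp_plus(off_def_rating, st_rating, st_weight=0.10):
    """Blend the existing offense/defense rating with the special teams
    rating, giving special teams roughly 10% weight. The linear form
    here is an assumption of this sketch, not the published formula."""
    return (1.0 - st_weight) * off_def_rating + st_weight * st_rating
```

So a team rated +10.0 on offense/defense with a +2.0 special teams unit would land at +9.2 under this sketch.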
I was pretty intrigued by this, but it didn't really mean anything yet. S&P+ is intended to be predictive, not simply explanatory. So I needed to simulate a season, from Week 1 until the end of the year, to see how much of a difference this could really make. That would inevitably mean lighter weights in the overall formula (since the sample sizes really are tiny until at least midseason). Would it make a difference in S&P+'s performance versus the spread?
Yes. A bit. Re-simulating 2015, I was able to bump S&P+'s performance vs. the spread from 52% to 54%. Re-simulating 2014 (which was a little bit blurry, since I used preexisting 2014 projections ... which were derived in a completely different way, for a completely different S&P+), it went from 50% to ... 50%. (2014 wasn't a great year for S&P+, with more teams than normal making midseason shifts in quality.) I will be attempting to simulate at least 2013 this week, but from all indications so far, adding a Special Teams element to S&P+ a) doesn't make the rating any less accurate at the very least (and might make it more accurate), and b) makes it that much more descriptive.
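To make the "performance versus the spread" comparison concrete, here's a hedged sketch of the grading step: given a predicted margin, the closing spread, and the actual margin (all from the same team's perspective), count how often the model's lean landed on the right side. Everything here is illustrative; the real exercise obviously involves rebuilding the ratings week by week before making each prediction.

```python
def ats_hit_rate(pred_margins, spreads, actual_margins):
    """Fraction of graded games where the predicted lean against the
    spread matched the actual result. Pushes and games with no lean
    (prediction exactly equal to the spread) are skipped."""
    hits = graded = 0
    for pred, spread, actual in zip(pred_margins, spreads, actual_margins):
        if pred == spread or actual == spread:
            continue  # no lean, or a push: not graded
        graded += 1
        if (pred > spread) == (actual > spread):
            hits += 1
    return hits / graded if graded else 0.0
```

With two graded games where the model gets one right, this returns 0.5; a 54% season means the model's side of the line was correct in 54 of every 100 graded games.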
I've always aimed for both with this system -- the ability to make strong predictions and the ability to tell stories about a team -- and I am confident enough at this point to add Special Teams S&P+ to the mix. Hopefully the increased utility makes up for the fact that I am a damn tinkerer, and my numbers change every damn year.
I am not updating the S&P+ pages at Football Outsiders just yet (will soon), but here are *new* S&P+ ratings for 2015 and 2014 based on the addition of Special Teams S&P+.
2014 and 2015 S&P+ (with special teams)
(This is an embedded Google Doc, so click between the 2014 and 2015 tabs to see each year’s new ratings.)
Note: Special teams ratings are presented in terms of what they add or take away. So, Florida State, No. 1 in Special Teams S&P+, basically benefited by about 1.5 points per game from special teams while Charlotte, No. 128, lost about 1.6 points per game. As you can see, the range here isn’t broad because special teams don’t carry much weight. Still, that’s an extra three points per game. And over two years of projections, it flipped about one of every 100 games to S&P+’s advantage from a Vegas perspective.
One other note: Because field position and finishing drives now carry different weight in the offensive and defensive ratings, those numbers have changed as well.
Any major changes here? Nope. Good teams were still good, bad still bad. (And Washington’s still the “WTF?” standout of 2015.) But you do see a few pretty large shifts in the middle of the rankings, where a one- or two-point swing can move you up or down 10-15 spots.
One of the major tests of a given rating’s stability is to compare midseason ratings to end-of-season ratings. Obviously more data is going to change your ratings here and there, but a reliable measure will still have reached some solid conclusions by midseason.
Looking at 2014 (primarily because of how strange that season appeared to be), here are the correlations between teams’ Week 7 ratings and their end-of-season ratings:
- Offense: 0.922
- Defense: 0.884
- Special Teams: 0.758
Because of the sample size issue, I wasn’t sure what to expect here, but I figured special teams would be lower. It is lower, but tolerably so.
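For reference, those stability figures read like standard Pearson correlations between each team's Week 7 rating and its year-end rating (an assumption on my part). A self-contained sketch, assuming the two lists are aligned by team:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```

A value of 1.0 means the midseason ordering perfectly predicts the year-end ordering; the 0.758 special teams figure means Week 7 ratings capture most, but noticeably less, of the year-end picture than offense (0.922) or defense (0.884).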
Anyway, there you go. That’s the S&P+ that you have to look forward to this coming season ... barring any “holy crap” revelations when I simulate 2013 this week.