
Logo's Learner: Looking at Recruiting Rankings

Once the football season winds down and the calendar turns, the focus of the college football world shifts to National Signing Day on the first Wednesday in February. Along with the hype surrounding individual players and teams, there is a noticeable uptick in articles and discussions on the validity of the recruiting rankings and the system that creates them.

If you are reading this, then you likely also read Ian Boyd’s take on the system, where he focuses primarily on the areas and types of recruits that the system often overlooks. As is common with arguments against the system, Boyd points to teams that seem to outperform their annual recruiting rankings on the field. These include Baylor, TCU, Michigan State, and per the comments, Wisconsin.

Instead of relying on a handful of outliers to prove a point, I decided to put the rankings to the test with a larger sample of games.

Methodology

I started with the 2014 season, looking first at the SEC, then moving through the other Power 5 conferences. I did not include any Group of 5 schools in this because 1) I didn’t have a lot of time and 2) only one (Boise State) even made the final CFP ranking.

I used the 247Sports Composite rankings for the 2010-2014 classes, taking the average recruit rating of each class and applying a weight to each class to produce a five-year score. The average rating is more descriptive than a star ranking because it is a single numerical value, while a star ranking covers a range of players. The applied weights are in the table below.

[Table: weights applied to each recruiting class, 2010-2014]

The thought process behind the weights is pretty simple. Players who have been in a program for three or four years tend to be the biggest contributors. Few players from any class make it to a redshirt-senior year, and first-year players who are redshirting do not contribute at all.
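To make the scoring concrete, here is a minimal sketch in Python of the five-year calculation. The class averages and weights below are made-up placeholders standing in for the values in the table above, not the actual numbers used here.

```python
# Sketch of the five-year weighted score for the 2014 season.
# The class averages and weights below are hypothetical placeholders.

# Average 247Sports Composite recruit rating per class (hypothetical)
class_averages = {
    2010: 85.2,  # would-be RS-SRs; few from this class remain
    2011: 87.1,  # 4th-year players, among the biggest contributors
    2012: 86.4,  # 3rd-year players, also heavy contributors
    2013: 88.0,  # 2nd-year players
    2014: 84.9,  # true freshmen/redshirts, contribute least
}

# Weight each class by its expected 2014 contribution (hypothetical)
weights = {2010: 0.10, 2011: 0.25, 2012: 0.30, 2013: 0.25, 2014: 0.10}

five_year_score = sum(class_averages[y] * weights[y] for y in class_averages)
print(f"Five-year score: {five_year_score:.2f}")
```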

Next, I ranked the teams within each conference and compared these rankings with the actual results for 2014.

NOTE: Let me link another Football Study Hall article from last year that found higher-rated classes won nearly two-thirds of the time over multiple seasons.
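Here is a minimal sketch of how that comparison can be scored: for each in-conference game, the team with the higher five-year score is "predicted" to win, and accuracy is the share of games the prediction gets right. The team names, scores, and results below are hypothetical.

```python
# Sketch of the comparison step: predict that the team with the
# higher five-year recruiting score wins each conference game,
# then measure how often that prediction was correct.
# All scores and results below are hypothetical.

scores = {"Team A": 89.1, "Team B": 87.5, "Team C": 85.0}

# (winner, loser) for each in-conference game (hypothetical)
results = [
    ("Team A", "Team B"),
    ("Team B", "Team C"),
    ("Team C", "Team A"),  # an upset by this method
]

correct = sum(1 for winner, loser in results if scores[winner] > scores[loser])
accuracy = correct / len(results)
print(f"Predicted {correct}/{len(results)} games ({accuracy:.0%})")
```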

Results

[Table: five-year recruiting rankings vs. 2014 results for each Power 5 conference]

As you can see, in three of the five conferences the champion was the top-ranked team by this method. Even in the exceptions, the champion sits in the upper tier.

Much has been made about how Wisconsin outperforms their recruiting level, and a first glance at this chart might seem to show that. However, they faced only three Big Ten teams that outrecruited them, and they went 1-2 in those games. The rest of their schedule was filled with teams below them in this ranking.

Similarly, many point to TCU and Baylor as examples of teams that prove the recruiting system is greatly flawed. Once again, though, TCU went 5-1 against teams rated below them, while Baylor went 3-1, meaning that both teams were helped by facing teams they regularly outrecruited. In fact, the Big 12’s win percentage is hurt not by TCU and Baylor beating teams above them, but by Texas and Oklahoma failing to live up to their rankings (5-4 and 4-4, respectively).

Over the course of the 2014 season, here are the results for each conference:

[Table: 2014 in-conference prediction results by conference]

Overall, this method accurately predicted two-thirds of the in-conference games between Power 5 schools. While this does not stack up to computer formulas (it would come in at 71st out of 73 according to Prediction Tracker), it certainly shows a significant positive correlation between multi-year recruiting rankings and on-field results.

For a system that uses only recruiting rankings, with no adjustment for non-qualifiers, transfers, or injuries, accurately predicting two out of every three games is a very good result. This is just one method, but it is yet another one that uses numbers and data to show that recruiting rankings do seem to matter.