Revised July 21, 2002

Several years ago I was asked by some loyal followers of my rating system to put into words an exact description of my views regarding the National Championship in College Football. It was not an easy task, as the process itself is very complex. Yet I do feel it is important for followers of any point of view, whether religious, political, or philosophical in its relationship to sports, to be well informed about exactly what it is they are following. As a pollster for over 30 years, I certainly recognize that in this new age of "instant information" the sports public is becoming more and more aware of the options available to them regarding college football polls and rating systems. Thus, I felt it important for people to understand my views on this subject. I hope you find it to be informative, but even more importantly, thought provoking. Happy reading!


The sophisticated game of college football that we know today scarcely resembles that first game played in 1869. Although early games were physical, even brutal by some accounts, they were very simple and one dimensional by today's standards. In the early years teams just lined up and used brute strength to move the ball forward. Today we have complex offensive and defensive schemes that make the mental part of the game just as important as the physical. But the simplicity of the game in the early years was not without controversy. Take determining a National Champion, for instance. In 1869 there were only two games played. Rutgers beat Princeton 6-4, and in a rematch Princeton beat Rutgers 8-0. So, who do you think won the inaugural National Championship? As you can see, things are not always as simple as they seem.

The popularity of college football spread widely in the early 1900's. What began in 1869 with two teams grew to almost 90 major teams by 1920. The NCAA was founded in 1906 to organize and regulate the sport, and points for scores, size of the field, penalties, etc., were all standardized by 1912. But the NCAA failed to address the one issue that burned in the hearts and minds of players, alumni, and fans of all ages: the question of "Who is No. 1?" Perhaps if they had addressed it 100 years ago we would not have the controversy that we have today! Americans thrive on competition. There is only one Grand Champion bull at the County Fair, one Best of Show at the AKC Dog Show and one Blue Ribbon Apple Pie, so there has always been a need for a college football poll. The problem lies in the fact that there has always been more than one poll and they don't always agree.

The first widely recognized College Football Poll did not originate until 1926. It was a mathematical rating system developed by Frank Dickinson, a professor of economics at the University of Illinois. Later, an onslaught of pollsters came onto the scene all prepared to crown college football's best. The list was staggering: 1927, Dick Houlgate; 1929, Dick Dunkel; 1930, William Boand; 1932, Paul Williamson; 1934, Edward Litkenhous; and in 1935, Richard Poling. All of those gentlemen had various mathematical formulas for determining a national champion. It's obvious that originally, and continuing through the 1920's and 30's, mathematical formulas were the norm for determining who should be declared the Nation's No. 1 team.

All of that changed in 1936 when the Associated Press (AP) began publishing a poll voted on by a national board of sportswriters and broadcasters, and because of its national distribution, their word instantly became gospel. The United Press International (UPI) joined the hoopla in 1950 by soliciting votes from a board of coaches. Their theory, I suppose, was that coaches knew more about football than writers and broadcasters.

It was bound to happen sooner or later, but it wasn't until 1954 that the AP and UPI disagreed on who the No. 1 team in the land should be. The AP chose Ohio State, and UPI favored UCLA. Both were undefeated, as was Oklahoma. Ever since that fateful day in 1954 when the two "biggies" couldn't agree, the controversy of "Who's No. 1?" has raged on from the Golden Dome to the Tiger Den, from the Coliseum to the Swamp, from Happy Valley to Death Valley and everywhere in between.

Eventually, everyone and his dog got in on the action: The New York Times, Sporting News, Football News, Sports Illustrated, Sears, McDonald's. Heck fire, there are more polls than there are bowls and God knows we've got more than we need of both. Over the years there have been many fine rating systems developed, and with the advent of the Internet you may examine all of them by simply clicking a button. Check out David Wilson's Web Library of College Football Polls. Among those listed you will find Hermann Matthews, who began his poll in 1966, and Jeff Sagarin, who began in 1978. Those gentlemen, along with myself and Kenneth Massey, Dr. Peter Wolfe, and Wes Colley, are the current recognized leaders in the mathematical poll process. Although the Dunkel Index is no longer part of the BCS, it continues to be one of the most respected polls in America.


In 1968 I embarked on the path of experimenting with different mathematical formulas. Two years passed before I was able to create a blueprint with which I was comfortable. That wasn't easy. I had a difficult time deciding between what I considered to be two basic approaches. I like to define them as mathematical and personal choice.

A pure mathematical poll is power-based and revolves around a point-spread projection for the upcoming week's games. This is the kind of system familiar to us through computer rankings. This type of system does take emotion out of the decision-making process, and at the end of the season its No. 1 team will have a very high percentage chance of beating any other Division 1-A team. Impressive. Impressive, but not always fair in head-to-head competition, which is one of my main concerns with any rating system.

A personal choice poll is just that, it is based solely on someone's personal opinion. In the 1940's and 50's individual personal choice polls were somewhat popular. Sports editors of large newspapers would sometimes announce a Top 10 college football poll at the end of the regular season.

The AP, UPI, USA Today Coaches Poll and most sports-related magazine polls today are all a form of personal choice. The choices are just grouped together to form a larger whole, but the source is still an individual vote and it boils down to being a personal choice. These types of polls are very familiar to us all, as their impact on the sport of College Football over the years has been tremendous. The AP and the USA Today Coaches Poll are perhaps the most widely used polls in our society today, and rightly so. They both have a long, respected history with the sport. The problem here, if there is one, is that a personal choice poll can be too emotionally based and motive-oriented. We need enthusiasm in college football, but at times, emotion can override objectivity.

Personal choice polls are fun and exciting. I can assure you that on more than one occasion, while still in school, I raced to get a Tuesday paper to read the polls. They can really get the blood boiling at a rival institution, and remember, everyone is entitled to an opinion. Yet, personal choice polls, like mathematical polls, are not always logical. Many times, I have witnessed a team play a great game against a Top Five opponent, lose by a slim margin, and then drop drastically in the polls. If #7 Clemson loses to #1 Florida St. 20-17, I don't think Clemson should be dropped out of the Top 10. Over the years I've seen it happen numerous times.

You can see the dilemma that was created. I wanted to be fair, but I wanted to be logical as well. What's a guy to do? I solved it by uniquely combining the two. My system is a mathematically based power rating that is, I believe, through a series of checks and balances, as logical and fair as it can be within the boundaries that must be in place to assure objectivity. The Billingsley Report, where power meets logic!

This system is not designed for gambling use. If a person tried to gamble with this information without understanding its functions, they would fail miserably, because you cannot look at these figures and determine a point spread. For instance, with a #1 ranked Georgia rated at 300 playing a #10 ranked Florida rated at 270, it looks on the surface as if Georgia would be favored by 30 points, since that is the way most systems are designed. Not so in my system. Georgia would not be favored by 30. A point spread can be determined through another step in math, but I use it only as a "performance projection" to determine the strength of the opponent. I never have and I never will support gambling in College Athletics.



The first thing I want to say is the same thing I have always said about my rating system. I'm not here to prove to anyone that my work is better than anyone else's. I have a very healthy respect for a lot of rating systems. This formula is just an extension of my point of view, and they come a dime a dozen. I will say I take my work very seriously. I have a passion for College Football and I have done a tremendous amount of research, more than anyone I know. All that hard work, experience, passion, and dedication has gone into the creation of this formula. I am not a mathematician; I am not a computer geek. I am a devout College Football Fan, and have been since I was 7 years old. My formula is 100% computer generated and it treats all teams equally. I wrote the program myself, and it's not written using fancy math equations, just simple addition, subtraction, multiplication and division. It's the RULES that make the system unique, and the rules are MY RULES, rules that make sense from a fan's perspective, rules that come from 32 years of experience in which I researched the ENTIRE 132 years of College Football.

I'm a pretty strongly opinionated guy, and if you ruffle my feathers I can certainly take you toe to toe on any of these opinions.... but the one thing you will ALWAYS find about me is that I'm willing to listen, and if I'm proven wrong, I'm always willing to admit it and change. You may not always agree with where I place your favorite team, but after looking over your team's history for a decade or two, I hope you can at least say "this guy knows a thing or two about football."

OK, let's make this short and sweet in the beginning for those of you who don't care about details. These are the main components in the formula: the Won-Loss Record and Opponent Strength (based on the opponent's record, rating, and rank), with a strong emphasis on the most recent performance. Very minor consideration is also given to the site of the game and to defensive scoring performance. Now... for those of you who appreciate details and like to hear me ramble, read on.

Believe it or not, the system is designed after our own United States Constitution. But don't hold that against it! Although at times I feel this system is just about as complicated as our Federal Government, there is one huge difference..... this one works!

The design is one of a series of checks and balances. Just as our Constitution designates Executive, Legislative, and Judicial branches that provide the basis for our Democracy, my formula provides a similar series of checks and balances to ensure accuracy (higher rated teams winning games against lower rated opponents), without sacrificing fairness in head-to-head competition. The checks and balances revolve around these three basic components: the Strength of the Opponent, the Won-Lost Record, and Season Progression. After 32 years my formula no longer uses margin of victory. It only accounted for 5% of the total for several years, and after careful consideration in the off season of 2001, I decided to remove it completely. For a detailed explanation please read "BCS Approves Billingsley No Margin Formula" from the Home Page.

The SOS will fast become the "hot" topic of discussion in College Football, as this component is now the main ingredient in the BCS formula, and for that matter all 7 computer polls, especially now that margin of victory is no longer part of the equation. Why will it be so hotly discussed? Because WE ALL HAVE DIFFERENT MODES OF CALCULATING SOS. To say "oh, the most important part of my formula is SOS" means nothing. The important questions to ask are "How is it calculated?" and "Is that SOS calculated fairly?"

For many years I struggled with whether a team's SOS should be calculated by using a team's rating and rank on the day the game was played, or by using an opponent's most recent rating and rank. There are excellent arguments for both sides. Early on I used ONLY GAME DAY stats. I felt very strongly that if Georgia was ranked #1 when they played #5 Florida, the Gators should get credit for playing a #1 team, even if Georgia later fell to #10. THE MIND SET OF THE GAME, THE INTENSITY OF THE GAME, REVOLVED AROUND PLAYING A #1 TEAM. How can the mind set and intensity of a game be overlooked 4 weeks later? But critics will say, "What if Georgia fell to #50? Do the Gators still get credit for playing a #1 team?" Very good point. It does happen. Rankings can fluctuate dramatically during the course of a season. Look at Alabama in 2000.

Several years ago I made a compromise that I think has worked exceptionally well. I use a combination of both, with percentages tilted slightly towards the game day rating and rank. This way both are taken into account. The current rankings are not totally discounted but more credit is given to the original "mind set and intensity" of the game.
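The compromise described above can be pictured as a weighted blend of an opponent's game-day strength and its most recent strength. The sketch below is purely illustrative: the actual percentages are not published, so the 60/40 split (tilted toward game day, as the text says) is an assumption.

```python
# Hypothetical sketch of blending game-day and current opponent strength.
# The 60/40 weighting is an ASSUMPTION; only the "tilted slightly toward
# game day" idea comes from the essay.

def opponent_credit(game_day_rating, game_day_rank,
                    current_rating, current_rank,
                    game_day_weight=0.60):
    """Blend an opponent's game-day and most recent rating/rank."""
    current_weight = 1.0 - game_day_weight
    blended_rating = (game_day_rating * game_day_weight
                      + current_rating * current_weight)
    blended_rank = (game_day_rank * game_day_weight
                    + current_rank * current_weight)
    return blended_rating, blended_rank

# Florida beats Georgia when Georgia was #1 (rating 300);
# Georgia later slips to #10 (rating 285). Florida's credit lands
# between the two, closer to the game-day values.
rating, rank = opponent_credit(300, 1, 285, 10)
```

Under these assumed weights the Gators are credited with beating roughly a #4.6 team rated 294, not the #1/300 of game day nor the #10/285 of the current poll.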

A team's Won-Loss Record is pretty self-explanatory. Winning takes care of EVERYTHING, as long as it's against quality opposition.

The Season Progression may need a little explanation. It is really a very simple, yet powerful set of rules. I want my poll to "look logical". In the first week of the season, if Florida St. beats #107 No. Illinois, and Ball St. beats #58 Memphis, I don't want Ball St. ranked ahead of Florida St. just because they both have 1-0 records. That's not logical. We ALL KNOW Ball St. is not in the same league with Florida St., at least not at this juncture. Let them EARN IT first. Let them prove it in due course of time, and then my poll will respond accordingly. That's what I mean by Season Progression. All of my teams start out with a rank, #1-#117, because they ARE NOT ALL EQUAL. We KNOW THAT from past experience, so why not use that experience to begin with? Some would say starting all teams equal, or all at 0, is the only FAIR thing to do. I say it's the most UNFAIR thing you can do, and besides, it's just plain illogical.

Now, let's go one step further. I don't want a team jumping 60 places from #70 to #10 in November either. You simply can't turn your season around in one game, even if you beat a #1 team. I want people to be able to look at my poll, look at the previous week's contests, and say, "oh, I can see how he did that". So there are specific rules in place that PREVENT those things from occurring. I guess you could say it "forces a team to progress through the season in a logical fashion". I don't believe a team should be #50 in week #8 and #1 in week #9. I wanted to create as much STABILITY as possible in the poll, especially in the Top 10. If a team moves up, I want a person to be able to see WHY, by looking AT THE MOST RECENT PERFORMANCE FIRST, then taking the other factors into account. Additionally, I feel very strongly that the most recent performance should carry a stronger weight. A team should be better in November than they are in September.
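One way to picture the Season Progression rules is as a clamp on how far a team can move in a single week. This is only a sketch under assumptions: the 25-place cap and the idea of a single "earned" rank are mine for illustration, not the formula's actual rules.

```python
# Minimal sketch of clamping weekly rank movement so a team cannot
# leap from #70 to #10 on one result. The max_jump of 25 places is
# an ASSUMPTION; the real rules are more nuanced.

def progressed_rank(previous_rank, earned_rank, max_jump=25):
    """Limit how far a team may rise or fall in one week."""
    if earned_rank < previous_rank:                      # moving up
        return max(earned_rank, previous_rank - max_jump)
    return min(earned_rank, previous_rank + max_jump)    # moving down

progressed_rank(70, 10)   # a #70 team that "earns" #10 only climbs to #45
```

The same clamp applied week after week is what produces the gradual, explainable movement the essay describes.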

The "checks and balances" are played out through a series of four "phases" in the formula. Each phase has a different purpose and a different mathematical function in the application of the checks and balances. I will give as many practical examples as possible as I feel that is the best way for people to understand the point I'm trying to make. The checks and balances provide what I call "the fairness factor". Under these guidelines an undefeated team playing a hard schedule is ALWAYS going to be ranked close to the top. A team with one loss, but playing a very hard schedule can still be in contention for the National Championship, as evidenced by Nebraska's 11-1 record pushing Virginia Tech to the wire for the #2 spot in the 1999 season Sugar Bowl. Additionally, an undefeated team playing a moderate schedule may also be in contention, as witnessed by Virginia Tech in 1999. Let's take a look at the "FOUR PHASES".

Phase One: Making the transition from one season to another

Where a team begins the season is also a hotly debated topic, and understandably so. It is, after all, truly impossible to determine how incoming players, or coaching changes, will affect the returning nucleus of a team. I think it is important for teams to earn their position in a poll and not have 15 or 20 positions handed to them just because of what I, or anyone else, personally thinks. At the same time, however, teams who have vastly improved from one season to the next deserve the opportunity to have those improvements reflected in their ranking quickly. As I have said many, many times, I am adamantly OPPOSED to PRESEASON POLLS. They do an incredible injustice to College Football. I could state COUNTLESS examples over the last 50 years of such injustices, but let's look at the most recent glaring example: Texas in 2001. How in this world did Texas deserve a Top 5 Pre Season ranking after having come off a 9-5, #29 campaign? Moving 26 places without ever playing a down of football? Based on what, a new hot quarterback? Give me a break. The sportswriters may as well hold a lottery in George W's 10 Gallon Hat. It would be just as accurate. Enough of that... don't get me started.

I am convinced that carrying a team's RANK over from one season to the next, and then making the rules for the first few weeks of the season "more relaxed", is the best method to use. To accomplish this I created a different set of rules for the first 4 weeks of the season. Normally, as the season progresses, a team's "earnings" are drastically reduced as they go through the various phases in the formula. This creates a more stable poll week to week, not allowing drastic movements up or down, and therefore preventing any one team from changing the whole outlook of their season in one game. However, in the first few weeks, since everyone is more equal in terms of won-loss records, everyone receives a very high percentage of their earnings, double what they do during the balance of the season. This allows a team to be ranked ahead of any team they beat in the first few weeks of play, unless the computer detects that it was a "major upset". Believe me, those types of upsets do occur, and if allowed to stand, a "major upset" in the first few weeks can create pure havoc in the correct balance of a poll, so there had to be some boundary in place, albeit a lenient one.

Granted, it does put a lot of emphasis on the first few games of the season, but why not? If everyone is aware of their importance, steps can be taken to prepare accordingly. Under the rules written into the program, a predetermined figure is used to distinguish between a "minor upset" and a "major upset". The figure comes from my research, which shows that in 92% of the games where the upset exceeded this predetermined figure, the winning team was unable to sustain that level of performance. In other words, it was a fluke. I do not believe the stability of a poll should be compromised for something that has only happened 8% of the time over the last 132 years. Because of the flexible rules in the early stages of the season, a team is easily able to re-position itself in the poll simply by performing well. It's not uncommon for teams to shift 15 or 20 places in their first game, but it's because they've earned it, not because it was handed to them.

Another change you will notice from the previous formula is that a team's RATING IS NOT CARRIED OVER, only the rank. A new rating is assigned. The new rating was created from the "average rating of the last 50 years at middle ground" (#58), and then one point up for each rank above and one point down for each rank below. In other words #58 gets 213 points, #57 gets 214, and #59 gets 212. Using this method #1 gets 270 points and #117 gets 154 points. A projected point spread can still be achieved by taking the ratings of both teams, subtracting, and dividing by 3, then giving 3 points to the home team. Moving to this method of assigning a rating to begin the season prevents a team from receiving an undue advantage from having an excessive rating the previous year. I've toyed with this for years, but just decided to implement it with the rest of the changes. I feel that by doing this I will also be able to get a more accurate read of the strength of teams from one decade to the next, which will be important to me as I run the new formula through all 132 years of football. To begin each season, a #1 team will be favored over the #117 team by 38 points. Keep in mind, however, this figure has no bearing on the future ratings at all; this is purely for the fun of it, for the fans' sake I suppose. I think it's fun and important to know how one team compares to another in strength.
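This rule is spelled out completely in the text, so it can be written down directly: the middle rank (#58) starts at 213 points, one point per rank above or below, and the projected spread is the rating difference divided by 3, plus 3 points for the home team.

```python
# Direct transcription of the opening-rating rule described above.

def opening_rating(rank, mid_rank=58, mid_rating=213):
    """Season-opening rating: one point per rank around #58 = 213."""
    return mid_rating + (mid_rank - rank)

def projected_spread(rating_a, rating_b, a_is_home=False):
    """Projected spread: rating difference / 3, plus 3 for the home team."""
    spread = (rating_a - rating_b) / 3
    return spread + 3 if a_is_home else spread

opening_rating(1)      # 270, as stated in the text
opening_rating(117)    # 154, as stated in the text
projected_spread(opening_rating(1), opening_rating(117))  # roughly 38-39 points
```

Note that (270 - 154) / 3 is about 38.7 on a neutral field, which matches the "favored by 38 points" figure in the text once rounded.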

Phase Two : Obtaining the Strength Of The Opponent

The formula created for the Billingsley Report is a performance based formula, but one uniquely designed to NOT TAKE THE MARGIN OF VICTORY INTO ACCOUNT. The greatest success in this formula is achieved by remaining undefeated while playing higher ranked opponents. While losing a game, or playing a lower ranked opponent will not prevent a team from becoming the National Champion, it does create a handicap in the process. Every loss dictates a lesser % of forward progress, and the lower ranked opponent played, the less chance for upward movement.

The initial "point value" assigned to an opponent is based on THEIR RATING AND RANK. An opponent's strength is determined not by their won-lost record, which alone reflects only a portion of their strength, but rather by their rating and rank, which is more reflective of their true strength. This is a HUGE bone of contention between myself and the BCS, one which I have tried, to no avail, to have addressed over the last two years. Currently the BCS SOS is determined solely on opponents' and opponents' opponents' won-loss records. In other words, at the end of the 2000 season, a team initially received the same value for playing Ball St. as they did for playing Colorado. Both teams finished 3-8. Tell me what's wrong with this picture? Is there ANYONE out there who can honestly tell me Ball St. was as good a team as Colorado? I don't think so. I'm convinced the BCS strength of schedule formula is flawed. My calculation of strength of schedule, which is a combination of a team's rating and rank, is, I believe, a much more accurate SOS.
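The Ball St. vs. Colorado point can be made concrete with a hypothetical comparison: a record-only SOS cannot tell two 3-8 teams apart, while a rating-and-rank value can. The ratings, ranks, and 50/50 combination weights below are invented for illustration; only the 3-8 records and the team names come from the text.

```python
# Hypothetical illustration: record-only SOS vs. rating-and-rank SOS.
# The ratings, ranks, and weights are ASSUMPTIONS, not the BCS or
# Billingsley formulas.

def record_only_value(wins, losses):
    """A record-only opponent value: just the winning percentage."""
    return wins / (wins + losses)

def rating_rank_value(rating, rank, teams=117, max_rating=300):
    """An opponent value that combines rating and rank equally."""
    return 0.5 * (rating / max_rating) + 0.5 * ((teams - rank) / teams)

# Two teams with identical 3-8 records but very different strength
# (the rating/rank numbers here are made up for the example).
ball_st  = {"rating": 180, "rank": 105}
colorado = {"rating": 235, "rank": 55}

record_same = record_only_value(3, 8) == record_only_value(3, 8)   # True
colorado_stronger = (rating_rank_value(**colorado)
                     > rating_rank_value(**ball_st))               # True
```

Under the record-only measure the two opponents are worth exactly the same; under the rating-and-rank measure, the stronger team is worth more.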

Phase Three : Comparing The Won -Lost Records

After phase two is completed, the remaining figure is then compared to the number of losses accrued during the course of the season, and also to a team's own position in the poll. The reason for the additional scrutiny is so a team cannot unduly move up in the polls based on one game's performance, albeit a superlative one, in the event a team has not been consistent with their performance. This is accomplished by giving a team a higher percentage of their earnings, or losses, as the case may be, according to the number of losses on their own record. The fewer the losses, the higher the return on those earnings. For example, an 11-0 Tennessee gets 100% of 10 earned points, where a 10-1 Florida St. gets a lesser percentage, and a 10-2 Nebraska gets a lower percentage still, etc.
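The loss-based scaling might look something like the sketch below. The text gives only the 100% figure for an unbeaten team; the 10-percentage-point step per loss is an assumed schedule used purely to show the mechanism.

```python
# Sketch of Phase Three's loss-based scaling. Only the 100% share for
# an unbeaten team comes from the text; the 10-point step per loss is
# an ASSUMPTION.

def earnings_share(losses):
    """Fraction of earned points a team keeps, shrinking with each loss."""
    return max(1.0 - 0.10 * losses, 0.0)

earnings_share(0) * 10   # 11-0 Tennessee keeps all 10 earned points
earnings_share(1) * 10   # 10-1 Florida St. keeps a lesser share
earnings_share(2) * 10   # 10-2 Nebraska keeps a lower share still
```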

Next, a team's position in the poll is compared to their own record each time a team acquires a loss on the season. The reason is to prevent teams with multiple losses on the season from remaining high in the poll unless they are playing far superior opposition. An additional deduction is attached to the adjusted accrued value each time a team loses a game. The more losses acquired, the higher the deduction. If a team loses a game and it's their first loss, the penalty is a small percentage; if it's their second loss, the percentage is greater; and if it's their third, the penalty is even more severe. This process has proven to filter down the ratings so that it is still possible to be ranked in the Top 25 with 3 losses, but it can only occur if a team has played well consistently, and played a difficult schedule.

Phase Four: Final Touches

A few final adjustments, minor in terms of percentage of input but necessary in terms of comparisons, are taken into account. Those include the site the game was played, home or on the road, a look at the defensive performance, and a final comparison of a team's overall record. Two different percentages are attached to the site of the game, the greatest reward coming from winning on the road as an underdog, but there is a small reward for playing on the road, win or lose. A team's defensive performance is given a special look because, in my mind, winning the game itself is a reward of offensive performance, but the defense often gets overlooked. Great teams are built on solid defense and I feel that should be rewarded, even if ever so slightly. The reward is based on holding an opponent to less than a touchdown, on a scale of 0-6 points, with a shutout getting the most benefit. Also, after all is said and done, a final look is made at a team's overall record, and a very small adjustment is made in that comparison. If a team has a winning record, even by just one game, say 3-2 on the season, they get some reward for that. If, however, they have slipped to a losing record, say 2-3, that is taken into account as well. The reason for these three final comparisons is this: if, through all previous categories, two teams come out virtually tied, these three simple characteristics can help determine which team, based on current performance, deserves an ever so slight edge, even if it's a tenth of a point. To be precise, if two teams are tied at 290, then the team with the better record will be ranked higher. If they are still tied, and one team played on the road while the other played at home, the road team gets the advantage. If they are still tied, the team with the best defensive performance will prevail.
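The three tie-breakers at the end of Phase Four fall out in exactly the order given: record first, then site of the game, then defense. A sketch follows; the data shapes and field names are assumptions for illustration.

```python
# Sketch of the Phase Four tie-breaking cascade for two teams with
# equal ratings. Field names and data layout are ASSUMPTIONS.

def break_tie(team_a, team_b):
    """Return whichever tied team gets the 'ever so slight' edge."""
    # 1. The team with the better overall record is ranked higher.
    if team_a["win_pct"] != team_b["win_pct"]:
        return team_a if team_a["win_pct"] > team_b["win_pct"] else team_b
    # 2. Still tied: the road team gets the advantage over the home team.
    if team_a["on_road"] != team_b["on_road"]:
        return team_a if team_a["on_road"] else team_b
    # 3. Still tied: the better defensive performance (fewer points
    #    allowed, shutout best) prevails.
    return team_a if team_a["pts_allowed"] <= team_b["pts_allowed"] else team_b

a = {"name": "A", "win_pct": 0.8, "on_road": False, "pts_allowed": 0}
b = {"name": "B", "win_pct": 0.8, "on_road": True,  "pts_allowed": 14}
break_tie(a, b)["name"]   # "B": records are equal, so the road team prevails
```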

After phase four is completed, the result is added to a team's previous week's rating. That result becomes a new rating which is reflective of the team's overall performance to that point in the season, with a strong emphasis on the most recent performance. This formula has proven to reward teams who, through consistency, create a solid winning record against quality opposition.

I hope we all have a very exciting and rewarding 2002 college football season. I know my participation with the BCS has certainly compounded my passion for football, and I hope in some small way it contributes to the sport overall. The BCS has done a tremendous service for college football by bringing the poll process to the forefront. Remember, it's not important that you "believe" one poll is better than another. Explore the various options, understand their dynamics and follow who you will, whether it be me or someone else. What's really important is that you trust the BCS process as a whole and celebrate the fact that for the first time in our great sport's tradition-laden history (which I believe is the greatest on earth), we have an opportunity to match the #1 and #2 teams every year. That's quite a statement in itself!

Richard Billingsley

President, CFRC

College Football Research Center