Note that many of the examples
used in these questions and answers refer to games played and seasons
prior to 1999. Although specifics obviously change from season to
season, the general principles regarding the power ratings and their
computation and interpretation do not.
1. How can North Carolina be ranked 9th
with a losing record?
North Carolina played 13 opponents, yielding the fourth
toughest schedule. Examining this schedule (see below), they
played seven teams (Virginia twice) that made the tournament. They
beat two of those teams and lost to three others by one goal, including
Princeton, which went undefeated. Clearly, North Carolina was able
to stay close with most top-ten teams. Traditionally, polls
weigh a victory or loss much more heavily than goal difference or the
closeness of the score. The power rating credits a victory in addition to
the goal differential but emphasizes the latter more heavily (although
procedures are in place to greatly minimize the effect of "running up
the score").
North Carolina   pr = 26.94   rank = 9
Opponents' avg   pr = 25.97   rank = 3

Date  Opponent                Power   Score   Goal Difference    Gain or
                              Rating          Actual   Expected  Loss
2/22  home vs Butler          22.89    11-5      6       5.71
3/02  away at Navy            22.41   12-13     -1       2.87    --
3/08  home vs Loyola          28.76   17-11      6      -0.17    +++
3/12  away at Duke            31.06     7-8     -1      -5.78    +++
3/16  home vs Princeton       34.65    9-10     -1      -6.06    +++
3/22  away at Maryland        29.97   12-13     -1      -4.69    ++
3/29  home vs Johns Hopkins   32.26    7-15     -8      -3.66    ---
4/05  away at Virginia        32.86    5-20    -15      -7.59    -----
4/08  home vs Radford         14.66    16-3     13      13.94
4/12  home vs Delaware        19.04    21-7     14       9.56    ++
4/16  home vs VMI             11.60    22-2     20      16.99    +
4/18  away at Virginia        32.86   13-17     -4      -7.59    ++
5/03  away at Hofstra         24.55    10-4      6       0.73    +++
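The "expected" column in the table is consistent with a simple formula: the rating gap between the two teams, shifted by a home-field constant of roughly 1.66 goals. That constant is our inference from the table rows, not a published figure; a minimal sketch under that assumption:

```python
HOME_ADVANTAGE = 1.66  # inferred from the table; treat as an assumption


def expected_goal_difference(own_rating, opp_rating, at_home):
    """Expected margin = rating gap, adjusted for home-field advantage."""
    adjustment = HOME_ADVANTAGE if at_home else -HOME_ADVANTAGE
    return own_rating - opp_rating + adjustment


# North Carolina (26.94) at home vs Butler (22.89):
# 26.94 - 22.89 + 1.66 = 5.71, matching the table's expected column.
# Away at Navy (22.41): 26.94 - 22.41 - 1.66 = 2.87, also matching.
```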
2. How could the odds for Syracuse
be 10:1 to win the NCAA playoffs based on their past tournament
record?
The odds of any team winning three games
would be 1/2 * 1/2 * 1/2, or 1/8, if all teams were equal. In
the case of Syracuse, however, they had to beat Loyola, then probably
Virginia, and eventually probably Princeton. Their power rating against
Princeton and Virginia would have made them underdogs (chances less than
1/2), and their chances of beating Loyola were near 1/2, as both teams had
similar power ratings. Thus, their odds at the beginning of the
tourney were about 1/10. Even Princeton, with its undefeated season, had at
best a 1/3 chance of winning the whole tournament. As the tourney
turned out, Syracuse beat Loyola by one goal, did not have to play
Virginia, but lost by one goal to underdog Maryland. The odds do
not take into account Syracuse's track record of making the final four
over the last 16 or so years, nor do they take into account its number
of championships. The odds were based strictly on the power
ratings for the year 1997 and the tournament
schedule.
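The 1/10 figure is just the product of the three per-game win probabilities. The exact probabilities are not given in the text, so the numbers below are illustrative assumptions chosen to show how a roughly 1-in-10 chance arises:

```python
# Assumed per-game win probabilities for Syracuse (illustrative only):
p_loyola = 0.50      # evenly matched; similar power ratings
p_virginia = 0.45    # slight underdog
p_princeton = 0.44   # slight underdog

# Chance of winning all three games is the product:
odds = p_loyola * p_virginia * p_princeton
print(f"Chance of winning all three: {odds:.3f}")  # about 0.099, i.e., ~1 in 10
```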
3. Why is there a need for a power
rating? Aren't the other polls adequate?
The power rating is a poll based strictly
on scores and schedule. It is purely numerical and offers an
alternative, however good or bad, to the other polls. The rating
contains two components: the criteria and the numerical solver.
The criteria are subjective: they determine when a team should be
rewarded with points and when a team should lose points. The
numerical solver, on the other hand, computes each team's rating from
the criteria, then recomputes over and over again, because each team's
rating is affected by the rating of every team it plays.
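The recompute-over-and-over idea can be sketched as a damped fixed-point iteration. The criteria below (a simple capped goal margin and a 0.5 damping factor) are illustrative assumptions, not LaxPower's actual formula:

```python
def solve_ratings(games, iterations=200, cap=10, damping=0.5):
    """Iteratively solve team ratings from game results.

    games: list of (team_a, team_b, goals_a, goals_b) tuples.
    """
    teams = {t for a, b, _, _ in games for t in (a, b)}
    ratings = {t: 0.0 for t in teams}
    for _ in range(iterations):
        new = {}
        for t in teams:
            targets = []
            for a, b, ga, gb in games:
                if t not in (a, b):
                    continue
                # Performance in one game: opponent's current rating plus
                # the (capped) goal margin from this team's perspective.
                margin = ga - gb if t == a else gb - ga
                margin = max(-cap, min(cap, margin))
                opp = b if t == a else a
                targets.append(ratings[opp] + margin)
            target = sum(targets) / len(targets)
            # Damped update: move partway toward the target each pass,
            # since every rating depends on every other rating.
            new[t] = ratings[t] + damping * (target - ratings[t])
        ratings = new
    return ratings
```

Repeated passes are needed because beating a team whose own rating later rises should count for more; the iteration settles once every rating is consistent with every other.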
When pollsters perform their ratings, each
has his or her own set of criteria, in much the same fashion as the
power rating. However, a pollster is no match for a computer
when calculating the results, so the power rating can and
does employ a much more sophisticated set of criteria. Pollsters
may use different criteria, which can under some circumstances make the
results look arbitrary; the power rating adheres to a strict set
of criteria, and its results are therefore not arbitrary. Polls
are also subject to the criticism that they are not impartial and that each
pollster may be driven by bias, intentional or unintentional.
Whether such bias exists or not, some fans will always be suspicious.
The power rating has no emotional investment or bias, since the computer
merely computes. If the criteria are biased, then the results will
be too, but at least the criteria are stated up front. Thus the power
rating can be unbiased. Finally, the power rating is comprehensive in that
it evaluates all teams, with no emphasis placed only on the good
teams.
4. If coaching staffs are in the
best position to judge talent and team strength, why not leave the rating
to them?
Lacrosse coaches can only judge what they
see, and they do not see all of the teams. How can a team with low
visibility get a fair rating? Second, all coaches have different
criteria for rating teams, and polls are based on a collection of
opinions that supposedly averages out to the fairest assessment.
The criteria behind this average are never clearly understood and are
therefore impossible to challenge, and from that inability to
challenge arises suspicion of unfairness or bias. At least the power
rating identifies its criteria and clearly substantiates its
findings. Finally, coaches can only rank the top 15-20 teams before
the task becomes impossible. Ask the pollsters who rate Division
III Men or Women to rank all 100+ teams and see how far they get. The
power rating is set up to evaluate all.
5. What credentials do you bring to
the table that make you an expert on the rankings?
Only my math and computer skills, but
that's all the power rating claims to use. The method applies to
any sport or competition, so knowledge of a particular sport is
irrelevant from a numerical standpoint.
6. How can a team be ranked lower
than other teams it beat?
As an example, Hartford beat Rutgers,
Harvard, and Towson State, and yet Hartford ranks below them all.
This is a case where the rating emphasizes goal difference more than
victory. The most difficult part of the algorithm was determining
the relative importance of victories and goal difference.
Examining the ratings of the top twelve teams, Princeton is at 36 and
Army at 24.8, a spread of more than 11 goals, whereas the 13th through
26th teams span only 2.48 goals. Teams ranked 13th to 26th are so
closely bunched that the slightest change in score or home-field
advantage will affect their ratings.
Hartford         pr = 22.49   rank = 22
Opponents' avg   pr = 18.99   rank = 33
Date  Opponent                Power   Score   Goal Difference    Gain or
                              Rating          Actual   Expected  Loss
3/11  home vs Massachusetts   26.45     4-7     -3      -2.30    -
3/15  away at Rutgers         23.04   13-12      1      -2.21    ++
3/19  away at Harvard         22.67    10-9      1      -1.84    +
3/26  home vs New Hampshire   14.88     9-7      2       9.26    --
4/02  away at Delaware        19.04    11-8      3       1.79    +
4/05  away at Boston College  15.69    13-7      6       5.14
4/08  home vs Hofstra         24.55     2-7     -5      -0.40    --
4/12  home vs Fairfield       15.16    13-9      4       8.99    --
4/16  home vs Stony Brook     21.96    9-10     -1       2.19    -
4/19  away at Vermont         19.01    16-9      7       1.82    ++
4/19  away at VMI             11.60    16-9      7       9.22    -
4/23  away at Drexel          15.11    13-4      9       5.72    +
4/26  away at Providence      12.79    10-3      7       8.03
5/03  home vs Towson State    23.87    10-9      1       0.28
7. Do you have a job?
All of the directors have full-time
positions elsewhere. All of the high school coordinators probably
also have full-time positions or are students. All contributions
to the site are on a voluntary basis and no compensation is
made.
8. Who sponsors this
rating?
We are not sponsored by any organization
and have no affiliation with any group or league. We have, however,
formed an affiliation with College Lacrosse USA for the purpose of
sharing schedule, score, and team information to better serve lacrosse
fans.
9. Are you associated with the NCAA
or any school?
We have no association with the
NCAA or with any of the organizations that conduct lacrosse polls.
Naturally, we do maintain contacts with numerous coaches and sports
information directors, and one of our directors is associated with a
Division III school, although not with its athletic
department.
10. How do you find the time to
perform all this analysis?
Developing the analysis program initially
took weeks. Once that was done, the major task became
typing in the scores. Since we generally get scores in a timely
fashion, the remaining work is left to the
computer.
11. What are the Sagarin ratings?
The Sagarin ratings are a numerical rating
system used for college football and basketball. They were
developed by Jeff Sagarin, a former MIT graduate student. The
LaxPower ratings were designed along similar lines for lacrosse.
12. How did LaxPower get off the
ground?
US Lacrosse felt that such a rating would
contribute positively to the sport of lacrosse. The rating was
designed to be similar to the "Sagarin Ratings" for college football
and college basketball; unlike the Sagarin ratings, however, it also
covers the women's divisions.
13. What is the Lacrosse
Championship Series?
It is a fictitious ranking patterned after
the "Bowl Championship Series," based on a formula that combines polls,
computer rankings, strength of schedule, and losses to determine the best
teams. It is our creation and has nothing to do with the
NCAA.
14. Does the strength of schedules
change throughout the season?
As the season progresses, the power
ratings of all teams change, and since the strength of schedule is
based on the average power rating of the teams a team plays within the
same division, it changes as well.
15. How important is goal
differential?
When a team's rating is calculated,
a win adds to the power rating of that team regardless of the goal
differential. The goal differential is taken into account, but its
impact drops off as the difference grows: a 10-goal victory has
almost the same impact as a 20-goal victory.
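The exact diminishing-returns curve is not published; the sketch below uses a saturating tanh, which is purely our assumption, to reproduce the behavior described, where credit flattens out as the margin grows:

```python
import math


def margin_credit(goal_diff, scale=6.0):
    """Diminishing-returns credit for a goal differential (illustrative).

    tanh saturates toward +/-1, so the credit approaches a ceiling of
    +/-scale as the margin grows. The scale value is an arbitrary
    choice for this sketch, not LaxPower's parameter.
    """
    sign = 1 if goal_diff >= 0 else -1
    return sign * scale * math.tanh(abs(goal_diff) / scale)
```

With scale=6, a 10-goal margin earns about 5.6 points of credit and a 20-goal margin about 6.0, i.e., nearly the same, which also limits the payoff of "running up the score."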
16. How is the strength of schedule
computed?
The strength of schedule is currently
calculated by taking the average of all of a team's opponents' power
ratings. We have developed relative power ratings between
different leagues and divisions and use them, whenever possible,
to include all games in the strength-of-schedule
calculation.
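The calculation as described reduces to a simple average; a minimal sketch (the example ratings are hypothetical):

```python
def strength_of_schedule(opponent_ratings):
    """Strength of schedule = average of all opponents' power ratings."""
    return sum(opponent_ratings) / len(opponent_ratings)


# Hypothetical opponents rated 22.89, 28.76, and 34.65:
# strength_of_schedule([22.89, 28.76, 34.65]) -> about 28.77
```

Cross-division opponents would first be converted through the relative ratings mentioned above before being averaged in.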