The Glind Rating System

The Glind Rating System is a universal rating system that can be applied to any sport to rate head-to-head matches between teams or between individual players. The Glind rating system is based on the rating concepts first proposed by Arpad Elo and is derived from statistical and probability theory. Because of its fundamental mathematical basis, the system will never need modification nor, in fact, can it be modified. You can use it, or not use it, but you cannot change it any more than you can change 1 + 1 = 2.

The Glind rating system was first used in 1982 by the Southern Tasmanian Billiards and Snooker Association to rate its billiards and snooker players, and it continues as the Association's official rating system. It has survived the two toughest tests: TIME and TRUST.

A team (or player) usually plays around its average level but sometimes plays extremely well and sometimes very badly. Over an extended series of games its performances are likely to follow a normal distribution in the statistical sense. The stronger team will not always defeat the weaker team but will usually win the greater percentage of their games. Percentage differences between teams are used to calculate a rating difference on a suitable scale. The reverse also applies: when the rating difference between two opponents is known, their expected winning and losing percentages can be calculated.
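The exact conversion used for the published ratings is not reproduced here, but a linear relationship can be derived from the 40-point kitty rule described two paragraphs below: a rating is steady only when the expected change per game is zero, i.e. p(20 - d/20) = (1 - p)(20 + d/20), which gives an expected winning fraction of p = 0.5 + d/800 for the side rated d points higher. The short sketch below uses that derived slope purely as an illustration of the two-way conversion; it is not a quotation from the official tables.

```python
# Two-way conversion between rating difference and expected winning
# fraction, using the 1/800 slope derived from the 40-point kitty rule.
# A sketch for illustration only; the official tables are the authority,
# and the clamping to the 0..1 range is a simplification.

def expected_win_fraction(rating_diff):
    """Expected winning fraction for the side rated rating_diff points higher."""
    return min(max(0.5 + rating_diff / 800, 0.0), 1.0)

def rating_diff_from_fraction(win_fraction):
    """Rating gap implied by a long-run winning fraction."""
    return (win_fraction - 0.5) * 800

print(expected_win_fraction(80))        # 0.6   -> expected to win about 60%
print(rating_diff_from_fraction(0.75))  # 200.0 -> about 200 points stronger
```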

As each new game is played it is recorded and updated ratings are calculated. The principle behind the ratings adjustment is simple and fair. If a team wins, its rating goes up; if it loses, its rating goes down. [1] So far, so good! The amount of the increase (or decrease) depends on the difference between the ratings of the two opponents going into their match. What one team wins in points is counter-balanced by what its opponent loses. Thus, the rating system as a whole is held steady, with every team's rating being relative to its performances.

The two opponents contribute to a kitty of 40 rating points. [2] The amount each contributes depends on the gap between their current ratings. If their ratings are equal then each contributes 20 points. For each 20 points of rating difference, the higher-rated team contributes 1 extra point while the lower-rated team contributes 1 point less, thus maintaining the 40-point kitty. A team's contribution is limited to not more than 39 points and not less than 1 point, regardless of the rating difference. The winning team takes the 40 rating points of the kitty. If a game is drawn or tied then each team takes 20 points from the kitty. Only games PLAYED to a finish are rated; incomplete 'drawn' games and 'no result' games are not rated.
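The following is a minimal sketch of that update in code. The function names are invented for illustration, and it assumes the handicap grows by one point for each full 20-point band of rating difference; the official adjustment table should be treated as the authority on the exact band boundaries.

```python
def kitty_contributions(rating_a, rating_b, kitty=40):
    """Return (contribution_a, contribution_b) to the 40-point kitty.

    Each side starts at half the kitty; for every full 20 points of rating
    difference the higher-rated side contributes 1 point more and the
    lower-rated side 1 point less, clamped to the 1..39 range.
    """
    handicap = abs(rating_a - rating_b) // 20
    contrib_high = min(kitty // 2 + handicap, kitty - 1)
    contrib_low = kitty - contrib_high          # the kitty is always preserved
    if rating_a >= rating_b:
        return contrib_high, contrib_low
    return contrib_low, contrib_high


def update_ratings(rating_a, rating_b, result):
    """Update two ratings after a completed game.

    result: 'A' if side A won, 'B' if side B won, 'draw' for a tied game.
    """
    contrib_a, contrib_b = kitty_contributions(rating_a, rating_b)
    if result == 'A':
        share_a, share_b = 40, 0                # winner takes the whole kitty
    elif result == 'B':
        share_a, share_b = 0, 40
    else:
        share_a = share_b = 20                  # drawn game: split the kitty
    return rating_a - contrib_a + share_a, rating_b - contrib_b + share_b


# Example: a 2100-rated team beats a 2000-rated opponent. The higher-rated
# side puts 25 points into the kitty, the lower-rated side 15, so the winner
# gains 40 - 25 = 15 points and the loser drops 15.
print(update_ratings(2100, 2000, 'A'))          # (2115, 1985)
```

Note that a draw against a lower-rated opponent costs the higher-rated side points: it puts more than 20 into the kitty but takes only 20 back, which is exactly what the kitty rule describes.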

The Rating adjustment table enables anyone to update the ratings.
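The table itself is not reproduced in this section. Purely as an illustration of its likely shape, the sketch below tabulates, for each 20-point band of rating difference implied by the kitty rule above, how many points the winner gains depending on whether the favourite or the outsider wins. The band boundaries are an assumption, and the official table remains the authority.

```python
# Hypothetical reconstruction of the first rows of an adjustment table from
# the 40-point kitty rule. The 0-19, 20-39, ... band boundaries are an
# assumption made for illustration only.
print("rating gap    favourite wins    outsider wins")
for band in range(8):
    low, high = 20 * band, 20 * band + 19
    print(f"{low:>4}-{high:<4}      +{20 - band:<14}  +{20 + band}")
```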

Diminishing importance of earlier games
A built-in feature of the Glind rating system is that, as a game recedes into the past, its effect, or weight, on the current rating automatically becomes less and less significant. In other words, the most recent game played is the most important and carries the most weight, the next most recent is the second most important, and so on - a logical and desirable effect. It is therefore unnecessary to install supplementary rules to reduce the weight of old games on current ratings.
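One way to see this decay is to replay the same sequence of games twice, changing the result of a single game, and measure how far the current rating moves. The sketch below does this against opponents whose ratings are held fixed; the 40-game length, the 50/50 results and the fixed opposition are assumptions made purely for illustration. Reversing an old result shifts today's rating far less than reversing a recent one.

```python
import random

random.seed(7)

def play(results, flip_at=None, start=2200, opponent=2200):
    """Replay a result sequence, optionally reversing one game's outcome."""
    rating = start
    for i, won in enumerate(results):
        if i == flip_at:
            won = not won                          # reverse this single result
        c_high = min(20 + abs(rating - opponent) // 20, 39)
        c_self = c_high if rating >= opponent else 40 - c_high
        rating += (40 - c_self) if won else -c_self
    return rating

results = [random.random() < 0.5 for _ in range(40)]   # 40 simulated games
baseline = play(results)

for flip_at in (0, 20, 39):                            # oldest, middle, latest
    shift = abs(play(results, flip_at) - baseline)
    print(f"reversing game {flip_at + 1:2d} moves the current rating by {shift} points")
```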

Hunting for the correct rating
The Glind rating system continually seeks the true rating. This capacity is illustrated in the accompanying chart, in which two errors have been deliberately inserted: one far too high, one far too low. The rating formula automatically begins its self-correcting process and hunts for the true rating. Each new game played moves the rating closer to its correct value.
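The chart is not reproduced here, but the same self-correcting behaviour can be sketched numerically: start the ledger with a rating that is deliberately 200 points too high for a player of genuinely equal strength to the opposition, and watch the adjustments pull it back. The fixed opponent rating and the even win probability are assumptions for illustration only.

```python
import random

random.seed(1)

def contribution(higher_minus_lower):
    """Points the higher-rated side puts into the 40-point kitty."""
    return min(20 + abs(higher_minus_lower) // 20, 39)

opponent = 2200          # fixed-strength opposition
rating = 2400            # deliberately inserted error: true strength is ~2200

for game in range(1, 41):
    won = random.random() < 0.5                 # equal true strength
    c_high = contribution(rating - opponent)
    c_self = c_high if rating >= opponent else 40 - c_high
    rating += (40 - c_self) if won else -c_self
    if game % 10 == 0:
        print(f"after game {game:2d}: rating {rating}")
```

While the rating sits too high, the player gives more to the kitty than it can expect to win back, so each block of games drags the figure back towards its true level; an error in the other direction corrects in the same way.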


Footnotes

[1] With most rating systems, points are added to the winner's rating while the loser's rating remains unchanged. This makes it look as though the win is attributable solely to the winner's superior play, yet it might really be that the winner played at its usual level and its win ought to be attributed only to the loser's poor play. If that were the real reason for the result, then points ought to be subtracted from the loser's rating, with the winner's rating remaining unchanged.

The reality is that we cannot say for sure whether it was the superior performance of the winner or the inferior performance of the loser that produced the result. Most likely it was a shared outcome, in which case the safest solution is that the winner's rating ought to be increased while, at the same time, the loser's rating ought to be decreased.

[2] The number scale of this system is arbitrary. Just as 100°C is exactly as hot as 212°F, so some other scale of numbers could be applied to this system. The present number scale is just a convenience.

A four-figure number scale has been chosen to provide a range wide enough to cover all proficiencies without the need for ratings to carry decimal points or fractions to distinguish differences of skill.

To ensure that no rating will ever go negative, 2000 points has been chosen as the reference point for the system. Ratings will usually be four-digit numbers; 2000 points is the smallest suitable base to achieve these conditions.

The function that lies at the statistical base of this system is non-linear. The major portion of it is, however, so very nearly linear that a linear approximation equation has been chosen to reduce the calculations to mental arithmetic! This equation, and its resultant table, are so simple that they actually conceal the underlying principle of the system. They have the added advantage that probability tables are not needed for the calculations.
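The underlying non-linear function is not reproduced here. Purely to illustrate the 'nearly linear' point, the sketch below compares the linear rule implied by the kitty mechanics (expected score 0.5 + d/800, derived earlier) with a normal-distribution curve whose slope at zero has been matched to it; both the stand-in curve and the matched slope are assumptions for illustration, not the system's actual statistical base.

```python
import math

# Compare the linear expectancy rule with a slope-matched normal curve.
# sigma is chosen so both curves have the same gradient (1/800) at a
# rating difference of zero; the normal curve is a stand-in, not the
# system's actual underlying function.
SIGMA = 800 / math.sqrt(2 * math.pi)

def linear_expectation(diff):
    return 0.5 + diff / 800

def normal_expectation(diff):
    return 0.5 * (1 + math.erf(diff / (SIGMA * math.sqrt(2))))

print("diff   linear   normal")
for diff in (0, 50, 100, 200, 300):
    print(f"{diff:>4}   {linear_expectation(diff):.3f}    {normal_expectation(diff):.3f}")
```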

For the specific case of cricket ratings: every Test match ever played and every ODI match ever played has been rated. Test match ratings and ODI ratings use the same formula but two quite separate sets of ratings are produced.