Holigor's Rating Of Go Players
Keywords: People
The rating, which is based on my own system and is very different from Elo, can be found at the linked site; I moved the rating to a different host recently. Due to the misfortunes of the talkcity site I did not update it for some time. There are gaps, yet at the moment the recent international tournaments (the Toyota Denso Cup and the Samsung Cup) are included. As of mid-July the most remarkable thing is that Cho U has moved to fifth position in the rating. This is partly because of Mok's poor recent results. Though Cho U holds only a minor NHK title, his overall record is quite good. His high position might be an indication of something. Last year at the same time Kato Masao was the highest-placed Japanese player. Now Kato is about 20th because of a disastrous Meijin league record, which devalued to some extent his Honinbo exploits.
The data on the matches are taken from a Go news site (see the link). If you click on a player's name you can see their record. The algorithm is described and explained at the site. The rating depends on the input information, certainly, and would change if other tournaments were added. The table has some additional information; you can see how well the players perform with Black and with White, for example.

The following is a description of the initial standings (August 2001). Yi Ch'ang-ho is the leader, of course. Look at his record: it is impressive. The next two places are surprising. The information on these two youngsters (Gu Li and Pak Yeong-hun) is not complete, though. Perhaps they do not do so well in other tournaments that I don't know about. Let's consider them new hopefuls and watch their progress. Other remarkable features are the high position of Cho Chikun despite the loss of his major titles, the not-so-good record of Ma Xiaochun, and the quite low position of Rui Naiwei. She has a great win/loss ratio, but most of her wins were against other women, and her record against men is not so great this year. Don't pay much attention to the lower part of the table: the system cannot reliably rate players with few games.

Anonymous: I looked at your rating system, and I noticed a potential flaw. Isn't it possible that winning against a weak opponent can hurt your rating? Similarly, can't losing to a strong opponent benefit your rating?

HolIgor: It is not a flaw; it should be so. Imagine a situation in which two players have an equal number of wins. Naturally, the player who played against stronger opponents should have the higher rating. There should not be any doubt about it.

Another: But at the same time, it certainly should not be so: if two players are equally ranked, we don't expect to start thinking of one of them as weaker than the other if he or she then proceeds to win several games while the other does nothing.
Every win generally comes about by outplaying your opponent, and thus should probably (always) be a confirmation of strength and an improver of rating.

HolIgor: And I have to repeat that this system is not Elo. The players do not have current ratings that change iteratively with each win or loss. The rating is recalculated each time from all the data in the database; this week's rating is not a correction to last week's rating. One should not use the terms "increase" or "decrease" of the rating, because the absolute value of the rating is meaningless in terms of the strength of the players; it reflects only the depth of the distribution. What we are looking for is the difference of ratings, the positions of the players in the table. During the last year the rating of the leader (Yi Ch'ang-ho) oscillated over a wide range (about 1000 points), while at the same time the number two was always about 200-300 points behind.

Returning to the question of weak players: initially I thought that all ratings would fit into a range of 1000 points. That is the limit when the system is closed and everyone plays everyone else. But in practice that did not happen, because the players at the bottom of the table cannot even hope to play Yi Ch'ang-ho or Yi Se-tol in an official competition. They cannot hope to play even the players who usually lose to Yi Ch'ang-ho; they usually play the players who lose to the players who lose to the top. So in practice the rating table has a depth of approximately 5000 points. This value varies as the connectivity of the players in the table improves or deteriorates. If the distribution were in the range of 1000 points, the significant rise in the position of a weaker player after a loss to somebody good would not occur. As it is, yes, one can move up significantly after a loss to a very strong player, just because the strong player agreed to play a game against a weaker one.
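The key property HolIgor describes, recomputing every rating from the full game database rather than nudging a stored number after each game, can be sketched in code. His actual algorithm is only described on the external site, so this is a hypothetical illustration using a simple Bradley-Terry-style fit; all names and the choice of model are my own, not HolIgor's.

```python
from collections import defaultdict

def rate_all(games, iterations=200):
    """Recompute every rating from scratch over the whole game list.

    games: list of (winner, loser) pairs.
    Returns a dict player -> strength on an arbitrary scale, so (as in
    HolIgor's system) only the ordering and differences are meaningful.
    """
    players = {p for game in games for p in game}
    strength = {p: 1.0 for p in players}
    wins = defaultdict(int)
    for winner, _ in games:
        wins[winner] += 1
    for _ in range(iterations):
        new = {}
        for p in players:
            # Sum over all games p took part in (Bradley-Terry MM update).
            denom = sum(1.0 / (strength[w] + stength_sum)
                        for w, l in games if p in (w, l)
                        for stength_sum in [strength[l]])
            new[p] = wins[p] / denom if denom else strength[p]
        # Rescale: the absolute level is meaningless by construction.
        total = sum(new.values())
        strength = {p: s * len(players) / total for p, s in new.items()}
    return strength

# Each week the whole table is rebuilt from all games on record,
# not adjusted incrementally from last week's numbers:
table = rate_all([("Yi", "Cho"), ("Yi", "Ma"), ("Cho", "Ma"), ("Cho", "Yi")])
ranking = sorted(table, key=table.get, reverse=True)
```

With this toy input, Yi and Cho have identical records against each other and against Ma, so they end up tied ahead of Ma; adding or removing a single old game changes the whole table, which is why "this week's rating is not a correction to last week's".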
But that happens at the bottom, where the positions are not reliable. At the top the players have a lot of games, and the effect of a drop in rating from an easy win is just what it has to be: your average opponents were weaker, so you go down a little bit, but just a little bit.

Andre Engels: I don't agree that that is "what it has to be". I think it is ridiculous to assume that the fact that someone has won a game would make you think he is weaker than you thought before knowing about the game.

Charles Matthews: It would be a paradox of inference, certainly. I have only heard about the theory of ratings, not read anything intelligible. I believe 'ideal' inference is replaced by something simpler to compute, in general. Can anyone explain further?

Anonymous: The question I was trying to raise is "What makes a good rating system?" Holigor's system has the flaw "If a top player decides to play an extra game against a much weaker player, his rating is unfairly decreased." Holigor correctly points out that the top players tend to play mostly among each other, diluting this effect. As long as the players do not take Holigor's ratings seriously, this will remain accurate. However, there is the potential for a player to manipulate his ranking by only entering events with very strong opposition. Any rating system has flaws; if the ratings are used to give out something valuable, then people will try to exploit the rating system.

Consider the following rating system: "A player's rating equals the total number of games won." Of course, there are lots of flaws with this system. Surprisingly, it is the system used in contract bridge, especially in the ACBL. (The "masterpoint" system isn't exactly like this, but for all practical purposes it is.) The advantage of such a system is that it rewards people for something you want to encourage, namely playing more.
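The "total number of games won" system described above is trivial to implement, which is part of its appeal. A minimal sketch (the player names are made up for illustration):

```python
from collections import Counter

def masterpoint_style(results):
    """Rating = total games won, roughly the ACBL masterpoint idea.

    results: list of (winner, loser) pairs.
    A rating can never go down: losses are simply ignored, so the
    system rewards volume of play rather than relative strength.
    """
    return Counter(winner for winner, _ in results)

ratings = masterpoint_style([("A", "B"), ("A", "C"), ("B", "C"), ("C", "A")])
# A has two wins; the loss to C did not reduce A's rating at all.
```

This makes the exploit obvious: a mediocre player who enters many events will outrank a strong player who plays rarely, which is exactly the behaviour the system is designed to encourage.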
SAS: Perhaps Holigor could consider using the system described at the link.

HolIgor: I've seen the description of their system. It tackles a problem far more complex than what I wanted to do. To make a rating system for a go server you have to solve the problem of scaling the rating to the proper handicap. Therefore it is a little bit messy: it involves seeds and assigns an arbitrary probability of a win to each difference of rank. The first version of my rating did not involve anything arbitrary, though it did not solve the problem of relating the handicap to the rank difference. Later on I introduced an exponential function to decrease the effect mentioned in this discussion. It has an arbitrary coefficient, though there are limits to it. The aging of the results and the number of games needed to ripen the rating are arbitrary as well.

Anonymous: Adjusting for handicap isn't as hard as it sounds. You would just need a large sample of games and do a statistical analysis. Most rating systems make the assumption "If player A gives 3 stones to B, and if B gives 2 stones to C, then A should give 5 stones to C." This assumption is not exactly true in practice, but with a large sample of games you could properly account for it.

Noname: This rating system seems quite good. Although there are some possible ways for a player to manipulate it, this isn't the official rating of the (non-existent) World Go Federation. :)

Anon: It seems the ranking page has not been updated for over two months. I for one hope Holigor will be able to update it soon...

HolIgor: I had some problems with FTP to the website. You reminded me; I checked, and it seems to be working now.
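The additivity assumption mentioned above (3 stones plus 2 stones should equal 5 stones) amounts to fitting one number per player so that the proper handicap between any pair is the difference of their numbers. A least-squares sketch of that fit, with hypothetical players and a simple gradient update chosen for illustration:

```python
def fit_ranks(handicaps, iterations=500, lr=0.1):
    """Fit one rank per player so that handicap(a, b) ~ rank[a] - rank[b].

    handicaps: list of (stronger, weaker, stones) observations.
    If the additive assumption held exactly, all residuals would be
    zero; on real data the leftover residuals measure how far the
    assumption breaks down, which a large sample could quantify.
    """
    players = {p for a, b, _ in handicaps for p in (a, b)}
    rank = {p: 0.0 for p in players}
    for _ in range(iterations):
        for a, b, h in handicaps:
            err = (rank[a] - rank[b]) - h
            rank[a] -= lr * err  # nudge both players toward the
            rank[b] += lr * err  # observed handicap difference
    return rank

# A gives B 3 stones and B gives C 2; the fit then implies A gives C 5:
rank = fit_ranks([("A", "B", 3), ("B", "C", 2)])
implied = rank["A"] - rank["C"]  # close to 5 under the additive assumption
```

With only these two consistent observations the fit reproduces the assumption exactly; feeding in many real A-vs-C games as well would reveal the systematic deviation the anonymous poster mentions.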
PurpleHaze: How does your system compare with that of Jeff Sonas? (See the link.)

HolIgor: It does not, and it does not intend to. My system just produces an ordered list of players according to an index that is meaningless out of the context of the system. If one wants to make a rating system of one's own, one should not have to be concerned about the starting ratings of the players; that is what, in my opinion, is difficult about Elo and similar systems. So I invented a system without initial ratings. The system is the simplest thing possible. There are drawbacks, though: for example, a perfect player who won one game against everybody on the list would not end up in first place.

This is a copy of the living page "Holigor's Rating Of Go Players" at Sensei's Library.