Gyom

PageType: HomePage    

http://guillaume.salagnac.free.fr/images/gyom-ski.jpg

Plop

I've been playing go since 2002, mostly online but also in real life, trying to teach friends and workmates.

DGS profile: [ext] http://www.dragongoserver.net/userinfo.php?uid=22443


Links

Fun

Bah, *I* have the Hand of God. I can beat any 9p giving *him* 360 stones! (from IFoundTheHandOfGod)

In the land of the blind, the one eyed man is king. Elsewhere, the best he can do is seki. (from HumourAlmostProverbs)

All your stones are belong to me! (from GoCatchPhrases)

Misc

About computer go

Books I own

  • L'Abc Du Go ; Le Livre Clé Des Débutants : Règles Et Méthodes by Hervé Dicky
    • Too beginner-oriented for me. I had already learnt the rules when I bought it, so I didn't learn that much while reading.
  • Perfectionnement au Go by Pierre Aroutcheff
    • First pass done, but I have to give it more time and more effort to get something out of it.
  • Le Go, un jeu d'enfant? (vol. 1) by Albert Fenech

Random thoughts

Yet another inactive GoBlog...

Yet another novice tries to write yet another go program (May 08)

After giving up on my musings about pattern-matching-only computer go (see below), I recently became interested in UCT algorithms. To play with the idea, I downloaded SimpleGo, a go-playing program written in Python and "meant for beginner Go coders and beginner Go players" (both of which I am). I chose version 0.3 (version 0.4 is far too complicated) and started removing as much "intelligence" as possible, to end up with a random-playing program.

Here I learnt my first lesson: writing a random-playing program is non-trivial! I like area scoring, because it makes the game very easy to explain to beginners. With only a few rules (captures, maybe ko), beginners can start to play on small boards and discover life and death, eyes, and territory by themselves. But it is far harder for a program! My first engine played strictly at random: move=random.choice(game.list_legal_moves()). But this doesn't work! Human beginners understand very quickly that they should not play in a group's next-to-last eye, even if it is a legal move. So I taught the engine to avoid self-atari, but that is not a good solution either: once white has played into black's straight-three eye, white should keep filling the eye, even if that puts some white stones in self-atari. What a mess! Finally, I decided to make the engine avoid self-atari only for groups larger than five stones, which lets white kill a one-eyed black group but prevents black from filling its own next-to-last eye. Of course this is no longer "random", and maybe I am still missing some cases...
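To make that last rule concrete, here is a minimal sketch of the filtered random policy, in the spirit of my stripped-down SimpleGo. Only game.list_legal_moves() comes from SimpleGo; PASS and self_atari_group_size() (the size of the mover's group if the move leaves it with a single liberty, 0 otherwise) are hypothetical helpers standing in for the real board logic.

    import random

    MAX_SELF_ATARI = 5  # tolerate self-atari only for groups of at most 5 stones
    PASS = None         # hypothetical pass move

    def pick_random_move(game):
        # Keep a legal move unless it would put a large friendly group into
        # self-atari: white may still throw small groups into a one-eyed
        # black group, but black will not fill its own next-to-last eye.
        candidates = [m for m in game.list_legal_moves()
                      if self_atari_group_size(game, m) <= MAX_SELF_ATARI]
        if not candidates:
            return PASS  # nothing sensible left to play
        return random.choice(candidates)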

Then I coded a second playing engine, implementing a tree search. I started from the pseudo-code given on the UCT page (which is full of errors; I might correct it at some point) and implemented it as is. It behaves quite well on 5x5 boards (it consistently beats the random player) but falls apart on larger boards, and is already unusable on 9x9. D'oh! And my goal was to experiment on 19x19! The problem I am facing now is that my play_random_game() function is far too slow, mostly because randomly finishing a game until every point is filled (a common computer-go variant of area scoring) takes longer and longer as the board size increases. Both players play into each other's territory, capture large dumb groups, and so on. So for the moment it is impossible to run the huge number of simulations needed for the UCT tree to become meaningful ([ext] A research paper about Mogo mentions 70000 simulations per move!).
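For reference, here is a minimal sketch of the UCT selection step, using the standard UCB1 formula; the node fields (wins, visits, children) are names I made up for this sketch, not SimpleGo's.

    import math

    UCT_C = 1.4  # exploration constant; sqrt(2) is the textbook value

    def select_child(node):
        # UCB1: balance the observed win rate of each child against an
        # exploration bonus that shrinks as the child gets visited more.
        log_n = math.log(node.visits)

        def ucb1(child):
            if child.visits == 0:
                return float('inf')  # always try unvisited children first
            return (child.wins / child.visits
                    + UCT_C * math.sqrt(log_n / child.visits))

        return max(node.children, key=ucb1)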

...to be continued

An interesting position (Sept. 07)

( moved to BQM381 )

jpeg2sgf?

Today I played a game with a friend of mine (on my beloved home-made goban), and we didn't have enough time to finish. Luckily, I had my digital camera with me, so I took a picture of the position before we packed everything up.

Of course, we can continue the game another day, but it would be more convenient to convert the picture into an sgf file, pick up from the position where we left off, and finish the game online.

I recall having once seen such a converter, a kind of "go OCR" program, but I can't find it on the web anymore. Does anyone know of that kind of software?

By the way, here is the picture: [ext] http://img260.imageshack.us/img260/8910/imgp1099bg3.jpg. I was white, and my friend was black. My prisoners are on the right-hand side, and black had about twenty prisoners. I am very unsure about the outcome of the game, so all comments are welcome :-)

(Gyom, 2007-06-20)

Edit (2007-06-26): Hurray! I found ChrisBall's [ext] image2sgf Perl script, and it Just Works. I used it to convert the picture to an sgf file, adjusted the komi to take the existing prisoners into account (is there a better way?), and ran GnuGo against itself from that position. White wins by about twenty points, but the central black group lives and stays connected to the rest.
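For the record, the komi adjustment is just arithmetic: under territory scoring each prisoner is worth one point to the capturing side, and since the prisoner piles are lost when the photo becomes an sgf file, their difference can be folded into the komi. A minimal sketch (all names are mine):

    def adjusted_komi(base_komi, captures_by_white, captures_by_black):
        # Each prisoner counts one point for its captor under territory
        # scoring, so the prisoner difference goes straight into komi.
        return base_komi + captures_by_white - captures_by_black

    # For this game: adjusted_komi(base_komi, my_prisoner_count, 20),
    # black having taken about twenty prisoners.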

Given that result, it would have been very interesting to continue the game by hand, and the final outcome might have been very, very different...

On the difficulty of problems (April 07)

To improve my play, I recently started practicing go problems. After several tries on SL, I gave up because it was too frustrating: I don't know how to choose the right problems, and I find them either too easy or too hard. A couple of days ago, a friend of mine showed me [ext] GoProblems.com, a website dedicated to this kind of practice. They offer a (not so) nice Java applet for interactively solving random problems from their huge database. This is just great.

Still, I am wondering how the difficulty of their problems is evaluated. The [ext] info page says:

goproblems.com measures two types of difficulty. Since it's hard for people to objectively measure the difficulty of a problem, goproblems.com measures two statistics that independently or in small numbers mean little, but over time and in context should provide a means to measure the difficulty of a problem.

When you're playing with a problem, you'll see the difficulty expressed as: x/y. x is the percentage of people who've gotten this problem wrong. y is the average number of seconds it took people to solve the problem correctly.

For instance, when practising a [ext] problem, the difficulty is presented like this:

Difficulty: (81) 1 kyu / 57 secs

This seems to mean that 81% of people get this problem wrong on their first try, and that people eventually solve it in about one minute. But where does the "1 kyu" indication come from? Does it mean the problem is intended for 1k players? And who decides that level?

More generally, I am wondering how to evaluate the difficulty of go problems automatically, with the goal of presenting "fair difficulty" problems to users in order to help them improve.
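One idea (pure speculation on my part, nothing that GoProblems.com claims to do): treat each first attempt as a game between the user and the problem, and update both ratings Elo-style, so that problems drift toward the level where about half of the attempts fail. A minimal sketch:

    K = 16  # update speed

    def expected_solve(user_rating, problem_rating):
        # Standard Elo expectation: probability that the user solves it.
        return 1 / (1 + 10 ** ((problem_rating - user_rating) / 400))

    def update(user_rating, problem_rating, solved):
        e = expected_solve(user_rating, problem_rating)
        s = 1.0 if solved else 0.0
        return (user_rating + K * (s - e),     # user gains by solving
                problem_rating + K * (e - s))  # problem gains by resisting

Counting only first attempts would also avoid the bias I complain about further down.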

As a data point: yesterday I tried the [ext] 20 kyu series and found it very easy, while I am pretty sure my actual level is no higher than 20k...

Any reactions?

(Gyom, 2007-04-12)

Steve Hi Gyom. I'm not sure about this, but I suspect from what I've seen before that certain percentages are associated with certain ranks. How the association is made, I don't know, though - it may be based on the claimed ranks of the people who are registered on Goproblems.com? Also note that people are often better or worse at problems than their rank would indicate. People who study more life and death than strategy, for example, tend to excel at problems beyond their rank, while lazy people (like me) tend to lag behind their rank with the level of problems they can comfortably handle.

Gyom Hi Steve, and thanks for your feedback. I agree with you about the gap between someone's "rank" and their "strength at solving problems", as these are two very different things. In answer to the same [ext] question asked directly on GoProblems.com, someone said that the "kyu" estimation is merely a function of the percentage:

30k is plevel=0 (0%), 25k is plevel=0.2 (20%), and so on...
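Taken literally, that answer describes a linear map from plevel to a kyu grade; extending it beyond the two quoted points is my own guess:

    def plevel_to_kyu(plevel):
        # 30k at plevel 0.0, 25k at plevel 0.2, extended linearly:
        # every 0.2 of plevel is worth five kyu grades.
        return 30 - 25 * plevel

Note that this does not match the "(81) 1 kyu" example above (a plevel of 0.81 would give about 10k), so either the real curve is much steeper at the hard end, or plevel is not simply the failure percentage.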

IMHO, this is deeply wrong, because these statistics are biased when people make several attempts at the same problem. Both the percentage and the time should be evaluated on the first try only.

Moreover, as a ~20k player I will mostly try ~20k problems, so I won't take part in evaluating the difficulty of ~30k problems, nor of ~5k ones.

This issue seems to be problematic for everyone anyway: ProblemRating, SGFWishlist#toc6, PageDifficultyDiscussion...

RBerenguel: I think these percentages are calculated once, the same as the player rating, that is, only affected by the first try, not the second or third ones. I am 6k (KGS) and have done everything from 15k to 5k; it's important to do the review part.

Gyom Of course! I actually did the "review part" (goproblems-rated 30k, 25k, 20k) before getting to 18k problems these days. And you're right: if the percentage is calculated only on first tries, and if everyone starts from easy problems before reaching their actual level, then the percentage must be relevant. Thanks!

A pattern-matching-only go program? (January 07)

As you might have noticed, I'm quite interested in computer go. I am still far from beating GnuGo, so I don't pretend to have a miracle answer to computer go, but I wonder whether some ideas have been tried before (feel free to add comments if you have any!):

What I would like to see is a go program that relies only on a big database of (pro) games, and that has little "intelligence" of its own. By matching the current position (à la Kombilo) against a big database, would it be possible to find a good next move? The idea is, in a way, to generalize joseki dictionaries: some situations are very frequent and very well studied, and we do have answers for them. Why not follow well-established knowledge instead of re-inventing the wheel (I mean in-depth search, life-and-death analysis, and so on)?

The main difficulty is to find a satisfying pattern-matching algorithm. For the first few moves, it would be very easy: game openings are an extensively studied subject, so plenty of games have similar openings, and we could just match the whole board (modulo mirroring and rotation, of course). But the question arises when it comes to middle-game moves: what kind of "pattern" do we want to look for? Searching for the board "as is" will obviously not yield any results, so there is no hope of finding a next move that way; we must search for more "generic" patterns.
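For the whole-board case, here is one way I can think of to handle the "modulo mirroring and rotation" part (the board representation and all names are mine, and have nothing to do with Kombilo): store every position in the database under a canonical form, the minimum over the eight symmetries of the square.

    def rotate(board):
        # Rotate a square board (a tuple of row strings) 90 degrees clockwise.
        return tuple(''.join(row[i] for row in reversed(board))
                     for i in range(len(board)))

    def mirror(board):
        # Mirror the board left to right.
        return tuple(row[::-1] for row in board)

    def canonical(board):
        # Collect the 8 rotations/reflections and keep the smallest one;
        # any fixed total order works, as long as it is used consistently.
        variants = []
        b = board
        for _ in range(4):
            variants.append(b)
            variants.append(mirror(b))
            b = rotate(b)
        return min(variants)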

One idea could be to search for the whole board first, and, if it is not found (or maybe if too few matches are found), to search for a narrower pattern closer to the last stone, and so on. Obviously this approach would always yield a "next move", but I'm very curious about how well it would play.
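In code, the loop I have in mind would look something like this. Everything except the overall shape is made up: crop_around(), db.lookup(), and most_frequent_next_move() are hypothetical, and canonical() is the symmetry-folding function sketched above.

    MIN_MATCHES = 10  # how many database hits to require before trusting them

    def suggest_move(board, last_move, db):
        # Start from the whole board and shrink the search window around
        # the last move until the database yields enough matches.
        size = len(board)
        while size >= 3:
            pattern = crop_around(board, last_move, size)
            matches = db.lookup(canonical(pattern))
            if len(matches) >= MIN_MATCHES:
                return most_frequent_next_move(matches)
            size -= 2  # narrow the window and try again
        return None    # no advice found: fall back to something else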

Does anyone know of a similar attempt?

(Gyom, 2007-01-11)


Blip left some comments on my blog about this subject ([ext] http://guillaume.salagnac.free.fr/blog/?2007/01/11/51-pattern-matching-computer-go#co):

Yep, I've heard of things like that in the machine learning community, I'll try to get you some references as soon as I have some time...

I also remember that one idea involved finding the exact best solution on really small boards, and generalizing to normal sizes.

[ext] http://books.nips.cc/papers/files/nips19/NIPS2006_0223.pdf

(Gyom, 2007-01-15)


The "narrowing" part of the job could maybe made easier with some knowledge similar to what is said in PointPopularityByMoveNumber

(Gyom, 2007-01-17)


After a little more thinking about this subject, I now have serious doubts about the feasibility of this approach. It could be great for opening moves, acting like an "opening library"-driven computer player, but for middle-game and end-game situations I think there is very little hope of finding useful advice in a game database. Games are so different from one another that there is almost no chance of finding the same situation twice.

The only solution would be a very, very clever narrowing heuristic, but that means putting some intelligence back into the program... (Gyom, 2007-01-18)


Would there be a better chance of finding something useful if we considered only 9x9 games? (Gyom, 2007-01-22)


Well. I now think there is very little chance of finding any useful advice in a database for middle-game and end-game moves. So the best we can do is probably to switch to another engine when the database search gives up, and that is exactly what others have already done: GoFigure is an example of such an idea.

Well. That's life... (Gyom, 2007-02-07)

