Cutting the Gordian Knot
A decent Chess program can beat most players. A Go program tends to fall over against an amateur once that amateur has played it a few times: the amateur learns the program's weaknesses; the program learns nothing.
Because Go programs suck, they only get to play experts at ridiculously high handicap levels (e.g. 25 stones). It's hard for their programmers to learn much from these games.
All computer tournaments are cool, but they look like white belt karate tournaments: so many mistakes that it's hard to even call what they are doing "Go."
Addendum: Even in the era of Monte Carlo programs like Zen, the tag-line is arguable. Four-stone handicaps are ridiculous from the perspective of learning to make computers play Go as successfully as they dominate chess.
The program will be making big errors within the first 50 moves. Playing to the end of the game is basically a waste of time. A human opponent will point out that the game is over, and explain why. A computer has no hope of understanding why it will be dead in 100 moves' time.
One bad move in the beginning of the game means the game is pretty much over in terms of learning: even if the computer realizes it's behind, it's now forced to play catch-up rather than high-powered Go.
Note: A fuseki library is not the same as "understanding" how to play in the opening. It is clear these criticisms will survive in one form or another until an artificial intelligence can go head to head with a human with some degree of spontaneity.
We take a http://prize.hutter1.net/ approach. We pick a canon of, say, ten thousand pro games and provide them as the dataset. The goal is to compress that dataset into as few bytes as possible (bytes = size of compressed dataset + size of program; general-purpose language runtimes are free).
This, I think, is a huge win: one bad move doesn't kill a computer Go program; we don't even need to run tournaments; and we have a benchmark number we can watch improve over time.
To flesh this out a little for non-CS guys: compression is basically a question of assigning probabilities to what happens next. A good Go program should assign high probabilities to the moves experts make, and so get good compression numbers. If a program guesses wrong on move n, it pays a price in compressing that one move, but gets to keep guessing at the remaining moves of the game. One bad guess doesn't poison its view of the entire game. At the end of the pro game, we just compare how well the computer Go programs compressed it: smaller compressed files mean better prediction of the actual moves, just like expert kibitzers!
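That scoring rule can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual contest code: `predict` stands in for whatever move-prediction model a Go programmer supplies, and the cost formula is the standard one for an ideal arithmetic coder.

```python
import math

def compressed_size_bits(game_moves, predict):
    """Total cost, in bits, of encoding one game with an ideal
    arithmetic coder driven by a move-prediction model.

    `predict(position)` returns a dict mapping each legal move
    to its predicted probability (probabilities sum to 1).
    """
    position = []
    total = 0.0
    for move in game_moves:
        probs = predict(position)
        # An ideal coder spends -log2(p) bits on a move the model
        # assigned probability p: confident correct guesses are
        # cheap, wrong guesses expensive -- but one bad guess never
        # ruins the encoding of the rest of the game.
        total += -math.log2(probs[move])
        position.append(move)
    return total
```

A uniform predictor over 256 candidate moves pays exactly 8 bits per move; any model that concentrates probability on the moves pros actually play pays less.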
Showing a program professional moves does not address the issue of understanding. I'm no pro, but I would say that 99% of the time, no one has to respond to one of my moves the way that, say, Shusai responded to one of Go Seigen's. While trying to teach by pro example is a powerful idea, I fail to grasp how a program could ever hope to make good moves by simply copying games that, in all probability, present moves irrelevant to any situation the program might find itself in.
Read the explanations below - I think that you, too, are misunderstanding the proposal. You're not feeding a program a bunch of pro games to use as reference. You're using the games to TEST the program's ability to predict pro moves. The point of compressing the games isn't to fit more of them into the program's library, it's as a measure of the program's understanding. Just as a compression program that "understands" English spelling and grammar is better at compressing a file containing English text, a program that can accurately predict pro moves will be better at compressing professional game files.
We can even provide an optimal compression function so that Go programmers don't need to write their own: for any board, the programmer just provides the probabilities of a play on each point. This has the added advantage that we can watch a pro game and see the preferred plays of the computer players at each move.
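As a small sketch of that last point: if the programmer supplies a probability for every point, showing the computer players' preferred plays during a pro game is just a sort. The `probs` dictionary here is made-up example data, not any engine's real output.

```python
def preferred_plays(probs, k=3):
    """Return the k points the engine likes best, most probable first.

    `probs` maps board points to the engine's move probabilities.
    """
    return sorted(probs, key=probs.get, reverse=True)[:k]

# Hypothetical opening-position probabilities:
probs = {"Q16": 0.4, "R16": 0.3, "R15": 0.2, "K10": 0.1}
preferred_plays(probs)  # ['Q16', 'R16', 'R15']
```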
Kosh: So eventually the computer has a database of all possible positions? There are about 3^361 positions, so good luck!
Alex: I think you're misunderstanding the proposal. It works like this. Say you have a pro game that's 256 moves long. Each move is chosen from one of 362 - n possibilities, where n is the number of stones on the board (362, not 361, because pass is an option). Say that averages out to 256 possibilities, so 8 bits per move. Without compression, the game is going to be 256 x 8 = 2048 bits.
However, you write a Go-playing program that attempts to predict moves (without any randomness in the algorithm, so it'll always predict the same move for the same position). Then, any move that it correctly predicts can be recorded with just a single bit, telling the decompressor "ask the prediction algorithm - it knows the next move." Now, a program that predicted every move completely accurately (impossible, since pros aren't perfect, so even a kami no itte program would guess wrong sometimes) would compress each game down to 256 bits instead of 2048. One that got a third of the moves right (about my average when looking at pro games for the first time) would use 256 flag bits plus 8 bits for each of the other two-thirds of the moves: 256 + (2/3 × 2048), or about 1620 bits. If another program was compressing the games down to 1500 bits, that program is "better."
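Spelling that accounting out explicitly, under the one-flag-bit scheme just described (every move carries a flag bit; wrong guesses cost `raw_bits` more):

```python
def flag_bit_size(n_moves, hit_rate, raw_bits=8):
    """Bits to encode a game where each move carries a 1-bit flag:
    the flag alone if the deterministic predictor's top pick was
    the actual move, the flag plus `raw_bits` spelling out the
    move otherwise."""
    hits = n_moves * hit_rate
    misses = n_moves - hits
    return hits * 1 + misses * (1 + raw_bits)

flag_bit_size(256, 1.0)   # 256.0  -- a perfect predictor
flag_bit_size(256, 1/3)   # ~1621  -- one move in three predicted
flag_bit_size(256, 0.0)   # 2304.0 -- worse than the raw 2048 bits!
```

Note the last case: a predictor that is always wrong makes the file bigger than no compression at all, which is exactly why the benchmark rewards genuine predictive skill.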
Depending on the accuracy of the program, it might be preferable to have the algorithm make, say, seven guesses instead of one... then each move would be compressed as three bits, to tell the decompressor which guess number is the correct one, while the three-bit string "000" would tell it that none of the guesses is correct and the following eight bits will spell out the actual move.
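The seven-guess variant is the same arithmetic with a wider code. A sketch, where `top_k_rate` is a hypothetical figure for how often the right move lands among the predictor's top seven picks:

```python
import math

def multi_guess_size(n_moves, top_k_rate, guesses=7, raw_bits=8):
    """Bits for the k-guess scheme: a fixed-width code per move names
    which of the predictor's `guesses` picks matched (the all-zeros
    code means "none of them"), and each unmatched move costs
    `raw_bits` extra to spell out."""
    code_bits = math.ceil(math.log2(guesses + 1))  # 3 bits for 7 guesses
    misses = n_moves * (1 - top_k_rate)
    return n_moves * code_bits + misses * raw_bits

# If the right move is in the top seven 80% of the time:
multi_guess_size(256, 0.8)  # ~1177.6 bits
```

Whether one guess or seven wins depends on the predictor's accuracy profile, which is the point of the "depending on the accuracy" remark above.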
Anyway, the point is to change the standards for producing an intelligent Go program from actually PLAYING Go (where one mistake early on can spell disaster, no matter how well the computer plays computer-friendly situations like L&D and the endgame) to simply predicting pro moves, which is a somewhat different task.
Mirsha: I think part of your idea as presented is flawed. Your ideas on compression don't seem terribly sound: simply compressing the next best move is a bit pointless if it lacks the context to know what that move actually is. Using this idea you could certainly create an abstract binary tree which would accurately map out a given game, but it would have no context in which to be valid. For example, 010 might indeed be the 'correct' response, but is 010 A10, or K5? You would need some form of key to translate these meaningless machine values back into goban points, which raises another problem. Each game would independently give rise to its own unique key, which wouldn't be interchangeable with other games and would hence cause problems when looking for alternative solutions. Of course, the solution here is simply to create a key common to all the games, but that may hamper its efficiency.
Another thing I'm not quite clear on is the use of such a program. Let's say this program existed and I played it, and my first move was A1, a move unlikely to be seen in its training data set. What next? Black wins by default on the first stone?
I'll relate a story about AI here. The American military decided to invest a whack of cash into developing an AI system which could monitor video footage and scan it for hidden tanks. The scientists went away and built a system, trained it on data, then tested it out. It worked perfectly fine. They then had a look at how the AI made its decisions and found that its decision-making was based entirely upon the position of the sun in the sky. It made the right decision for all the wrong reasons and would never have worked in a real situation.
Alex: First of all, it's not my idea - I didn't create the page. I'm just trying to clarify the original author's idea, as I understand it.
Second, I think you're misunderstanding the way it works. Both the compressor and the decompressor access the same Go AI, which has no randomisation and will always predict the same move in the same board position. So, given an empty board, let's say its preferences are Q16, R16, R15, R14, R17, etc... then if the first move in the actual game is R15, the compressor says "Okay, that was the AI's third pick," and codes it as 011. The decompressor comes along, says, "Right now, the board is empty," so it feeds the AI an empty board, the AI says "Q16, R16, R15..." the decompressor says, "R15 was its third pick," and writes R15 to the decompressed file.
Now it KNOWS what the first move was, so the next three-bit set from the compressed file ISN'T without context. Just as the compressor is feeding the board position at each move to the AI and compressing the moves one by one, in sequence, likewise, the decompressor is decompressing each move in sequence, and so consequently knows what board positions to feed to the AI in order to interpret "the AI's fourth pick" as an actual board position.
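That compressor/decompressor loop can be made concrete. In this sketch, `rank_moves` is a toy stand-in for the shared deterministic AI (a real engine would go there); the point is only that both sides call the same function on the same reconstructed position, so "the AI's nth pick" is never ambiguous.

```python
def rank_moves(position):
    """Toy stand-in for the shared, deterministic Go AI: returns
    the legal moves ordered by preference, given the moves so far."""
    base = ["Q16", "R16", "R15", "R14", "R17", "K10", "D4", "C3"]
    return [m for m in base if m not in position]

def compress(game):
    position, codes = [], []
    for move in game:
        # Record the move as "the AI's nth pick" for this position.
        codes.append(rank_moves(position).index(move))
        position.append(move)
    return codes

def decompress(codes):
    position = []
    for code in codes:
        # Same position, same deterministic AI, same ranking --
        # so the code maps back to the same move.
        position.append(rank_moves(position)[code])
    return position

game = ["R15", "Q16", "D4"]
compress(game)              # [2, 0, 4]
decompress(compress(game))  # ['R15', 'Q16', 'D4']
```

A real scheme would pack those pick numbers into variable-length bit codes, but the round trip above is the part Mirsha's objection was missing.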
You do have a valid point that using only pro games as a data set would tend to lead to people developing algorithms that only respond with pro play to pro play, and would be confused by amateurish moves.
Like I said, it's not my idea and I don't actually think it's the right way to go. A better example than the A1 thing is that if you're trying to predict moves, the best bet when you see a peep is just to connect without reading out other possible responses, since the connection is the most common. Thus, you would end up with an AI that always connects to anything that looks like a peep, which is hardly going to lead to a strong opponent.
LukeNine45: This is an interesting approach from a benchmarking perspective (it's easy to tell if you're doing better or worse). However, the best compression needs to have some domain knowledge (i.e. it needs to understand what it's compressing), so for go this is equivalent to writing a very strong program. In other words, this approach doesn't make the problem any easier to solve, only (possibly) easier to track progress in. (Maybe-- it's possible that knowing the exceptions to all the rules is what makes you strong, so that the last 0.5% compression might make the difference between a 10 kyu program and a 2 dan program)