A solved game is a game whose outcome (win, draw, or loss) can be correctly predicted from the start position when each side plays optimally.
Given the rules of any two-person game with a finite number of positions, one can always construct a minimax algorithm that exhaustively traverses the game tree. However, since for many games such an algorithm would require an infeasible amount of time to generate a move in a given position, a game is not considered weakly or strongly solved unless the algorithm can be run on existing hardware in a reasonable time. Many such algorithms rely on a huge pre-generated database and are effectively little more than a lookup into it.
Any two-player game without chance can be "solved" on several levels:
In the weakest sense, solving a game means proving whether the first player will win, lose, or draw from the initial position, given perfect play on both sides. This can be done in a purely theoretical way that may not actually help determine perfect play.
More typically, solving a game means providing an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game. The most complex game solved to date (as of January 2011) is probably checkers.
The strongest sense of solution requires an algorithm which can produce perfect play from any position, i.e. even if mistakes have already been made on one or both sides.
Whether a game is solved is not necessarily the same as whether it remains interesting for humans to play. Even a strongly solved game can still be interesting if the solution is too complex to be memorized; conversely, a weakly solved game may lose its attraction if the winning strategy is simple enough to remember (for example tic-tac-toe).
An ultra-weak solution (for example Hex on a sufficiently large board) generally does not affect playability.
There is only a finite number of possible go games, and so it can be solved. That is, one or more "best games for both players" can be found by looking at every possible game.
However, on most board sizes we don't have enough time/computing power to realistically look at every game, so for these sizes the discussion is purely academic.
There is also the problem that different rule sets may assign different results to the same game. It is possible that this is only a theoretical problem that does not arise in practice.
Go without komi can be ultra-weakly solved: it is a win or a draw for Black, because if it were a win for White (the second player), Black could pass on his first move and in effect let White have the first move. (This is an illustration of the strategy-stealing argument.)
However, in normal go there is komi, so this proves nothing and does not help one win the game.
The komi is to compensate White for not having the first move. If the compensation were correct and both players played optimally, then both players would end with the same number of points and the game would end in a draw. This implies that the correct komi is an integer (a whole number, without a fractional part).
Traditionally, draws (jigo) were counted as a win for one of the players; as a result, after komi was introduced, the komi was given a fractional part to prevent draws.
- Optimal play and Komi
- Perfect play
- value of the first move
- jigo, for a description of how the komi became fractional
To weakly solve go is to provide an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game. It is not necessary to play the best move in every situation (many situations are unreachable when one player plays optimally), or even the best move in every reachable situation. (If a position offers four moves, one of which wins by half a point and another by 100 points, choosing the half-point win is good enough.)
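The point that a weak solution need only secure the result, not maximize it, can be sketched with a toy minimax solver. The game tree, move names, and scores below are invented for illustration, not a real go position:

```python
# Toy illustration of weak solving, using a made-up game tree.
# Scores are from the root player's point of view. The solver only
# needs *some* move that secures a positive result, not the move
# that wins by the largest margin.

TREE = {
    "root": {"a": "A", "b": "B"},
    "A": {"a1": 0.5, "a2": 1},     # move "a" guarantees at least a 0.5-point win
    "B": {"b1": 100, "b2": -1},    # move "b" could win big, but the opponent can refute it
}

def value(node, maximizing):
    """Exhaustive minimax value of a node (scores from the root player's view)."""
    if not isinstance(node, str):          # leaf: a final score
        return node
    children = [value(c, not maximizing) for c in TREE[node].values()]
    return max(children) if maximizing else min(children)

def weakly_solving_move(root):
    """Return any move whose guaranteed value is positive (a secured win)."""
    for move, child in TREE[root].items():
        if value(child, False) > 0:        # the opponent replies next, so minimizing
            return move
    return None

print(weakly_solving_move("root"))  # "a": the modest but guaranteed half-point win
```

Move "b" offers the 100-point win, but the opponent's reply b2 refutes it, so the weak solver is content with move "a" and its guaranteed half point.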
For 7x7, a group of strong players have, with help from computers, tried to find perfect play. They claim to have looked at all reasonable variations, but of course not at all variations. Thus 7x7 is not truly solved in the sense meant on this page (not even ultra-weakly solved).
There are many reasons that make go difficult to solve.
- Game tree complexity
- State space complexity
- Decision complexity
- Go is a divergent game.
Taken separately, these factors do not amount to much, but together they make solving go complex.
Game tree complexity grows with the number of different moves available to a player at each point during the game. It is defined as follows: the game-tree complexity of a game is the number of leaf nodes in the solution search tree of the initial position of the game.
In layman's terms: in every position there are many moves to consider, and there are many moves to make, so the possibilities multiply.
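Using the commonly cited rough figures of about 250 legal moves per position and about 150 moves per game, the order of magnitude of this multiplication can be computed directly (both figures are rough estimates, not exact values):

```python
import math

branching = 250   # rough average number of legal moves per go position
depth = 150       # rough length of a typical game in moves

# branching**depth overflows ordinary floats, so work in base-10 logarithms.
log10_complexity = depth * math.log10(branching)
print(f"game-tree complexity ~ 10^{log10_complexity:.0f}")  # ~ 10^360
```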
The state-space complexity of a game is defined as the number of legal game positions reachable from the initial position of the game.
Decision complexity refers to how hard it is to decide whether a given position is won, drawn, or lost for a player. Is a group alive or dead? There is no easy formula to decide this; playing it out is probably the only solution (which leads back to the game tree complexity).
Compare this with chess, where a player who can capture the opponent's queen without losing his own will almost always win the game.
Games can be roughly divided into two categories: divergent and convergent.
Games where the number of possible positions declines (after the opening phase) are called convergent. This is the case in most games where the number of pieces declines during the game, usually through capturing pieces of the opponent. These games can be solved by building a database of all end positions and then working backwards to derive information about more complex positions (retrograde analysis).
For example, in chess there are endgame databases for almost any combination of seven or fewer pieces, and they are useful: they make the game shorter and easier in practical terms, since the state of such a position (win, draw, or loss) can simply be looked up.
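Retrograde analysis can be sketched on a toy convergent game. Here a simple subtraction game stands in for a real endgame: players alternately take 1 to 3 stones from a pile, and whoever takes the last stone wins. The choice of game and pile size is purely illustrative:

```python
# Retrograde analysis on a toy game: players alternately remove 1, 2 or
# 3 stones from a pile; whoever takes the last stone wins. We start from
# the terminal position (0 stones: the player to move has already lost)
# and work backwards, labelling each larger pile from piles already solved.

N = 20
WIN, LOSS = "win", "loss"
table = {0: LOSS}                      # terminal position: no move available, you lose

for pile in range(1, N + 1):
    moves = [pile - take for take in (1, 2, 3) if take <= pile]
    # A position is a win if any move leads to a position lost for the opponent.
    table[pile] = WIN if any(table[m] == LOSS for m in moves) else LOSS

print([p for p in range(N + 1) if table[p] == LOSS])   # losing piles: multiples of 4
```

The same idea, applied to chess positions with few pieces instead of pile sizes, is how endgame tablebases are built; go resists it because its "end positions" are not few and simple.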
In divergent games the number of possible positions increases throughout the game, and trying to examine all possible end positions becomes almost useless.
In go, because the number of stones increases during the game, so does the number of different positions. Making a list of all possible end positions is an almost useless exercise, because using it for backtracking (deriving knowledge about earlier positions in the game) does not work.
Our goal is a time estimate, as large as that may be.
Go has been solved for boards up to 5x6, where the number of variations is manageable enough to let a computer go through all of them in reasonable time.
The IEEE (Institute of Electrical and Electronics Engineers) publishes a journal called Spectrum. An October 2007 article titled Cracking Go was written by Feng-Hsiung Hsu, the hardware and chip designer behind Deep Blue, the chess computer that beat World Chess Champion Garry Kasparov. Hsu stated in the article: "Nevertheless, I believe that a world-champion-level Go machine can be built within 10 years, based upon the same method of intensive analysis--brute force, basically--that Deep Blue employed for chess." In the article, Dr. Hsu does not discuss these characteristics, among others, that distinguish Go from chess:
- the lack of a whole board opening book. Opening books played a crucial role in Deep Blue.
- the consequences of a local fight upon other fights and the global position. Chess is essentially a single local fight.
- the lack of a reasonable whole board evaluation function.
Dr. Hsu mentions two challenges the effort faces:
- the first problem is the tree of analysis.
- the second problem is the evaluation of the end positions.
- solving go on small boards (5 by 5 is solved)
- Number of Possible Outcomes of a Game
- Number of Possible Go Games.
First, how many games are legal games of Go? Let this number be L, for Legal.
Second, what portion of legal games are reasonable? This number is the most difficult to estimate. Let this number be R, a fraction between 0 and 1.
Third, how quickly can a computer evaluate a finished game and determine its outcome? This number will be relatively constant. Let's use E, for evaluation.
Fourth, how quickly can a computer generate a reasonable game? This is difficult to estimate, but should be relatively constant. Let's use G, for generate.
Finally, how many users will donate their computers, and for how long? Let U be users, and T be hours/day.
U*T gives us hours per day of computer time.
L*R gives us how many games we must evaluate.
(G+E)*(L*R) gives us the total computer time for the project, measured in hours.
Time Estimate in days = ( ( G + E ) * ( L * R ) ) / ( U * T )
Computer task times
These two "constants" vary a good deal depending on the speed of the computer used, but most people have computers of similar speed. Most people probably have at least a Pentium 3.
Let's measure G and E in hours too, just to keep the units simple.
G = 0.0005 to 0.00001 hours
Generating the game. Different approaches could be taken. Most likely, dozens to hundreds of games could be generated per second.
E = 0.00001 hours
Evaluating the score. Very quick for a computer: less than a hundredth of a second.
The maximum number of legal positions is 3^361*362 (see discussion).
Note that the question "how many legal positions?" is not the same as "how many legal games?"
R = ?
This is the most difficult to estimate. I welcome your thoughts.
For example, we could specify "only games with first moves in corners" or "only games with first moves in corners or approaching other corners". These two specifications eliminate a huge number of games (and yet leave a ridiculous number).
We can't really say "only games matching a fuseki played by a pro" because pros could have failed to play the best fuseki yet.
Likewise, we can't really restrict by joseki.
Herman Hiddema: For a rough estimate, lets make some assumptions:
- In any position, there are (on average) 10 reasonable moves.
- An average game lasts for 200 moves.
Given these assumptions, the number of reasonable games is in the order of 10^200
U = 10,000
As few as one thousand, but with participation from East Asia maybe many more: perhaps as many as fifty thousand, though ten thousand is more likely.
T = 3 hours / day
Some users will give 24 hours of CPU time. Some will only leave it on at night. Some will only run it as a screen-saver and will turn their computers off at night. Unfortunately, the last category will be the largest.
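Plugging the page's rough numbers into the formula gives a sense of scale. Since 10^200 overflows ordinary floating point, the arithmetic is done with base-10 logarithms; every input below is a guess taken from the text above:

```python
import math

G = 0.0005           # hours to generate one game (upper guess from above)
E = 0.00001          # hours to evaluate one finished game (guess from above)
log10_LR = 200       # reasonable games ~ 10^200 (Herman Hiddema's estimate)
U = 10_000           # donated computers (guess)
T = 3                # hours per day per computer (guess)

# days = ((G + E) * (L * R)) / (U * T), computed as a base-10 logarithm
log10_days = math.log10(G + E) + log10_LR - math.log10(U * T)
print(f"time estimate ~ 10^{log10_days:.0f} days")  # ~ 10^192 days
```

Even with these optimistic constants the answer is around 10^192 days, which already hints at the conclusion reached below about brute force.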
These numbers obviously involve a lot of guesswork, so any result will have a large uncertainty.
A more formal uncertainty analysis should wait until we have better numbers.
Thank you for taking the time to read this. I am sure it will generate some discussion. Please concentrate on finding better estimates for our variables, especially R.
Just for the curious:
Suppose every particle in the universe (subatomic: electrons, quarks, etc.; overestimate and say 10^90) were evaluating one board position every Planck time (the smallest meaningful unit of time, about 5.4x10^-44 seconds, i.e. roughly 1.9x10^43 Planck times per second) for the entire life of the universe (~14 billion years, or 4x10^17 seconds).
How many board positions have been evaluated?
1.9x10^43 * 4x10^17 * 10^90 ~= 10^151 board positions.
Let's also take the maximum number of board configurations as 3^361 [3^(19*19)]. That's about 1.8x10^172.
So it would take the universe about 10^21 more 14-billion-year periods to evaluate all the board positions.
So solving go using brute force is simply impossible. An entire universe dedicated solely to this task could not do it.
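The back-of-the-envelope numbers above can be checked directly; the particle count and universe age are the same deliberate overestimates used in the text:

```python
import math

planck_per_sec = 1 / 5.4e-44     # Planck times per second (~1.9e43)
universe_sec = 4e17              # ~14 billion years in seconds
particles = 1e90                 # generous overestimate of the particle count

log10_evaluated = math.log10(planck_per_sec * universe_sec * particles)
log10_boards = 361 * math.log10(3)          # 3^361 possible board contents

print(f"positions evaluated ~ 10^{log10_evaluated:.0f}")   # ~ 10^151
print(f"possible boards     ~ 10^{log10_boards:.0f}")      # ~ 10^172
print(f"shortfall factor    ~ 10^{log10_boards - log10_evaluated:.0f}")  # ~ 10^21
```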
symplicity: As a hobbyist programmer who has done some work in this sort of stuff, it pains me when people take the raw tree complexity or state-space size of a game and claim that it must take approximately that many steps of computation to solve the game. If that were the case, then Checkers or even Connect 4 would probably not have been solved yet.
From the point of view of game-tree complexity, only a square-root proportion of the game tree needs to be examined to prove the value of a game (alpha-beta pruning). From the point of view of the state space, it is likely that drastically fewer positions, by many orders of magnitude, would need to be examined than the total number of possible positions. It is absolutely unnecessary to examine anything close to every possible position. Also, the estimates of maximum theoretical game length (~10^48?) are irrelevant, because it is exceedingly unlikely that any game much longer than a few hundred moves is actually relevant to proving the game result.
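The pruning effect of alpha-beta can be demonstrated empirically on a random game tree; the branching factor, depth, and leaf values below are arbitrary choices for illustration:

```python
import random

random.seed(0)
BRANCH, DEPTH = 5, 8   # arbitrary tree size for illustration

def build(depth):
    """A uniform random game tree: nested lists with numeric leaves."""
    if depth == 0:
        return random.random()
    return [build(depth - 1) for _ in range(BRANCH)]

visited = 0

def alphabeta(node, alpha, beta, maximizing):
    """Standard alpha-beta search that counts leaf evaluations."""
    global visited
    if not isinstance(node, list):
        visited += 1
        return node
    if maximizing:
        for child in node:
            alpha = max(alpha, alphabeta(child, alpha, beta, False))
            if alpha >= beta:
                break              # beta cutoff: the opponent avoids this line
        return alpha
    else:
        for child in node:
            beta = min(beta, alphabeta(child, alpha, beta, True))
            if alpha >= beta:
                break              # alpha cutoff
        return beta

tree = build(DEPTH)
alphabeta(tree, float("-inf"), float("inf"), True)
total = BRANCH ** DEPTH
print(f"leaves visited: {visited} of {total}")   # far fewer than the full tree
```

With random move ordering the saving falls short of the ideal square root (about BRANCH^(DEPTH/2) leaves), but it is still orders of magnitude; good move ordering closes most of the remaining gap.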
On the flip side, positions may require multiple visits due to cases where the game history is relevant, that is, in ko. But we could special-case handle simple ko, and for longer cycles (superko), it's not unreasonable to think that for the vast majority of the branches in the game tree relevant to solving the game, the result will be independent of any superko rule that might be used.
Taking into account more Go-specific properties, a solver could drastically prune the search space in many other situations. For instance, in the oodles of branches where one side controls more than half the board, one can restrict the search to proving only that the territory is safe (and after the square-root reduction, the vast majority of the branching occurs as one player optimally refutes completely stupid play of the opponent, so this pruning could happen for a great proportion of the game tree). Also, certain classes of moves might be provably eliminated, such as filling in single-point eyes of groups with 2 or more eyes, etc, which could give a few orders of magnitude when applied everywhere in the tree. And so on, with other optimizations.
Of course, all this still puts 19x19 Go unbelievably far out of reach of solution. But even so, it's not nearly as simple as just counting the size of the game tree. In reality, the square root generally suffices, and we can do quite a bit better than even that.
MrMormon: Which rulesets were solved?