Longest Possible Game
Update:
The paper "Combinatorics of Go" by John Tromp and Gunnar Farnebäck, available on http://www.cwi.nl/~tromp/go/legal.html, has several results concerning the number of positions and games of Go. They derive recurrences for L(m,n), the number of legal positions on an m x n board, and develop a dynamic programming algorithm which computes L(m,n) in time O(m^3 n^2 lambda^m) and space O(m lambda^m), for some constant lambda < 5.4. An implementation of this algorithm lets them list L(n,n) for n<=16, while L(17,17) is currently being computed. For bigger boards, they provide asymptotic bounds and derive a simple formula giving highly accurate estimates, based on a provable "base of liberties" of approximately 2.97573419204335725.
They also study the game tree complexity of Go, proving an upper bound of (mn)^L(m,n) on the number of possible games, and lower bounds of 2^{2^{n^2/2-O(n)}} on n x n boards and 2^{2^{n-1}} on 1 x n boards, in addition to exact counts for mn <= 4 and estimates up to mn = 9.
Concerning long games, they prove that a 19x19 game can be (103+155/2)*2^154= 4110473354993164457447863592014545992782310277120 moves long, not counting passes, while a 1xn game can be 2^{n+1}-4 moves long. (This paragraph contains an arithmetic error.)
Phelan: Could you correct it, then? or point it out so someone else can?
I suppose the remark is of existential nature and cannot supply a constructive proof of existence of the mentioned error ^_^ --Sigmundur?
Tapir: Hint: How likely is it that 361 * 2^153 (the less impressive way to write (103+155/2)*2^154) ends with the digit 0? (The expected last digit is 2.) More importantly, the paper by Mr. Tromp does not contain much about the length of games, and certainly not this number.
kevinwm: 4,121,891,336,534,812,136,496,329,879,770,141,953,873,372,250,112
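The correction is easy to check with exact integer arithmetic; the following sketch (my own, not from the paper) also shows where the original figure appears to have come from:

```python
# (103 + 155/2) * 2**154 simplifies to (206 + 155) * 2**153 = 361 * 2**153.
correct = 361 * 2**153
print(correct)
# 4121891336534812136496329879770141953873372250112  (kevinwm's figure)

# The number quoted above ends in 0 because it is 360 * 2**153,
# i.e. the factor 361 was written down off by one:
quoted = 4110473354993164457447863592014545992782310277120
print(quoted == 360 * 2**153)
# True
```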
Introduction
ilan: What is the length of the longest possible Go game? I believe that some people will be surprised to find out that it is possible to have a game which lasts longer than the size of the board. In particular, in standard 19x19 Go, which has 361 points, there have been a number of professional tournament games lasting longer than 400 moves, the longest being 411 moves, played by Yamabe Toshiro 5-dan against Hoshino Toshi 3-dan in Japan in 1950. More information about these games, as well as the game records, can be found at https://web.archive.org/web/20080310003412/http://www.msoworld.com/mindzine/news/orient/go/special/records/longest.html
The reason that a Go game can last longer than the board size is that captures clear the board of old stones and allow new ones to be placed. I discovered this early in my Go playing career, because a bug on the Yahoo Go server can force you to capture your opponent's dead groups while he passes, so a stubborn cheater can keep the game going in the empty space created by the capture, lengthening the game enough that you lose on time.
To get an idea of how long such a game lasts, here is a sample game with many passes and captures on a 5x5 board:
This continues until moves 41-47
White now captures, and the process starts over again. I call this Stage 1 and the previous one Stage 0; the count is kept by the number of White stones at the bottom.
Black fills up the board until the following position is reached
White captures, and the process starts over again with Stage 2 (two White stones at the bottom), where Black makes 25-3 = 22 moves (not counting passes), filling the whole board except for one empty intersection, for a stage total of 25-2 = 23 moves (not counting passes).
In general, Stage K will start with White capturing Black by placing a Kth stone, then Black making 25-K-1 moves, filling all but one empty intersection, for a total of (25-K) moves.
The end comes at Stage 24 (= 25-1): White places a 24th stone (one move), but Black can no longer place a stone while leaving exactly one empty intersection.
It is fairly simple to understand what is going on, but in order to count all the moves, it helps to consider only the number of stones actually played, ignoring passes, and to treat the general case of a board with N points, since this gives a simple algebraic formula for the answer, which is easier than keeping a tally of a special case.
So, as above, Stage zero will take N-1 moves. In Stage 1, White makes a capture then Black fills in N-2 points for a total of N-1 moves. In Stage 2, it is one White move and N-3 Black moves for a total of N-2. In general, for Stage K, there is one White move and N-K-1 Black moves for a total of N-K. The final stage in this process is K = N-1, where White captures a lone Black stone (1 move).
The game can keep on going, but this is a good place to stop and consider how many moves have been played. The exact number is (N-1) + (N-1) + (N-2) + (N-3) + ... + 1. The well-known formula 1 + 2 + ... + N = N*(N+1)/2 then gives the simple expression (N+2)*(N-1)/2 for the total number of moves. In other words, a game can last about N^2/2 moves on an N point board, which is much longer than the number of points N. Note that if you count passes, then you get about twice this number. On a 19x19 board, the number of moves is 65,340 (not counting passes!), which is already a fairly big number.
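The stage-by-stage sum above is easy to verify exactly (the function name is mine):

```python
def longest_game_stage_sum(N):
    # (N-1) stones in Stage 0, then N-1, N-2, ..., 1 in Stages 1..N-1:
    # (N-1) + T(N-1), with T(X) = X*(X+1)/2, which simplifies to
    # (N+2)*(N-1)/2.
    T = lambda X: X * (X + 1) // 2
    return (N - 1) + T(N - 1)

print(longest_game_stage_sum(361))  # 65340, as stated for 19x19
print(longest_game_stage_sum(25))   # 324 for the 5x5 example
```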
If you don't like the idea of so many passes, a similar game can be constructed without any passes using the following ideas (easier to show on an even-sized board):
Once again, the repeated captures give a sum of the form 1 + 2 + 3 + ... + N/2 (assuming N is even; otherwise the last term is (N-1)/2), which gives about N^2/4 moves, but since this is occurring twice (once at the top and once at the bottom), it actually produces about N^2/2 moves once again.
Lower Bounds
It follows that the simplest non-trivial lower bound for the length of the longest game is of the order of N^2, where N is the number of intersections on the go board in question. I was wondering whether this could be improved.
In fact, this problem has been considered previously by John Tromp, who found much better lower bounds than the ones I construct here. In a 1999 rec.games.go article, reproduced on this very site's page Number Of Possible Outcomes Of A Game, he gives a lower bound of essentially 2^(N/2) for a board of size N.
Below, I will present algorithms which give an order N^3 lower bound for linear and square boards. These algorithms use the same basic ideas, but the linear case is simpler to describe. I will assume the rule where repeated positions are not allowed, otherwise known as the Superko rule. In order to simplify the argument, I have used the version of superko in which players can pass (note that passing repeats a board position), but I believe that all the following algorithms can be modified to ones in which there are no passes except at the very end, as in the Introduction.
It also seems that these algorithms can be improved to give games longer than any fixed power of N, as N goes to infinity, and maybe even an explicit lower bound like N^(Log Log Log N). However, it gets complicated fast...
I would be interested in getting an upper bound, but this seems like a much harder problem. Some progress on this question can be made for a very different type of Go board, Totally Connected Go, see below.
Alex Weldon: Well, the most obvious upper bound is the number of board positions, legal or illegal, which would be 3^N. A better upper bound would be had if you could subtract the number of illegal positions, containing strings like BWB, WBBW or | W B. This is hard, because of the problem of subtracting duplicate positions (i.e. if you count all the positions that contain BWB and all the positions that contain WBBW, you have to subtract out the number that contain both, to avoid counting them twice). My math skills aren't up to the task.
ilan: Well, the simple syntactic transposition from N^3 to 3^N actually makes for a huge spread, especially considering that the true upper bound is probably not that much bigger than N^3. The simplifications you describe don't seem that useful, as they probably only remove a negligible factor, but with a lot of work, as you point out. The superko rule is the most efficient, as it decreases the length from unbounded to a finite number.
Interestingly, 3^N is not an upper bound on the longest possible game. On a 1x1 board, you can have a 4 move game, which is longer than 3^1 = 3.
Alex Weldon: Well, if suicide is forbidden, the longest game on a 1x1 board is actually 2 moves, both of them passes. Even if suicide isn't forbidden, it just results in an empty board again, so if you're disallowing repeated board positions, it's still illegal. Anyway, you are right that passes potentially allow for longer games than the number of possible board positions, but since you can't have more than one pass (or two passes, depending on the rule set) in a row, that really is just a constant factor of 2 (or 3).
ilan: Pretty funny. I guess "tiny go" is subtler than I thought.
Actually, it would be more than 3^1 if there were four or more players playing a game of 1x1 go.
Typical Go Games
In general, it seems that if L is the length of the longest game on a board of size N, then the total possible number of games is about N^L, see the page on Number of Possible Go Games.
So if N^3 is a correct lower bound for the length of a Go game, then you get the interesting result that this strange type of Go game, with most of the whole board being recaptured at least (on the order of) N^2 times, is in fact typical, in the sense that it represents the majority of all possible go games.
Roughly speaking, this has the following implications for the ordinary 19x19 game: Almost all possible games on an ordinary 19x19 Go board will last at least a million moves and a large percentage of the board will be captured at least 10,000 times.
Algorithm 1
This algorithm will produce a game of length order of N^3 on a 1xN linear board. Perhaps it has some interest, because I don't see a way to apply the John Tromp algorithm to a linear board.
Stage (0,1).
Black makes a move starting from the right corner, then gets captured by White from the left.
Stage (0, K).
There is a single White stone 2K-2 points from the right. Black makes moves starting from the left side of the White stone, captures it, continues to the corner, then fills, and then gets captured by White from the left.
No position is repeated from the past, because Black's first move was farther to the left than any of his previous moves, and White's move was also farther left than any previous move.
Stage (0, [N/2]).
[N/2] is the greatest integer less than or equal to N/2. This is the last possible stage of type (0,K), since there will be no more room on the left afterwards.
Stage (0, [N/2]+1).
There is a single White stone left at the corner or one away from the corner. White now plays on the leftmost open intersection after Black passes.
The number of stones played so far is C* N^2, for a constant C (probably = 1/4).
Stage (M,1).
There are 2M White stones starting from the left corner. Black makes a move starting from the right corner, then gets captured by White from the left.
Stage (M, K).
There are 2M White stones starting from the left corner, and a single White stone 2K-2 points from the right. Black makes moves starting from the left side of the White stone, captures it, continues to the corner, then fills, and then gets captured by White from the left.
No position is repeated from the past, because Black's first move was farther to the left than any of his previous moves at this level of M, White's move is farther left than any previous move at this level, and there are more White stones on the left than in any lower level of M.
Stage (M, [(N-2M)/2]).
It is the last possible stage of the type (M,K) level, since there will be no more room on the left after.
Stage (M, [(N-2M)/2]+1).
Apart from the 2M White stones on the left, there will be a (2M+1)th White stone connected to, or one point away from, the corner White stones. White now plays on the leftmost open intersection after Black passes.
The number of stones played in each of these stages is of order C*(N-2M)^2, with M = 1,...,[N/2], so the total number of stones played is of order D*N^3, for some constant D (maybe = 1/12). The number of moves played is double this, since the number of passes was roughly equal to the number of stones played.
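A numerical sketch of this count (assuming the conjectured constant C = 1/4; the helper name is mine) suggests the sum tends to about N^3/24 stones, i.e. about N^3/12 moves once the passes are counted:

```python
def algorithm1_stones(N, C=0.25):
    # Sum the per-level estimates C*(N - 2M)^2 over M = 0, ..., [N/2].
    return sum(C * (N - 2 * M) ** 2 for M in range(N // 2 + 1))

for N in (100, 1000, 10000):
    print(N, algorithm1_stones(N) / N**3)
# the ratio approaches 1/24 ~ 0.041666...
```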
2x2 Example:
To get an idea of how these things work on a square board, here is a 46-move game on a 2x2 board which goes through 42 genuinely different board positions. I did this casually, so even assuming I haven't made a mistake and repeated a position, it is probably not optimal.
The total number of positions is 3^4 = 81, and it looks to me as if there are only two illegal positions:
Alex Weldon: There are more than two illegal positions. Including non-identical rotations and colour reversals, I count 24.
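The 2x2 board is small enough to verify these counts by brute force; here is a quick sketch (the cell numbering and helper names are mine):

```python
from itertools import product

ADJ = {0: (1, 2), 1: (0, 3), 2: (0, 3), 3: (1, 2)}  # 2x2 adjacency

def is_legal(pos):  # pos[i] is 0 (empty), 1 (black) or 2 (white)
    for start, colour in enumerate(pos):
        if colour == 0:
            continue
        group, stack = {start}, [start]  # flood-fill start's group
        while stack:
            for q in ADJ[stack.pop()]:
                if pos[q] == colour and q not in group:
                    group.add(q)
                    stack.append(q)
        if not any(pos[q] == 0 for p in group for q in ADJ[p]):
            return False  # this group has no liberties
    return True

positions = list(product((0, 1, 2), repeat=4))
legal = sum(map(is_legal, positions))
print(len(positions), legal, len(positions) - legal)  # 81 57 24
```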
ilan: Right, I was actually considering positions which can occur when a stone is placed as legal, and forgot that you then have to remove the captured pieces to get the actual position (maybe influenced by the way things go in diagrams). The position I gave can never occur at any time, so I suppose it is "strongly illegal".
Alex Weldon: I thought that was probably what you were thinking, and it sort of relates to what I was thinking would be a theoretical next step in bringing down the upper bound, assuming one had found a closed form for the number of legal positions on a board of given size. Your "strongly illegal" position means that a diagonal position like this can only be reached from a single stone in the corner, not by capturing. Likewise, the other two positions shown here.
Now, with non-identical colour reversals and rotations, that gives us 16 hard to reach positions (4 of type 1, 8 of type 2 and 4 of type 3). However, there are only 8 possible positions that have nothing but a single stone (of either colour) in one corner. Therefore, at most half of the hard-to-reach positions can be reached in a single game, reducing our upper bound for the 2x2 board from 57 (81-24) to 49, which is getting very close to the 42 you managed to get to.
[Gunnar Farnebäck]: This is a sharp upper bound. The longest games on 2x2 visit 49 out of the 57 legal positions. There are 746304 different such games (including symmetrical variations).
Of course, it's much harder to find and categorise these "bottlenecks" on larger boards, and ridiculously hard to generalise it for an arbitrary board. Nonetheless, my intuition tells me that would probably be the best way to go in researching upper bounds.
58-move game on 2x2 board
Cuc?: In estimating the upper bound for the longest game on a 2x2 board under the SSK rule (Situational Superko), one needs to take into account that board positions can recur, albeit with the other player to move.
The game below on a 2x2 board reuses 12 board positions. It cycles through 4 instances of a 14-move portion of the game ending with 2 passes, totaling 58 moves. Moves that aren't in the diagrams are passes.
(The move-number annotations here refer to diagrams that are not reproduced; in each repeated pair, the board position after one move is the same as after a later move, but with the other player to move, so under SSK neither is an illegal repetition.)
After this, Black is not allowed to capture White's stones, because doing so would revert to situation 1 (White to move). So moves 57 and 58 are passes, and the game ends.
This is the longest game on a 2x2 board I have been able to construct to date. I don't know whether it is the longest possible game, and I'm interested to see whether it can be improved upon. Note that all possible board positions with 1 stone are used, and that all board positions with 1 stone missing are used twice. It's an awesomely long game, considering that there are "only" 57 possible (legal) board positions.
Totally Connected Go
The following produces a non-trivial upper bound on the longest possible game of Totally Connected Go, which is Go played with N points, where any 2 points are connected.
As Alex Weldon points out, for any type of Go game of size N, the simplest upper bound for the length of the longest game (not including passes) is 3^N, the number of possible positions, legal or illegal (actually 3^N - 1, since the initial empty position is not counted as a move). A better upper bound is the number of legal positions, which is hard to compute in general. However, a closed form for the number of legal positions can be obtained for totally connected go as follows.
As usual, the only illegal positions are those in which a group with no liberties remains on the board. However, since any empty point leaves a liberty for every existing group, the illegal positions are exactly those in which every point is occupied by a stone, i.e., in which there are no empty points. There are exactly 2^N such positions, corresponding to the possible choices of two colours for each point, so the number of legal positions is 3^N - 2^N.
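The closed form is easy to confirm by brute force for small N (a sketch; the function name is mine):

```python
from itertools import product

def count_legal(n):
    # On a totally connected board an empty point is adjacent to every
    # stone, so a position is legal exactly when some point is empty;
    # a completely filled board leaves every group without liberties.
    return sum(1 for pos in product((0, 1, 2), repeat=n) if 0 in pos)

for n in range(1, 9):
    print(n, count_legal(n), 3**n - 2**n)  # the two counts agree
```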
The upper bound on the longest game can now be computed using the following idea: in general, call a position "weakly illegal" if it is the result of a capture in a legal position, but where the captured stones have not yet been removed.
Note that every legal capture generates a weakly illegal position. However, a weakly illegal position can generate one or two legal positions, because it is possible that either Black or White could have entered the position, as in the diagram.
Now in a long game on a board of size N, after at most N moves there must be a capture if the game is to continue (you need room to place more stones), so a weakly illegal position is generated at least once every N moves. Since each weakly illegal position gives at most two legal positions after the captured stones have actually been removed from the board, it follows that the longest possible game can have at most 2*N*W moves, where W is the total number of weakly illegal positions. Now in totally connected go there are 2^N - 2 weakly illegal positions (only the two full boards of a single colour are not weakly illegal), so the upper bound is N*2^(N+1) (the -2 term in W is insignificant and is dropped to give a simple formulation). I believe this bound can be improved by a factor of 2 by counting more carefully. Putting all of this together, one gets the result:
The length of the longest game (not including passes) in Totally Connected Go with N points is bounded above by N * 2^(N+1). If P = 3^N - 2^N is the number of legal positions (the trivial upper bound) then the previous upper bound is essentially (Log P) * P^(Log 2/Log 3).
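The "essentially" can be made concrete numerically (a sketch; I use natural logarithms, which only changes constants):

```python
import math

for N in (50, 100, 200):
    P = 3**N - 2**N                      # number of legal positions
    bound = N * 2**(N + 1)               # the upper bound above
    estimate = math.log(P) * P ** (math.log(2) / math.log(3))
    print(N, bound / estimate)
# the ratio settles near the constant 2/ln 3 ~ 1.82, so the two
# expressions agree up to a bounded factor.
```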
Unfortunately, this argument fails for any normal type of Go. In particular, for totally connected go, almost all positions are legal as N grows to infinity, whereas for normal Go, a positive proportion of positions are illegal. For example, on a square board:
All positions containing a captured stone in a given corner are illegal, and there are 2 * 3^(N-3) such possibilities, yielding a proportion of at least 2/27 for the percentage of illegal positions among all positions.
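The count 2 * 3^(N-3) can be checked by enumeration on a 3x3 board (a sketch; the row-major cell indexing is my own choice):

```python
from itertools import product

N = 9                           # 3x3 board, row-major cells 0..8
corner, neighbours = 0, (1, 3)  # top-left corner and its two neighbours

# positions where the corner holds a stone both of whose neighbours are
# the opposite colour, i.e. the corner stone is captured
count = sum(1 for pos in product((0, 1, 2), repeat=N)
            if pos[corner] != 0
            and all(pos[q] == 3 - pos[corner] for q in neighbours))
print(count, 2 * 3**(N - 3), count / 3**N)  # 1458 1458 0.0740... = 2/27
```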
One can obtain other strong bounds for other extreme forms of Go. For example, in Totally Disconnected Go, no point is connected to any other point, so every position with a stone on the board is illegal, since no stone has any liberties. It follows that the longest game (not counting passes) has length zero.
Algorithm 2
I believe that Algorithm 1 for linear boards can be adapted to give a lower bound for a square board. To make things clear, I define a "move" to be a transition to a different position; that is, a pass will not be counted as a move. I will try to keep track of the exact number of moves in this algorithm in order to get an exact lower bound for 19x19, though this makes the analysis much more complicated. So, consider a square board of side length Q with N = Q^2 intersections, and let T(X) = 1 + 2 + ... + X = X*(X+1)/2. This algorithm can easily be modified to remove all passes; see below.
Stage (0,1): Black starts by playing the first move in the upper right corner (1 move)
Stage (0,2):
Black captures the right hand corner, then White plays N-4 moves, filling all but one outside liberty of the Black group (N-3 moves).
Stage (0,3):
Black captures White, who then makes N-5 moves filling all but one outside liberty of the Black group (N-4 moves).
Stage (0,K):
Black captures White, who then makes N-K-2 moves filling all but one outside liberty of the Black group (N-K-1 moves).
Stage (0,N-3):
Black captures White, who then makes one move filling all but one outside liberty of the Black group (2 moves).
Stage (0,N-2):
This is the end of Stage (0,.) and the total number of moves in this stage was 2 + (1 + 2 + 3 + ... + (N-3)) = 2 + T(N-3)
Stage (1,1): White captures the Black group, then Black surrounds White (3 moves).
Stage (1,2):
Black surrounds the right hand corner, then White plays N-6 moves, filling all but one outside liberty of the Black group (N-5 moves).
Stage (1,3):
Black captures White, who then makes N-7 moves filling all but one outside liberty of the Black group (N-6 moves).
Stage (1,K):
Black captures White, who then makes N-K-4 moves filling all but one outside liberty of the Black group (N-K-3 moves).
Stage (1,N-5):
Black captures White, who then makes one move filling all but one outside liberty of the Black group (2 moves).
Stage (1,N-4):
This is the end of Stage (1,.) and the total number of moves in this stage was 4 + (1 + 2 + 3 + ... + (N-5)) = 4 + T(N-5)
Unfortunately, this method doesn't generalise directly, and there are a lot of details ahead. You really need to consider Q = sqrt(N), the side length of the board. For stages up to Q-2, the above method works without change.
For M < Q-1
Stage (M,1): White captures the Black group, then Black surrounds White (M+2 moves).
Stage (M,2):
Black surrounds the right hand corner, then White plays N-2M-4 moves, filling all but one outside liberty of the Black group (N-2M-3 moves).
Stage (M,3):
Black captures White, who then makes N-2M-5 moves filling all but one outside liberty of the Black group (N-2M-4 moves).
Stage (M,K):
Black captures White, who then makes N-2M-K-2 moves filling all but one outside liberty of the Black group (N-2M-K-1 moves).
Stage (M,N-2M-3):
Black captures White, who then makes one move filling all but one outside liberty of the Black group (2 moves).
Stage (M,N-2M-2):
This is the end of Stage (M,.) and the total number of moves in this stage was M+3 + (1 + 2 + 3 + ... + (N-2M-3)) = M+3 + T(N-2M-3)
Avoiding passes
You can avoid all the passes in Algorithm 2 by using the mirror image technique of the Introduction. Once again, you lose a factor of 2 in the number of board positions, which you win back because there are no passes.
Stage -1
The method starts by setting up two disjoint board sections. On a Q x Q board (N = Q^2) with Q odd, place a row of White stones (Q-1)/2 rows from the bottom and a row of Black stones (Q-1)/2 rows from the top; on an even-sized board, put the White stones Q/2 rows from the bottom and the Black stones Q/2 rows from the top. By symmetry, these and all the following moves can be made with alternating White and Black moves, as White simply plays the mirror image of Black's last move.
One can now proceed with Algorithm 2 by alternating moves on top and bottom, and following the algorithm normally on the bottom, and with colours reversed on the top. One does have to make some slight modifications, as can be seen by following the first few stages:
This continues as in Algorithm 2 until the last (0,.) Stage.
The (1, .) Stages start with Black and White capturing the respective groups, and I hope that by this time, the correspondence with Algorithm 2 is now clear.