Some Philosophical Questions about Computers and Go
Please feel free to answer a question, pose a new one, rant about the author, etc.
How long will it be until a computer is clearly the Go champion?
- Will Moore's Law hold for the next 50/100/200 years?
- It's proven very difficult to figure out what Moore actually said, since he's misquoted in so many different ways. So to answer this without reference to Moore's Law: Computers will continue to improve at surprising and impressive rates forever, not just for the next 200 years; the rate of improvement probably cannot be estimated beyond the next two years.
- Moore's original paper (from 1965) is available at
http://www.intel.com/research/silicon/moorespaper.pdf. It expresses what was later called his law in terms of number of components per IC, but subsequently it became more common to measure the linear feature size, or just "speed" instead. The speed increase has been a factor of 2 every 18 months, ever since the 1970s (it was faster than that during the 1960s) and it is generally believed that this rate will persist for the near future. However, in a fairly recent (1990s) video, Moore predicted that the ultimate limiting factor in the rate of semiconductor improvements will not be physical or technical, but economic: the cost of semiconductor manufacturing plants is growing exponentially. That is not currently a problem, as demand is growing too; but if the world's appetite for faster silicon continues at this rate, then the amount we spend on semiconductors will equal the global GDP by some time in the 2020s. So I believe that Moore's Law as we know it will not apply for the next 50/100/200 years, because we will not be able to afford it. -- Gary?
- There is no particular evidence that an increase in computing power is sufficient to make the current programs into world class champions, no matter how much computing power becomes available. Given current program skill, the programs would need to look dozens of moves ahead (let's say 30) and try all potentially remotely reasonable moves (let's say 50 moves). Standard alpha-beta routines put that at 50 ^ 15 moves tried and carefully evaluated; that's about 10 ^ 25 or 2 ^ 84. Say a modern computer can evaluate 10 moves a second; further suppose that your future supercomputer is 100 billion times faster (10 ^ 11). This means you can generate a good move in only 10 ^ 13 seconds, or only 300,000 years! Now, relatively standard pruning algorithms for selective searching will significantly cut down the number of nodes tried; to get it down to 1 day per move would require a pruning factor of about 10 ^ 8, which is probably doable. So, if Moore's Law holds for the next 55 years (doubtful), then maybe the standard techniques would produce a professional grade player; I doubt even that would produce a world champion, though. In short, we need more intelligent programming; the go programs are getting better as a result of a lot more than just faster CPUs. A professional quality program within 50 years would not really surprise me; a world champion would. -- evand
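The arithmetic in the estimate above can be reproduced directly. This is only a sketch: the branching factor, search depth, and speedup are the discussion's assumptions, not measured figures.

```python
import math

# Back-of-envelope search-time estimate from the discussion above.
branching = 50               # "potentially remotely reasonable" moves per position
depth = 30                   # plies to look ahead
evals_per_sec = 10 * 10**11  # 10 evaluations/s today, times a 10^11 speedup

# Alpha-beta with ideal move ordering examines roughly b^(d/2) leaf nodes.
leaves = branching ** (depth // 2)   # 50^15, roughly 10^25.5
seconds = leaves / evals_per_sec
years = seconds / (365.25 * 24 * 3600)

print(f"leaf nodes: about 10^{math.log10(leaves):.1f}")
print(f"time per move: about {years:,.0f} years")
```

The exact figure comes out within a factor of a few of the "300,000 years" quoted above, since the text rounds 50^15 down to 10^25.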
- While I think that 1p is reachable and strong 9p is unlikely unless true AI is developed, increasing computer speeds can buy us correspondingly increasing reading capability for tsume-go, ladders, connections and so on, where the branching factor is typically much less than you show above. Further, not all of the tactical readings would need to be repeated at each move. This increasingly strong tactical foundation could allow a program to advance faster than a human who understands the same level of strategic planning. Designing enough knowledge to read the tactical coup in the game shown under Yi Se-tol would be a real challenge, I think, but the low branching factor in the forced moves makes it at least conceivable. -- Scryer
- Niklaus: The difference in playing strength between 1p and 9p is rather small, tiny compared to the difference between go programs now and 1p. What makes you think that if 1p level is reachable, we can't go all the way to 9p?
- The difference in playing strength is small, but the thing to realise is that the difference is a reasonable fraction of the distance to perfect play. If it takes a person as long or longer to go from 1p to 9p as it does to get to 1p in the first place, why should we expect a program to advance any faster? We can already see that programs improve more slowly as their strength increases, I would expect this to be more true at the professional level, not less, especially as the number of people capable of playing go at that level and capable of writing good programs gets smaller and smaller. -- evand
- I think you are over-estimating the strength of 9p players. Cho Chikun says he feels overwhelmed when he thinks of the vastness of go, i.e. he doesn't think he really understands much on an absolute scale. I'd say the distance from 9p to perfect play is much greater than the distance from 1p to 9p. -- Bob McGuigan
- First, all those assumptions are based upon an evaluation of the board that includes a lot of things like deep tactical reading, life and death reading, and full board positional judgement. My point is that in order for a current go program to make a professional quality judgement about the value of something like a strategic attack on a group, it would have to read out all potentially reasonable continuations of the attack, including things like leaning moves and kikashi elsewhere, and try other attacks on the board in attempts to set up a double attack, etc. Current programs do caching that avoids repeating the reading when possible; I just assumed that the cache would not change substantially, which is probably correct to within an order of magnitude or so. Finally, I'd like to point out that the Yi Se-tol game you mention is *not* a tactical coup. Anyone can see that if you keep playing out the ladder, you eventually get that move (not everyone can think of it, though); the problem is one of strategic judgement: having played out the ladder and captured the stones, who is ahead? Has Black so badly damaged his position by playing a dead ladder that the game is lost, or is the capture big enough to make up for it? This is not an easy judgement to make for a human, and a go program of today could only make it by reading out the ladder, and then reading out a reasonable fraction of the remainder of the game. -- evand
- When a computer becomes the 19x19 champ, will a person still be able to beat it at 39x39?
- I hate to think how much importing a 39x39 goban from Japan will cost! And where would I play it?
- 19x19 has features beyond mere scale; it is unique among candidate board sizes (NxN, where N is an odd number) in that it is closest to the point where territory on the third line and below is equal to territory on the fourth line and above. Any different size would change this balance and hence upset the quality of play. -- Bignose
- Sorry, I don't buy this theory! If B only plays on the third line and W only plays on the fourth line, then B wins by 15 points.
- Bill: And Black makes 8 more plays than White. (Pointed out by Fujisawa Hideyuki, IIRC).
- If they are playing on a 21x21 board, then W wins by 17 points. The two boards are about equidistant from this fabled point. In any case, I see nothing special about the third and fourth lines. Go is undoubtedly a deeper game the larger the board, and historically the standard board has kept growing in size. The limiting factor is probably the time needed to make all the moves, which of course grows quadratically. -- Saesneg
- quadratically - It grows a little more complicated than that. The number of intersections does indeed grow quadratically. There are, however, two contrary influences on the increase of the time needed.
- Some evidence suggests that the number of moves needed to dispute the same number of intersections decreases with the size of the board.
- The complexity of the individual moves increases with the increased number of possible moves. -- mAsterdam
- It hasn't grown that substantially. The "classic" old board is 17x17 and the current board is 19x19. The reason for sticking with 19x19 as opposed to 21x21 is that the go academics felt anything larger than 19x19 would be near impossible for a human to reasonably read. The combinations and possibilities would be too vast. As for the time it takes to play out, in spite of the increase in possible combinations and moves, I don't believe it would be much greater than on a 19x19. People rarely fill every intersection on the board, and with bigger sizes there would likely be larger living territories. Sure, you can play on any size you want, but players of old have long considered a 21x21 board and still opted to stay with 19x19. But when you're dealing with AIs, it would be folly to concern yourself with larger boards when you can't even handle a 19x19. -- LeiMagnus
- I'm going to conjecture that there exists an O(n) algorithm for deciding upon the best move in a board position, where n is the size of the board (in intersections). For those who haven't seen this notation, I'm saying that the average time taken by the algorithm to find the best move (on a board with n intersections) is bounded by n multiplied by some constant (which we may or may not know). To put it another way, for an arbitrarily large board, doubling the size of the (edge of the) board will not make the task of finding the correct move significantly more than four times as hard. --Bildstein
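The third-line vs fourth-line figures quoted earlier in this thread (Black ahead by 15 on 19x19, White ahead by 17 on 21x21, Black making 8 more plays) can be checked mechanically. A sketch, assuming Black rings the entire third line and counts the first two lines as territory, while White rings the fourth line and counts everything inside it:

```python
# Territory and stone counts when Black takes the third line and
# White takes the fourth line on an NxN board.
def third_vs_fourth(n):
    black_territory = n * n - (n - 4) ** 2  # the two outermost lines
    white_territory = (n - 8) ** 2          # everything inside the 4th line
    black_stones = 4 * (n - 5)              # ring of stones on the 3rd line
    white_stones = 4 * (n - 7)              # ring of stones on the 4th line
    return black_territory, white_territory, black_stones, white_stones

for n in (19, 21):
    b, w, bs, ws = third_vs_fourth(n)
    print(f"{n}x{n}: Black {b}, White {w}, margin {b - w:+d}, "
          f"Black plays {bs - ws} more stones")
```

On 19x19 this gives Black 136 to White's 121 (Black by 15, using 8 extra stones); on 21x21 it gives Black 152 to White's 169 (White by 17), matching the figures above.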
If a computer becomes champ, will Go still be interesting?
- Unlike chess, people could still play handicap games
- We could decide "against God, I would need four stones."
- Computers have "solved" many other games that people still find interesting to play against one another
- Checkers being a prime example. (But a bogus one, in that it has not been solved. The closest an entity has come to Kami no itte in checkers may be Schaeffer's program Chinook, which has won a human championship but has not solved the game. Solving checkers is a long-term goal of the Chinook project.) --Scryer
- They have solved the White Doctor opening (10-14 22-18 12-16) as of 2004-08-02.
- The methods used by computers to beat humans are usually uninteresting to humans (brute-force problem-space searching) who will still want to improve their own play
- For me the interesting aspect of having an invincible computer would be to observe games of an even higher quality than professionals achieve. Not that I understand those either. I would like to read games of Invincible v. Invincible, commented by professionals. But I think there is little chance of such a machine in our lifetimes. -- Saesneg
- Assuming that "Invincible" implies "Perfect Play" then it shouldn't be too interesting (at least not the second time) because it would theoretically be the same game over and over again. Both players always get the maximum amount of territory possible for each game position. It might be rotated or miai might be played out slightly differently, but that should only allow for a few variations (and most likely minor ones at that). --IsaacFreeman
- Why would the difference have to be slight? Our current notion of the limit, at least, suggests that near-perfect play would still produce wide and varied games. There are so many board positions possible that I'd suspect there would be many qualitatively different, perfect games. Miai could be on the scale of twenty moves or more. Near-perfect play, at the least, seems to attest to this: how many different games have there been with the same score? Countless, essentially. When play becomes perfect, we might be limited somewhat, but it could well be that play is, say, restricted to .1% of the variety of current top play, which would still probably be very varied.
- For all those people like me, who for practical purposes will never run out of people who can beat them easily, it won't matter at all if a computer joins them. For the casual player, it's for the joy of playing or self improvement, not being the best. People who can't lose either get very frustrated or very good. --Confused
- It wouldn't bother me if computers were to become better go players than humans. It doesn't bother me that a race car can travel a mile faster than I can or ever will on my own. I still get satisfaction out of "personal bests". So I'll still enjoy go when computers become the strongest players, though maybe it won't happen in my lifetime. -- Bob McGuigan
Go exposes moral character, so computer Go players will always suck at some wabi/sabi level?
- Not necessarily. Just like in music, computers can produce pleasing pieces without a trace of soul, which are better than most of the junk produced by humans with a soul - if they haven't sold it already. Just because Go can be used to express one's character doesn't mean that every game has to contain a character expression. -- Confused
- I am pretty sure that Go will never be "solved" although artificial Go player may one day be better than any human player. Michael Moos?
- The go board has a finite number of states. (Yes, it's large, but it's finite.) 5x5 go is solved (http://www.cs.unimaas.nl/~vanderwerf/5x5/5x5solved.html). Please explain the size where it becomes unsolvable, and why that number is special.
- Likely there is a size where go becomes unsolvable (although I don't think it's only 19x19). To solve a game you need time and storage - in a universe that is finite in size and probably has a limited lifespan, you don't have an infinite amount of either. And no, I don't think this will ever be a consideration in computer go, but it's a relevant observation. Mitja Lustrek
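The storage argument above can be made concrete with a crude upper bound on the number of board states. A sketch: each of the n*n points is empty, black, or white, giving at most 3^(n*n) positions (most are illegal, so the true count is smaller, but the order of magnitude is what matters here).

```python
import math

# Upper bound on board states: 3 possibilities per intersection.
for n in (5, 9, 13, 19):
    digits = (n * n) * math.log10(3)
    print(f"{n}x{n}: at most 3^{n * n} positions, about 10^{digits:.0f}")
```

For 19x19 this is about 10^172, which is why "finite" and "solvable in this universe" are very different claims.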
When a computer becomes champ, will anyone care?
- All non-luck games will already have been won by computers. Go will inevitably be the last skill game to fall?
- Wouldn't its being the last game make it more worth caring about for that?
- 25-50 million people around the world play Go. They might care :-)
- A similar number play chess. Did they all care?
- Klaus: Chess has not been solved yet! Kramnik was (at the very least) better than Fritz, and Kasparov as good as Junior. Human chess players still suffer from trying to change their playing style according to their opponent. And, by the way, even people I hardly consider to be chess players did care!
- Many people don't consider the brute-force problem space search methods to be very interesting from a can-computers-think perspective, because there is no creativity shown by such methods.
- Computer players are not limited to brute-force problem space search methods and neither are geometric prover programs which have already come up with creative new proofs for old theorems.
- As Kasparov-Deep Blue proved, Chess was a dead end for Artificial Intelligence research (as have other games). So there is no worry of a computer being champ. -- Tim Brent
- This conclusion seems muddled. That a computer has beaten the best human at a match set of games of chess does not mean that another one can't be programmed to beat the best human at another game, such as Go. Games were not a dead end for AI (Artificial Intelligence), but actually a fruitful area that led to many techniques, such as planning, backtracking, pruning, and blackboards. One can easily imagine that a sophisticated game such as Go will lead to other sophisticated techniques. Further, some existing techniques such as neural networks and genetic algorithms have not been exhaustively applied to Go, yet.
- "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." Edsger Dijkstra
- One of my favorite Dijkstra quotes, and it's being abused here. There are people who say "Sure, computers can play chess, do calculus, navigate city streets, and compose music, but can computers think?" Unfortunately for you, should you try to debate such people, they won't ever give you an operational definition of what it means to think. Press them hard, and you come to the conclusion that thinking is "whatever I can (mentally) do, that a computer can't do". Well forget it, Dijkstra says, such questions are uninteresting. A submarine can locomote, through water, from point A to point B. Do you want to call that "swimming"? Who cares? Similarly, the question of "can a computer play professional-level Go?" is very interesting. The question of "is such a computer thinking?" is pointless. -- John Aspinall
- I don't think it is being abused. I think it being used in exactly the way you intend.
- If so, then I apologize for ladling on unneeded criticism. But if so, then why is the quote here at all? If we are discussing Computers and Go, and we are all clear that discussion of "thinking" just adds noise to the signal, then why include a quote about computers and thinking? John Aspinall
- Whoever this Dijkstra person is seems to be making a trivial point (at least taken out of context). As is implied by Aspinall, such questions depend entirely on the semantics of words like "think" - essentially the provision of an operational definition of thinking. Consequently it is facile to say they are uninteresting in general - they are exactly as interesting as the meaning of their referents. It doesn't seem trivial to me to debate whether or not what a machine is doing when it composes music etc. is thinking. The really interesting question is, "What would the construction of a professional Go-playing program tell us about thinking, under a given definition of the term?" In the case of the above discussion of brute force methods the answer seems likely to be, "nothing at all" if the definition of thinking adopted implies (as I think it usually does) thinking in the sense that humans do it.
- Dijkstra was a moderately famous computer scientist (amongst computer scientists). Obviously his interests and yours differ.
- unkx80: Abused or not, this statement by Dijkstra is often quoted in artificial intelligence texts. The computer scientists do not seem to have a consensus as to whether a computer (they used the word "agent") is intelligent or not; in fact, there are at least four schools of thought on what intelligence as applied to an agent is. Hence, I do not think people can ever agree on whether a computer can think, specifically, whether a computer player playing Go is a thinking player. One thing for sure is, some undergraduates will remember Dijkstra for his famous algorithm for finding shortest paths.
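For reference, the shortest-path algorithm mentioned can be sketched in a few lines. This is a minimal heap-based version; the example graph is made up for illustration and has nothing to do with Go.

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start; graph maps node -> [(neighbor, weight)]."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Tiny illustrative graph: a->b costs 1, a->c costs 4, b->c costs 2.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```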
- Rethinking what I wrote above I suppose what I would regard as interesting is what could be learnt by constructing a go-playing agent that didn't rely on brute force to any great degree. Douglas Hofstadter (for example) regards analogies as central to human intelligence. Go seems like a wonderful space for representing analogies generally within. On the subject of quotes from AI books, Russell and Norvig disappointed me greatly in their otherwise excellent book by saying, "Go is a game which seems likely to benefit from future advances in AI."
- Michael?: But what makes you think human brains don't use brute force when solving problems?
- I didn't say they don't use brute force but that they don't use it exclusively. Otherwise we'd never be able to reliably come up with reasonable answers to big problems in a short space of time. What I wanted to distinguish between is (in the ideal case) a perfect go-playing agent which searched _every_ possible board configuration and made the right move every time as a result (hence 'solving' 19x19 Go) and one which made excellent moves as a result of a combination of some brute force and some heuristic measures. The point is that humans can explain board states with reference to mental states and groupings of pieces. Something which purely enumerated every possible board state (and did *nothing* else) would not be able to do this, since its 'understanding' of the board is based entirely on chains of past and future play and contains no higher-level grouping at all. Finally, think of how you play Go, and how professionals describe it - in commentaries it is common to read many moves ahead, but no-one ever describes a fuseki in terms of every possible continuation to the very last move. Even if we assume that humans enumerate the search space to some degree and do so faster than modern computers (and therefore play better), the space is still far too big for us to be doing it to any great extent.
- 1) Humans will be the ones to create the computer that can beat any human, so in a higher sense it's a moot point. 2) The game is complex enough, for people at least, that there is a strong psychological aspect to it. I can't study myself to dan-level; it has to be something I'm good at, otherwise it wouldn't be an achievement. A computer, or person, that has one unchanging, monolithic strategy by which he/she/it always wins is boring. IOW, we won't care if a computer is the best, because we won't play him much. 3) In keeping a sense of goodwill towards the concept of playing a game, winning shouldn't be the ultimate goal. Those of us that truly like to play, ergo, won't care if a computer is the best. Those of us that want to win more than they want to *play* won't care either. Only one of us on Earth is in the unfortunate position of having no human better than him/herself. I'll leave it to him/her to resolve that issue. --Baz
Intuition is likely unreliable in these matters
As a strong computer programmer (with an interest in AI), and a weak Go player, I'd be very reluctant to make judgements about computer Go. It's obviously a very hard problem, but I think that for someone's intuition about computer Go to be especially valuable, they'd need to be an upper-level dan player and have a talent for innovative AI solutions. There's a couple of possibilities, none of which are mutually exclusive, and it isn't easy to guess which might apply:
- Modest strategic abilities and decent heuristics might prove formidable in the hands of a machine that could read out millions of board positions per second. The computer wouldn't necessarily have to have great intuition--just good enough to prune the search tree to the point where brute force becomes formidable (this appears to be what happened with Deep Blue; it's not so much a great chess player as an adequate player with inconceivable reading ability).
- Computers could prove formidable for a few games because of novel play style, but would not stand up in long-term competition against people who specifically worked to attack the computers' weaknesses (I gather this is already the case, for single-digit kyus). Similarly, Deep Blue would not likely stand up to Kasparov in series of a few hundred games without heavy, ongoing intervention by the programmers.
- Dan-level play could prove truly deep, requiring AI advances that bring computers to near human-level intelligence in several important areas. Even so, I wouldn't be surprised to see this happen within my lifetime--like Bill Joy, I sometimes wonder if humans aren't in danger of obsoleting ourselves.
Rarely are interesting AI problems solved by search. It's often a leaden, uninspired technique. You've got to look for either a sudden insight or a methodical understanding of the actual problem.
A lot of supposedly hard "AI" problems get partially solved when somebody comes up with a key insight--for example, the MIT "eigenfaces" algorithm boosted face recognition from 20% to 90+% accuracy by applying undergraduate statistics cleverly. Spam filtering went from horrible to 98+% accuracy with the discovery of a clever hack inspired by Bayesian probabilities.
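The Bayesian spam-filtering idea mentioned above can be sketched in a few lines. This is a toy naive-Bayes scorer, not the actual filter: the training corpora are made up for illustration, and real filters add many refinements on top of this core.

```python
import math
from collections import Counter

# Tiny made-up training corpora, for illustration only.
spam_docs = ["buy cheap pills now", "cheap pills cheap offer", "win money now"]
ham_docs = ["let us review the go game", "the game record is attached"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def spam_log_odds(message):
    # Sum of per-word log likelihood ratios, with add-one smoothing.
    score = 0.0
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (spam_total + len(vocab))
        p_ham = (ham_counts[w] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score  # > 0 leans spam, < 0 leans ham

print(spam_log_odds("cheap pills"))      # positive: scored as spam-like
print(spam_log_odds("the game record"))  # negative: scored as ham-like
```

The "clever hack" was essentially this: treat each word as independent evidence and let the statistics of a user's own mail do the work, rather than hand-writing rules.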
Other times, progress in "AI" problems comes from dutiful enumeration--and tweaking--of hundreds of rules. For example, look at the sudden jump in bot quality in Counter Strike: Condition Zero. This was based on a few clever algorithms and lots of hard work, carefully enumerating hundreds of human-like behaviors and limitations, and testing their interactions extensively. The end result is fairly similar to playing against real humans, but only because the author was an expert Counter Strike player.
Thus, any workable solution to computer Go is likely to be surprising in nature. Certainly no low-level Go player understands the game well enough to know what problems need to be solved to play at a consistent 1p level, and few professional dans have spent enough time studying AI to have a good intuition for what novel techniques might be available. Even then, a few hundred dan-level AI hackers could overlook important tricks for a decade or two, only to be surprised by an AI "tesuji".