Computer Go

    Keywords: Go term


Current state of computer go

In March 2016, AlphaGo defeated Lee Sedol (9p) 4-1 in an even-game match. As of April 26, 2016, AlphaGo is rated number 2 in the world with a rating of 3596.

In October 2015, AlphaGo defeated Fan Hui (2p) 5-0 in even games.

This represents a huge jump in the strength of computer Go programs; previously, computers had only managed to beat professional players with a four-stone handicap. Below is a table of the first time a computer beat a professional player at each handicap. See also KGSBotRatings for progress measured by rank on KGS.

First win at each handicap
Handicap  Date        Computer    Human
9         2008-08-07  MoGo        Myungwan Kim 8p
8         2008-09-04  CrazyStone  Kaori Aoba 4p
7         2008-12-14  CrazyStone  Kaori Aoba 4p
6         2009-02-09  MoGo        Li-Chen Chien 1p
5         2011-03-08  Zen         Kozo Hayashi 6p
4         2012-03-17  Zen         Takemiya Masaki 9p
0         2015-10-05  AlphaGo     Fan Hui 2p
0         2016-03-09  AlphaGo     Lee Sedol 9p

In 2006, Go AI programmers made a breakthrough by incorporating Monte Carlo (MC) methods, and between 2006 and 2012 they made rapid progress. MC programs have proven adept at harnessing the last several decades of increases in processing power. Zen19 has achieved excellent results; see below for an in-depth discussion. Other Go AI projects, including Aya, Pachi, and CrazyStone, also work with MC methods.

From 2012 to 2015 progress was slow. But in late 2014, papers were published showing the promise of applying neural networks to Go. Several bots incorporated this new technique, and Zen19 reached KGS 7d after adding it.

Then in January 2016, Google announced its program AlphaGo, which made several improvements in the way neural networks are trained and used in its AI. AlphaGo had defeated Fan Hui (2p) 5-0 in a five-game match.

Historical background

Go has the highest state space complexity of any major board game, which made it the hardest perfect-information game for a computer program to master, and the last to fall. (In a perfect-information game there is no chance element and all players can see the complete game state, unlike poker, for instance.)

The first Go program to play a full game of Go was Al Zobrist's program in 1968. The first time a computer competed in a human Go tournament was in the 1980s, when Nemesis played at the Massachusetts Go Club. More details?

In 1995, professional player Janice Kim defeated the top program of the day {which?} despite giving it a whopping 25-stone handicap. By the early 2000s, computer programs {which?} were on the doorstep of amateur 1 dan - quite a large improvement, but still light years away from the realm of top human professionals.

In 2006 a new approach to Go AI began sweeping the Go scene: Monte Carlo. This method builds a tree of potential moves and responses, but evaluates each move in the tree using semi-random playouts. In a playout, the program plays the game to completion using either random moves or very simple heuristics. When a playout ends in a win, it improves the score of the move it originated from, and the best-performing move is chosen. By 2012, a program playing on KGS had reached 6 dan. Naive extrapolation would have predicted computers reaching professional strength sometime between 2015 and 2018; AlphaGo's achievement came faster than most experts predicted.
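The playout idea above can be sketched in a few lines of code. This is only an illustration, not any actual Go engine: it applies pure Monte Carlo evaluation to tic-tac-toe rather than Go, since Go's capture and ko rules would swamp the example, but the evaluation principle - finish the game many times with random moves and pick the move with the best win rate - is the same.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def playout(board, to_move):
    """Play random moves to the end of the game; return the winner."""
    board = board[:]
    while True:
        w = winner(board)
        if w or "." not in board:
            return w
        move = random.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = to_move
        to_move = "O" if to_move == "X" else "X"

def best_move(board, to_move, n_playouts=200):
    """Score each legal move by its random-playout win rate; pick the best."""
    scores = {}
    opponent = "O" if to_move == "X" else "X"
    for move in [i for i, s in enumerate(board) if s == "."]:
        trial = board[:]
        trial[move] = to_move
        wins = sum(playout(trial, opponent) == to_move
                   for _ in range(n_playouts))
        scores[move] = wins / n_playouts
    return max(scores, key=scores.get)

# X has two in a row; square 2 wins every playout, so it scores 1.0.
print(best_move(list("XX......."), "X"))  # → 2
```

A real MC Go program adds a search tree on top of this (Monte Carlo Tree Search), concentrating playouts on the most promising branches rather than sampling every move equally.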

It has been argued that these ranks are a bit deceptive, however, because with sufficient practice playing against computers, a human can often adapt to the programs' weaknesses and subsequently perform much better against them (see the Shodan Go Bet for one attempt to test this idea). It is also commonly believed that on Go servers people tend to take games against computer opponents less seriously. Top programs have won against professionals with a 4 or 5 stone handicap.

In October 2015, AlphaGo developed by Google DeepMind defeated Fan Hui (2p) 5-0 in 19x19 even games.

Thus computer Go is a very exciting part of Artificial Intelligence (AI), and many new ideas and techniques are yet to be discovered. It is interesting to note that in chess, which has a much lower state space complexity than Go, computers proved superior to the top human players much sooner.

Because Go has a much higher state space complexity than most other games, finding an algorithm that plays it at a high level took correspondingly longer.
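To make the scale concrete, here is a back-of-the-envelope sketch (an illustration added for this point, not a precise count - the simple 3^361 figure includes many illegal positions):

```python
import math

# Crude upper bound on Go board configurations: each of the
# 19x19 = 361 points is empty, black, or white. Many of these
# states are illegal (stones without liberties), but the order
# of magnitude is what matters.
go_states = 3 ** 361
print(f"3^361 is about 10^{math.floor(math.log10(go_states))}")  # → 10^172

# For comparison, chess's state space is commonly estimated at
# roughly 10^47 - over a hundred orders of magnitude smaller.
```

Exhaustive computation (Tromp, 2016) puts the exact number of legal 19x19 positions at about 2.1 x 10^170, so the crude bound above is only about two orders of magnitude too large.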

In fact, Go endgames have been proven to be PSPACE-hard, let alone other parts of the game, and many other aspects of Go, including life and death, are known to be NP-hard. This means it is very unlikely that a reasonably fast algorithm for playing perfect Go exists, so it looks like it's all about heuristics (surprise, surprise) - or at least that a reasonably fast algorithm to play Go at a very high level will take a long time to find.

Patrick Taylor: I'm not sure heuristics are all that bad. Humans don't know the perfect move sequence any more than computers do. Therefore, we basically play with heuristics as well. Proverbs are essentially heuristics used by human players to approximate good play.

In July 2011, KGS records show the computer program Zen19s achieving 4 dan. In June and July 2011, Zen19s played 700 games on the KGS Go server, at 20 minutes main time and then 30 seconds per move. Anyone can download Zen19s's games from the KGS server, study them to find the program's weaknesses, and try to exploit them. After 700 games and almost two months, Zen19s is 4 dan on KGS. This suggests that finding the weaknesses of computer players is just as hard as finding those of human players; computers have truly reached the 4 dan level. Successful chess programs do not use brute force, but are selective in finding a move; chess programs that use brute force play at a low level because of the huge branching factor. Chess programmers do not use a fast algorithm for playing perfect chess either. Mark Schreiber, July 22, 2011

(Note: 2006 was the last year that GNU Go won the Computer Olympiad)

Strength of strongest programs

It is natural to want to know both which Go program is the strongest, and how strong it is. However, due to the recent progress of computer Go, a definite answer is complicated.

As of December 2015, the common wisdom of the western Go community was that Zen was the strongest program. In the 19x19 section, Zen placed first at the 14th Computer Olympiad in 2009, second at the 15th in 2010, and first at the 16th, 17th, and 18th in 2011, 2013, and 2015.

Zen has lost some monthly Computer Go tournaments on KGS to both CrazyStone and Pachi, and lost the 2013 UEC Cup to CrazyStone on points, despite beating CrazyStone in the direct match-up.

Zen's reputation was established largely by two convincing victories over Takemiya Masaki in March 2012, at five and at four stones. Zen has scored other victories over pros as well. However, it is not clear how seriously professionals have been taking these exhibition matches. At TAAI 2012, for example, Zhou Junxun used only 8 of his 45 minutes of thinking time in his game against Zen.

As no professional system has awarded Zen a rank or arranged a formal contest between Zen and a professional player, the best gauge of Zen's strength is its rank on KGS. As of July 20, 2013, the account Zen19d was a strong 5d on KGS; it was 6d for a few games but was unable to defend that rank. However, because Zen is only available to play on KGS for short periods, demand is high and many of its opponents play it at times and at time settings that they would otherwise refuse. Furthermore, in July 2013 Zen19d played 346 games, and in 147 of them Zen gave three or more handicap stones, giving six stones in 34 games (10%); high-handicap games favor White. It remains to be seen what rank Zen can defend on a long-term basis.

However, stubborn skeptics have consistently claimed that the KGS ranks of strong programs are inflated, and have been embarrassed by the programs' fast progress again and again. Programs went from 2 kyu in 2007 to 6 dan in 2012. See KGSBotRatings.


See the [ext] Computer Go Bibliography maintained by the [ext] Computer Go Group at the [ext] University of Alberta.

List of Existing Go Playing Programs

Please see Go Playing Programs for a discussion about the best programs currently available.

List of Existing Problem Solving Programs


Some online Go servers, such as KGS, provide software opponents or robots to clients. Robot is derived from the Czech word robota meaning drudgery or slave labour which certainly describes the work needed to pummel double digit kyu players like me. PatG

Robot Substitution Gameplay has been proposed, but not yet developed.

Anti-Computer Strategy

It might be entertaining and educational to consider strategies for beating computers at Go.

The current generation of strong Monte Carlo programs may be confused by ko. Also, while the weak go programs of the 90s were relatively strong at the endgame, this is no longer a special strength of bots and it is possible to make up points in the endgame.


Below are competitions where Go playing programs can be tested.

International Competitions:

Regional Competitions:

  • European Championship (1987-2005, 2008)
  • US Championship (1988-2000, 2008)
  • USENIX (1984-1988)

Small Board Competitions:

History of Go Programs vs Human Professional Matches

Program vs Professional

History of Go Programs

  • [1968]: Albert L. Zobrist wrote the first ever program which played complete Go games.
  • [1989]: GNU Go 1.1 was posted on March 13, 1989
  • Ishi - old file format

Articles in Magazines


(Smaller Go Bibliography, with comments)

  • [ext] Jan van der Steen's computer Go page has numerous links to famous programs and programmers as well as programming resources and articles.


  • gnugo-list

Authors: Gounter, Chestnut

Computer Go last edited on April 26, 2016 - 18:51