Other Games Considered Unprogrammable

...or at least very difficult (just as Go used to be)

If you think Computer Go Programming is too difficult to start your own program (even the rules are hard to implement correctly: ko/superko, life and death/seki, end of the game, counting, etc.), here are some alternatives:

There are many games for which no good computer opponent exists; often no one has even tried to build one. Particularly interesting are games where the human player seems able to beat even the strongest (currently possible) computer opponent. (See also: Intelligence)

Similar to Go in the following respects:

  • the branching factor is high
  • once played, a stone stays on the board and does not move

Especially interesting are the following two:

Hex

  • Solved up to 8x8. Strong programs exist, though they are still weaker than the best humans on common board sizes.
  • The best programs successfully incorporate a theorem-proving approach.
  • The KDE opponent Six ("six-0.5.0.tar.gz", for example) is very good.
  • See the computer-olympiad-pages for a list of papers on Hex.
  • In 2019, Leela and gzero seem to have surpassed the best humans, but this is unofficial, as the Hex community does not maintain a global ranking. (See [ext] https://www.littlegolem.net/jsp/forum/topic2.jsp?forum=50&topic=739)

Havannah

("unprogrammable"?!, $1000 offered in 2002 for computer that beats human)

Havannah is very similar to Hex:

Is a strong opponent possible with Hex methods? -- ab

hk (2007-11-23, -12-17): Probably not. The (limited) theorem-proving approach that good Hex programs use is based on a systematic way of generalizing a basic approach invented by Claude Shannon. I don't think there is any obvious parallel to that basic approach in more complex games, but it might just be worth digging up the very first computer Go approaches and seeing what one can do ;-)
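For reference, Shannon's original approach was an electrical-circuit heuristic: treat empty cells as unit resistors, your own stones as (near-)short circuits, remove the opponent's stones, and measure the effective resistance between your two board edges; the lower it is, the better connected you are. Below is a rough sketch of that idea, not code from any actual Hex program. It assumes the board is given as a symmetric adjacency dict over cell ids plus two virtual edge nodes "SRC"/"SNK", and that the connection has not already been severed.

    import numpy as np

    def effective_resistance(adj, own, opp, src="SRC", snk="SNK"):
        """Shannon-style circuit evaluation (sketch, hypothetical interface)."""
        nodes = [n for n in adj if n not in opp]          # opponent stones: open circuit
        idx = {n: i for i, n in enumerate(nodes)}
        lap = np.zeros((len(nodes), len(nodes)))          # weighted graph Laplacian
        for a in nodes:
            for b in adj[a]:
                if b in idx:
                    c = 1e3 if (a in own or b in own) else 1.0   # own stone ~ short circuit
                    lap[idx[a], idx[b]] -= c
                    lap[idx[a], idx[a]] += c
        keep = [idx[n] for n in nodes if n != snk]        # ground the sink edge at 0 V
        inject = np.zeros(len(keep))
        inject[keep.index(idx[src])] = 1.0                # push 1 A in at the source edge
        volts = np.linalg.solve(lap[np.ix_(keep, keep)], inject)
        return volts[keep.index(idx[src])]                # V at source / 1 A = resistance

A lower value means a stronger connection for the player; the heuristic says nothing about *how* to connect, which is why the later theorem-proving (virtual connection) work was needed.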

Hyperpape (2016-01-13): Recently, Christian Freeling (a top-ten player at Little Golem) lost the Havannah Challenge. In 2002, he offered $10,000 to any bot that could beat him in one game of a 10-game match. The challenge occurred this year, and we were surprised to find that he lost three games, to two separate MCTS bots. Humans are still ahead of computers, but it looks like the gap is rather narrow, narrower even than in Go.

Connect6

Willemien: You can even play it on a goban. After Black plays only one stone on the first turn, Black and White alternate, each placing two stones per turn (see the sketch below). The game is supposed to be fair. But what does that mean?

  • It means that Black doesn't have any obvious advantage, in spite of moving first; there's no need for a pie rule or other workarounds.
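To make the turn structure concrete, here is a minimal sketch; the Board/choose_move interface is hypothetical and not from any actual Connect6 program. Black places a single stone on the first turn, after which each player places two stones per turn; the extra stone is what is intended to offset the first-move advantage.

    def stones_this_turn(turn):
        """turn counts from 0; Black moves on even turns."""
        return 1 if turn == 0 else 2

    def play_game(board, choose_move):
        turn = 0
        while not board.game_over():
            player = "black" if turn % 2 == 0 else "white"
            for _ in range(stones_this_turn(turn)):
                board.place(player, choose_move(board, player))
            turn += 1
        return board.winner()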

Dots and Boxes

ilanpi: I find this game similar to Go in many ways. I believe it might be more difficult than Go, because there is a parity condition that could give it inherent instability. It has been completely analyzed by computer up to the 5x5-dots case (16 boxes). The 6x6 case (25 boxes) is unsolved as of 2002. I have played these sizes and found the play quite similar to 7x7 Go, and therefore similarly much simpler than 9x9 Go. Unfortunately, it seems hard to get a good game on anything larger than a 5x5-dots board, so I don't think much concrete is known about Dots strategy for larger sizes (as compared to Go, for example). I have written a web site about Dots: (dead link) [ext] http://cf.geocities.com/ilanpi/dots.html

  • excellent page!

coldnight: I must say I disagree. This game is very simple compared to Go; there is very little real depth in its strategy.

Patrick Traill (2018-08-18): Is this not in Winning Ways?

Twixt

another "connection game"; almost no programs exist currently, so they are also weaker than human players; difficulty..?

[ext] http://www.johannes-schwagereit.de/T1.html the program Twixt T1, with source

Some discussion, for example [ext] "Anti-computer" abstract games.

Search in newsgroups: [ext] twixt in rec.games.board.* and [ext] twixt in rec.games.abstract (newsgroups).

Twixt variant: in Twixt PP, links are never removed, and a player's links may cross their own other links. This may be easier to program, since the few difficult special cases involving link removal are left out.

Mind Ninja

hk: Nick Bentley's "Mind Ninja" (an awful name for a great game) is fundamentally hard to write an AI for. The game starts with a negotiation phase, where one player defines the pattern to be built (within certain constraints), and the other decides whether he would rather attempt to build it, or attempt to block it from being built. In the most general form of the game, the shape and size of the board are part of the negotiation phase, and there is no upper limit on size. So the game's state-space complexity is theoretically infinite! Not only would an AI have to be able to evaluate and play an immense number of patterns, it would also have to be capable of proposing new patterns.

nb?: Hi! I'm the inventor of Mind Ninja. Although I agree that programming Mind Ninja is difficult, I don't think it's as difficult as hk supposes. For the purposes of competitive play, the players will not be allowed to invent new patterns, but will rather have to choose from a list of patterns. This list may be long (I envision between 100 and 300 patterns), to ensure that the game retains the variety which is its strong suit. The upshot is that the game will not be infinite in competitive play. With that in mind, I think that a Monte Carlo program, which can play without lots of expert knowledge, could be employed to generate at least a moderately competent AI. In fact, I'm searching desperately for a programmer to hire for this very purpose.
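For what it's worth, the kind of knowledge-free Monte Carlo player nb mentions can be sketched in a few lines. The game-state interface below (legal_moves(), play(), is_over(), winner()) is hypothetical, not part of Mind Ninja; the idea is simply to estimate each candidate move by the win rate of random playouts.

    import random

    def playout(state):
        """Play uniformly random moves to the end and report the winner."""
        while not state.is_over():
            state = state.play(random.choice(state.legal_moves()))
        return state.winner()

    def choose_move(state, me, playouts_per_move=200):
        """Flat Monte Carlo: pick the move with the best random-playout win rate."""
        best_move, best_rate = None, -1.0
        for move in state.legal_moves():
            wins = sum(playout(state.play(move)) == me for _ in range(playouts_per_move))
            rate = wins / playouts_per_move
            if rate > best_rate:
                best_move, best_rate = move, rate
        return best_move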


Dots

[ext] https://en.wikipedia.org/wiki/Dots_%28game%29 Dots (Czech: Židi, Polish: Kropki, Russian: Точки) is an abstract strategy game, generally played by two (or more) people on a sheet of squared paper. Instead of territory control, as in Go, the primary goal of Dots is to capture enemy dots by surrounding them with a continuous line of one's own dots. Once surrounded, dots are no longer playable.

Infinito

[ext] https://boardgamegeek.com/boardgame/160609/infinito Infinito is more a theoretical concept than a game, because it was born during the extravagant search for a game with the maximal branching factor. There is a discussion (here: [ext] https://boardgamegeek.com/thread/956310/infinite-game-infinito) about whether Infinito has an actually infinite branching factor or, if not, how large it is. Given that, Infinito looks like the most difficult (quasi-)game out there to program. Infinito is probably not playable by humans (or any other finite beings), but it is possible to play the finite version, Myriades.

Myriades

[ext] https://boardgamegeek.com/boardgame/160612/myriades Myriades is the finite and playable version of Infinito. Myriades has a terribly huge branching factor, one of the largest (how large is an open question), so it would probably be extremely difficult to program a good AI.

Similar to chess

While the games above were for the Go players, the following ones are for the chess players.

Arimaa

  • A chess-like game made public in 2002 with an extremely high branching factor (over 17000, vs. about 35 for chess, and a couple of hundred for Go)
  • there was a prize of $10,000 for a computer program that defeats a human player before the year 2020 (the Arimaa Challenge was won on April 18, 2015: [ext] http://arimaa.com/arimaa/)
  • Although computers have trouble with the high branching factor, it was long unclear whether humans handle it all that much better. For a while around 2004(?) it looked like the computer program Bomb by David Fotland could win the prize, but human players developed strategies that have definitively put them in the lead. Bomb still plays at a reasonably high level.
  • In 2012, a computer won three games during the challenge match while still losing the match.
  • [ext] http://arimaa.com/

Shogi

Anonymous opinion: I've heard this is hard for the computers too.

Bildstein's opinion: I've played quite a few games of Shogi against a friend who learned the game in Japan, and he consistently beats me. But he has told me that he has some Shogi software (which he bought in Japan) that consistently beats him even on the easiest difficulty level.

SomeOne?: Actually, Shogi is almost as well researched as chess, but is more difficult for computers than chess. It is probably slightly more difficult than chess in both tactics and strategy. Still, Shogi programs have reached a very high level, and it is estimated that in a few years Shogi programs will challenge the best players; currently they still have to play handicap games against the best. The best Shogi programs are stronger than about 99.8% of Shogi players, and compared with chess, their level is estimated to be about that of an IM. Also, Shogi positions cannot be simplified with mass exchanges as in chess, because captured pieces can be used by the opponent, which makes it oh so much harder for computers.

mschribr: In 2013, a computer easily beat one of the 20 highest rated players. In 2014, the computer will be ready to play the highest rated players.

Crazyhouse

mschribr: This is similar to chess, with the difference that captured pieces reverse color and can be put back into the game. It would be interesting to see how well computers play compared to people. I think that with this small change people can beat the computer again.


Nope, see Sjeng [ext] http://www.sjeng.org/indexold.html It has successfully beaten players rated over 2600 points, including several titled GMs and IMs. One version of Sjeng even attained first place on the crazyhouse players list on the Internet Chess Club (ICC), and first place on the Free Internet Chess Server (FICS).

Other

Volo

Volo is the least successful robot at Boardspace.net, and therefore rates high on my (ddyer) personal list of difficult games to create AIs for. In Volo, there are two basic types of move: "create a flock" by placing an isolated bird, or "merge flocks" by moving a line of birds such that two or more flocks are combined. The goal of the game is to combine all birds into one flock.

Several factors contribute to the ineffectiveness of both alpha-beta and UCT-based search. First, the branching factor is huge: there are always lots of free placement moves, and once larger flocks form, there are many ways that lines of birds within those flocks can combine and move. Second, the rules of the game force many counter-goal moves to be made, so hill-climbing strategies (like alpha-beta) perform poorly, and undirected strategies such as random playouts with UCT result in hopelessly unrepresentative positions.
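For readers unfamiliar with UCT: it chooses which child of a search node to sample next with the standard UCB1 rule, balancing the observed win rate against how rarely a child has been tried. A minimal sketch is below; ddyer's point is that in Volo the random playouts feeding these statistics are so unrepresentative that the resulting averages mislead the search.

    import math

    def ucb1(child_wins, child_visits, parent_visits, exploration=1.4):
        """Standard UCB1 child-selection score used by UCT."""
        if child_visits == 0:
            return float("inf")                  # try every child at least once
        win_rate = child_wins / child_visits
        return win_rate + exploration * math.sqrt(math.log(parent_visits) / child_visits)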

ktron

ktron: Seems very difficult to me too. Although this is an "action" game, if you remove the time pressure from the human player, you can also look at it as a strategy game, played on a graph, with branching factor 3 and possibly large depth.

Given the condition for winning (being the last player with a legal play), Tron would seem to be well-suited to the kind of analysis so popular on SL.

Alex: I'm not convinced. It seems to me that, past the early stages of the game, it should be possible to prune that branching factor down to one certainly or almost-certainly best choice for most moves, with just the occasional choice of two to think about. Essentially, the endgame boils down to one giant semeai, and I think a properly-programmed computer's ability not to make blunders in that kind of tactical situation would end up beating most humans' superior intuition in the opening.
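Alex's "one giant semeai" reading suggests a simple static evaluation: once the walls separate the players, whoever can still reach more free cells wins, so a flood fill from each head is already a useful estimate. A minimal sketch follows, assuming the arena is a 2D list of booleans (True = free cell) and each head is a (row, col) pair; this is an illustration, not code from any Tron program.

    from collections import deque

    def reachable(grid, head):
        """Count free cells reachable from a player's head (breadth-first flood fill)."""
        rows, cols = len(grid), len(grid[0])
        seen, queue = {head}, deque([head])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return len(seen) - 1          # exclude the head cell itself

    def evaluate(grid, my_head, opp_head):
        """Positive = I control more space than my opponent."""
        return reachable(grid, my_head) - reachable(grid, opp_head)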

hk: I agree that this game probably isn't "unprogrammable". But it has been proposed as a serious abstract strategy game (rather than as an action game). It's even implemented at [ext] http://www.googlegamecenter.com/ as SnailTrail?.


Zèrtz

Gerhard: The Zèrtz game ([ext] http://www.gipf.com) is also difficult for computers, since you must look a large number of moves (usually > 10) ahead. The different winning possibilities and strategy changes during the game can make this a computationally difficult problem. I know of only one Zèrtz-playing program, and it plays extremely poorly.

hk: Zèrtz poses much the same problem for computers as Arimaa: a player effectively makes many moves in sequence. On a free turn, it's usually a placement anywhere on the board plus removal of a ring, but you can also usually sacrifice a marble to get another move, rearranging the position. Strong human players routinely make long strings of sacrifices (up to 11 marbles) to set up a winning position. However, Zèrtz is played on a pretty small board, and there is a limited set of readily recognizable patterns that could be stored in a database. Perhaps an algorithm that is less dependent on alternating turns, like the Monte Carlo algorithms used by Go programs, could work. Zèrtz players have worried that the game may start showing signs of being solved (by humans) - for instance, most openings are proven losses. For that reason, it is often played with 11 or 24 additional rings. I suspect these will be even harder for computers to deal with.

Current Zèrtz programs are Holtz and the bots at Boardspace. As far as I can see, Holtz uses a form of quiescent search, but no real lookahead. The bots at Boardspace.net use alpha-beta, but moves that trigger forced capturing sequences are searched much deeper than "free" moves.

Proof Number Search may be an idea in Zèrtz, since it has produced such good results in suicide chess. That's again just speculation... but the more I read about PN search, the more convinced I am that computers can play a mean game of Zèrtz at the very least.
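As a rough illustration of the idea hk speculates about, here is a minimal sketch of plain Proof-Number Search. The goal is binary ("can the first player force a win from here?"), and the game interface is hypothetical, not Zèrtz-specific: state.children() returns successor states, and state.terminal_value() returns True (win for the OR player), False (loss), or None (undecided). A real solver would add transposition tables and memory management on top of this.

    INF = float("inf")

    class Node:
        def __init__(self, state, or_node):
            self.state, self.or_node = state, or_node
            self.children = None                 # None = frontier node, not yet expanded
            self.proof, self.disproof = 1, 1
            value = state.terminal_value()
            if value is True:
                self.proof, self.disproof = 0, INF
            elif value is False:
                self.proof, self.disproof = INF, 0

        def update(self):
            if self.or_node:                     # OR node: prove any child, disprove all
                self.proof = min(c.proof for c in self.children)
                self.disproof = sum(c.disproof for c in self.children)
            else:                                # AND node: prove all children, disprove any
                self.proof = sum(c.proof for c in self.children)
                self.disproof = min(c.disproof for c in self.children)

    def pn_search(root_state, max_expansions=100000):
        root = Node(root_state, or_node=True)
        for _ in range(max_expansions):
            if root.proof == 0 or root.disproof == 0:
                break
            # descend to the most-proving frontier node
            node, path = root, [root]
            while node.children is not None:
                key = (lambda c: c.proof) if node.or_node else (lambda c: c.disproof)
                node = min(node.children, key=key)
                path.append(node)
            # expand it and propagate the new numbers back to the root
            node.children = [Node(s, not node.or_node) for s in node.state.children()]
            for ancestor in reversed(path):
                ancestor.update()
        return root.proof == 0                   # True = win proven within the budget

The appeal for a game like Zèrtz or suicide chess is that forced sequences (few replies, hence low proof or disproof numbers) get explored first, regardless of whose turn it is.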

Octi

[ext] http://www.octi.net/

Game of War / Kriegspiel

[ext] http://en.wikipedia.org/wiki/Le_Jeu_de_la_Guerre [ext] http://r-s-g.org/kriegspiel/

Stratego

mschribr: This is a popular strategy game with imperfect information. The state-space complexity is high. I think there is an annual computer tournament. I think people are much stronger than computers. It would be interesting to see how much stronger people are than computers.

Computer Olympiad

International Computer Games Association

Mind Sports Zine (non-computer topics)

Lists of Programs

(This is now a dead link. The Internet Archive led me to the archived thread [ext] https://web.archive.org/web/20100929084042/http://www.boardgamegeek.com:80/geeklist/5260/free-computer-artificial-intelligence-opponents-fo, though the screenshots don't seem to have survived as full images.)


See also

Artificial Intelligence Programming
Computer Go Programming

