Computer Go

Overview of Computer Go

Why do humans want to build strong computer Go programs? First as a research challenge, but also, importantly, to create computerized partners for people learning and studying Go (for which, not coincidentally, there is a large commercial market). Programming a computer to play a strong game of Go is a staggeringly complex problem, which has challenged researchers for more than three decades. But more than a hundred developers around the world continue to attack the problem, and have now produced programs that play at a weak amateur level (around 10-kyu), with the programs becoming inexorably stronger, at the rate of up to one stone per year. And a number of interesting research directions also hold out promise for faster progress in the future.

It is easy to find a computer to play against. The free program "Igowin" is a 9x9 version of The Many Faces of Go, a leading Go program written by David Fotland. Igowin can be downloaded from; even beginners can play against it, using Go's flexible handicap system. For the Macintosh, Explorer is free software. Several dozen programs are now available commercially, from vendors of Go products such as Yutopian Enterprises, usually at prices under US$100. Recommended programs include Many Faces, as well as Wulu and Fungo. The Free Software Foundation has also brought its open source philosophy to the computer Go world, in the form of GNUGo, which is freely available from

In addition to playing Go against humans, many programs also allow you to study master games, keep game records, investigate variations, and even ask the program why it made a particular move! Some programs are also available as opponents on Go servers; for instance, Many Faces often plays on IGS, where it is ranked 16k, and GNUGo plays on NNGS.

How well do computer Go programs play? Judge for yourself, based on the game record below, showing the first 50 moves of the close-fought game between Michael Reiss' Go4++ (White) and Ryuichi Kawa's Haruka (Black) at the 21st Century Cup competition held in July 2001. Go4++ won by 7 points. Go4++'s center-oriented approach is clearly visible.

One of the earliest known references to the possibility of computers playing Go was a prescient comment in a 1965 article by Dr. I. J. Good, the noted statistician, currently University Distinguished Professor Emeritus of Statistics at Virginia Polytechnic Institute. In "New Scientist", Vol. 25, pages 172-174, he wrote:

"Go on a computer? – In order to programme a computer to play a reasonable game of Go – rather than merely a legal game – it is necessary to formalise the principles of good strategy, or to design a learning programme. The principles are more qualitative and mysterious than in chess, and depend more on judgment. So I think it will be even more difficult to programme a computer to play a reasonable game of Go than of chess.

"The experienced player will often be unable to explain convincingly to a beginner why one move is better than another. A move might be regarded as good because it looks influential, or combines attack and defence, or preserves the initiative, or because if we had not played at that vertex the opponent would have done so; or it might be regarded as bad because it was too bold or too timid, or too close to the enemy or too far away. If these and other qualitative judgments could be expressed in precise quantitative terms, then good strategy could be programmed for a computer, but hardly any progress has been made in this direction."

After 35 years, Dr. Good's comments still ring true. It has indeed proven more difficult to program a computer to play a reasonable game of Go than of chess, far more difficult in fact than Dr. Good could have imagined.

Why is computer Go so hard? And why haven't the lessons learned in conquering the computer chess problem been more useful for computer Go?

Why is Computer Go Hard?

IBM's Deep Blue computer chess system finally defeated world chess champion Garry Kasparov in a match in 1997. Of course, it would be too simplistic to say that Deep Blue is "stronger" than any human; but in any case there are only a handful of humans on the planet with the ability to defeat Deep Blue on any given day. In contrast, the number of humans who could defeat the strongest currently existing Go programs is in the millions. That is another way of saying that the current programs can at best boast a rank of 10-kyu, more than a dozen "stones" weaker than the dan rankings of stronger amateurs. What is standing in the way of creating stronger Go programs?

To understand this, it's useful to look at how computer chess programs work. To oversimplify, they are based on look-ahead (or tree search) and evaluation. Evaluation involves judging, quickly and fairly accurately, how favorable a particular board configuration is. Look-ahead means scanning ahead, often dozens of moves. These two elements function in lockstep, like the wheels of a car. Evaluation, although approximate, allows the program to choose which positions to explore more deeply using look-ahead. Look-ahead, bolstered by "pruning" algorithms such as "alpha-beta" to avoid spending time looking at less promising variations, compensates for the inevitable deficiencies in the evaluation function by allowing the program to look at real future board positions. In chess, the speed of evaluation and look-ahead is great enough for a program like Deep Blue to consider 300 million positions per second, more than enough to achieve world champion-level performance. Two other wheels also support the high-performance chess vehicles of our time: comprehensive databases of openings, and pre-calculated endgames. Virtually identical concepts are at work in Chinook, the world-champion checkers program created by Jonathan Schaeffer and his team at the University of Alberta, chronicled in the book "One Jump Ahead".
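
The look-ahead/evaluation partnership can be sketched in a few lines of code. The toy game tree and leaf scores below are invented purely for illustration; real chess programs add move ordering, transposition tables, iterative deepening, and much more.

```python
def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Return the minimax value of `node`, pruning branches that
    cannot affect the final choice (alpha-beta pruning)."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)          # static evaluation at the horizon
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:          # cutoff: opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, evaluate, children))
            beta = min(beta, value)
            if beta <= alpha:          # cutoff
                break
        return value

# Toy tree: inner nodes are tuples of children; leaves are integer scores.
tree = ((3, 5), (6, 9), (1, 2))
best = alphabeta(tree, 2, float("-inf"), float("inf"), True,
                 evaluate=lambda n: n if isinstance(n, int) else 0,
                 children=lambda n: n if isinstance(n, tuple) else ())
print(best)   # 6: the maximizer picks the subtree whose minimum is largest
```

Note the division of labor described above: the search skeleton is generic, and all game knowledge lives in the `evaluate` function it calls at the leaves.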

What about using these approaches in a computer Go program? Go presents intractable barriers for both look-ahead and evaluation. The problem with look-ahead is that at any particular point in a Go game there are many more reasonable moves to consider than in chess; the experts would say that Go has a much higher "branching factor". The problem with evaluation is that there is no quick way to evaluate, even roughly, the favorability of a particular Go position; a difference of a single space in the position of an individual stone can affect the life and death of huge groups and thus have a massive impact on the score. But even with a highly simplified evaluation function, the high branching factor makes reasonable performance impossible; and even if the branching factor were not so high, the difficulty of evaluation would similarly make a chess-like approach infeasible. Taken together, the problems that Go poses for both evaluation functions and look-ahead seem to render this chess-like approach difficult to apply to Go. As for the other two "wheels", of opening libraries and pre-solved endgames, opening libraries are used in all strong Go programs, but cannot yield the same benefit as in a chess program; single-corner openings in Go can and do change depending on the full-board position, and a library cannot solve the problem of which joseki to select in a given situation. Pre-solved endgames, which proved decisive in bringing the Chinook checkers program to world-class level, remain a distant dream for Go-playing programs.

David Fotland notes that another issue differentiating Go from chess and making it more difficult computationally is that "people read more deeply in go than in chess. This is because people are visual, and the board configuration and relationships change less from move to move than they do in chess."

But won't ever-improving computer performance make up for these gaps? Not likely. A very rough estimate might be that the evaluation function is, at best, 100 times slower than in chess, and the branching factor is four times greater at each ply; taken together, the performance requirements for a chess-like approach to Go can be estimated as 10^27 times greater than those for computer chess. Moore's law holds that computing power doubles every 18 months, which means we might have a computer that could play Go using these techniques sometime in the 22nd century. In summary, Go's high branching factor and complex evaluation function virtually preclude use of the tried-and-true techniques applied to solving chess. That's why today's Go programs are a mélange of heuristics and specialized modules.
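
The Moore's-law part of this estimate is easy to check. The sketch below simply converts the 10^27 cost factor into doublings of computing power, using the 18-month doubling period assumed in the text:

```python
import math

# How long until Moore's law covers a 10^27 performance gap?
gap = 1e27
doublings = math.log2(gap)   # doublings of computing power needed
years = doublings * 1.5      # one doubling every 18 months

print(round(doublings))      # ~90 doublings
print(round(years))          # ~135 years, i.e. well into the 22nd century
```

Counting forward from the article's date of 2001, 135 years lands in the 2130s, matching the "22nd century" conclusion above.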


A Brief History of Computer Go

In 1962, Remus, then at IBM Germany, published the results of his investigations into computer Go. Two years later, Thorp and Walden published their "A Partial Analysis of Go." The first program to win against a human being, albeit a rank beginner, was written by Zobrist at the University of Wisconsin; it used templates to generate candidate moves, which were then scored using a weighting function. The program read ahead four moves looking for ataris, and was said to play at the 38-kyu level. Zobrist may be best known as the inventor of Zobrist hashing, a highly efficient algorithm for caching board game positions.
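
Zobrist hashing is simple enough to sketch here. In this illustrative version (the 64-bit table width and the fixed seed are arbitrary choices, not part of the algorithm), each point/color pair gets a fixed random bitstring, and a position's hash is the XOR of the bitstrings for its occupied points. Because XOR is its own inverse, placing or removing a stone updates the hash with a single operation:

```python
import random

SIZE = 19
random.seed(0)  # fixed seed so the table is reproducible
ZOBRIST = {(p, c): random.getrandbits(64)
           for p in range(SIZE * SIZE) for c in ("black", "white")}

def update(hash_value, point, color):
    """XOR a stone in or out; the identical call both adds and removes it."""
    return hash_value ^ ZOBRIST[(point, color)]

h = 0
h = update(h, 60, "black")     # play a black stone
h = update(h, 72, "white")     # play a white stone
h2 = update(update(0, 72, "white"), 60, "black")
print(h == h2)                 # True: XOR order doesn't matter
print(update(h, 72, "white") == update(0, 60, "black"))
                               # True: removing the white stone restores the
                               # hash of the black-stone-only position
```

This incremental property is what makes the scheme so efficient for caching positions during tree search.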

In the early 1970s, the concept of the influence function was first developed by Ryder in his doctoral work at Stanford. Later in that decade, the Interim 2 program developed by Reitman and Wilcox was the first to adopt elements of strategy, such as attack and defense, and concepts such as "sector lines" dividing the board, ideas that Wilcox also taught to humans. Wilcox continued developing innovative Go programs over the next ten years. In 1979, Benson published his seminal work on life and death, giving an algorithm which could determine the unconditional life or death of a group without reading. This algorithm was later refined by Martin Müller, a leading computer Go researcher.

In 1990, Kierulf, Chen, and Nievergelt published their Smart Game Board, a workbench for addressing Go (and other board games) from a knowledge engineering standpoint, with a strong software engineering orientation as well; it was the grandfather of today's Go viewers and editors. The playing program they wrote, Go Explorer, was a predecessor to Ken Chen's Go Intellect. The 1980s also saw the birth of stronger programs, some of which are still among the top programs today. The most famous is certainly Many Faces of Go, written by David Fotland, one of the giants of computer Go. First released in 1990, it is the successor to several earlier programs written by Fotland. Another leading program was Goliath, written by Mark Boon of the Netherlands, notable for its use of patterns to suggest moves.

Strangely underrepresented during this period of computer Go development was the entire country of Japan, and indeed all of Asia. This is all the more surprising since Japan is arguably the birthplace of modern Go, and certainly possessed the necessary financial and technology resources to make a determined attack on the problem. A Go program, dubbed "Go-Sedai" ("Go Generation", a pun on "Fifth Generation") was written as part of Japan's over-rated Fifth Generation Project in the 1980s, but never achieved any notable strength; its only claim to fame is to be one of two Go programs known to have been written to run on a parallel computer.

Marking the growing maturity of computer Go was the emergence of computer Go tournaments. The most famous, due to its US$1,000,000 prize for winning against a professional player, was the International Go Competition, sponsored jointly by Acer Incorporated and the Ing Chang-Ki Wei-Chi (Go) Educational Foundation beginning in 1985. This prize, known as the Ing Prize, unfortunately expired in 2000 and as of this writing has not been extended. Another major competition, the FOST Cup, based in Japan, was held for five years between 1995 and 1999. A major new tournament is the 21st Century Cup, organized by the Intelligent Go Foundation, a non-profit engaged in promoting computer Go; the first competition was held in York, Pennsylvania in July 2001.

The second half of the 1990's was characterized by the emergence of a number of programs quite comparable in strength. Reiss' Go4++ regularly places near or at the top of computer Go competitions, with its moyo (territorial framework) oriented strategy. Many Faces continues its strong legacy. Strong programs have also emerged from Asia: in China, a family of programs including Handtalk, Goemate and Wulu has developed, sharing tight, fast code and strong amateur players as authors. From Japan, Kawa's Haruka program has performed strongly at recent competitions. Fungo is a strong Korean entry.

Cognitive Science

Go and cognitive science have a natural affinity in at least two regards: Go provides an interesting canvas for cognitive science research, while a cognitive science approach may help develop stronger computer Go programs. As a canvas, Go may be more interesting than chess or other games, by virtue of possessing more "layers" of meaning, of abstraction. Chase & Simon showed in their famous 1973 paper that chess players organize information in "chunks", giving them greater efficiency in remembering positions and sequences. Reitman repeated the experiment for Go, but the results were disappointing; chunks in Go, if they do in fact exist, are vaguer, less linear, more complex, and overlapping. Other research shows that, at least for strong Japanese players, the fact that a move has an "explanation" is an important factor in being able to remember it. Saito and Yoshikawa found that players give names to moves, they verbalize moves, and even found evidence of little move trees in players' conversations! Unfortunately, the connection between any of these findings and the creation of stronger computer Go programs has not yet been found.

How Computer Go Programs Work

Today's strongest computer Go programs are essentially finely-tuned expert systems, based heavily on heuristics, selective search, hand-crafted rules, and pattern matching. Almost all computer Go programs make use of the same basic elements; let's take a look at them.

Board Representation

Computer Go programs must, of course, maintain a representation of what's on the board, as well as what would be there if certain moves were played. Typically, stones are collected into groups of different degrees of connectedness. A three-level categorization might include a first level where all stones are immediately adjacent; a second where connection can be proved with quick tactical reading; and a third, more loosely connected. Often, life and death status, eye information, or other data is maintained with each group. For the designer of a computer Go program, a major issue is, for each type of information, whether to store or cache it (which takes more memory), or recalculate it each time it is needed (which takes more time).
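
As an illustration of the first, tightest level of grouping, here is a sketch (the board size, position, and coordinates are invented for the example) that flood-fills a solidly connected string of stones and counts its liberties:

```python
SIZE = 5
EMPTY, BLACK, WHITE = ".", "X", "O"

def neighbors(p):
    """The up-to-four orthogonally adjacent points inside the board."""
    r, c = p
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]

def string_and_liberties(board, start):
    """Flood-fill the solidly connected string containing `start`,
    returning its stones and its liberties (adjacent empty points)."""
    color = board[start]
    stones, liberties, frontier = set(), set(), [start]
    while frontier:
        p = frontier.pop()
        if p in stones:
            continue
        stones.add(p)
        for q in neighbors(p):
            if board[q] == EMPTY:
                liberties.add(q)
            elif board[q] == color:
                frontier.append(q)
    return stones, liberties

# A small example: three connected black stones touching one white stone.
board = {(r, c): EMPTY for r in range(SIZE) for c in range(SIZE)}
for p in [(2, 1), (2, 2), (1, 2)]:
    board[p] = BLACK
board[(2, 3)] = WHITE

stones, libs = string_and_liberties(board, (2, 2))
print(len(stones), len(libs))   # 3 stones, 6 liberties
```

Real programs layer the looser connection levels, eye information, and life and death status on top of strings like these, which is where the store-versus-recalculate tradeoff mentioned above becomes pressing.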

Move generation

"Move generation" means proposing candidate moves. In some sense, move generation lies at the heart of computer Go programs; early computer Go programs can be thought of as essentially nothing more than move generators, in some cases performing no reading (look-ahead) at all! Like a human player, computer Go programs generate moves eclectically, using a combination of approaches, or based on a series of goals, if the program takes a goal-oriented approach. For instance, Many Faces generates moves which:

  • Take large full-board opening (fuseki) points (based on strong players' games)
  • Follow standard corner openings (joseki), based on a library of such openings
  • Make good shape, based on matching patterns given in a pattern library
  • Help its groups live or try to kill the opponent's groups, based on life and death analysis
  • Stabilize its own groups, or destabilize the opponent's, based on tactical reading
  • Defend its own groups, or attack the opponent's
  • Serve as ko threats
  • Expand and reduce territories
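
How such a battery of generators might be combined can be sketched as follows. This is a hedged illustration of the general architecture only, not Many Faces' actual code; the generator names, moves, urgency scores, and reasons are all invented:

```python
# Each specialized generator proposes (move, urgency, reason) tuples.
def fuseki_generator(position):
    return [(("D", 4), 40, "big fuseki point")]        # invented example

def life_death_generator(position):
    return [(("C", 10), 90, "defend weak group")]      # invented example

def territory_generator(position):
    return [(("K", 16), 55, "expand framework")]       # invented example

GENERATORS = [fuseki_generator, life_death_generator, territory_generator]

def generate_candidates(position, limit=50):
    """Collect proposals from every generator and rank them by urgency,
    keeping at most `limit` candidates for later evaluation."""
    proposals = []
    for gen in GENERATORS:
        proposals.extend(gen(position))
    proposals.sort(key=lambda m: -m[1])   # most urgent first
    return proposals[:limit]

for move, urgency, reason in generate_candidates(None):
    print(move, urgency, reason)
```

The ordering by urgency matters because, as described below, only a limited number of candidates can be fully evaluated.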

Like chess programs, almost all Go programs use joseki libraries. Unlike in chess, though, Go openings change greatly depending on the overall board position. The joseki library can provide not just a single move, but an entire sequence leading to a position that can then be evaluated in the context of the full board.

Life and death is such a specialized problem in Go that most programs use dedicated modules to deal with it. Thomas Wolf's GoTools life and death (tsumego) system can solve high dan-level tsumego problems, and has been integrated into at least one Go program; the difficulty is that most life and death problems in real games have dependencies on outside stones, something that a pure life and death tool has trouble dealing with.

Typical Go programs will generate 5-50 candidate moves.


Evaluation

After move generation, the move to play must be selected - most basically, by evaluating the board position after each candidate move is played. The candidate moves may be ordered prior to evaluation by some sort of heuristic, such as the urgency of the goal that the move is believed to help achieve. Evaluation means guessing, for every point on the board, whether that point is going to end up as White's territory or Black's. That, among other things, requires knowing which stones are alive and which are dead. This means, in turn, knowing which stones are connected, even loosely. (Determining whether stones are connected may be performed using a simple, limited form of look-ahead; Go4++ is reported to use probabilities of whether stones are connected or not.) The program then needs to analyze each group's eye shape and life and death status. For unplayed areas of the board, an influence function, where existing stones "radiate" strength into surrounding intersections, can be used to approximate eventual territory. The evaluation process in Go is complex enough that only several hundred to several thousand full-board evaluations can be carried out in the course of selecting a single move, given the computing power available as of this writing.
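
An influence function of the kind just described can be sketched in a few lines. The decay rule below (influence halves with each step of Manhattan distance) is an invented illustration, not any particular program's actual formula:

```python
SIZE = 9

def influence(stones):
    """stones: dict mapping (row, col) -> +1 (Black) or -1 (White).
    Returns a SIZE x SIZE field of summed influence; a positive value
    suggests eventual Black territory, a negative one White."""
    field = [[0.0] * SIZE for _ in range(SIZE)]
    for (sr, sc), color in stones.items():
        for r in range(SIZE):
            for c in range(SIZE):
                d = abs(r - sr) + abs(c - sc)       # Manhattan distance
                field[r][c] += color * (0.5 ** d)   # halve per step outward
    return field

stones = {(2, 2): +1, (6, 6): -1}   # one black stone, one white stone
field = influence(stones)
print(field[2][3] > 0)   # True: points near the black stone lean black
print(field[6][5] < 0)   # True: points near the white stone lean white
```

Summing the signs over all points gives a crude territory estimate, which real programs refine with the group-strength and life and death information described above.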

Reading, Look-Ahead, and Tree Search

Above we mentioned that tactical reading is needed to determine the connection status of stones. This is just one case of what is called "reading", "look-ahead", or "tree search", something necessary at various points in the evaluation process. But note the reversal of roles here compared to the case of chess: in that game, search sits in the driver's seat, calling evaluation at each position it chooses to look at; in contrast, in most Go programs, evaluation calls search to help it obtain its result.

Future research directions

While Go programmers strive to make their real-world programs stronger (in part, so they can sell more copies!), researchers are pursuing several interesting directions, all holding out the promise of helping to create the very strong Go program of the future.

Combinatorial Game Theory

Combinatorial game theory "decomposes" a game into subgames, and views the game as a sum of those subgames. This approach was developed by Berlekamp, Conway, and Guy and presented in their book "Winning Ways". Using CGT, the best move in each subgame can be found more easily, since each subgame is small; the best move overall can then be found by choosing among the subgame solutions. For Go, the obvious nearer-term application is the endgame. This research direction has been championed by Müller.
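
The very simplest case of the subgame idea can be sketched concretely. If the remaining endgame consists of independent, simple gote plays, each side just takes the largest remaining one, so the final totals follow by alternating down the sorted list. (Real CGT handles far subtler situations - sente, ko, fractional and infinitesimal values - this toy covers only the easiest case.)

```python
def alternate_totals(subgame_values):
    """Both sides alternately take the largest remaining simple gote play;
    returns (first player's total, second player's total)."""
    first_total, second_total = 0, 0
    for i, value in enumerate(sorted(subgame_values, reverse=True)):
        if i % 2 == 0:
            first_total += value
        else:
            second_total += value
    return first_total, second_total

# Four independent endgame plays worth 8, 5, 3, and 2 points:
print(alternate_totals([3, 8, 2, 5]))   # (11, 7): first player takes 8 and 3
```

The point of CGT is precisely to extend this kind of subgame-by-subgame analysis, rigorously, beyond the simple gote case.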

Adversarial Goal Planning

Steven Willmott has studied the modeling of goal structures in Go, including that of the (human) adversary.

Computer Learning

In the chess context, learning has meant tuning the coefficients of a chess-style evaluation function. Computer Go programs have no such singular "evaluation function" whose coefficients can be tuned. But other machine learning approaches, such as neural networks, seem at first glance perfect for computer Go. After all, humans use gray-matter neural networks to play Go, don't they? And there are tens of thousands of game records that can be used to train a neural network.

Alas, the overall problem of Go is so massive that it is completely infeasible to approach it through neural networks alone. But neural networks can certainly help in specific areas where they make the most sense – such as recognizing local shape, or "suji". Markus Enzenberger, author of NeuroGo, has reported good results using temporal difference learning to teach a neural network to find the best local plays, while also using an "external module" - in this case, one to determine life and death, something a neural net would have a great deal of trouble trying to do on its own.