Gomoku AI & Computer Solving: Is the Game Solved?
How computers proved that Black wins and what it means for human players
Gomoku as a Mathematical Problem
Every finite two-player, perfect-information, deterministic game like Gomoku has a theoretical outcome: either the first player wins, the second player wins, or the game is a draw, assuming both sides play optimally. Determining that outcome is known as “solving” the game.
Gomoku sits in a fascinating position in the hierarchy of solved games. It is complex enough to offer rich human gameplay, yet simple enough in its rules that computers have been able to determine the perfect-play outcome conclusively.
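The notion of a game’s theoretical outcome can be made concrete on a game tiny enough to solve by brute force. The Python sketch below (all helper names are invented for illustration, not any real engine’s code) runs exhaustive minimax on tic-tac-toe and recovers its known value, a draw:

```python
from functools import lru_cache

# Exhaustive minimax on tic-tac-toe: small enough to solve outright.
# The board is a 9-character string ("X", "O", or "." per square);
# LINES lists the eight three-in-a-row winning patterns.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Game-theoretic value: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    values = [solve(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == "."]
    # X maximizes the value, O minimizes it
    return max(values) if player == "X" else min(values)

value = solve("." * 9, "X")
print(value)  # 0: tic-tac-toe is a draw with perfect play
```

Solving tic-tac-toe this way visits only a few thousand distinct positions; Gomoku’s tree is dozens of orders of magnitude larger, which is why brute force is hopeless for it.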
The Proof: Black Wins
In 1994, Victor Allis published his doctoral thesis at the University of Limburg in Maastricht (now Maastricht University), which included a proof that the first player (Black) wins freestyle Gomoku on a standard 15×15 board with optimal play. The proof combined several techniques.
How the Proof Worked
| Technique | Role in the Proof |
|---|---|
| Threat-space search | Efficiently searched for forced wins by VCF (victory by continuous fours) and VCT (victory by continuous threats) |
| Database of positions | Stored analyzed board states to avoid redundant calculation |
| Proof-number search | Determined the theoretical value of positions |
| Domain-specific knowledge | Pruned irrelevant branches using Gomoku strategy heuristics |
Allis did not enumerate every possible game; Gomoku’s game-tree complexity is estimated at roughly $10^{70}$, far beyond any computer’s capacity. Instead, he used threat-space search to show that from the starting position, Black can always find a forced winning sequence regardless of White’s responses.
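Proof-number search, listed in the table above, can be illustrated on a hand-built AND/OR tree. In this minimal Python sketch (the tree and all names are invented for illustration), `pn` estimates the remaining effort to prove a win and `dn` the effort to disprove one; the search repeatedly expands the frontier node that currently looks cheapest to settle:

```python
import math

class Node:
    def __init__(self, node_type, name=None, value=None):
        self.type = node_type       # "OR" (prover to move), "AND", or "LEAF"
        self.name = name
        self.value = value          # True/False for terminal leaves
        self.children = []
        self.expanded = False
        if value is True:
            self.pn, self.dn = 0, math.inf      # already proven
        elif value is False:
            self.pn, self.dn = math.inf, 0      # already disproven
        else:
            self.pn, self.dn = 1, 1             # unexplored frontier node

def update(node):
    """Recompute pn/dn from the children (the heart of the method)."""
    if node.type == "OR":
        node.pn = min(c.pn for c in node.children)
        node.dn = sum(c.dn for c in node.children)
    else:  # AND
        node.pn = sum(c.pn for c in node.children)
        node.dn = min(c.dn for c in node.children)

def select(root):
    """Walk down to the most-proving frontier node."""
    path = [root]
    node = root
    while node.expanded:
        key = (lambda c: c.pn) if node.type == "OR" else (lambda c: c.dn)
        node = min(node.children, key=key)
        path.append(node)
    return path

def pns(root, expand):
    while root.pn != 0 and root.dn != 0:
        path = select(root)
        leaf = path[-1]
        leaf.children = expand(leaf)
        leaf.expanded = True
        for n in reversed(path):
            update(n)
    return root.pn == 0   # True: the root is a proven win

# A toy tree: the prover (OR) picks a line; the defender (AND) replies.
TREE = {
    "root": ("OR",  ["a", "b"]),
    "a":    ("AND", [True, False]),   # the defender can refute line "a"
    "b":    ("AND", [True, True]),    # every defence in line "b" loses
}

def expand(leaf):
    kids = []
    for k in TREE[leaf.name][1]:
        if isinstance(k, bool):
            kids.append(Node("LEAF", value=k))
        else:
            kids.append(Node(TREE[k][0], name=k))
    return kids

print(pns(Node("OR", name="root"), expand))  # True: line "b" proves the win
```

Allis’s actual proof paired this selection-and-update cycle with a position database and Gomoku-specific threat knowledge; the sketch conveys only the core mechanism.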
What the Proof Tells Us
- Black wins with perfect play. Starting from an empty board, Black has a strategy that guarantees five-in-a-row.
- The strategy is not memorizable. The winning tree involves millions of variations. No human can learn it.
- Human games remain competitive. Because the winning strategy is so complex, practical games between humans are still full of uncertainty and strategic richness.
History of Gomoku AI
Computer Gomoku has a surprisingly long history, stretching back to the early days of artificial intelligence research.
Timeline
| Year | Milestone |
|---|---|
| 1960s | Early programs play basic Gomoku using simple heuristics |
| 1980s | Stronger heuristic engines emerge, capable of beating casual players |
| 1994 | Victor Allis proves Black wins freestyle Gomoku on 15×15 |
| 2000s | Threat-space search becomes standard in competitive Gomoku programs |
| 2010s | Yixin wins multiple Computer Olympiad gold medals |
| 2020s | Neural network approaches begin supplementing traditional search |
Key Programs
Yixin is widely regarded as the strongest Gomoku program. Developed by Kai Sun, Yixin combines deep threat-space search with sophisticated evaluation functions. It has dominated the Computer Olympiad’s Gomoku category for many years.
Other notable programs include Carbon and a range of open-source engines that double as training tools for human players. The programming community remains active: the annual Gomocup tournament, played through the Piskvork board manager, pits new engines against each other every year.
How Gomoku Engines Work
Modern Gomoku engines use a combination of techniques to play at superhuman strength.
Threat-Space Search
The most important algorithm for Gomoku AI is threat-space search, which focuses exclusively on forcing moves — fours and open threes. By only considering moves that create immediate threats, the search tree becomes much narrower than a full-width search, allowing the engine to read extremely deep sequences.
A threat-space search might check:
- Can I win by VCF from this position? (Check all possible four-sequences.)
- Can I win by VCT? (Check sequences mixing fours and open threes.)
- Can my opponent win by VCF or VCT? (If so, I must block.)
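As a concrete but heavily simplified illustration, the Python sketch below searches for a VCF win only: the attacker may play a move only if it makes five outright or creates a four, and the defender’s block is forced. Candidate generation, counter-four handling, and VCT are all far cruder than in a real engine, and every name here is hypothetical.

```python
SIZE = 15
DIRS = [(1, 0), (0, 1), (1, 1), (1, -1)]  # the four line directions

def makes_five(board, player, x, y):
    """True if the stone just placed at (x, y) completes five or more."""
    for dx, dy in DIRS:
        count = 1
        for sign in (1, -1):
            step = 1
            while board.get((x + sign * step * dx,
                             y + sign * step * dy)) == player:
                count += 1
                step += 1
        if count >= 5:
            return True
    return False

def five_squares(board, player, x, y):
    """Empty squares that would complete five through (x, y): the threats
    created by a four. Two or more means the four cannot be blocked."""
    squares = set()
    for dx, dy in DIRS:
        for offset in range(-4, 1):   # every 5-cell window through (x, y)
            cells = [(x + (offset + i) * dx, y + (offset + i) * dy)
                     for i in range(5)]
            if any(not (0 <= cx < SIZE and 0 <= cy < SIZE)
                   for cx, cy in cells):
                continue
            stones = [board.get(c) for c in cells]
            if stones.count(player) == 4 and stones.count(None) == 1:
                squares.add(cells[stones.index(None)])
    return squares

def candidates(board, attacker):
    """Naive move generation: empty squares adjacent to attacker stones."""
    cells = set()
    for (x, y), p in board.items():
        if p != attacker:
            continue
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                c = (x + dx, y + dy)
                if c not in board and 0 <= c[0] < SIZE and 0 <= c[1] < SIZE:
                    cells.add(c)
    return cells

def vcf(board, attacker, defender, depth):
    """Can the attacker win with a chain of fours within `depth` plies?"""
    if depth <= 0:
        return False
    for (x, y) in candidates(board, attacker):
        board[(x, y)] = attacker
        if makes_five(board, attacker, x, y):
            del board[(x, y)]
            return True
        threats = five_squares(board, attacker, x, y)
        if len(threats) >= 2:            # double threat: unstoppable
            del board[(x, y)]
            return True
        if len(threats) == 1:            # a simple four: the block is forced
            block = next(iter(threats))
            board[block] = defender
            won = (not makes_five(board, defender, *block)
                   and vcf(board, attacker, defender, depth - 2))
            del board[block]
            if won:
                del board[(x, y)]
                return True
        del board[(x, y)]
    return False

# Three in a row with room on both sides: the search finds the straight four.
print(vcf({(7, 7): "X", (7, 8): "X", (7, 9): "X"}, "X", "O", 4))  # True
```

Because only threat-creating moves are examined, the branching factor stays tiny, which is what lets real engines read forcing sequences dozens of plies deep.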
Evaluation Functions
When no forcing sequence exists, the engine falls back to a heuristic evaluation function that scores the position based on factors such as:
- Number and type of each player’s threats
- Control of the center
- Connectivity and flexibility of stone formations
- Defensive vulnerabilities
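A toy evaluation along these lines can be built by sliding a five-cell window over every line and scoring windows occupied by stones of only one colour, plus a small centre bonus. The Python sketch below is illustrative only; the weights are guesses, not tuned values from any real engine.

```python
SIZE = 15
DIRS = [(1, 0), (0, 1), (1, 1), (1, -1)]
# Illustrative weights: a window holding n stones of one colour and no
# enemy stones scores WEIGHT[n].
WEIGHT = {1: 1, 2: 10, 3: 100, 4: 1000, 5: 100000}

def evaluate(board, player, opponent):
    """Positive scores favour `player`; `board` maps (x, y) -> colour."""
    score = 0
    for x in range(SIZE):
        for y in range(SIZE):
            for dx, dy in DIRS:
                cells = [(x + i * dx, y + i * dy) for i in range(5)]
                if not all(0 <= cx < SIZE and 0 <= cy < SIZE
                           for cx, cy in cells):
                    continue
                stones = [board.get(c) for c in cells]
                mine = stones.count(player)
                theirs = stones.count(opponent)
                if mine and not theirs:      # only my stones: a live line
                    score += WEIGHT[mine]
                elif theirs and not mine:    # only enemy stones
                    score -= WEIGHT[theirs]
    for (x, y), p in board.items():          # small centre-control bonus
        bonus = 7 - max(abs(x - 7), abs(y - 7))
        score += bonus if p == player else -bonus
    return score
```

The windowed scoring captures the first factor in the list above (number and type of threats), while the bonus term captures centre control; connectivity and defensive weaknesses would need richer pattern tables.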
Search Algorithms
Beyond threat-space search, engines use alpha-beta pruning, iterative deepening, and transposition tables — the same family of algorithms used in chess engines. Some newer engines have experimented with Monte Carlo tree search, inspired by the success of AlphaGo in the game of Go.
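That shared skeleton can be shown compactly. The sketch below is fail-soft negamax with alpha-beta pruning and a transposition table that stores value bounds; to stay self-contained it plays a tiny subtraction game rather than Gomoku, and all names are invented for illustration.

```python
# Subtraction game: take 1-3 stones from a pile; whoever cannot move loses.
EXACT, LOWER, UPPER = 0, 1, 2
tt = {}   # transposition table: position -> (bound type, value)

def negamax(n, alpha, beta):
    """Value for the side to move: +1 win, -1 loss."""
    if n == 0:
        return -1                     # no move available: loss
    if n in tt:                       # transposition table probe
        flag, value = tt[n]
        if (flag == EXACT
                or (flag == LOWER and value >= beta)
                or (flag == UPPER and value <= alpha)):
            return value
    alpha_orig = alpha
    best = -2
    for take in (1, 2, 3):
        if take <= n:
            best = max(best, -negamax(n - take, -beta, -alpha))
            alpha = max(alpha, best)
            if alpha >= beta:         # beta cutoff: prune remaining moves
                break
    if best <= alpha_orig:            # fail-low: only an upper bound
        tt[n] = (UPPER, best)
    elif best >= beta:                # fail-high: only a lower bound
        tt[n] = (LOWER, best)
    else:
        tt[n] = (EXACT, best)
    return best

print(negamax(20, -2, 2))  # -1: a pile of 20 is lost for the side to move
```

A Gomoku engine plugs a move generator and an evaluation at a depth limit into this same loop; iterative deepening then wraps repeated calls with increasing depth, reusing the transposition table between iterations.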
AI’s Impact on Human Play
Computer analysis has significantly advanced human understanding of Gomoku.
Opening Theory
Engines have evaluated every standard opening position to great depth, confirming which openings are winning for Black and which give White good chances. This analysis has refined tournament opening protocols and swap rules.
Tactical Depth
Positions that human players debated for years have been resolved by computer analysis. Many supposed draws turn out to contain deeply hidden VCF sequences that only a computer can find within a reasonable time.
Training Tool
Strong players regularly use engines to:
- Analyze their tournament games and find missed opportunities.
- Study complex positions and verify whether a threat sequence exists.
- Practice against superhuman opposition to sharpen reading skills.
The Renju Question
While freestyle Gomoku is solved, Renju is only partially so. In 2001, János Wágner and István Virág showed that Black also wins Renju when the tournament opening rules are set aside; Renju as actually played, with its swap-style opening protocols, remains an open problem. The forbidden-move rules create a much more complex game tree, because Black must constantly avoid illegal positions and White can exploit these restrictions strategically.
Research on solving Renju continues, but the additional branching complexity means a complete solution is not expected in the near future. Renju remains the primary competitive format precisely because it is not solved and offers balanced, uncertain outcomes at the highest level.
What Solving Means for Players
Knowing that Gomoku is “solved” sometimes discourages newcomers who assume the game is therefore trivial. In reality, the opposite is true:
- The complexity is enormous. The perfect-play strategy is beyond any human’s ability to memorize or execute.
- Human games are rich. Even between expert players, mistakes happen, creative ideas emerge, and the outcome is always in doubt.
- Variants remain unsolved. Renju under tournament opening rules, Gomoku with swap rules, and Gomoku on larger boards are not solved and offer deep competitive play.
Gomoku’s solved status is a testament to the game’s mathematical elegance, not a limitation on its playability.
Summary
Gomoku was solved in 1994, when Victor Allis demonstrated that Black wins with optimal play on a 15×15 board. Modern AI programs like Yixin play at superhuman levels using threat-space search and sophisticated evaluation. While this computer work has advanced theory and training, the game remains deeply engaging for human players due to its enormous practical complexity.
Play Against the Computer
Test your skills against a Gomoku AI and see how your strategy holds up. Play online for free and discover whether you can find the winning path.