
Tucked into the alleys of Hongik-dong, a hushed residential neighborhood in eastern Seoul, is a faded stone-tiled building stamped “Korea Baduk Association,” the governing body for professional Go. The game is an ancient one, with sacred stature in South Korea.

But inside the building, rooms once filled with the soft clatter of hands dipping into wooden bowls of stones now echo with mouse clicks. Players hunch over their monitors and replay their matches in an AI program. Others huddle around a Go board and debate the best next move, while coaches report how their choices stack up against the AI’s. Some sit in silence, watching AI programs play against each other. 

Ten years ago AlphaGo, Google DeepMind’s AI program, stunned the world by defeating the South Korean Go player Lee Sedol. And in the years since, AI has upended the game. It’s overturned centuries-old principles about the best moves and introduced entirely new ones. Players now train to replicate AI’s moves as closely as they can rather than inventing their own, even when the machine’s thinking remains mysterious to them. Today, it is essentially impossible to compete professionally without using AI. Some say the technology has drained the game of its creativity, while others think there is still room for human invention. Meanwhile, AI is democratizing access to training, and more female players are climbing the ranks as a result. 

For Shin Jin-seo, the top-ranked Go player in the world, AI is an invaluable training partner. Every morning, he sits at his computer and opens a program called KataGo. Nicknamed “Shintelligence” for how closely his moves mimic AI’s, he traces the glowing “blue spot” that represents the program’s suggestion for the best next move, rearranging the stones on the digital grid to try to understand the machine’s thinking. “I constantly think about why AI chose a move,” he says.

When training for a match, Shin spends most of his waking hours poring over KataGo. “It’s almost like an ascetic practice,” he says. According to a study in 2022 by the Korean Baduk League, Shin’s moves match AI’s 37.5% of the time, well above the 28.5% average the study found among all players.
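Match rates like these are straightforward to compute in principle: for each position in a game, compare the move the player actually made with the engine’s top-rated move for that same position. A minimal sketch in Python (the function name, data layout, and sample moves are illustrative, not the study’s actual methodology):

```python
def ai_match_rate(player_moves, engine_top_moves):
    """Percentage of positions where the player's move equals the engine's
    top-rated move. The two lists are aligned position by position; moves
    are written in standard Go coordinates, e.g. "Q16"."""
    if len(player_moves) != len(engine_top_moves):
        raise ValueError("move lists must be aligned position-by-position")
    matches = sum(p == e for p, e in zip(player_moves, engine_top_moves))
    return 100 * matches / len(player_moves)

# Hypothetical 8-position sample: 3 of 8 moves match the engine's first choice.
player = ["Q16", "D4", "R3", "C16", "P3", "R5", "Q3", "O17"]
engine = ["Q16", "D4", "Q3", "C17", "P4", "R5", "R4", "K10"]
print(ai_match_rate(player, engine))  # → 37.5
```

In practice a study like this would also have to decide which engine, how much search time per position, and whether to count near-ties among the engine’s top candidates, all of which shift the resulting percentage.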

“My game has changed a lot,” says Shin, “because I have to follow the directions suggested by AI to some extent.” The Korea Baduk Association says it has reached out to Google DeepMind in the hopes of arranging a match between Shin and AlphaGo, to commemorate the 10th anniversary of its victory over Lee. A spokesperson for Google DeepMind said the company could not provide information at this time. But if a new match does happen, Shin, who has trained on more advanced AI programs, is optimistic that he’d win. “AlphaGo still had some flaws then, so I think I could beat it if I target those weaknesses,” he says.

AI rewrites the Go playbook

Go is an abstract strategy board game invented in China more than 2,500 years ago. Two players take turns placing black and white stones on a 19×19 grid, aiming to conquer territory by surrounding their opponent’s stones. It’s a game of striking mathematical complexity. The number of possible board configurations—roughly 10¹⁷⁰—dwarfs the number of atoms in the universe. If chess is a battle, Go is a war. You suffocate your enemy in one corner while fending off an invasion in another.
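The 10¹⁷⁰ figure is the approximate count of legal board positions. A cruder upper bound is easy to check yourself, since each of the 361 intersections can be empty, black, or white (a comparison sketch; the 10⁸⁰ atoms estimate is the commonly cited order of magnitude for the observable universe):

```python
import math

# Upper bound on board configurations: each of the 19 × 19 = 361
# intersections holds one of three states (empty, black, or white).
upper_bound = 3 ** 361
print(math.floor(math.log10(upper_bound)))  # → 172, i.e. 3^361 ≈ 10^172

# Even the square of the atom count falls short of that bound.
atoms_in_universe = 10 ** 80
print(upper_bound > atoms_in_universe ** 2)  # → True
```

The exact number of *legal* positions (most of those 3³⁶¹ arrangements violate Go’s capture rules) is somewhat smaller, on the order of 10¹⁷⁰, which is where the figure in the text comes from.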

To train AI to play Go, a vast trove of human Go moves is fed into a neural network, a computing system that mimics the web of neurons in the human brain. AlphaGo, which was later christened AlphaGo Lee after its victory over Lee Sedol, was trained on 30 million Go moves and refined by playing millions of games against itself. In 2017, its successor, AlphaGo Zero, picked up Go from scratch. Without studying any human games, it learned by playing against itself, with moves based only on the rules of the game. The blank-slate approach proved more powerful, unconstrained by the limits of human knowledge. After three days of training, it beat AlphaGo Lee 100 games to zero.

Google DeepMind retired AlphaGo that same year. But then a wave of open-source models inspired by AlphaGo Zero emerged. Today, KataGo is the program most widely used by professional Go players in South Korea. It’s faster and sharper than AlphaGo. It’s learned to predict not just who will win, but also who owns each point on the board at any given moment. While AlphaGo Zero pieced together its understanding of the board by looking at small sections, KataGo learned to read the whole board, developing better judgment for long-term strategies. Instead of just learning how to win, it learned to maximize its score.

The software has reshaped how people play. For hundreds of years, professional Go players have navigated the game’s astronomical complexity by developing heuristics that replaced brute calculation. Elegant opening strategies imposed abstract order on the empty grid. Invading corners early was a bad bargain. Each generation of Go players added new principles to the canon. 

But “AI has changed everything,” says Park Jeong-sang, a South Korean Go commentator. “Fundamental moves that were once considered common sense aren’t played at all today, and techniques that didn’t exist before have become popular.” 

The starkest shift has been in opening moves. Go starts on a blank grid, and the first 50 moves were canvases for abstract thinking and creativity, where players etched their personalities and philosophies. Lee Sedol fashioned provocative moves that invited chaos. Ke Jie, a Chinese player who was defeated by AlphaGo Master in 2017, dazzled with agile, imaginative moves. Now, players memorize the same strain of efficient, calculated opening moves suggested by AI. The crux of the game has shifted to the middle moves, where raw calculation matters more than creativity.

Training with AI has led to a homogenization of playing styles. Ke Jie has lamented the strain of watching the same opening moves recycled endlessly. “I feel the exact same way as the fans watching. It’s very tiring and painful to watch,” he told a Chinese news outlet in 2021. Fans revel when a player breaks from the script with offbeat moves, but those moments have become rarer. Over a third of moves by the top Go players replicate AI’s recommendations, according to a study in 2023. The first 50 moves of each game are often identical to what AI suggests, many players say. 

“Go has become a mind sport,” says Lee Sedol, who retired three years after his 2016 defeat to AlphaGo. “Before AI, we sought something greater. I learned Go as an art,” he says. “But if you copy your moves from an answer key, that’s no longer art.” 

Playing Go is no longer about charting new frontiers, some players say, but about following the dictates of a superhuman oracle. “I used to inspire fans by advancing the techniques of Go and presenting a new paradigm,” says Lee. “My reason for playing Go has vanished.”

A mysterious mind

The players who have stayed in the game are trying to reinvent their craft. But it can be hard to discern what the new principles are.

Disarmingly slight and formidably calm, Kim Chae-young, one of the top female Go players in the world, grew up learning the game from her father, who was also a professional Go player. But when AI began to reshape the game, she found herself starting over. “I needed time to abandon everything I had learned before,” says Kim, who shared her screen with me as she pointed her cursor to the blue spots suggested by KataGo. “The intuition I had built up over the years turned out to be wrong.”

As she leaned close to her monitor, her blinking screen showed the winning probabilities of each move, with no explanations. Even top players like Kim and Shin don’t understand all of AI’s moves. “It seems like it’s thinking in a higher dimension,” she says. When she tries to learn from AI, she adds, “it’s less about rationally thinking through each move and more about developing a gut feeling—an intuition.”

Researchers are trying to discover the superhuman knowledge encoded in game-playing AI programs so that humans can learn it too. In 2024, researchers at Google DeepMind extracted new chess concepts from AlphaZero, a generalized version of AlphaGo Zero that can also play chess, and taught them to chess grandmasters using chess puzzles. The Go concepts that players have picked up from AI systems so far are “probably only a small portion of what you could potentially learn,” says Nicholas Tomlin, a computer scientist at Toyota Technological Institute at Chicago, who coauthored a study probing Go concepts encoded in AlphaGo Zero.

But extracting those lessons remains a struggle. “Top-tier players haven’t yet been able to deduce the general principles behind AI moves,” says Nam Chi-hyung, a Go professor at Myongji University. Although they can emulate AI’s moves, they have yet to glean a new paradigm for the game because its reasoning is a black box, she says. Go may be in an epistemic limbo. 

Even if AI is an opaque teacher, it’s a democratic one. It has supercharged training for female Go players, who have long been underdogs of the game. For decades, training meant studying under top male players, and the most competitive matches took place in male circles that were difficult for women to break into, says Nam. “Female players never had access to that experience,” she says. “But now they can study with AI, which has made their training environment much more favorable.” More broadly, AI has narrowed the gap between players by helping everyone perfect their opening moves.

Female players have climbed the ranks over the last few years as a result. In 2022, Choi Jeong, then the top female player in the world, became the first woman to reach the finals of a major international Go tournament. Dubbed “Girl Wrestler” for her fierce, combative style of play, she took on Shin. She lost, but the match broke new ground for women in Go. In 2024, Kim made headlines for winning the Korean Go League’s postseason playoffs. She was the only female player in the tournament. 

Training with AI has given Kim newfound confidence. Analyzing male players’ moves with AI has shattered their veneer of infallibility. “Before, I couldn’t gauge just how strong top male players were—they felt invincible. Now, I know that they make mistakes, and their moves aren’t always brilliant,” she says. “AI broke the psychological barrier.”

Go players find a new identity

Although AI has mastered Go far better than any player, fans continue to prefer watching people play. “A Go game between AI programs is not very fun for fans to watch,” says Park, the Go commentator. Such matches are too complex for fans to follow, too flawless to be thrilling, he says. 

Players can mimic AI’s opening moves, but in the middle game—where the board branches into too many possibilities to memorize—their own judgment takes over. Fans revel in watching players make mistakes and mount comebacks, exuding personality in every stone on the board. Shin’s playing style is combative but marked by machinelike poise. Kim deftly navigates the most chaotic positions on the board.

“In Go, every move is a choice you make, and your opponent responds with a choice of their own,” says Kim Dae-hui, 27, a Go fan and amateur player. “Watching that process unfold is fun.”

With fans like Kim still watching, Shin finds meaning in his game. “I can play a kind of Go that tells a story that only a human can,” he says. 

After his retirement, Lee searched for a new job where he could have an edge as a human. He started making board games, giving speeches, and teaching students at a university. “I’m looking for a new domain that I can enjoy and excel at,” he says.

But lately, he feels more hopeful for the game he left behind. “It’s every Go player’s dream to play a masterpiece game,” he says—a game of technical brilliance, with no mistakes, fought to a razor’s edge between evenly matched players. “It’s like a mirage,” Lee says, chuckling. “Maybe AI can help us play a masterpiece.” 

Shin hopes he can do that. To Shin, AI is a teacher, a companion, and a North Star. “I may be one of the strongest human players, but with AI around, I can’t be so arrogant,” he says. “AI gives me a reason to keep improving.”
