
Defeated by AlphaGo, Lee Sedol has spent 8 years rebuilding his collapsed world

2024-07-22



Losing to the AI, I felt like my whole world had collapsed.

said Lee Sedol in a recent interview with The New York Times.


In 2016, the South Korean Go player, a 14-time world champion, represented humanity in a match against Google's AlphaGo and ultimately lost 1:4.

When he accepted the invitation, he thought it would be a "fun" experience:

The fun part was that I thought I could win. It never occurred to me that I might lose.

That match may have been AI's most prominent moment before ChatGPT came on the scene.

Now, less than two years after ChatGPT's release, we have watched field after field be disrupted by AI, with more changes apparently on the way, and we cannot help but speculate about AI's future.

Against this backdrop, the Go community, which felt AI's impact earlier than other industries and fields, shows us one possible future that has already happened.

After defeating humans, stronger AI became even less human


I couldn't enjoy Go anymore, so I retired.

Three years after playing against AlphaGo, Lee Sedol officially announced his retirement.

For Lee Sedol, who began learning Go at age 5, the game is not merely a competition but an art, an extension of the player's personality and style. In the AI era, however, it has "become" a contest of algorithmic efficiency.

In fact, another thing happened during these three years.

In 2017, DeepMind announced a new version of AlphaGo: AlphaGo Zero.

The original AlphaGo was built by training its neural networks on more than 30 million moves from games by human masters before improving through self-play, but AlphaGo Zero dispensed with the "human touch" from the very beginning: during training it never saw a single human game record, learning entirely by playing against itself.

In just three days, AlphaGo Zero defeated AlphaGo 100:0.
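The self-play principle can be illustrated with a toy sketch (this is not DeepMind's actual method, which uses deep neural networks and Monte Carlo tree search): an agent learns single-pile Nim — take 1 to 3 stones, whoever takes the last stone wins — purely by playing against itself, with no human game records at all. All names and parameters here are illustrative.

```python
import random

random.seed(0)

def train_self_play(pile=10, episodes=30000, alpha=0.3, eps=0.2):
    """Learn Nim by self-play alone, using Monte Carlo value updates."""
    Q = {}  # Q[(stones_left, move)] -> estimated value for the player to move
    q = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        s, history = pile, []
        while s > 0:
            moves = [a for a in (1, 2, 3) if a <= s]
            # Epsilon-greedy: mostly play the best known move, sometimes explore.
            a = random.choice(moves) if random.random() < eps \
                else max(moves, key=lambda m: q(s, m))
            history.append((s, a))
            s -= a
        # Whoever took the last stone won; credit alternates moving backwards.
        reward = 1.0
        for s, a in reversed(history):
            Q[(s, a)] = q(s, a) + alpha * (reward - q(s, a))
            reward = -reward
    return Q

Q = train_self_play()
# Optimal Nim play is to leave a multiple of 4: from 5 stones, take 1.
best_from_5 = max((1, 2, 3), key=lambda m: Q.get((5, m), 0.0))
print(best_from_5)
```

The point of the sketch is the structure, not the game: the agent's only teacher is the outcome of games against itself, which is exactly the constraint AlphaGo Zero operated under at vastly larger scale.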

The Atlantic called it "AI that doesn't need to learn anything from humans."


In Go, a move that looks simple or insignificant but poses a deadly threat in the long run is sometimes called a "ghost move."

The game records of AlphaGo and AlphaGo Zero, however, are so hard to understand that they might as well be "a mysterious guidebook dropped by an alien civilization."

Michael Redmond, an American professional Go player, said in 2017 that one of the most important ways humans learn Go is by building a story: "That's how we communicate. It's a very human thing."

This echoes Lee Sedol's point that in playing Go, players also show part of their human selves.

Redmond added that, in his observation, human players often simply resign when they first encounter "AI-flavored" moves:

The way AlphaGo plays always feels deeply "inhuman." Faced with such a game, we find it hard even to engage with it.

Lee Sedol, among the first Go masters hit by this wave, could not come to terms with it for a long time.

He became obsessed with AI.


After retiring, besides opening his own Go academy, publishing books, and launching a Go-based board game, Lee Sedol began giving talks about AI:

I confronted the problems of AI early, and others will face them too. It may not be a happy ending.

For him, the most worrying thing about AI is that it may change human values:

In the past, people were in awe of creativity, originality, and innovation, but since the advent of AI, a lot of this has disappeared.

Not everyone agrees with this statement.

The era of human-machine co-creation


AI destroyed the entire existing order of the Go world, then began to rebuild it.

said Jiuheng He, a Go enthusiast who studies artificial intelligence at Cornell University.

In Go academies today, learning with AI has become a step that almost every player goes through.

At a Go academy in Hong Kong, instructor Ng Chee Man hands students iPads so they can learn Go with AI.

Every time a student plays a move, the AI shows its suggested "best move," while the system records which of the student's moves were good and which were not.


A study published last year in the Proceedings of the National Academy of Sciences found that since AI entered the Go world, human players' decision-making has improved.

Even before AlphaGo defeated Lee Sedol in 2016, Fan Hui, who had played AlphaGo in a private test, had a similar experience.

Although he lost, Fan Hui said AlphaGo made him look at Go in a whole new way, improved his skills, and quickly raised his world ranking.

The 2023 study was based on game records accumulated from 1950 to 2021, covering 5.8 million moves.

The researchers found that the quality of human players' decisions had remained essentially flat for 66 years before AlphaGo defeated Lee Sedol, but began to rise in 2016 and 2017.

In other words, while human players may not be able to beat AI, their judgment has genuinely improved.
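The study's core idea can be sketched roughly as follows (the actual PNAS methodology is more sophisticated): score each human move by how much win probability it gives up relative to a strong engine's preferred move, then average over a game. Everything below — positions, moves, and evaluations — is made up for illustration, and `engine_eval` stands in for a real engine such as KataGo.

```python
def decision_quality(moves, engine_eval):
    """moves: list of (position, chosen_move) pairs.
    engine_eval(position) -> dict mapping each legal move to the
    engine's estimated win rate for the player to move."""
    losses = []
    for position, chosen in moves:
        evals = engine_eval(position)
        # Win rate given up compared with the engine's best option.
        losses.append(max(evals.values()) - evals[chosen])
    # Higher quality = smaller average win-rate loss per move.
    return 1.0 - sum(losses) / len(losses)

# Hypothetical two-move game record with made-up engine evaluations.
fake_evals = {
    "pos1": {"A": 0.60, "B": 0.55},
    "pos2": {"C": 0.50, "D": 0.48},
}
game = [("pos1", "B"), ("pos2", "C")]
quality = decision_quality(game, fake_evals.get)
print(quality)  # close to 0.975: one move lost 0.05 win rate, one lost none
```

A rising curve of this kind of score over the decades is, in spirit, what the researchers observed after 2016.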

"It's very exciting to see how quickly human players can adapt and incorporate these new moves into their methods. These results suggest that humans will adapt to these discoveries and build upon them to greatly improve their potential."

commented David Silver, chief research scientist at DeepMind and head of the AlphaGo project.


Ke Jie, defeated by AlphaGo in 2017, said in 2023 that he rarely practices against humans outside of competition, and that AI has even become the source of creativity in Go:

Creativity is not just doing something different. It has to be put into practice and tested. Today most innovation in Go comes from AI. If we try to play something different from before, we will most likely lose, because AI has already arrived at different ideas through vast amounts of practice. That is creativity.

In addition, another professional player's trajectory is particularly noteworthy.

South Korean player Shin Jin-seo is the first player born after 2000 to win a world championship. Fans often nickname him "Shin-gong intelligence," a pun on the Korean for "artificial intelligence," because he is famous for his long-running training and research with AI.


In February this year, Shin Jin-seo defeated China's anchor player Gu Zihao in the 25th Nongshim Cup, completing six straight wins in a single edition and 16 consecutive wins across editions, surpassing his predecessor Lee Chang-ho. In March, he described his relationship with AI:

I feel like AI and I are friends now. I am learning from an AI that is better than I am. AI and humans think in completely different ways; AI solves problems through mathematical algorithms, and I have benefited enormously from studying its ideas.

Now, professional players in China, South Korea and Japan all use AI for training.

Lessons of the "AI flavor"


Just as some designers and writers in the generative-AI era have had to go to great lengths to prove their work is their own because it supposedly carries an "AI flavor," the Go community, which embraced AI long ago, has been wrestling with its own "AI flavor" problems.

In today's Go broadcasts, AI is routinely used to predict win rates and recommend the best moves, giving the audience a sense of agency and multiple perspectives on the game.

In 2022, during a match between Chinese player Li Xuanhao and Shin Jin-seo, many of Li's moves matched the AI's top-three recommendations, and his teammate Yang Dingxin accused him of using AI to cheat.

Li Xuanhao, born in 1995, reportedly trains with AI "from 9 a.m. to 9 p.m., 365 days a year," which is why his moves are sometimes seen as having a "machine-like" feel.


In response to the accusation, the Chinese Go Association investigated, ultimately found no evidence to support it, and penalized Yang Dingxin instead.

But cheating with AI does exist.

In 2020, 13-year-old South Korean professional Kim Eun-ji was found to have played moves in an online match that were 92% identical to an AI's recommendations. An investigation concluded she had cheated, which she admitted, and she was suspended for one year.

In 2022, Chinese player Liu Ruizhi was found to have cheated with AI, becoming the first Chinese professional punished for it. Unlike Kim Eun-ji, Liu Ruizhi knew how to mask the "AI flavor," consulting the AI only at a few key points.

In response, competitions around the world keep improving their anti-AI-cheating mechanisms.
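The simplest signal behind cases like Kim Eun-ji's can be sketched as a move-overlap rate (an illustrative statistic, not any federation's actual procedure): the fraction of a player's moves that land in an engine's top recommendations. The moves and rankings below are invented for the example.

```python
def ai_overlap_rate(player_moves, engine_top_moves, top_n=3):
    """player_moves: the player's moves, one per position.
    engine_top_moves: a parallel list of ranked engine suggestions
    for each position. Returns the fraction of moves that appear in
    the engine's top `top_n` suggestions."""
    hits = sum(move in ranked[:top_n]
               for move, ranked in zip(player_moves, engine_top_moves))
    return hits / len(player_moves)

# Hypothetical 4-move game checked against made-up engine rankings.
rate = ai_overlap_rate(["d4", "q16", "c3", "r4"],
                       [["d4", "c16"], ["q16"], ["d16", "c3"], ["k10"]])
print(rate)  # 3 of 4 moves match the engine's top suggestions -> 0.75
```

A high overlap rate alone is not proof of cheating — as the Li Xuanhao case shows, heavy AI training can produce "machine-like" moves legitimately — which is why investigations weigh it together with other evidence.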

At the same time, some people have turned the "AI flavor" against AI itself.

In 2023, American amateur player Kellin Pelrine defeated the Go AI KataGo.

KataGo is among the strongest open-source Go AIs available; South Korea also uses it to train players.

Pelrine used an attack program developed by the research group FAR AI, which played more than a million games against KataGo to uncover its weaknesses. Pelrine then exploited those weaknesses himself in a human-versus-machine match and won:

That strategy isn’t exactly child’s play, but it’s not particularly difficult to learn either.

Then, he used the same method to defeat another powerful Go AI, Leela Zero.


The key to the strategy is to build a large "circle" of stones surrounding one of the opponent's groups, while occasionally playing in unrelated corners of the board to distract the AI.

Pelrine said that any human player who saw the circle forming would immediately recognize the danger, but the AI does not notice it.

The weakness looks like a mere quirk. Can it be patched by giving the AI targeted training?

A report in Nature last week cited a preprint from this year showing that, against a program built specifically to hunt for AI weaknesses, model loopholes are not as easy to fix as one might imagine.

KataGo was the target once again. The researchers tried three different strategies to make it more robust:

  • Have KataGo learn to respond to the attacks through self-play;
  • Iterative training: attack KataGo with the attack program, feed the discovered vulnerabilities back to KataGo, let it learn to handle them through self-play or other methods, then attack again, repeating the cycle;
  • Train a new Go AI from scratch using a different neural network architecture.

Although this training improved KataGo's defenses to an extent, the attack program could still find loopholes and defeat KataGo with winning rates of 91%, 81%, and 78% respectively.
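The "attack, patch, attack again" cycle of the second strategy can be sketched in miniature. In this toy version the victim's weakness space is a tiny finite set, so patching eventually converges; the study's finding is that in Go the space of exploitable situations is so vast that the cycle never closed. All names here are illustrative stand-ins, not the paper's code.

```python
def iterative_training(target, universe, rounds=5):
    """Toy attack-and-patch loop. `target` is the set of inputs the
    victim must handle correctly; the victim only handles inputs it
    has already been trained on."""
    known = set()          # the victim's accumulated training data
    def victim_ok(x):      # victim "defends" correctly only on seen data
        return x in known
    for _ in range(rounds):
        # Attacker: search for an input the victim still gets wrong.
        exploit = next((x for x in universe
                        if x in target and not victim_ok(x)), None)
        if exploit is None:
            return known, True   # no weakness left within the universe
        known.add(exploit)       # patch: retrain on the found exploit
    return known, False          # attacker still winning when budget ran out

patched, robust = iterative_training(target={1, 3, 5},
                                     universe=range(10), rounds=5)
print(sorted(patched), robust)   # all three weaknesses patched in time
```

Shrink the budget (`rounds=2`) and the loop ends with the attacker still ahead — a crude analogue of the 81% win rate the attack program retained against the iteratively trained KataGo.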

The attack programs themselves are not strong Go players; humans can beat them easily.

Of course, the point here is not to settle whether humans or AI are stronger.

The point is that even in Go, a field AI "conquered" long ago and has been applied to and refined for years, serious problems remain. As Adam Gleave, an author of the paper, put it:

If we can't solve this problem in a single domain like Go, then the prospect of fixing jailbreaks in models like ChatGPT in the short term seems slim.