Tuesday, January 26, 2021
A chess engine trained to play like a human
Chess engine sacrifices mastery to mimic human play
By Melanie Lefkowitz
When it comes to chess, computers seem to have nothing left to prove.
Since IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, advances in artificial intelligence have made chess-playing computers more and more formidable. No human has beaten a computer in a chess tournament in 15 years.
In new research, a team including Jon Kleinberg, the Tisch University Professor of Computer Science, developed an artificially intelligent chess engine that doesn’t necessarily seek to beat humans – it’s trained to play like a human. This not only creates a more enjoyable chess-playing experience but also sheds light on how computers make decisions differently from people, and how that could help humans learn to do better.
“Chess sits alongside virtuosic musical instrument playing and mathematical achievement as something humans study their whole lives and get really good at. And yet in chess, computers are in every possible sense better than we are at this point,” Kleinberg said. “So chess becomes a place where we can try understanding human skill through the lens of super-intelligent AI.”
Kleinberg is a co-author of “Aligning Superhuman AI With Human Behavior: Chess as a Model System,” presented at the Association for Computing Machinery SIGKDD Conference on Knowledge Discovery and Data Mining, held virtually in August. In December, the Maia chess engine, which grew out of the research, was released on the free online chess server lichess.org, where it was played more than 40,000 times in its first week. Agadmator, the most-subscribed chess channel on YouTube, talked about the project and played two live games against Maia.
“Current chess AIs don’t have any conception of what mistakes people typically make at a particular ability level. They will tell you all the mistakes you made – all the situations in which you failed to play with machine-like precision – but they can’t separate out what you should work on,” said co-author Ashton Anderson, assistant professor at the University of Toronto. “Maia has algorithmically characterized which mistakes are typical of which levels, and therefore which mistakes people should work on and which mistakes they probably shouldn’t, because they are still too difficult.”
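The idea of characterizing which mistakes players outgrow at which level can be sketched in a few lines. This is an illustrative toy, not the paper's method: the blunder rates below are invented, and the function simply finds the lowest rating band where a given mistake's frequency drops under a threshold.

```python
# Hedged sketch: given the (invented) rate at which each rating band
# plays a particular known blunder, find the lowest band where players
# have effectively stopped making it.

def outgrown_at(mistake_rate_by_rating, threshold=0.05):
    """Lowest rating band whose rate for this mistake falls below threshold."""
    for rating in sorted(mistake_rate_by_rating):
        if mistake_rate_by_rating[rating] < threshold:
            return rating
    return None  # still common at every band observed

# Invented data: fraction of games in each band containing the blunder.
rates = {1100: 0.30, 1300: 0.18, 1500: 0.09, 1700: 0.04, 1900: 0.02}
print(outgrown_at(rates))  # 1700
```

A real pipeline would estimate these rates from millions of annotated games per band; the threshold is an arbitrary cutoff chosen here for illustration.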
The paper’s other co-authors are Reid McIlroy-Young, doctoral student at the University of Toronto, and Siddhartha Sen of Microsoft Research.
As artificial intelligence approaches or surpasses human abilities in a range of areas, researchers are exploring how to design AI systems with human collaboration in mind. In many fields, AI can inform or improve human work – for example, in interpreting the results of medical imaging – but algorithms approach problems very differently from humans, which makes learning from them difficult or, potentially, even dangerous.
In this project, the researchers sought to develop AI that reduced the disparities between human and algorithmic behavior by training the computer on the traces of individual human steps, rather than having it teach itself to successfully complete an entire task. Chess – with hundreds of millions of recorded moves by online players at every skill level – offered an ideal opportunity to train AI models to do just that.
“Chess has been described as the ‘fruit fly’ of AI research,” Kleinberg said. “Just as geneticists often care less about the fruit fly itself than its role as a model organism, AI researchers love chess, because it’s one of their model organisms. It’s a self-contained world you can explore, and it illustrates many of the phenomena that we see in AI more broadly.”
Training the AI model on individual human chess moves, rather than on the larger problem of winning a game, taught the computer to mimic human behavior. It also created a system that is more adjustable to different skill levels – a challenge for traditional AI.
Within each skill level, Maia matched human moves more than 50% of the time, with its accuracy growing as skill increases – a higher rate of accuracy than two popular chess engines, Stockfish and Leela. Maia was also able to capture what kinds of mistakes players at specific skill levels make, and when people reach a level of skill where they stop making them.
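The comparison above rests on a simple metric: the fraction of positions in which the engine's predicted move is exactly the move the human actually played. A minimal sketch of that computation (move strings and values are illustrative, not from the paper's data):

```python
# Minimal sketch of move-matching accuracy, the metric used to compare
# Maia with Stockfish and Leela: how often does the engine's chosen
# move coincide with the human's actual move?

def move_matching_accuracy(predicted_moves, human_moves):
    """Fraction of positions where the engine's move equals the human's."""
    matches = sum(p == h for p, h in zip(predicted_moves, human_moves))
    return matches / len(human_moves)

# Toy example in UCI notation: the engine agrees on 3 of 5 moves.
pred = ["e2e4", "g1f3", "f1c4", "d2d3", "b1c3"]
human = ["e2e4", "g1f3", "f1b5", "d2d4", "b1c3"]
print(move_matching_accuracy(pred, human))  # 0.6
```

A traditional engine optimized for strength scores lower here, because the objectively best move is often not the move a 1300-rated human would find.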
To develop Maia, the researchers customized Leela, an open-source system based on DeepMind’s AlphaZero program, which makes chess decisions with the same kinds of neural networks used to classify images or language. They trained different versions of Maia on games at different skill levels, creating nine bots designed to play like humans rated between 1100 and 1900 (ranging from relative novices to strong amateur players).
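Splitting the training data into those nine rating bands can be sketched as a simple bucketing step. This is a hypothetical illustration, assuming 100-point bands with floors 1100 through 1900 and a game assigned by its players' average rating; the field names and assignment rule are inventions, not the paper's exact pipeline.

```python
# Hypothetical sketch: assign games to the nine 100-point rating bands
# (1100-1199, ..., 1900-1999), one Maia model per band.

def rating_bucket(rating, low=1100, high=1900, width=100):
    """Return the band floor for a rating, or None if out of range."""
    if rating < low or rating >= high + width:
        return None
    return low + ((rating - low) // width) * width

games = [
    {"white_elo": 1150, "black_elo": 1180},  # avg 1165 -> 1100 band
    {"white_elo": 1925, "black_elo": 1890},  # avg 1907 -> 1900 band
]
buckets = [rating_bucket((g["white_elo"] + g["black_elo"]) // 2) for g in games]
print(buckets)  # [1100, 1900]
```

Each band's games would then feed the supervised training of that band's model, so every Maia version learns the move tendencies of one skill level.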
“Our model didn’t train itself on the best move – it trained itself on what a human would do,” Kleinberg said. “But we had to be very careful – you have to make sure it doesn’t search the tree of possible moves too thoroughly, because that would make it too good. It has to just be laser-focused on predicting what a person would do next.”
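The "don't search too thoroughly" point reduces, in the limit, to skipping tree search entirely and trusting the policy network's single prediction. A minimal sketch of that selection step, with the probability table standing in for a real network's output (all names and numbers here are illustrative):

```python
# Minimal sketch of policy-only move selection: no game-tree search,
# just the move the (stand-in) policy network rates most likely for a
# human at the target skill level.

def predict_human_move(policy_probs):
    """policy_probs: dict mapping legal moves (UCI) to predicted probability."""
    return max(policy_probs, key=policy_probs.get)

# Stand-in for a network's output distribution over legal moves.
probs = {"e2e4": 0.45, "d2d4": 0.30, "g1f3": 0.15, "c2c4": 0.10}
print(predict_human_move(probs))  # e2e4
```

A strength-oriented engine would instead expand thousands of positions beneath each candidate move; refusing to do so is what keeps the prediction "human" rather than optimal.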
The research was supported in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, a Multidisciplinary University Research Initiative grant, a MacArthur Foundation grant, a Natural Sciences and Engineering Research Council of Canada grant, a Microsoft Research Award and a Canada Foundation for Innovation grant.
Thursday, January 21, 2021
A draw of mine against a player rated 2425 Elo
Saturday, January 9, 2021
Greatest Queen Sacrifice of 2021! - Potter variation (C45)
Max Warmerdam vs Annmarie Mütsch
Vargani Cup
Scotch, Potter variation (C45)
1. e4 e5 2. Nf3 Nc6 3. d4 exd4 4. Nxd4 Bc5 5. Nb3 Bb6 6. Nc3 Nf6 7. Qe2 O-O 8. Be3 d5 9. O-O-O d4 10. Kb1 a5 11. Nb5 dxe3 12. Rxd8 Rxd8 13. f3 Be6 14. Nc1 Nd7 15. Qe1 Nde5 16. Be2 Nc4 17. Bxc4 Bxc4 18. Nc3 Rd2 19. Nd1 Rxd1 20. Qxd1 Rd8 21. Nd3 h6 22. Qc1 Nb4 23. Nxb4 axb4 24. Rd1 Rxd1 25. Qxd1 e2 26. Qe1 Be3 27. a3 c5 28. axb4 cxb4 29. g3 Bd4 30. Ka1 b5 31. c3 bxc3 32. bxc3 Be3 33. Kb2 Bd3 34. h4 Kf8 35. Kb3 Ke7 36. Kb4 Bc4 37. Ka3 Ke6 38. Kb2 Bd3 39. Kb3 g6 40. Kb4 Bc4 41. Ka3 Ke7 42. Kb2 Ke6 43. Kc2 Ke5 44. Kb2 Bd3 45. Kb3 f6 46. Kb4 Bc4 47. Ka3 g5 48. hxg5 hxg5 49. Kb2 Bd3 50. Kb3 Bc4+ 51. Kc2 Kd6 52. Kb2 Bd3 53. Kb3 Kc6 54. Ka3 Kb6 55. Kb3 Bc5 56. Kb2 Kc6 57. Kb3 Be3 58. Ka3 Kd6 59. Kb3 Ke5 60. Kb2 g4 61. f4+ Ke6 62. Kb3 Kd6 63. Kb2 Kc6 64. Kb3 Kb6 65. Ka3 Kc5 66. Kb3 Kd6 67. Kb2 Ke6 68. Kb3 Bc5 69. Kb2 Bb6 70. Kb3 f5 71. e5 Bc5 72. Kb2 Kd5 73. Qh1+ Kc4 74. e6 Be4 75. Qe1 Kd3 76. Qb1+ Ke3 77. Qg1+ Kf3
Tuesday, January 5, 2021
Ioannis Simeonidis: Carlsen’s Neo-Møller
Ioannis Simeonidis
Carlsen’s Neo-Møller
A Complete and Surprising Repertoire Against the Ruy Lopez
New In Chess 2020
Available by March 21, 2021: https://www.amazon.it/Carlsens-Neo-m%C3%B8ller-Complete-Surprising-Repertoire/dp/9056919377