Friday, January 29, 2021

The finale of La regina degli scacchi (The Queen’s Gambit) with commentary by the world champion |...

Everyone should know this Rook Sacrifice Endgame Tactic

Tuesday, January 26, 2021

A chess engine trained to play like a human

 

 Chess engine sacrifices mastery to mimic human play

When it comes to chess, computers seem to have nothing left to prove.

Since IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, advances in artificial intelligence have made chess-playing computers more and more formidable. No human has beaten a computer in a chess tournament in 15 years.

In new research, a team including Jon Kleinberg, the Tisch University Professor of Computer Science, developed an artificially intelligent chess engine that doesn’t necessarily seek to beat humans – it’s trained to play like a human. This not only creates a more enjoyable chess-playing experience, but also sheds light on how computers make decisions differently from people, and how that could help humans learn to do better.

“Chess sits alongside virtuosic musical instrument playing and mathematical achievement as something humans study their whole lives and get really good at. And yet in chess, computers are in every possible sense better than we are at this point,” Kleinberg said. “So chess becomes a place where we can try understanding human skill through the lens of super-intelligent AI.”

Kleinberg is a co-author of “Aligning Superhuman AI With Human Behavior: Chess as a Model System,” presented at the Association for Computing Machinery SIGKDD Conference on Knowledge Discovery and Data Mining, held virtually in August. In December, the Maia chess engine, which grew out of the research, was released on the free online chess server lichess.org, where it was played more than 40,000 times in its first week. Agadmator, the most-subscribed chess channel on YouTube, talked about the project and played two live games against Maia.

“Current chess AIs don’t have any conception of what mistakes people typically make at a particular ability level. They will tell you all the mistakes you made – all the situations in which you failed to play with machine-like precision – but they can’t separate out what you should work on,” said co-author Ashton Anderson, assistant professor at the University of Toronto. “Maia has algorithmically characterized which mistakes are typical of which levels, and therefore which mistakes people should work on and which mistakes they probably shouldn’t, because they are still too difficult.”

The paper’s other co-authors are Reid McIlroy-Young, doctoral student at the University of Toronto, and Siddhartha Sen of Microsoft Research.

As artificial intelligence approaches or surpasses human abilities in a range of areas, researchers are exploring how to design AI systems with human collaboration in mind. In many fields, AI can inform or improve human work – for example, in interpreting the results of medical imaging – but algorithms approach problems very differently from humans, which makes learning from them difficult or, potentially, even dangerous.

In this project, the researchers sought to develop AI that reduced the disparities between human and algorithmic behavior by training the computer on the traces of individual human steps, rather than having it teach itself to successfully complete an entire task. Chess – with hundreds of millions of recorded moves by online players at every skill level – offered an ideal opportunity to train AI models to do just that.
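
As a rough illustration of what such “traces” look like in practice, the sketch below (not the authors’ actual pipeline) uses the python-chess library to turn a PGN dump of online games into (position, human move) training pairs for a single rating band; the file name and rating range are hypothetical.

# Illustrative only: extract (position, human move) pairs from online games,
# restricted to a single rating band. File name and band are hypothetical.
import chess
import chess.pgn

def extract_pairs(pgn_path, min_elo=1100, max_elo=1199):
    """Yield (FEN, UCI move) pairs played by humans rated within the band."""
    with open(pgn_path) as pgn:
        while True:
            game = chess.pgn.read_game(pgn)
            if game is None:
                break
            try:
                white_elo = int(game.headers.get("WhiteElo", 0))
                black_elo = int(game.headers.get("BlackElo", 0))
            except ValueError:
                continue
            board = game.board()
            for move in game.mainline_moves():
                mover_elo = white_elo if board.turn == chess.WHITE else black_elo
                if min_elo <= mover_elo <= max_elo:
                    # One "trace" of human behavior: the position and the move chosen in it.
                    yield board.fen(), move.uci()
                board.push(move)

# Usage: collect the first thousand pairs from a (hypothetical) lichess dump.
pairs = []
for fen, uci in extract_pairs("lichess_games.pgn"):
    pairs.append((fen, uci))
    if len(pairs) >= 1000:
        break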

“Chess has been described as the ‘fruit fly’ of AI research,” Kleinberg said. “Just as geneticists often care less about the fruit fly itself than its role as a model organism, AI researchers love chess, because it’s one of their model organisms. It’s a self-contained world you can explore, and it illustrates many of the phenomena that we see in AI more broadly.”

Training the AI model on individual human chess moves, rather than on the larger problem of winning a game, taught the computer to mimic human behavior. It also created a system that is more adjustable to different skill levels – a challenge for traditional AI.
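
A minimal sketch of that idea, assuming (position, human move) pairs like the ones above and using PyTorch (this illustrates supervised move prediction in general, not Maia’s actual architecture): the loss rewards reproducing the move the human played, and winning the game never enters into it.

# Illustrative sketch of supervised move prediction (not Maia's real network):
# the model is trained to output the human's move, not to win.
import chess
import torch
import torch.nn as nn

# Simple move vocabulary: every (from-square, to-square) pair; promotions ignored for brevity.
MOVE_INDEX = {(f, t): i for i, (f, t) in enumerate((f, t) for f in range(64) for t in range(64))}

def encode_board(board):
    """Encode a position as 12 piece planes (6 piece types x 2 colors) on an 8x8 grid."""
    planes = torch.zeros(12, 8, 8)
    for square, piece in board.piece_map().items():
        plane = (piece.piece_type - 1) + (0 if piece.color == chess.WHITE else 6)
        planes[plane, square // 8, square % 8] = 1.0
    return planes

class TinyPolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(12, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, len(MOVE_INDEX)),
        )

    def forward(self, x):
        return self.net(x)  # logits over the move vocabulary

def train_step(model, optimizer, boards, targets):
    """One supervised step: cross-entropy against the moves humans actually played."""
    loss = nn.functional.cross_entropy(model(boards), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a single hand-made training example (1. e4 from the starting position).
board = chess.Board()
human_move = chess.Move.from_uci("e2e4")
x = encode_board(board).unsqueeze(0)
y = torch.tensor([MOVE_INDEX[(human_move.from_square, human_move.to_square)]])
model = TinyPolicyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
print(train_step(model, optimizer, x, y))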

Within each skill level, Maia matched human moves more than 50% of the time, and its accuracy grew as skill increased – a higher rate of accuracy than two popular chess engines, Stockfish and Leela, achieve. Maia was also able to capture what kinds of mistakes players at specific skill levels make, and at what point people become skilled enough to stop making them.
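
Move-matching accuracy of this kind is straightforward to estimate. The sketch below (the engine path is hypothetical, and `pairs` stands for held-out position/move records like those extracted earlier) asks a UCI engine for its move in each recorded position and counts how often it coincides with the human’s choice.

# Illustrative only: estimate move-matching accuracy for any UCI engine.
import chess
import chess.engine

def move_match_rate(engine_path, pairs, think_time=0.1):
    """pairs: iterable of (FEN, UCI move) records taken from human games."""
    matches = total = 0
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        for fen, human_uci in pairs:
            board = chess.Board(fen)
            result = engine.play(board, chess.engine.Limit(time=think_time))
            matches += int(result.move == chess.Move.from_uci(human_uci))
            total += 1
    finally:
        engine.quit()
    return matches / total if total else 0.0

# e.g. move_match_rate("./engine", pairs)  # hypothetical engine binary and data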

To develop Maia, the researchers customized Leela, an open-source system based on DeepMind’s AlphaZero program, which makes chess decisions with the same kinds of neural networks used to classify images or language. They trained different versions of Maia on games played at different skill levels, creating nine bots designed to play humans rated between 1100 and 1900 (ranging from relatively novice players to strong amateurs).

“Our model didn’t train itself on the best move – it trained itself on what a human would do,” Kleinberg said. “But we had to be very careful – you have to make sure it doesn’t search the tree of possible moves too thoroughly, because that would make it too good. It has to just be laser-focused on predicting what a person would do next.”
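
In engines built on the Leela/lc0 design, one way to keep the search that shallow is to limit it to a single node, so the move comes straight from the policy network’s prediction rather than from deep lookahead. A hedged sketch, assuming a local lc0 binary and a Maia weights file (both paths are hypothetical):

# Sketch: ask the policy network for its move without searching the move tree.
# Assumes a local lc0 binary and a Maia weights file; paths are hypothetical.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("./lc0")          # hypothetical path
try:
    engine.configure({"WeightsFile": "./maia-1500.pb.gz"})     # hypothetical path
    board = chess.Board()
    # nodes=1: evaluate only the current position, so the raw policy output decides.
    result = engine.play(board, chess.engine.Limit(nodes=1))
    print("Predicted human move:", result.move)
finally:
    engine.quit()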

The research was supported in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, a Multidisciplinary University Research Initiative grant, a MacArthur Foundation grant, a Natural Sciences and Engineering Research Council of Canada grant, a Microsoft Research Award and a Canada Foundation for Innovation grant.

 

Thursday, January 21, 2021

A draw of mine against a player with a 2425 Elo rating

[Event "Weekly Rapid Arena"] 
[Site "https://lichess.org/ROUY3i8k"] 
[Date "2021.01.21"] 
[White "MoneyDoctor"] 
[Black "ivo94"] 
[Result "1/2-1/2"] 
[UTCDate "2021.01.21"] 
[UTCTime "20:23:05"] 
[WhiteElo "2139"] 
[BlackElo "2425"] 
[WhiteRatingDiff "+4"] 
[BlackRatingDiff "-5"] 
[Variant "Standard"] 
[TimeControl "600+0"] 
[ECO "B20"] 
[Opening "Sicilian Defense: Wing Gambit, Marshall Variation"] 
[Termination "Normal"] 
[Annotator "lichess.org"] 
1. e4 c5 2. b4?! { (0.20 → -0.69) Inaccuracy. Nc3 was best. } (2. Nc3 Nc6 3. Nf3 g6 4. Bb5 Bg7 5. O-O Nd4 6. Nxd4 cxd4) 2... cxb4 3. a3 { B20 Sicilian Defense: Wing Gambit, Marshall Variation } e5 4. axb4 Bxb4 5. c3 Bc5? { (-0.68 → 0.46) Mistake. Be7 was best. } (5... Be7 6. d4 Nf6 7. dxe5 Nxe4 8. Nf3 Nc6 9. Bd3 Nc5 10. Bc2) 6. Nf3 Nc6 7. Nxe5 Nxe5 8. d4 Qh4?! { (-0.27 → 0.36) Inaccuracy. d5 was best. } (8... d5 9. dxc5 Nf6 10. exd5 Qxd5 11. Qxd5 Nxd5 12. Nd2 O-O 13. Ne4 a5 14. Nd6 Bd7 15. Bb2) 9. Qe2 Ne7? { (0.53 → 1.90) Mistake. d6 was best. } (9... d6 10. dxc5) 10. dxc5 O-O 11. g3 Qg4?! { (1.92 → 2.83) Inaccuracy. Qf6 was best. } (11... Qf6 12. f4) 12. h3?? { (2.83 → 0.73) Blunder. f4 was best. } (12. f4 Qxe2+) 12... Qxe2+?! { (0.73 → 1.50) Inaccuracy. Qg6 was best. } (12... Qg6) 13. Bxe2 b6 14. cxb6 Bb7 15. Nd2? { (1.64 → 0.13) Mistake. Rxa7 was best. } (15. Rxa7 Bxe4 16. O-O Nd5 17. Rxa8 Rxa8 18. Bf4 Nxf4 19. gxf4 Ng6 20. Nd2 Bb7 21. f5 Nf4) 15... axb6 16. Rxa8 Rxa8 17. O-O Ra2 18. f4 N5c6 19. Bd3 d5?? { (0.00 → 3.25) Blunder. Ba6 was best. } (19... Ba6 20. Bxa6 Rxa6 21. Rd1 Ra1 22. Nb3 Rb1 23. Nd2) 20. exd5 Nd8 21. Re1 Kf8 22. c4 b5?! { (3.43 → 4.80) Inaccuracy. Bc8 was best. } (22... Bc8 23. g4) 23. d6 Nc8?! { (4.65 → 6.49) Inaccuracy. Ng8 was best. } (23... Ng8 24. cxb5) 24. c5? { (6.49 → 3.56) Mistake. d7 was best. } (24. d7) 24... Bc6 25. Be4 Bxe4 26. Nxe4 Rc2? { (3.89 → 6.44) Mistake. b4 was best. } (26... b4 27. f5 f6 28. Bd2 Ra4 29. g4 h6 30. d7 Ne7 31. Nd6 Ra7 32. Rxe7 Kxe7 33. Nc8+) 27. Be3?? { (6.44 → 2.28) Blunder. f5 was best. } (27. f5) 27... f5? { (2.28 → 4.05) Mistake. f6 was best. } (27... f6 28. d7) 28. Nd2?! { (4.05 → 2.78) Inaccuracy. Ng5 was best. } (28. Ng5 h6 29. Nf3 Na7 30. Nd4 Rxc5 31. Ne2 Rd5 32. Bxa7 Ne6 33. Kf2 Kf7 34. Be3 b4) 28... Rc3 29. Nf3?! { (3.27 → 2.49) Inaccuracy. g4 was best. } (29. g4 fxg4) 29... Ne6?? { (2.49 → 9.88) Blunder. Na7 was best. } (29... Na7 30. d7 Rd3 31. Ne5 Rd5 32. Bf2 b4 33. Kf1 b3 34. Nc4 Rxd7 35. Nd6 Re7 36. Rb1) 30. d7 Ne7 31. Kf2? { (10.33 → 4.82) Mistake. Bd4 was best. } (31. Bd4 Rc1 32. Rxc1 Nc6 33. Re1 Ke7 34. Ng5 Nxd4 35. Nxe6 Nc6 36. Rd1 Kxe6 37. d8=Q Nxd8) 31... Nc6 32. Rd1? { (4.45 → 2.49) Mistake. Nd4 was best. } (32. Nd4 Nxc5) 32... Ke7 33. Ne5 Nxe5 34. fxe5 Kd8?? { (2.52 → 5.08) Blunder. h6 was best. } (34... h6 35. Rd6) 35. Rd6 Nxc5?? { (5.87 → Mate in 4) Checkmate is now unavoidable. Rxc5 was best. } (35... Rxc5) 36. Bxc5?? { (Mate in 4 → 4.82) Lost forced checkmate sequence. Bg5+ was best. } (36. Bg5+ Kc7 37. d8=Q+ Kb7 38. Qb6+ Kc8 39. Rd8#) 36... Rxc5 37. e6 Re5 38. Kf3 g5 39. h4 h6 40. e7+?? { (7.14 → 0.00) Blunder. Ra6 was best. } (40. Ra6 Rd5 41. Ra8+ Ke7 42. Re8+ Kf6 43. h5 Ke5 44. d8=Q Rxd8 45. Rxd8 g4+ 46. Ke3 f4+) 40... Rxe7 41. Rxh6 gxh4 42. gxh4 Rxd7 43. Rb6 Rd4 44. Rxb5 Rxh4 45. Rxf5 Rd4 { The game is a draw. } 1/2-1/2
Computer analysis
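
For readers curious how annotations like the ones above are produced, the sketch below captures the general idea (the thresholds are illustrative, not lichess’s exact rules; the Stockfish path and PGN file name are hypothetical): evaluate the position after every move and flag moves that cost the mover a large amount of evaluation.

# Illustrative sketch of engine annotation: flag moves whose evaluation drop,
# from the mover's point of view, crosses rough thresholds.
# Thresholds, engine path and PGN file name are all hypothetical.
import chess
import chess.engine
import chess.pgn

def annotate(pgn_path, engine_path, depth=12):
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        board = game.board()
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        prev_cp = info["score"].white().score(mate_score=10000)
        for move in game.mainline_moves():
            mover_is_white = board.turn == chess.WHITE
            san = board.san(move)
            board.push(move)
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            cp = info["score"].white().score(mate_score=10000)
            # Evaluation lost by the side that just moved (in centipawns).
            drop = (prev_cp - cp) if mover_is_white else (cp - prev_cp)
            if drop >= 300:
                label = "Blunder"
            elif drop >= 150:
                label = "Mistake"
            elif drop >= 50:
                label = "Inaccuracy"
            else:
                label = None
            if label:
                print(f"{san}: {label} ({prev_cp / 100:+.2f} -> {cp / 100:+.2f})")
            prev_cp = cp
    finally:
        engine.quit()

# e.g. annotate("game.pgn", "/usr/bin/stockfish")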

Saturday, January 9, 2021

Greatest Queen Sacrifice of 2021! - Potter variation (C45)

 

Max Warmerdam vs Annmarie Mütsch 

Vergani Cup - Scotch, Potter variation (C45)

1. e4 e5 2. Nf3 Nc6 3. d4 exd4 4. Nxd4 Bc5 5. Nb3 Bb6 6. Nc3 Nf6 7. Qe2 O-O 8. Be3 d5 9. O-O-O d4 10. Kb1 a5 11. Nb5 dxe3 12. Rxd8 Rxd8 13. f3 Be6 14. Nc1 Nd7 15. Qe1 Nde5 16. Be2 Nc4 17. Bxc4 Bxc4 18. Nc3 Rd2 19. Nd1 Rxd1 20. Qxd1 Rd8 21. Nd3 h6 22. Qc1 Nb4 23. Nxb4 axb4 24. Rd1 Rxd1 25. Qxd1 e2 26. Qe1 Be3 27. a3 c5 28. axb4 cxb4 29. g3 Bd4 30. Ka1 b5 31. c3 bxc3 32. bxc3 Be3 33. Kb2 Bd3 34. h4 Kf8 35. Kb3 Ke7 36. Kb4 Bc4 37. Ka3 Ke6 38. Kb2 Bd3 39. Kb3 g6 40. Kb4 Bc4 41. Ka3 Ke7 42. Kb2 Ke6 43. Kc2 Ke5 44. Kb2 Bd3 45. Kb3 f6 46. Kb4 Bc4 47. Ka3 g5 48. hxg5 hxg5 49. Kb2 Bd3 50. Kb3 Bc4+ 51. Kc2 Kd6 52. Kb2 Bd3 53. Kb3 Kc6 54. Ka3 Kb6 55. Kb3 Bc5 56. Kb2 Kc6 57. Kb3 Be3 58. Ka3 Kd6 59. Kb3 Ke5 60. Kb2 g4 61. f4+ Ke6 62. Kb3 Kd6 63. Kb2 Kc6 64. Kb3 Kb6 65. Ka3 Kc5 66. Kb3 Kd6 67. Kb2 Ke6 68. Kb3 Bc5 69. Kb2 Bb6 70. Kb3 f5 71. e5 Bc5 72. Kb2 Kd5 73. Qh1+ Kc4 74. e6 Be4 75. Qe1 Kd3 76. Qb1+ Ke3 77. Qg1+ Kf3

Tuesday, January 5, 2021

Ioannis Simeonidis: Carlsen’s Neo-Møller

Ioannis Simeonidis
Carlsen’s Neo-Møller
A Complete and Surprising Repertoire Against the Ruy Lopez
New In Chess 2020

Available here by March 21, 2021: https://www.amazon.it/Carlsens-Neo-m%C3%B8ller-Complete-Surprising-Repertoire/dp/9056919377