Review with AI
- You can download an AI engine, preferably wrapped in a graphical interface. Lizzie is the best-known example, wrapping the engines KataGo and LeelaZero in an interface.
- You can upload your games to an online service, using software and hardware as a service.
- Blue move: in Lizzie and other interfaces, the move the AI prefers in a given position, at a given moment during the analysis, is rendered in turquoise blue; hence the term for the AI's favorite. The blue move can vary with the time or computing power the AI spends on its calculations, and of course between different AIs.
- Green move: next to the blue move there is a main candidate which the AI will search more intensively, to evaluate whether it can take over the blue status; usually one with a comparatively high winning probability.
- A candidate is any (other) move which the AI will evaluate according to its policy for candidate selection.
- The winning probability is expressed as a percentage and featured in most interfaces.
- The expected score is expressed in points with decimals. Lizzie shows it for the KataGo AI but not for LeelaZero.
- An error (of degree N) is the difference between the winning probabilities or expected scores of blue moves N plays apart. For example, the expected score associated with the blue move in some position can be Black +1.7. After the blue move is played, the expected score of the next blue move can be Black +1.5. The error (of degree 1) is then 0.2.
- A mistake (of K points) is a candidate whose expected score differs from the blue move's expected score by more than the error plus K. For example, assuming the error is 0.2, a candidate with an expected score difference of 1.7 would be a mistake of 1.5 points.
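The arithmetic behind the two definitions above can be sketched as follows. This is a minimal illustration, not code from any real interface; the function names and the convention that scores are from Black's perspective are made up for this page:

```python
# Illustrative sketch of "error" and "mistake" as defined above.
# Expected scores are floats from Black's perspective (positive = Black ahead).
# All names here are hypothetical, not from Lizzie or KataGo.

def error_of_degree_1(score_before, score_after):
    """Difference between the expected scores of two consecutive blue moves."""
    return abs(score_before - score_after)

def mistake_size(candidate_score, blue_score, error, k=0.0):
    """How far a candidate's score is beyond (error + k) relative to the
    blue move, or 0.0 if it is not a mistake of k points."""
    diff = abs(blue_score - candidate_score)
    excess = diff - error
    return excess if excess > k else 0.0

# Worked example from the text: blue move Black +1.7, next blue move +1.5
err = error_of_degree_1(1.7, 1.5)      # about 0.2
# A candidate whose score differs by 1.7 is then a mistake of about 1.5 points
size = mistake_size(0.0, 1.7, err)
```

The same idea extends to winning probabilities by substituting percentages for point scores.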
(section added by Dieter)
When reviewing you may want to review without AI first and then with AI. In both cases you will observe "mistakes".
- a (confirmed) good move is one for which you and the AI found no mistake
- a (confirmed) mistake is one which you and the AI both identified as one
- a blind spot is a move which you thought was good but which the AI identifies as a (big) mistake
- a chastice is a move which you thought was a (big) mistake but which the AI sees as good enough, or as only a small mistake
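The four categories above boil down to crossing your own verdict with the AI's. A small sketch, assuming each move gets a simple yes/no "mistake" verdict from both reviewers (the function name and boolean interface are hypothetical):

```python
# Hypothetical sketch of the four review categories defined above.
# Each reviewer (you, the AI) gives a yes/no verdict per move; the
# thresholds for what counts as a "mistake" are left to the reviewer.

def classify(you_think_mistake: bool, ai_thinks_mistake: bool) -> str:
    if not you_think_mistake and not ai_thinks_mistake:
        return "confirmed good move"
    if you_think_mistake and ai_thinks_mistake:
        return "confirmed mistake"
    if not you_think_mistake and ai_thinks_mistake:
        return "blind spot"
    return "chastice"  # you thought it was a mistake, the AI disagrees
```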
Then there is the respective move selection. Some of your moves won't feature among the AI's candidates. In some cases these are mistakes, but in other cases they are merely incidental to the result.
And you can compare your positional judgment with the AI's. You might remember from your game that you were feeling confidently ahead or badly behind; either impression can be confirmed or denied by the AI.
Advice on reviewing with AI bots, from the author of KataGo
Extreme interactivity is key. If you just look passively at the moves or variations the bot suggests, you'll notice sequences and shapes that don't make sense to you or are surprising. So "interrogate" the bot as to why. For example:
- it says your move is bad and you should tenuki and attack something else, but you don't see why that move works at all, and it even has the opponent "concede" that the move works and give up stones. Okay, so play down that line, but instead of having the opponent concede, play the move you'd expect the opponent to respond with and see how the bot responds.
- It says you should play move X to threaten some stones, but that the opponent should confusingly tenuki? Go ahead and have the opponent respond, and see why: maybe it shows you that the opponent's saving the stones doesn't work, or works but leaves the group too heavy.
- It says you should tenuki when you defended in the game, and it has the opponent "agree" by also playing elsewhere, but you don't see why, as the shape just sits there unsettled. Tenuki as it says, and then have the opponent play the move you were afraid of. Maybe the bot shows you that you can just tenuki again because the stones aren't big enough. Maybe it shows you that your shape was actually lighter than you thought and the opponent's threat isn't a big deal. Maybe it shows you tactically that the move you were afraid of simply doesn't work.
- Another big one is to pay attention to who has sente. I've often seen kyu players try to analyze their moves with a bot and be super-confused as to why the bot says they should prefer a given result when it's locally worse than the result they got in the game, not realizing that they have sente in this variation and gote in the other.
Interactively "asking the bot questions" like this is far better than staring at the numbers, or even looking at the PV the bot gives you for any of the moves in your game.