Review with AI
Modes
- You can download an AI, preferably with a graphical interface. Several free, open-source programs are available. Lizzie is the best-known example, wrapping the engines KataGo and LeelaZero in an interface. Nowadays KaTrain is possibly more popular. (The engine can also be driven without a GUI; see the sketch after this list.)
- You can upload your games to an online service, which provides the software and hardware as a service.
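If you run an engine locally, it can also be queried without any GUI. Below is a minimal sketch, assuming a KataGo binary with its JSON-based analysis engine on your PATH; the config and model paths are placeholders for your own installation:

```python
# Minimal sketch: query a local KataGo "analysis" engine for one position.
# Assumes katago is on PATH; adjust the config/model paths to your install.
import json
import subprocess

katago = subprocess.Popen(
    ["katago", "analysis", "-config", "analysis.cfg", "-model", "model.bin.gz"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

query = {
    "id": "review-1",
    "moves": [["B", "Q16"], ["W", "D4"]],  # the game record so far
    "rules": "japanese",
    "komi": 6.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "analyzeTurns": [2],                   # analyze the position after move 2
    "maxVisits": 500,
}
katago.stdin.write(json.dumps(query) + "\n")
katago.stdin.flush()

response = json.loads(katago.stdout.readline())
for info in response["moveInfos"][:3]:     # top candidates: move, winrate, score
    print(info["move"], round(info["winrate"], 3), round(info["scoreLead"], 1))
```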
Terms
- Blue move: in Lizzie and other interfaces, the move the AI prefers in a given position, at a given moment during the analysis, is rendered in turquoise blue; hence the term for the AI's favorite. The blue move can vary with the time or computing power the AI spends on its calculations, and of course between different AIs.
- Green move: next to the blue move there is a main candidate which the AI searches more intensively, to evaluate whether it can take over blue status; usually one with a higher margin of error and therefore potentially a higher winrate.
- Candidate: any (other) move which the AI evaluates according to its policy for candidate selection.
- Winrate: expressed as a percentage and featured in most interfaces. Winrate is often referred to, incorrectly, as "winning probability". It is not a probability, but a statistic calculated from the Monte Carlo tree search process (see the sketch below this list).
- Expected score: expressed in points, with decimals. Lizzie shows it for the KataGo engine but not for LeelaZero.
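One way to see why winrate is a statistic rather than a probability: in a Monte Carlo tree search it is essentially the average of the value estimates over all the playouts that explored a move. A toy illustration (not real engine code; `evaluate_leaf` stands in for the network's value head):

```python
import random

def evaluate_leaf(position):
    # Stand-in for a neural network's value estimate in [0, 1];
    # a real engine returns its judgment of who is winning here.
    return random.random()

def toy_winrate(position, move, playouts=100):
    """Toy winrate: the average of the value estimates over the
    playouts that explored this move.

    Rerun it, or change the playout count, and the number changes,
    just as an engine's winrate shifts with more thinking time.
    It is a search statistic, not a probability of winning."""
    values = [evaluate_leaf((position, move)) for _ in range(playouts)]
    return sum(values) / len(values)

print(toy_winrate(position="empty board", move="Q16"))
```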
Don't overestimate the AI's quasi-scientific decimal numbers. The true score of a position is always an integer (plus 0.5 with the usual komi), so in quiet positions there can be many moves of equal value on the board. For most positions, it's wrong to assume there is only one "best move". To say that one move is "better" than another because KataGo scores it 0.2 points higher is to read too much into this number. Likewise, the true winrate with perfect play would always be either 0% or 100% (or 50% if you allow a whole-number komi), and the in-between numbers reflect the fact that the algorithm is not perfect.
Proposed terms
(by Dieter)
- An error (of degree N) is the difference between the winrates or expected scores of N consecutive blue moves. For example, the expected score in a position, associated with the blue move, can be Black +1.7. After the blue move is played out, the expected score for the next blue move can be Black +1.5. The error (of degree 1) is then 0.2.
- A mistake (of K points) is a candidate whose expected score differs from the blue move's expected score by more than the error + K. For example, assuming the error is 0.2, a candidate with an expected score difference of 1.7 would be a mistake of 1.5 points (see the sketch below).
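Both terms can be computed mechanically from the expected scores the engine reports along the blue line. A small sketch of the two definitions above (the function names and the numbers are illustrative):

```python
def error_of_degree(blue_scores, n=1):
    """Error of degree n: the score difference between a blue move
    and the blue move n plies later.

    blue_scores holds the expected score (from Black's view) for
    consecutive blue moves, e.g. [1.7, 1.5]."""
    return abs(blue_scores[0] - blue_scores[n])

def mistake_size(blue_score, candidate_score, error):
    """Size K of a mistake: how far a candidate's expected score falls
    from the blue move's, beyond the error margin. Moves within the
    margin are not counted as mistakes (K = 0)."""
    diff = abs(blue_score - candidate_score)
    return max(0.0, diff - error)

# The example from the text: the blue line goes +1.7 -> +1.5, so the
# error of degree 1 is 0.2; a candidate whose score differs from the
# blue move by 1.7 points is then a mistake of 1.5 points.
error = error_of_degree([1.7, 1.5])              # 0.2
print(round(mistake_size(1.7, 0.0, error), 1))   # 1.5
```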
Self review vs AI review
(section added by Dieter)
When reviewing, you may want to review without AI first and then with AI. In both cases you will observe "mistakes". Comparing the two reviews gives four categories (see the sketch after this list):
- a (confirmed) good move is one in which neither you nor the AI found a mistake
- a (confirmed) mistake is one which both you and the AI identified as such
- a blind spot is a move which you thought was good but which the AI identifies as a (big) mistake
- a chastise is a move which you thought was a (big) mistake but which the AI sees as good enough, or only a small mistake
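Seen as data, these four categories are just the combinations of your own verdict and the AI's, so a review can label every move mechanically. A rough sketch (the threshold and the names are made up for illustration):

```python
def classify(you_flagged_it, ai_point_loss, big=2.0):
    """Combine your pre-AI review verdict with the AI's point loss.

    you_flagged_it: did your own review consider the move a mistake?
    ai_point_loss: how many points the AI says the move lost; losses
    of at least `big` points count as big mistakes (arbitrary cutoff).
    """
    ai_flagged_it = ai_point_loss >= big
    if not you_flagged_it and not ai_flagged_it:
        return "confirmed good move"
    if you_flagged_it and ai_flagged_it:
        return "confirmed mistake"
    if not you_flagged_it and ai_flagged_it:
        return "blind spot"
    return "chastise"  # you worried, but the AI mostly shrugs

print(classify(you_flagged_it=False, ai_point_loss=4.5))  # blind spot
```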
Then there is move selection itself. Some of your moves won't feature among the AI's candidates. In some cases these are mistakes; in other cases they are incidental to the result.
And you can compare your positional judgment with the AI's. You might remember from your game that you were feeling confidently ahead or badly behind, and either of those impressions can be confirmed or contradicted by the AI.
Advice on reviewing with AI
The following advice on reviewing with AI bots comes from the author of KataGo.
Extreme interactivity is key. If you just look passively at the moves or variations the bot suggests, you'll notice sequences and shapes that don't make sense to you or are surprising. So "interrogate" the bot as to why. For example:
- It says your move is bad and you should tenuki and attack something else, but you don't see why that move works at all, and it even has the opponent "concede" that the move works and give up stones. Okay, so play down that line, but instead of having the opponent concede, play the move you'd expect the opponent to respond with, and see how the bot responds.
- It says you should play move X to threaten some stones, but that the opponent should confusingly tenuki? Go ahead and have the opponent respond, and see why - maybe it shows you that saving the stones doesn't work for the opponent, or works but leaves them too heavy.
- It says you should tenuki where you defended in the game, and it has the opponent "agree" by also playing elsewhere, but you don't see why, as the shape just sits there unsettled. Tenuki as it says and then have the opponent play the move you were afraid of. Maybe the bot shows you that you can just tenuki again, because the stones aren't big enough. Maybe it shows you that your shape was actually lighter than you thought and the opponent's threat isn't a big deal. Maybe it shows you tactically that the move you were afraid of simply doesn't work.
- Also, a big one is to pay attention to who has sente. I've often seen kyu players try to analyze their moves with a bot and be super confused as to why the bot says they should prefer a given result when it's locally worse than the result they got in the game, not realizing that they have sente in one variation and gote in the other.
Interactively "asking the bot questions" like this is far better than staring at the numbers, or even than looking at the PV the bot gives you for any of the moves in your game.
See also:
- Go-Playing Programs
- https://www.youtube.com/watch?v=Ajc9SLHn0Nk and subsequent videos, where Andrew Simons shows how to use Leela Zero for review.