The main idea of check extensions is to extend the search when the side to move is in check. Because the replies are forced, we assume that a lot is going on
in that branch, and it's a good idea to search deeper there to avoid horizon effects.
Gets the check extension; for now it's a constant 1.
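As a minimal sketch (the helper name and signature are illustrative, not the engine's actual API), this could look like:

```rust
/// Hypothetical check extension helper: returns the constant 1 when the
/// side to move is in check, and 0 otherwise.
fn check_extension(king_checked: bool) -> i8 {
    if king_checked { 1 } else { 0 }
}

// Usage inside the search: extend the remaining depth before recursing,
// e.g. let new_depth = depth - 1 + check_extension(king_checked);
```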
The main idea of internal iterative reduction is that nodes without hash moves are potentially less important, so we
try to save time here by reducing the depth a little. This is not always true, but the occasional inaccuracy should be compensated for by the deeper overall search the saved time allows.
Gets the internal iterative depth reduction, based on depth. The further from the horizon we are, the more reduction is applied.
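A sketch of one possible shape for this function; the constants below are assumptions, not the engine's tuned values:

```rust
/// Hypothetical internal iterative reduction: nothing close to the horizon,
/// then growing slowly with the remaining depth, capped at a maximum.
fn iir_reduction(depth: i8) -> i8 {
    const IIR_MIN_DEPTH: i8 = 4;     // don't reduce near the horizon
    const IIR_STEP: i8 = 4;          // one extra ply of reduction per 4 plies
    const IIR_MAX_REDUCTION: i8 = 3; // never reduce by more than this

    if depth < IIR_MIN_DEPTH {
        return 0;
    }

    (1 + (depth - IIR_MIN_DEPTH) / IIR_STEP).min(IIR_MAX_REDUCTION)
}
```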
The main idea of late move pruning is to prune moves which are near the horizon and were scored low by the history table.
We assume that there's little chance a move near the end of the list will improve the score, so there's no point spending time on it.
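As a sketch, the pruning condition could combine the three signals mentioned above (distance to the horizon, position in the move list, and the history score); all thresholds here are illustrative:

```rust
/// Hypothetical late move pruning condition: only near the horizon, only for
/// moves late in the list, and only when the history table scored them low.
fn lmp_can_prune(depth: i8, move_index: usize, history_score: u32, max_history_score: u32) -> bool {
    const LMP_MAX_DEPTH: i8 = 3;

    // Allow a few more moves per remaining ply before pruning kicks in.
    let move_threshold = 3 + 3 * depth.max(0) as usize;

    // Treat moves scored well below the best history entry as unpromising.
    let history_threshold = max_history_score / 20;

    depth <= LMP_MAX_DEPTH && move_index >= move_threshold && history_score < history_threshold
}
```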
The main idea of late move reduction is to reduce the search depth of quiet moves which aren't promising and very likely won't improve the score.
This is the least risky of these techniques (it's used inside the PVS framework, which takes care of re-searching whenever a move turns out better than expected),
so it's also applied in PV nodes.
Gets the late move depth reduction, based on move_index. The lower the move was scored, the larger the returned reduction.
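A sketch of one common shape for this function, with illustrative constants: the first few moves are never reduced, and the reduction grows with the move index up to a cap:

```rust
/// Hypothetical late move reduction based on the position in the move list:
/// later (lower-scored) moves get a larger reduction, capped at a maximum.
fn lmr_reduction(move_index: usize) -> i8 {
    const LMR_MIN_INDEX: usize = 3;  // never reduce the first few moves
    const LMR_STEP: usize = 8;       // one extra ply of reduction per 8 moves
    const LMR_MAX_REDUCTION: i8 = 3; // never reduce by more than this

    if move_index < LMR_MIN_INDEX {
        return 0;
    }

    (1 + ((move_index - LMR_MIN_INDEX) / LMR_STEP) as i8).min(LMR_MAX_REDUCTION)
}
```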
The main idea of null move pruning is to prune nodes for which the search returns a score above beta even if we skip our move (which allows
the opposite color to make two moves in a row). This is based on the null move observation, which says that there's almost always a better alternative than
doing nothing (zugzwang being the exception).
Gets the null move pruning depth reduction, based on depth. The further from the horizon we are, the more reduction is applied.
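A sketch of the reduction formula and how it's typically used; the linear-in-depth shape is a common choice, but the constants are assumptions:

```rust
/// Hypothetical null move pruning reduction, growing with the remaining depth.
fn nmp_reduction(depth: i8) -> i8 {
    const NMP_BASE_REDUCTION: i8 = 2;
    const NMP_DEPTH_DIVISOR: i8 = 4; // one extra ply of reduction per 4 plies

    NMP_BASE_REDUCTION + depth / NMP_DEPTH_DIVISOR
}

// Usage: make a null move, search at depth - 1 - nmp_reduction(depth) with a
// zero window around beta, and prune the node if the score is still >= beta.
```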
The main idea of razoring is to detect and prune nodes which, based on static evaluation, are hopeless compared to the current alpha, and where
the chance of improving the score is too small to justify spending time. To make this safer and avoid skipping positions where we're in the
middle of a capture sequence, a quiescence search is performed to verify that the final score is still below alpha - margin.
Gets the razoring margin, based on depth. The further from the horizon we are, the bigger the margin we should take to determine if the node can be pruned.
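A sketch of the margin and the surrounding logic, with illustrative values:

```rust
/// Hypothetical razoring margin, growing with the remaining depth.
fn razoring_margin(depth: i8) -> i16 {
    const RAZORING_BASE_MARGIN: i16 = 150;
    const RAZORING_MARGIN_PER_PLY: i16 = 100;

    RAZORING_BASE_MARGIN + RAZORING_MARGIN_PER_PLY * depth as i16
}

// Usage: if static_eval + razoring_margin(depth) is still below alpha, run a
// quiescence search and prune only if its score also stays below the bound,
// mirroring the verification step described above.
```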
Aspiration window wrapper for the entry point of the regular search; look at run_internal for more information.
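A minimal sketch of such a wrapper, with the actual search abstracted behind a closure standing in for run_internal; the window constants are assumptions:

```rust
/// Hypothetical aspiration window loop: start with a narrow window around the
/// previous iteration's score and widen it whenever the search fails low/high.
fn aspiration_search(previous_score: i16, mut search: impl FnMut(i16, i16) -> i16) -> i16 {
    const INITIAL_WINDOW: i16 = 25;
    let mut window = INITIAL_WINDOW;

    loop {
        let alpha = previous_score.saturating_sub(window);
        let beta = previous_score.saturating_add(window);
        let score = search(alpha, beta);

        // A score at or outside the window bounds is only a bound, not an
        // exact value, so widen the window and search again.
        if score <= alpha || score >= beta {
            window = window.saturating_mul(2);
        } else {
            return score;
        }
    }
}
```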
Entry point of the regular search, with the generic ROOT parameter indicating if this is the root node (where move filtering might happen) and the PV parameter
determining if the current node is a PV (principal variation) node in the PVS framework. The implementation contains a typical alpha-beta approach, together with
a bunch of reductions and prunings to optimize the search. The most important parameter here, context, contains the current state of the search, the board state,
and statistics, and is passed by reference to all nodes. Besides the obvious parameters like depth, ply, alpha and beta, there's also allow_null_move,
which prevents two null move checks in a row, and friendly_king_checked, which is used to share the friendly king check status between nodes (it's always
calculated one depth earlier, as it's used as one of the LMR constraints).
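A simplified sketch of what such a signature could look like, based purely on the parameters described above; SearchContext is a stand-in for the engine's real context type and the body is omitted:

```rust
struct SearchContext {
    // Board state, statistics, transposition table, history/killer tables, ...
}

fn run_internal<const ROOT: bool, const PV: bool>(
    context: &mut SearchContext,
    depth: i8,
    ply: u16,
    alpha: i16,
    beta: i16,
    allow_null_move: bool,       // false directly after a null move
    friendly_king_checked: bool, // computed one ply earlier, reused for LMR
) -> i16 {
    // Alpha-beta search with the extensions, reductions and prunings
    // described in this section.
    unimplemented!()
}
```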
The main idea of static null move pruning (also called reverse futility pruning) is to prune nodes which, based on static evaluation, are too
good compared to the current beta and will very likely end up as cut-nodes. To save time, we skip the move loop entirely and return beta plus some margin as the score.
The concept is very similar to null move pruning, but without performing any search.
Gets the static null move pruning margin, based on depth. The further from the horizon we are, the bigger the margin we should take to determine
if the node can be pruned.
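A sketch of the margin and the pruning condition, with illustrative constants:

```rust
/// Hypothetical static null move (reverse futility) pruning margin,
/// growing with the remaining depth.
fn snmp_margin(depth: i8) -> i16 {
    const SNMP_MARGIN_PER_PLY: i16 = 120;
    SNMP_MARGIN_PER_PLY * depth as i16
}

/// Prune only near the horizon, and only when the static evaluation
/// exceeds beta by the full margin.
fn snmp_can_prune(depth: i8, static_eval: i16, beta: i16) -> bool {
    const SNMP_MAX_DEPTH: i8 = 6;
    depth <= SNMP_MAX_DEPTH && static_eval - snmp_margin(depth) >= beta
}
```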