
Confused about MakeMove and UndoMove


Hello!

I’ve implemented Minimax with Alpha-Beta pruning to determine the best moves; however, my bot sometimes does very silly things, and I am a bit puzzled about board.MakeMove().

If I call my recursive Minimax function after a call to board.MakeMove(m), will it actually receive the updated board state as input, or will it still use the board that was originally passed into the enclosing Minimax call?

So, for example, this code inside the Minimax function:

```
board.MakeMove(m);
(float eval, Move _) = Minimax(board, depth - 1, false, alpha, beta);

// Checking if the evaluation of this move is better than the current maxEval
if (eval > maxEval)
{
    maxEval = eval;
    bestMove = m;
}
board.UndoMove(m);
```

Will the Minimax call on the second line of the snippet see the new board state produced by board.MakeMove(m)? I am using this approach to evaluate all the possible child positions of a given board state, but if the board state isn’t properly updated, this will completely fail, of course.

Thanks a lot! Fun challenge so far 😃

Issue Analytics

  • State: closed
  • Created 2 months ago
  • Comments: 14

Top GitHub Comments

3 reactions
RedBedHed commented, Jul 24, 2023

Assuming your Minimax is good, you are probably suffering from the horizon effect.

Theoretical Minimax is indeed gigachad. It gives the definitive perfect evaluation of any position in the Chess game tree. It will find the principal variation and unequivocally prove the value of a position.

However, Minimax in practice is not so gigachad.

Because there are more nodes in the game tree than there are atoms in the universe (and enough unique positions to comprise the matter of a large planet in atoms), Minimax search for chess must be depth-limited.

So there is a horizon: a ply that the search cannot see past. We must approximate the value of nodes here with a “static” evaluation. The values we backpropagate are estimates of what a deeper search would show… But the only true tactical evaluation is a deeper search, and static evaluation, try as it might, cannot see past the horizon. The search, then, cannot prove the value at the root, and there is no hope of finding the definitive principal variation (otherwise Chess would be solved like tic-tac-toe and no one would be writing Chess engines).

But not all is lost. Although it is completely unreliable at loud positions (positions where tactical moves can be made), static evaluation is somewhat reliable at quiet positions (positions where no tactical “attack” moves can be made). A linear sum of the material and positional weights can describe the value of these positions with an acceptable level of accuracy.
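To make that concrete, here is a minimal sketch of such a linear evaluation, assuming conventional centipawn piece values and a positional bonus you accumulate elsewhere; the piece-count arrays and the psqtBonus parameter are illustrative, not part of any particular API:

```
// Minimal sketch of a "static" evaluation: a linear sum of material and
// positional terms. Piece values and the psqtBonus term are illustrative.
static int Evaluate(int[] whiteCounts, int[] blackCounts, int psqtBonus)
{
    // Conventional centipawn values: pawn, knight, bishop, rook, queen.
    int[] value = { 100, 300, 320, 500, 900 };

    int material = 0;
    for (int i = 0; i < value.Length; i++)
        material += value[i] * (whiteCounts[i] - blackCounts[i]);

    // Positive scores favor White; negate for a side-to-move-relative score.
    return material + psqtBonus;
}
```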

So we make it our goal to only evaluate quiet positions.

How do we do this? The answer is Quiescence Search. The idea is very simple: stand pat with a static evaluation. If it raises alpha, bet that this evaluation is good and set alpha to it; otherwise, bet on the original alpha. If the static evaluation happens to fail high, just return beta. Otherwise, try ONLY attack moves to improve alpha (which is now either the static eval or the original alpha). If there are no attack moves, or none of them beat alpha, just return alpha. Otherwise return the improved value between the original alpha and beta, or beta in the case of a fail high. This is called the fail-hard implementation; you can easily change it to fail soft.

Here is the basic pseudocode: https://www.chessprogramming.org/Quiescence_Search
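If it helps, here is a rough C# sketch of that fail-hard loop in the negamax style used by the linked pseudocode. It assumes a mutable board with MakeMove/UndoMove as in your snippet, a capture-only move generator, and an Evaluate() that scores from the side to move; treat those names as placeholders for your own setup rather than guaranteed API.

```
// Fail-hard quiescence search sketch (negamax convention). Assumes
// MakeMove/UndoMove mutate the board in place, GetLegalMoves(capturesOnly: true)
// yields only captures, and Evaluate() scores from the side to move.
float Quiesce(Board board, float alpha, float beta)
{
    // Stand pat: bet on the static evaluation of this (hopefully quiet) node.
    float standPat = Evaluate(board);
    if (standPat >= beta)
        return beta;          // fail high: fail-hard returns beta
    if (standPat > alpha)
        alpha = standPat;     // the static eval raises alpha

    // Try ONLY "loud" moves (captures) to see if they improve alpha.
    foreach (Move m in board.GetLegalMoves(capturesOnly: true))
    {
        board.MakeMove(m);
        float score = -Quiesce(board, -beta, -alpha);
        board.UndoMove(m);

        if (score >= beta)
            return beta;      // fail high
        if (score > alpha)
            alpha = score;    // improved lower bound
    }
    return alpha;             // best value found within the window
}
```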

Try it out! It should fit in the 1024 chars. You will also want to add code that allows Qsearch to escape check (try out all moves at an in-check node), and to handle draws and checkmates.

1 reaction
RedBedHed commented, Jul 24, 2023

[screenshot: Code_HTLODrHBUm]

> Am I mistaken or does this show that the MakeMove function doesn’t give me a new board state?

It does give a new board state. Sharpy (my bot) uses the function and plays a decent game with it.
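If you want to convince yourself that MakeMove/UndoMove really do update and restore the board correctly, a perft-style node count (see the Perft link below) is the standard sanity check. This is only a sketch; GetLegalMoves() is an assumed name for whatever move generator your Board API exposes.

```
// Perft sanity check for MakeMove/UndoMove: count leaf nodes of the legal
// move tree. From the standard starting position the counts should be
// 20, 400, and 8902 at depths 1, 2, and 3; a mismatch usually points at
// make/undo or move generation. GetLegalMoves() is an assumed API name.
long Perft(Board board, int depth)
{
    if (depth == 0)
        return 1;

    long nodes = 0;
    foreach (Move m in board.GetLegalMoves())
    {
        board.MakeMove(m);               // mutates the shared board in place
        nodes += Perft(board, depth - 1);
        board.UndoMove(m);               // restores the previous state exactly
    }
    return nodes;
}
```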

Read more comments on GitHub.

Top Results From Across the Web

  • chess - What's the point of "make move" and "undo ...": The answer to your question depends heavily on a few factors, such as how you are storing the chess board, and what your...
  • Perft - Chessprogramming wiki: Perft is mostly for debugging purposes. It works mainly with functions: move generators, make move, unmake move. They all are very basic and...
  • C++ vs Java Engine move generation performance: The C++ version takes ~8.5 seconds (my perft stops at depth 1, and zobrist keys are being updated on makeMove and undoMove).
  • [Guest Post] How to Write a Chess Variant Website in Six ...: The fastest way to generate moves for any piece is to just look up the possible moves in a precomputed table. For jumping...
  • Why is this slow? - Update : r/rust: is confused with exponentiation in the real world. ... makeMove(move) r += 1 + moveGen(it, depth - 1) board.undoMove(move) } return r }...
