
Shouldn't this return beta instead? #18

Open

BinaryInitials opened this issue May 29, 2023 · 2 comments

Comments

@BinaryInitials

Hey, I know you probably haven't maintained this project in a while. I found your YouTube channel recently when I was in desperate need of ways to improve my own chess engine, and your code helped me with the quiescence search portion I was struggling with, so I'm already grateful!

I was going through your code and noticed that the transposition table has some logic that mimics the alpha-beta pruning in negamax (which is clever, since it shaves off a bit more time). But in the beta portion of that logic, i.e. when the node type is set to lower bound (a beta cutoff), you return the true score instead of beta. Shouldn't it be capped at beta, in other words at the maximum score the opponent would allow?
This is the line I am referring to:
https://github.com/SebLague/Chess-AI/blob/d0832f8f1d32ddfb95525d1f1e5b772a367f272e/Assets/Scripts/Core/TranspositionTable.cs#L70
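
Roughly, this is the shape of the logic I mean — just a sketch with invented names, not your actual code:

```csharp
// Sketch of a transposition-table probe; the names here are made up,
// not taken from TranspositionTable.cs.
enum NodeType { Exact, LowerBound, UpperBound }

struct Entry
{
    public int Score;
    public NodeType Type;
}

static class ProbeSketch
{
    // Returns a score the search can use directly, or null if the stored
    // entry can't cause a cutoff at the current alpha/beta window.
    public static int? Lookup(Entry entry, int alpha, int beta)
    {
        if (entry.Type == NodeType.Exact)
            return entry.Score;      // exact value: safe to return as-is

        if (entry.Type == NodeType.LowerBound && entry.Score >= beta)
            // return beta;          // <- what I expected (cap at beta)
            return entry.Score;      // <- returning the true (stored) score instead

        if (entry.Type == NodeType.UpperBound && entry.Score <= alpha)
            return entry.Score;      // symmetric case on the alpha side

        return null;                 // no cutoff possible from this entry
    }
}
```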

It's a bit curious, because you do cap the return at beta in the actual alpha-beta pruning evaluation of your code:

https://github.com/SebLague/Chess-AI/blob/main/Assets/Scripts/Core/AI/Search.cs#L183

Thanks in advance!

@mcthouacbb

There are two variants of AB pruning: fail-soft and fail-hard.
In the fail-hard variant, returned scores are guaranteed to lie between alpha and beta.
In the fail-soft variant, returned scores can fall outside the alpha/beta bounds.
In general, there isn't much difference between the results of fail-soft and fail-hard. Fail-soft can search fewer nodes but introduce a bit more search instability; nothing major changes between the two.
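
Concretely, the only difference is what gets returned at a beta cutoff and at the end of the move loop. A rough sketch (the board/move helpers here are placeholders, not code from this repo):

```csharp
// Rough sketch of the two variants in a negamax search.
// Board, Move, GenerateMoves, MakeMove and Evaluate are placeholders,
// not types or methods from this repository.
int Search(Board board, int depth, int alpha, int beta)
{
    if (depth == 0)
        return Evaluate(board);

    int bestScore = int.MinValue + 1;   // +1 so the value can be safely negated
    foreach (Move move in GenerateMoves(board))
    {
        int score = -Search(board.MakeMove(move), depth - 1, -beta, -alpha);
        if (score > bestScore) bestScore = score;

        if (score >= beta)
        {
            // Fail-hard: never return anything outside [alpha, beta].
            // return beta;

            // Fail-soft: return the score actually found, which may exceed beta.
            return bestScore;
        }
        if (score > alpha) alpha = score;
    }

    // Fail-soft returns bestScore (possibly below alpha);
    // fail-hard would return alpha here if no move raised it.
    return bestScore;
}
```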

The implementation in the transposition table mimics the behavior of fail-soft, whereas the main search loop uses a fail-hard approach. Both are correct implementations of AB pruning; they just use different variants.

Note that because the transposition table uses the fail-soft variant, the entire search can be treated as fail-soft, since there is no longer any guarantee that returned scores will lie between the alpha and beta bounds.

@BinaryInitials
Author

Excellent explanation, thank you
