How to make a chess negamax algorithm prefer captures and other good moves found shallower in the decision tree?

Suppose we have the following position: 8/1K6/8/4q2P/8/8/5k2/8 b - - 3 2.

My chess engine produces the correct move of Qxh5 when the search depth is below 3. Beyond that, the problem seems to be that it thinks the capture can be made later (it considers Qh2 the best move). I cannot see any obvious way to prefer branches where the capture is made earlier in the evaluation algorithm, since that would break the evaluation symmetry needed for negamax (and minimax) to work.

Just for reference, here is my actual negamax code (adapted from Wikipedia):

int Search::negamaxSearch (Board& positionToSearch, int depth, int alpha, int beta) {
    std::vector<Move> moves = positionToSearch.getMoves();

    if (moves.empty()) {
        if (positionToSearch.isCheck()) {
            return EvaluationConstants::LOSE;
        } else {
            return EvaluationConstants::DRAW;
        }
    }

    if (depth == 0) {
        return BoardEvaluator::evaluateSimple(positionToSearch);
    }

    orderMoves(positionToSearch, moves, depth);

    int positionValue = -1e9;
    for (auto move : moves) {
        positionToSearch.executeMove(move);
        int newValue = -negamaxSearch(positionToSearch, depth - 1, -beta, -alpha);
        positionToSearch.unmakeMove();

        positionValue = std::max(positionValue, newValue);
        alpha = std::max(alpha, newValue);

        if (alpha >= beta) {
            ++cutoffs;
            break;
        }
    }

    return positionValue;
}

And the evaluation function:

int BoardEvaluator::evaluateSimpleOneSide (const Board& board, PieceColor perspective) {
    if (board.isCheckmate()) return EvaluationConstants::LOSE;

    int value = 0;
    for (int pieceType = 0; pieceType < PieceTypes::KING; ++pieceType) {
        value += board.getPieces(perspective).boards[pieceType].popCount() * pieceValues[pieceType];
    }

    return value;
}

int BoardEvaluator::evaluateSimple (const Board& board) {
    return evaluateSimpleOneSide(board, board.getTurn()) - evaluateSimpleOneSide(board, flip(board.getTurn()));
}

Is there something obvious wrong that I haven't noticed?
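For readers hitting the same "it delays the capture" symptom: with a fixed-depth search and a material-only evaluation, capturing now and capturing later score identically, so nothing above prefers the earlier capture. The usual remedies are quiescence search and ply-adjusted mate scores. Below is a minimal, self-contained sketch of the ply-adjustment idea on a toy game tree (all names are illustrative, not from the engine above):

```python
# Toy game tree: each key maps to the moves available to the side to move;
# a node with no moves is a loss for the side to move (like checkmate).
# Both root moves win, but "A" mates at ply 1 and "B" only at ply 3.
MATE = 1000

TREE = {
    "root": ["A", "B"],
    "A": [],            # opponent to move with no moves: immediate win
    "B": ["C"],
    "C": ["D"],
    "D": [],            # the same win, postponed by two plies
}

def negamax(node, ply):
    children = TREE[node]
    if not children:
        # Scale the loss by ply: losing nearer the root is worse, so the
        # winning side prefers the branch that mates sooner.
        return -(MATE - ply), None
    best_score, best_move = -MATE, None
    for child in children:
        score = -negamax(child, ply + 1)[0]
        if score > best_score:
            best_score, best_move = score, child
    return best_score, best_move

score, move = negamax("root", 0)  # picks "A": MATE - 1 beats MATE - 3
```

In the engine above, the analogous change would be returning something like LOSE + ply instead of a flat LOSE, so shallower wins outrank deeper ones without breaking the symmetry negamax needs.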

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/kaapipo
πŸ“…︎ Jan 09 2022
🚨︎ report
STUCK: Understanding negamax with transposition tables

Note: I understand how min-max works and I understand alpha beta pruning

Even if you can only answer one of my questions below, I would be infinitely appreciative.

No matter how much I look at it and research, I just cannot understand transposition tables, specifically why we cannot always use the exact value for the SAME position.

I am referring to the pseudocode here.

I read that we also cannot store a move for the UPPERBOUND. Why would this be the case? We already explored ALL the children for that node, so can't we be guaranteed to know the optimal move? Why would we not be able to store the best move?

On the contrary we can store the best move for the LOWERBOUND? The branch was pruned and we weren't able to get the best possible response so why would this be the case?

Finally, I understand why we can only use a table entry at a depth closer to the leaf than the root (since we have more accurate information from a deeper search). What I don't get is why, for a node that is the SAME as a previous node, we can't return the value that was found. At least in the case of the UPPERBOUND, don't we already have the optimal score we will achieve?
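As a reference point while untangling this: in the standard pseudocode, the stored flag records how the saved value relates to the (alpha, beta) window in force when it was computed, which is why an entry for the SAME position is not always reusable — a fail-low (UPPERBOUND) or fail-high (LOWERBOUND) value is only a bound on the true score, not the score itself. A sketch of the probe/store steps (illustrative names, assuming a dict-backed table):

```python
# Sketch of the probe/store logic from the standard negamax-with-TT
# pseudocode. The flag records how the saved value relates to the
# search window it was computed with.
EXACT, LOWERBOUND, UPPERBOUND = 0, 1, 2
table = {}

def store(key, depth, value, alpha_orig, beta):
    if value <= alpha_orig:
        flag = UPPERBOUND   # fail-low: no move beat alpha, value is only a ceiling
    elif value >= beta:
        flag = LOWERBOUND   # beta cutoff: search stopped early, value is only a floor
    else:
        flag = EXACT        # searched the whole window: true minimax value
    table[key] = (depth, value, flag)

def probe(key, depth, alpha, beta):
    entry = table.get(key)
    if entry is None or entry[0] < depth:
        return None         # missing, or searched too shallow to trust
    _, value, flag = entry
    if flag == EXACT:
        return value        # exact scores are reusable in any window
    if flag == LOWERBOUND and value >= beta:
        return value        # true score >= value >= beta: still a cutoff
    if flag == UPPERBOUND and value <= alpha:
        return value        # true score <= value <= alpha: still fails low
    return None             # the bound does not decide this window
```

On the best-move question, the usual informal explanation is: after a fail-low, no move raised alpha, and each child only returned a bound rather than an exact score, so the recorded "best" move is not proven optimal; the move that caused a beta cutoff (LOWERBOUND) is at least proven strong enough to refute the position, which is why it is worth storing for move ordering.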

Thanks for any help; this has been frustrating me for so long and I cannot seem to find anything online that clarifies these points for me.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Brussel01
πŸ“…︎ Sep 02 2021
🚨︎ report
Help with Negamax with AB pruning - AI not playing imminent win

First time poster here. I'm implementing negamax with AB pruning using the Wikipedia pseudocode. I'm a little confused because it isn't behaving as it should in some cases. This is for a simple chess variant that I'm making.

Specifically, the AI is not taking an imminent winning move - say if AI is playing black, and it can play a move that will lead to an immediate win, it is still "holding on" to that move, continuing to evaluate other positions and eventually picking a different move that would eventually lead to a win (with the same move that could've been played first). I'm trying to debug but I'd be grateful if someone can verify that my algorithm looks right.

I'm also a little confused about the evaluation function -

  1. My eval function returns a positive score to indicate advantage for white and a negative score for black. The absolute value of the score indicates how big the advantage is. For example, if white is one pawn ahead, the score would be +10 (10 per extra pawn), and if black is two pawns ahead, it would return -20.

  2. In my negamax function, where I call the evaluator.Score(), you can see that I'm confused whether or not to multiply the result by +/-1 or if it is already taken care of by my eval function (as said in my previous point). I tried both in different runs, and it did not change the problem I've described. But it'd be good to know what the correct code should be.

    // ScoredMove is a struct with Move and score as fields.
    // My evaluator.Score() method returns a positive number if white has advantage, or a negative number if black has advantage. Higher the absolute value, better the advantage.

     ScoredMove Negamax(BoardState position, int depth, int low, int hi, bool isBlack)
     {
         if (depth == 0 || GameOver() != RESULT_GAME_NOT_OVER)
         {
             // I'm not clear if I should multiply by +/-1 or if the Score() method already returns what is necessary
             //int ret = evaluator.Score(position) * (isBlack ? -1 : 1);
             int ret = evaluator.Score(position);
             return new ScoredMove(null, ret);
         }
         List<Move> moves = engine.GetAllMoves(isBlack);
         ScoredMove best = new ScoredMove(null, int.MinValue);
         foreach (var move in moves)
         {
             engine.MakeMove(move);
             ScoredMove sm = Negamax(engine.GetBoardState(), depth - 1, -hi, -low, !isBlack);
             engine.UndoMove();
    
... keep reading on reddit ➡
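On point 2 above: negamax requires every returned score to be from the perspective of the side to move, so with a white-positive evaluator the commented-out `* (isBlack ? -1 : 1)` line is the right idea. A tiny sketch of the convention, with hypothetical names rather than the poster's engine:

```python
def leaf_score(white_relative_score, black_to_move):
    # Negamax negates scores at every level, so the leaf must report the
    # position from the perspective of the player whose turn it is there.
    return -white_relative_score if black_to_move else white_relative_score

# White a pawn up (+10 in the post's convention) is -10 when black is to move:
white_up_a_pawn = 10
as_seen_by_black = leaf_score(white_up_a_pawn, black_to_move=True)
```

Without this flip, both sides maximize the same white-relative number, which can produce exactly the "holds on to the winning move" behavior described above.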

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/tsunamisugumar
πŸ“…︎ May 01 2020
🚨︎ report
Few questions about implementation of Minmax with alpha beta pruning and negamax.

I have been following this engine as a reference to make my own engine, though I didn't get how min-max (negamax?) is implemented in it. I looked up some pseudocode and tried to modify it for my program, and ended up with this, though it is resulting in an error.

File "ajax.py", line 76, in <module>
    print (minimax(3, -123456, +123456, whitemove))
  File "ajax.py", line 48, in minimax
    eval = minimax(depth - 1, alpha, beta, False)
  File "ajax.py", line 60, in minimax
    eval = minimax(depth - 1, alpha, beta, True)
  File "ajax.py", line 49, in minimax
    game.pop(i)
TypeError: pop() takes 1 positional argument but 2 were given

I have used the python-chess module. game.pop() is to revert a move. I can't figure out how to solve this error since I am only passing one argument (i) in the function.

  • Can someone please direct me to a better, more readable implementation of minmax, or explain this one, or explain what is wrong with my code?

  • How is negamax different? I read a bit from here but I can't seem to grasp it.
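On the error itself: in python-chess, Board.pop() takes no arguments — it undoes the most recently pushed move and returns it — so game.pop(i) supplies an extra argument beyond the implicit self, which is exactly what the TypeError reports. A minimal push/pop round trip (assuming the python-chess package is installed):

```python
import chess  # the python-chess package the post is using

board = chess.Board()
move = chess.Move.from_uci("e2e4")

board.push(move)       # make a move
undone = board.pop()   # undo it: pop() takes NO arguments and returns the move

# board is now back at the starting position.
```

So the fix in the snippet that produced the traceback is simply to call game.pop() with no argument after exploring each move.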
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/ajx_711
πŸ“…︎ May 25 2020
🚨︎ report
[JS] Help with understanding negamax pseudo-code

I'm learning about graphs and trees and the algorithms to traverse them. As a fun exercise, I decided to build a Tic-Tac-Toe game tree using negamax with alpha-beta pruning to evaluate the next best move. I have watched some YT videos and read articles on minimax, negamax, and alpha-beta pruning, and I understand the theory. I'm having a bit of a hard time putting it into practice, and I'm hoping you can help me understand the pseudocode I have found on Wikipedia. Look for my specific questions below the code.

function negamax(node, depth, α, β, color)
     if depth = 0 or node is a terminal node
         return color * the heuristic value of node

     childNodes := GenerateMoves(node)
     childNodes := OrderMoves(childNodes)
     bestValue := −∞
     foreach child in childNodes
         v := −negamax(child, depth − 1, −β, −α, −color)
         bestValue := max( bestValue, v )
         α := max( α, v )
         if α ≥ β
             break
     return bestValue

Initial call for Player A's root node

rootNegamaxValue := negamax( rootNode, depth, −∞, +∞, 1)

Line 5 – I'm confused. What is this variable childNodes? Is it local to this function, or is it a property of node that contains node's child nodes? Is this line basically creating child nodes for node and assigning those to node? In other words, in JavaScript, would this be written something like this:

node.childNodes = GenerateMoves(node);

where GenerateMoves(node) is a function potentially returning an array of node objects representing the next possible game states?

Line 6 – A bit confused again. It looks like it is assigning the childNodes variable the value returned by the OrderMoves(childNodes) function. I guess it will all make sense once I understand line 5.

Line 7 – Is bestValue a local variable, or again a property of node that would be expected to be written

node.bestValue = -Infinity;

in JavaScript?

Line 9 – Again, is v a local variable, or is it a property of the node object? What does v stand for?

Line 10 – I don't understand what the max() function does. Can anyone explain?

Initial call – what is the rootNegamaxValue variable? Is it a property of the node object?

I feel that I'm lacking context to better understand this algorithm. If anyone feels generous, would it be possible to expand on that pseudocode to see how it would all fit with the tree and node environment? Ideall

... keep reading on reddit ➡
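For what it's worth: childNodes, bestValue, and v in the pseudocode are all plain local variables (not properties of node), max() just returns the larger of its two arguments, and color * heuristic flips an A-relative score into a score for the player to move. Here is a literal, runnable rendering of the pseudocode in Python over a toy two-level tree (the tree and helper names are invented for illustration):

```python
import math

# Invented toy tree: internal nodes are strings, leaves are integers holding
# the heuristic value from Player A's point of view.
TREE = {"root": ["L", "R"], "L": [3, 5], "R": [2, 9]}

def generate_moves(node):
    return TREE[node]

def is_terminal(node):
    return isinstance(node, int)

def negamax(node, depth, alpha, beta, color):
    if depth == 0 or is_terminal(node):
        # color (+1 for A, -1 for B) turns the A-relative heuristic into a
        # score for whoever is to move at this node.
        return color * (node if is_terminal(node) else 0)

    child_nodes = generate_moves(node)   # a plain local variable
    best_value = -math.inf               # local too, not a property of node
    for child in child_nodes:
        # v: the child's score, negated into this node's perspective
        v = -negamax(child, depth - 1, -beta, -alpha, -color)
        best_value = max(best_value, v)  # max() = the larger of two numbers
        alpha = max(alpha, v)
        if alpha >= beta:
            break                        # prune the remaining siblings
    return best_value

# Initial call for Player A's root node:
root_negamax_value = negamax("root", 2, -math.inf, math.inf, 1)
```

Here rootNegamaxValue is just an ordinary variable in the caller's scope that receives the score of the best line for Player A.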

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/Neoflash_1979
πŸ“…︎ Dec 26 2016
🚨︎ report
Why is my minmax (negamax, with alpha-beta pruning) algorithm so slow?

I'm trying to implement an AI for the board game Pentago in Haskell. I have previously done this in Python (but lost the code) and, if I remember correctly, my Python implementation was faster. In Python I was searching at least 4 or 5 plies deep in a reasonable amount of time, but this Haskell implementation takes very long to reach 3 plies. Maybe my memory is wrong? Whatever the case, I'm hoping to speed up the following implementation of negamax with alpha-beta pruning:

negamaxScore :: Int -> Space -> Board -> Int
negamaxScore depth color = abPrune depth color (-(8^8)) (8^8)
    where abPrune depth color a b board
              | depth == 0 = let s = scoreBoard board in if color == White then s else negate s
              | otherwise = (\(a,_,_) -> a) $ foldl' f (-(8^8), a, b) (nub $ possibleMoves color board)
              where f :: (Int, Int, Int) -> Board -> (Int, Int, Int)
                    f x@(bestVal, a, b) board = if a >= b then x
                                                else let val = abPrune (depth-1) (otherColor color) (negate b) (negate a) board
                                                         bestVal' = max bestVal val
                                                         a' = max a val
                                                     in (bestVal', a', b)

I would appreciate any suggestions on how I can improve the style or performance of this code.

Relevant Links:

http://en.wikipedia.org/wiki/Negamax#NegaMax_with_Alpha_Beta_Pruning

http://en.wikipedia.org/wiki/Pentago

Full code: https://github.com/DevJac/haskell-pentago/blob/master/src/Main.hs

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/Buttons840
πŸ“…︎ Dec 14 2014
🚨︎ report
[C++] Having trouble with my depth limited negamax search function for my connect 4 game.

Here is the code; a few things like color are disabled because they wouldn't run on that site, but it should be understandable.

The two functions where the problem may be occurring are negamax() and pick_best_move(). Everything else works great.

For some reason my negamax appears to only react to immediate victories or losses. Rather than taking the best overall move, it will instead stack pieces on the right side of the board whether the move is beneficial or not. It only breaks out of this behavior to stop an enemy attempt at a win or (maybe, it's hard to tell) to secure an immediate win. I'm not sure what causes this behavior though, since it should be trying to win from the start, shouldn't it?

HERE is a link to my thread on Stack Overflow where there's a bit more info, although I haven't received any answers or help there yet either.

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Pixelwind
πŸ“…︎ Jul 30 2017
🚨︎ report
minimax-rs: a generic implementation of Negamax github.com/kinghajj/minim…
πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/kinghajj
πŸ“…︎ Sep 27 2015
🚨︎ report
Help with negamax algorithm with alpha-beta pruning

Hi guys, I need your help understanding negamax algorithm with alpha-beta pruning.

I know how ordinary alpha-beta pruning works. In short, beta can be changed on min nodes, alpha on max nodes, and whenever alpha >= beta, pruning occurs.

But how does alpha-beta pruning with negamax work? I tried finding some YT clips, but the internet is not rich with explanations of this algorithm ;D.

I found the explanation on Wikipedia, but I don't understand it.

How do I determine alpha and beta, and how do I pass determined values to parent / child nodes while traversing the tree? Any help is appreciated.

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/djphilosopher
πŸ“…︎ Apr 19 2015
🚨︎ report
Negamax algorithm help - just the general idea

I've been working on writing a negamax algorithm for a tic-tac-toe game I've written. The problem is, I can't quite get it to work. The main problem I'm having is with what value I should be returning each time. I know the function's result is negated each time it is called recursively, so should I always return a positive value (regardless of who wins), or how does that work? Right now my function runs as such:

    - (int) nega_max: (int) u_depth
    {
        int best_val = (int)-INFINITY;
        int val, i;
        int win = 0; // X_PIECE [win], O_PIECE [win], 0 [not done / tie]

        win = [self check_win];

        if ( u_depth <= 0 || win != 0 )
        {
            return [self evaluate: win cur_depth: u_depth];
        }

        for( valid_moves )
        {
            [self make_next_move: i];
            [self swap_player]; // change between min and max
            // since the player is switched, it's now run from the opposite player's perspective and negated
            val = -[self nega_max: (u_depth - 1) ];
            [self undo_last_move: i];
            if (val > best_val)
            {
                best_val = val;
            }
        }
        return best_val;
    }

My check_win function returns 1 if max wins, -1 if min wins, or 0 if the game is not finished or is a tie.

My evaluate function returns the check_win value (1 or -1) multiplied by a value to adjust for depth. If there wasn't a win it returns a heuristic value which basically accounts for the number of potential wins and is positive for max and negative for min.

So, is my check_win/evaluate function returning what it should be, or should I not be returning negatives/values specific to one perspective (max vs min)?

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/NSLogan
πŸ“…︎ Feb 27 2011
🚨︎ report
Minimax/Negamax with limited knowledge

I'm newly familiar with minimax-style AI algorithms, and very excited about the possibilities. My main issue with them, other than efficiency of course, is that they seem to inherently operate on perfect knowledge; they "know" for a fact what effect each edge in the game tree is going to have. There are algorithms like expectiminimax, but those only really deal with things like dice rolls, where the outcomes and their probabilities are firmly defined.

Are there similar algorithms that can deal with games where the AI player has "fog of war" or otherwise limited information about the state of the world?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/TOASTEngineer
πŸ“…︎ Jun 24 2015
🚨︎ report
Negamax implementation doesn't appear to work with tic-tac-toe

I've implemented negamax as it can be found on Wikipedia, including alpha/beta pruning.

However, it seems to favor a losing move, which should be an invalid result.

The game is Tic-Tac-Toe, I've abstracted most of the game play so it should be rather easy to spot an error within the algorithm.

int evaluate(Board& board) {
    int score = board.isWon() ? 100 : 0;

    for(int row = 0; row < 3; row++)
        for(int col = 0; col < 3; col++)
            if (board.board[row][col] == 0)
                score += 1;

    return score;
}

int negamax(Board& board, int depth, int player, int alpha, int beta) {
    if (board.isWon() || depth <= 0) {
        return player * evaluate(board);
    }

    list<Move> allMoves = board.getMoves();

    if (allMoves.size() == 0)
        return player * evaluate(board);

    for(list<Move>::iterator it = allMoves.begin(); it != allMoves.end(); it++) {
        board.do_move(*it, -player);
        int val = -negamax(board, depth - 1, -player, -beta, -alpha);
        board.undo_move(*it);

        if (val >= beta)
            return val;

        if (val > alpha)
            alpha = val;
    }

    return alpha;
}

void nextMove(Board& board) {
    list<Move> allMoves = board.getMoves();
    Move* bestMove = NULL;
    int bestScore = INT_MIN;

    for(list<Move>::iterator it = allMoves.begin(); it != allMoves.end(); it++) {
        board.do_move(*it, 1);
        int score = -negamax(board, 100, 1, INT_MIN + 1, INT_MAX);
        board.undo_move(*it);

        if (score > bestScore) {
            bestMove = &*it;
            bestScore = score;
        }
    }

    if (!bestMove)
        return;

    cout << bestMove->row << ' ' << bestMove->col << endl;    
}

Giving this input:

O
X__
___
___

The algorithm chooses to place a piece at 0, 1, causing a guaranteed loss due to this trap (nothing can be done to win or end in a draw):

XO_
X__
___

Perhaps it has something to do with the evaluation function? If so, how could I fix it?

Here's the full source code: http://ideone.com/Zihwf
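The suspicion about the evaluation function seems well placed: isWon() reports that someone has won but not who, yet when a won position is reached during a negamax search, it was the previous player who just won, i.e. the side to move has lost and should see a large negative score. A hedged sketch of a sign-correct leaf evaluation (hypothetical signature, keeping the post's open-squares heuristic):

```python
WIN = 100  # the post's win bonus, made side-to-move-relative

def evaluate_for_side_to_move(is_won, open_squares):
    # If someone has already won, it was the player who just moved, so the
    # side to move is lost: report a large NEGATIVE score, never +100.
    if is_won:
        return -WIN
    # The post's filler heuristic: count open squares.
    return open_squares
```

With a leaf score that is already relative to the side to move, the player * evaluate(board) multiplication in negamax() would no longer be needed.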

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/niGhTm4r3
πŸ“…︎ Sep 15 2012
🚨︎ report
minimax tutorial (in hopes of following up with negamax) ai-depot.com/articles/min…
πŸ‘︎ 16
πŸ’¬︎
πŸ‘€︎ u/Thistleknot
πŸ“…︎ Jan 03 2013
🚨︎ report
