Minimax algorithm and alpha-beta pruning from a beginner's perspective mathspp.com/blog/minimax-…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/RojerGS
πŸ“…︎ Nov 21 2021
🚨︎ report
Prolog Alpha Beta pruning - pls help!!!

Trying to implement alpha-beta pruning for a tree in Prolog and I don't understand how to update the values of alpha and beta, since Prolog doesn't allow reassigning a variable that is already bound within a clause, and here we need to start with assigned values for alpha and beta (-inf, +inf). I read that this is doable using backtracking, which I was able to implement for minimax, but I don't know how to start for alpha-beta. Please help!!!
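
For what it's worth, the trick is usually that alpha and beta never need to be updated in place at all: each recursive call simply receives them as arguments, and the caller computes new values from the returned score. Below is a minimal sketch (in Python, not Prolog, with a hypothetical `(value, children)` tree representation) written deliberately in a single-assignment style; the recursion over the list of children is exactly the pattern a Prolog clause over `[Child|Rest]` would use, binding fresh variables like `Alpha1` and `Best1` instead of reassigning.

    import math

    # Hypothetical tree interface: a node is (value, children); leaves have no children.
    def children(node):
        return node[1]

    def leaf_value(node):
        return node[0]

    def alphabeta(node, depth, alpha, beta, maximizing):
        kids = children(node)
        if depth == 0 or not kids:
            return leaf_value(node)
        if maximizing:
            return best_max(kids, depth, alpha, beta, -math.inf)
        return best_min(kids, depth, alpha, beta, math.inf)

    def best_max(kids, depth, alpha, beta, best):
        # Like a Prolog clause recursing over [Child|Rest]: nothing is ever
        # "updated"; we just compute new_alpha/new_best and pass them to the
        # recursive call on the remaining children.
        if not kids or alpha >= beta:            # no moves left, or cutoff
            return best
        score = alphabeta(kids[0], depth - 1, alpha, beta, False)
        new_best = max(best, score)
        new_alpha = max(alpha, new_best)
        return best_max(kids[1:], depth, new_alpha, beta, new_best)

    def best_min(kids, depth, alpha, beta, best):
        if not kids or alpha >= beta:
            return best
        score = alphabeta(kids[0], depth - 1, alpha, beta, True)
        new_best = min(best, score)
        new_beta = min(beta, new_best)
        return best_min(kids[1:], depth, alpha, new_beta, new_best)

    # Example: a tiny two-level tree, maximizer to move.
    tree = (None, [(3, []), (5, []), (2, [])])
    assert alphabeta(tree, 2, -math.inf, math.inf, True) == 5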

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/Budget-Bobcat3870
πŸ“…︎ Oct 10 2021
🚨︎ report
[OC] Tree diagram showing how effective the alpha beta pruning technique is for chess engines. At depth 4, the search with alpha beta searched only 0.28% of the positions that the naive brute force (minimax) approach searched.
πŸ‘︎ 78
πŸ’¬︎
πŸ‘€︎ u/haddock420
πŸ“…︎ Jul 01 2021
🚨︎ report
Need help with minimax function and alpha-beta-pruning

Hey everybody!

I am working on a Tic Tac Toe game with a minimax function and alpha-beta pruning in Python.

It doesn't work exactly like it should... it makes weird moves sometimes?

If you have ideas on how to improve the minimax function, please let me know! :)

Minimax():

# Minimax algorithm for evaluating moves on the board for the bot
# when called for the first time: depth = 3, alpha = -infinity, beta = +infinity
def minimax(depth, is_maximizing, alpha, beta):

    # Checking win, loss or tie at the deepest level
    if check_win(player_Symbol, bot_symbol) == -10:
        return (-10-depth)
    if check_win(player_Symbol, bot_symbol) == 10:
        return (10+depth)
    if (check_board_full() == True) or (depth == 0):
        return 0

    # Maximizing player
    if is_maximizing:
        best_value = -math.inf
        for row in range(3):
            for col in range(3):
                if board[row][col] == " ":
                    board[row][col] = bot_symbol
                    current_value = minimax(depth-1, False, alpha, beta)
                    best_value = max(best_value, current_value)
                    alpha = max(best_value, alpha)
                    board[row][col] = ' '

                    # alpha beta pruning
                    if beta <= alpha:
                        break
        return best_value

    # Minimizing player
    else:
        best_value = math.inf
        for row in range(3):
            for col in range(3):
                if board[row][col] == " ":
                    board[row][col] = player_Symbol
                    current_value = minimax(depth-1, True, alpha, beta)
                    best_value = min(best_value, current_value)
                    beta = min(best_value, beta)
                    board[row][col] = ' '

                    # alpha beta pruning
                    if beta <= alpha:
                        break
        return best_value
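
A hedged note on the code above (not a definitive diagnosis): two things commonly produce "weird moves" in minimax code shaped like this. First, `return 0` whenever `depth == 0` makes every non-terminal position at the horizon look like a draw, so with `depth = 3` many different moves score identically and the bot effectively picks the first one; tic-tac-toe is small enough to search to the end (call it with depth 9, or drop the depth cutoff). Second, `break` only exits the inner `col` loop, so a cutoff never stops the outer `row` loop; that only costs speed, not correctness, but it means the pruning does less than intended. A small sketch of the terminal/horizon handling, reusing the post's own helpers (`check_win`, `check_board_full`, `player_Symbol`, `bot_symbol`):

    def terminal_or_horizon(depth):
        """Return a score if the position is terminal or the horizon is reached,
        otherwise None (meaning: keep searching)."""
        score = check_win(player_Symbol, bot_symbol)
        if score == 10:              # bot wins: adding depth prefers faster wins
            return score + depth
        if score == -10:             # bot loses: subtracting depth prefers slower losses
            return score - depth
        if check_board_full():
            return 0                 # genuine draw
        if depth == 0:
            return 0                 # horizon reached: ideally a heuristic value, not a flat 0
        return None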

Link for whole code:

https://github.com/Babunator/Tic-Tac-Toe-Game

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/Babunator
πŸ“…︎ Jul 12 2021
🚨︎ report
Tic-Tac-Toe AI using Minimax algorithm with alpha-beta pruning

Hi 😊! I made a Tic-Tac-Toe game AI which plays using the Minimax algorithm (with alpha-beta pruning). Please play it at: https://jatin-47.github.io/Tic-Tac-Toe/

If you like it ⭐ it on GitHub: https://github.com/jatin-47/Tic-Tac-Toe

You can also build one using the resources mentioned in the README file of the Repo.

https://preview.redd.it/cqgsrngvuxu61.png?width=1920&format=png&auto=webp&s=6248b6025c06516669f6864e62834c66cec105c0

πŸ‘︎ 21
πŸ’¬︎
πŸ‘€︎ u/whistlingtongue
πŸ“…︎ Apr 23 2021
🚨︎ report
Alpha beta pruning question

Hi

Can alpha beta pruning occur at node C and D?

Link to diagram: https://ibb.co/C7SLW2b

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Ejijiji
πŸ“…︎ Apr 21 2021
🚨︎ report
Prolog Chess AI, Alpha Beta Pruning

Hey, so I am relatively new to Prolog. Currently I am trying to build an AI for a minichess game, called Diana chess or Ladies chess (https://en.wikipedia.org/wiki/Minichess). It's basically chess, but on a 6x6 board with no queen and only one knight.

For my AI, I am mainly following the book "Prolog Programming for Artificial Intelligence" by Ivan Bratko (3rd edition), where he shows an implementation of the alpha-beta algorithm. I am also looking at the Prolog code someone posted on GitHub, where he implemented the same code for his checkers game (https://github.com/migafgarcia/prolog-checkers/blob/master/checkers.pl).

The code should think a certain number of turns ahead and choose the best move out of them. Before I started programming in Prolog I did a lot of Java programming, which is why "Zug" in the first line of my code is the return value of NextMove. I hope this all makes some sense.

My problem is that when I run my code and, for example, let the AI play against a human (me), when I start as White and make my first move (b2b3), the AI just doesn't work: the backtracking kind of stops right before NextMove, and that way I can't get the best move back. Furthermore, when the AI thinks for the minimizing player and needs to choose its move, it always takes the move with the highest value; but, if my understanding of the algorithm is not wrong, shouldn't it take the lowest value?

Here is the code:
https://pastebin.com/sc4gLdQe

I am grateful for every comment on this or for any feedback at all. Thanks, Jonas.

πŸ‘︎ 21
πŸ’¬︎
πŸ‘€︎ u/TheDroppingBomb
πŸ“…︎ Feb 26 2021
🚨︎ report
I'm thinking of making an AI computer game using the minmax algo with alpha/beta pruning

Do you guys have any links to premade console games (CLI games)? Please let me know.

I'm thinking of making a YouTube video on it and writing a blog post too.

GitHub links are appreciated.

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/preetsc27
πŸ“…︎ Mar 08 2021
🚨︎ report
minmax evaluation with alpha beta pruning

Posted here as well.

If it is not visible there:

I'm making a small board game and wanted to code a very simple greedy AI with this algorithm. It turns out it doesn't play the greediest moves; it is simply not working. I'd appreciate any comments on this code.

First, the position evaluation functions are:

uint8_t get_piece_value(PKind kind)
{
    switch (kind) {
    case PKind::FIRST:
        return 10;
    case PKind::SECOND:
        return 30;
    case PKind::THIRD:
        return 35;
    case PKind::FORTH:
        return 100;
    default:
        return 0;   // every control path must return a value
    }
}

int get_position_value(const Position& position)
{
    int value = 0;  // must be initialised; summing into an uninitialised int is undefined behaviour

    for (auto var : position.pieces) {

        if (var.id.color == position.turnToPlay) {
            value += get_piece_value(var.id.kind);
            continue;
        }
        
        value -= get_piece_value(var.id.kind);

    }

    return value;
}

Now this is the function I use to get the valid moves:

std::vector<PMove> get_ALL_valid_moves(const Position& position)
{
    std::vector<PMove> allMoves;

    for (auto piece : position.pieces)
    {
        if (piece.id.color == position.turnToPlay) {
            auto validMoves = get_valid_moves(piece);

            if (validMoves.size() == 0) {
                continue;
            }
            for (auto var : validMoves) {
                allMoves.push_back(var);
            }
        } else {
            assert(("Wrong color passed to get ALL valid moves!!\n"));
        }
    }

    return allMoves;
}

Next, here are the minmax functions:

constexpr int MAX_EVAL = 9999;
constexpr int MIN_EVAL = -MAX_EVAL;

///Minmax evaluation with alpha beta pruning
int minmax_ab(const Position newposition, int depth, int alpha, int beta, bool isMaximizer) 
{
    if (depth == 0) {
        return get_position_value(newposition);
    }

    std::vector<PMove> validMoves;

    validMoves = get_ALL_valid_moves(newposition);

    if (validMoves.size() == 0) {
        return get_position_value(newposition);
    }

    if (isMaximizer) {
        for (auto move : validMoves) {
            alpha  = st
... keep reading on reddit ➡
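
The post is cut off above, but one thing worth double-checking in code shaped like this is the leaf evaluation: `get_position_value` scores the position relative to `position.turnToPlay`, so its sign flips every ply. That convention is what negamax expects; a plain minimax with an explicit `isMaximizer` flag wants the score measured from one fixed player's point of view. A hedged Python sketch of that fixed-perspective evaluation (hypothetical names; the post's code is C++):

    PIECE_VALUES = {"FIRST": 10, "SECOND": 30, "THIRD": 35, "FORTH": 100}

    def evaluate_for(pieces, maximizer_color):
        """Score from one fixed side's point of view, independent of whose turn it is.
        `pieces` is a list of (kind, color) tuples."""
        total = 0
        for kind, color in pieces:
            value = PIECE_VALUES[kind]
            total += value if color == maximizer_color else -value
        return total

    # Example: the maximizer ("white") is up one SECOND piece.
    print(evaluate_for([("FIRST", "white"), ("SECOND", "white"), ("FIRST", "black")], "white"))  # 30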

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/rdar1999
πŸ“…︎ Mar 10 2021
🚨︎ report
Using `Alpha-Beta Pruning` algorithm to find Max or Min values in a list of lists

If we have the following data structure:

values = [
    [1, 5, -1, 3],
    [-6, 2, 5, 7],
    [9, 1, 5, 2],
    [1, 2, 8, 4],
]

How can I use the `Alpha-Beta Pruning` algorithm to find the max or min value and return its index, using Python?
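
One common reading of this exercise (a hedged sketch, not from the thread): treat the outer list as a MAX choice of row and each inner list as a MIN choice within that row, then run alpha-beta over the resulting two-level tree while remembering which row produced the best guaranteed value. The min-at-the-root variant is symmetric.

    import math

    def maximin_ab(rows):
        """MAX picks a row, MIN then picks the worst entry in it.
        Returns (best_value, row_index); a row is pruned as soon as its running
        minimum can no longer beat the best row found so far (alpha)."""
        alpha = -math.inf
        best_index = None
        for i, row in enumerate(rows):
            row_min = math.inf
            for x in row:                 # MIN node over the numbers in this row
                row_min = min(row_min, x)
                if row_min <= alpha:      # cutoff: this row cannot beat the current best
                    break
            if row_min > alpha:
                alpha, best_index = row_min, i
        return alpha, best_index

    values = [
        [1, 5, -1, 3],
        [-6, 2, 5, 7],
        [9, 1, 5, 2],
        [1, 2, 8, 4],
    ]
    print(maximin_ab(values))   # (1, 2): row 2's worst entry, 1, is the best guaranteed value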

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/FriendlyRadio6798
πŸ“…︎ Jan 22 2021
🚨︎ report
My minimax even with alpha beta pruning for a five by five tic tac toe board is so slow...

How would I improve my minimax algorithm for a 5 by 5 tic tac toe board? It works near instantaneously for a 3 by 3 tic tac toe board and takes a little bit of time for a 4 by 4 tic tac toe board but is so slow for 5 by 5, meaning it takes 4-5 minutes for a single move when I am looking 6 moves ahead.
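
The usual fixes, for what they're worth: (1) order the moves so that promising squares are searched first, which lets alpha-beta cut off far earlier; (2) only check the lines through the last move for a win instead of rescanning the whole board; (3) a transposition table also helps, though with alpha-beta you need to record whether a stored score is exact or only a bound. A hedged sketch of (1), assuming a board represented as a dict from (row, col) to 'X', 'O' or ' ' (the post's actual representation isn't shown):

    # Centre-first move ordering for a 5x5 board.
    CENTRE_FIRST = sorted(
        ((r, c) for r in range(5) for c in range(5)),
        key=lambda rc: abs(rc[0] - 2) + abs(rc[1] - 2),
    )

    def ordered_moves(board):
        """Empty squares, nearest-to-centre first; iterate over these in the
        minimax loop instead of scanning rows and columns in raw order."""
        return [square for square in CENTRE_FIRST if board[square] == " "]

    # Example
    empty_board = {(r, c): " " for r in range(5) for c in range(5)}
    print(ordered_moves(empty_board)[:3])   # [(2, 2), (1, 2), (2, 1)]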

πŸ‘︎ 4
πŸ’¬︎
πŸ“…︎ Nov 07 2020
🚨︎ report
The Fate timeline is a minimax search with alpha-beta pruning

Minimax

Pruning

Alpha-beta pruning | Timeline pruning
Max prunes branches that will inevitably lead to a lower value than the current theoretical max | Worlds too ruined get pruned
Min prunes branches that will inevitably lead to a higher value than the current theoretical min | Worlds too prosperous get pruned
πŸ‘︎ 38
πŸ’¬︎
πŸ‘€︎ u/nanashi_shino
πŸ“…︎ May 12 2020
🚨︎ report
Tic-tac-toe with minimax algorithm and alpha-beta pruning github.com/Iheb-Haboubi/t…
πŸ‘︎ 27
πŸ’¬︎
πŸ‘€︎ u/iheb-haboubi
πŸ“…︎ Apr 12 2020
🚨︎ report
Few questions about implementation of Minmax with alpha beta pruning and negamax.

I have been following this engine as a reference to make my own engine, though I didn't get how min-max (negamax?) is implemented in it. I looked up some pseudocode and tried to modify it for my program, and ended up with this, though it is resulting in an error.

File "ajax.py", line 76, in <module>
    print (minimax(3, -123456, +123456, whitemove))
  File "ajax.py", line 48, in minimax
    eval = minimax(depth - 1, alpha, beta, False)
  File "ajax.py", line 60, in minimax
    eval = minimax(depth - 1, alpha, beta, True)
  File "ajax.py", line 49, in minimax
    game.pop(i)
TypeError: pop() takes 1 positional argument but 2 were given

I have used the python-chess module. game.pop() is to revert a move. I can't figure out how to solve this error since I am only passing one argument (i) to the function.

  • Can someone please direct me to a better, more readable implementation of minmax, or explain this one, or explain what is wrong with my code?
  • How is negamax different? I read a bit from here but I can't seem to grasp it. (A rough sketch follows below.)
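
For reference, python-chess's `Board.pop()` takes no arguments (it simply undoes the last pushed move), which is what the traceback is complaining about. As for negamax: it is the same algorithm as minimax, but since in a zero-sum game the score for one side is the negative of the score for the other, a single code path serves both players; each call negates the child's score and negates and swaps the (alpha, beta) window. A hedged, minimal sketch against the python-chess API, with a deliberately crude material evaluation just to keep it self-contained:

    import math
    import chess

    # Crude material evaluation from the side-to-move's point of view,
    # which is the convention negamax wants.
    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def evaluate(board):
        score = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == board.turn else -value
        return score

    def negamax(board, depth, alpha, beta):
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        best = -math.inf
        for move in board.legal_moves:
            board.push(move)
            # The child is scored from the opponent's point of view, so negate it;
            # the (alpha, beta) window is negated and swapped for the same reason.
            score = -negamax(board, depth - 1, -beta, -alpha)
            board.pop()                       # pop() takes no arguments
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break
        return best

    def best_move(board, depth):
        best, choice = -math.inf, None
        for move in board.legal_moves:
            board.push(move)
            score = -negamax(board, depth - 1, -math.inf, math.inf)
            board.pop()
            if score > best:
                best, choice = score, move
        return choice

    print(best_move(chess.Board(), 3))
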
πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/ajx_711
πŸ“…︎ May 25 2020
🚨︎ report
Othello - Hill Climbing always winning against fixed depth alpha beta pruning ??

I simulated a game of Reversi in Python. The logic for the opponent, or say bot 1, was: "If there are no corner moves available, then the bot will select the move that claims the most tiles." Bot 2, the player in my case, uses a fixed-depth alpha-beta pruning algorithm (I tested with fixed depth 5) at each move, using a heuristic function that takes into account coin parity, mobility, corners captured and stability, basically from here, yet bot 1 seems to be winning in all the runs. Is there any plausible explanation for this, or have I made some mistake while programming?

EDIT: Bot 2 wins far less often, although I see that bot 2 is the one that captures the corner positions most of the time.

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/ishan_srivastava
πŸ“…︎ Sep 19 2019
🚨︎ report
What kind of game trees can be alpha-beta pruned?

I know for sure that zero-sum game trees in minimax can be, but for games which are not zero-sum (e.g. utility is represented by a pair (A, B); player a maximizes A, player b maximizes B), what conditions are necessary to make alpha-beta pruning possible?

πŸ‘︎ 15
πŸ’¬︎
πŸ‘€︎ u/Winstonp00
πŸ“…︎ Oct 08 2021
🚨︎ report
Tic-Tac-Toe Minimax Search with alpha/beta pruning

I'm very new to python and am having trouble implementing an alpha/beta pruning function. I posted my alpha/beta pruning function code below:

def makeCompMove(self):
    def minimax(self, depth, nodeIndex, maximizingPlayer, values, alpha, beta):
    # Terminating condition. i.e leaf node is reached

        if depth == self.boardSize:
            return values[nodeIndex]

        if maximizingPlayer:
            best = self.MIN

        # Recur for left and right children
        for i in range(0, 2):
            val = minimax(depth + 1, nodeIndex * 2 + i, False, values, alpha, beta)
            best = max(best, val)
            alpha = max(alpha, best)

        # Alpha Beta Pruning
            if beta <= alpha:
                break
            return best

        else:
            best = self.MAX

        # Recur for left and right children
            for i in range(0, 2):
                val = minimax(depth + 1, nodeIndex * 2 + i, True, values, alpha, beta)
                best = min(best, val)
                beta = min(beta, best)
            # Alpha Beta Pruning
                if beta <= alpha:
                    break
                return best 
        minimax(0, 0, True, self.marks, self.MIN, self.MAX)

Currently, the program accepts the correct user input and returns the correct marking on the tic-tac-toe board with an X. I am trying to have the program search for the best countermove but it just asks the user for their next move.

I've tried searching previous posts to figure out my issue, but I'm not sure if my function is just flat out wrong or how close I am. I'm not looking for someone to solve it, I'm hoping someone could help at least steer me in the right direction. This is my first time experimenting with writing alpha/beta pruning functions so any hints would be appreciated.

Here is the entire program code:

#Gen-Tic-Tac-Toe Minimax Search with alpha/beta pruning
import numpy as np
import math

# self class is responsible for representing the game board
class GenGameBoard: 

    # Constructor method - initializes each position variable and the board size
    def __init__(self, boardSize):
        self.boardSize = boardSize  # Holds the size of the board
        self.marks = np.empty((boardSize, boardSize), dtype='str')  # Holds the mark for each position
... keep reading on reddit ➡

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/soul_mane
πŸ“…︎ Sep 23 2019
🚨︎ report
Why is Monte Carlo tree search being used in turn based strategy games when advances in alpha beta pruning algorithms have proven to be so satisfying?

Reference: https://ieeexplore.ieee.org/abstract/document/7860427

>These difficulties are frequently overcome by adopting Monte-Carlo tree search variants for computer players in TBS games, whereas minimax search variants such as αβ search are rarely used. However, TBS games have basic game structures similar to those of chess and Shogi, for which αβ search is known to be effective.

So this research paper, published in IEEE in 2016, explores the application of alpha-beta pruning methods to turn-based strategy games because their structures are very similar to those of chess and Shogi.

Q1: Aren't chess and Shogi turn based strategy (TBS) games themselves?

Alpha-beta pruning methods have been shown to give promising experimental results on games like chess, and in practice can search a game tree even smaller than the minimal search tree (thanks to transposition tables).

Q2: Then why still use Monte Carlo tree search for these TBS games?

Q3: Why were they not switched back immediately to alpha beta after the discoveries of killer heuristics and transposition tables?

πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/ishan_srivastava
πŸ“…︎ Jul 16 2019
🚨︎ report
[Python] Can someone please check my alpha-beta pruning code?

Hello, I have been trying to implement alpha-beta pruning for a personal project and I am having no luck. I copied the pseudocode from wikipedia, but simple tests show that I must have gotten something wrong. I have quadruple-checked that I copied everything correctly, so I am at a loss for what might be wrong.

My best guess is that the break lines are incorrect.

Please save my sanity and help me figure it out. Thank you in advance!

import math

class Node:
    def __init__(self,value,children):
        self.value = value
        self.children = children

def abprune(node,depth,alpha,beta,maximizingPlayer):
    if(depth==0 or len(node.children)==0):
        return(node.value)
    if(maximizingPlayer):
        value = -math.inf
        for child in node.children:
            value = max(value,abprune(child,depth-1,alpha,beta,False))
            alpha = max(alpha,value)
            if(alpha>=beta):
                return(value)
    else:
        value = math.inf
        for child in node.children:
            value = min(value,abprune(child,depth-1,alpha,beta,True))
            beta = min(beta,value)
            if(alpha>=beta):
                return(value)

a0=Node(1,[])
a1=Node(0,[])
b0=Node(None,[a0,a1])

print(abprune(b0,10,-math.inf,math.inf,True)) #returns None, should return 1
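
A hedged guess at what is going on: the Wikipedia pseudocode ends each branch with `return value` after the loop, and that final return is missing here, so whenever the loop finishes without a cutoff the function falls off the end and Python implicitly returns None. A corrected sketch of just the function and the same test:

    import math

    class Node:
        def __init__(self, value, children):
            self.value = value
            self.children = children

    def abprune(node, depth, alpha, beta, maximizingPlayer):
        # Same structure as above, but each branch returns `value` after its loop,
        # as in the Wikipedia pseudocode.
        if depth == 0 or len(node.children) == 0:
            return node.value
        if maximizingPlayer:
            value = -math.inf
            for child in node.children:
                value = max(value, abprune(child, depth - 1, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break          # beta cutoff
            return value
        else:
            value = math.inf
            for child in node.children:
                value = min(value, abprune(child, depth - 1, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:
                    break          # alpha cutoff
            return value

    a0 = Node(1, [])
    a1 = Node(0, [])
    b0 = Node(None, [a0, a1])
    print(abprune(b0, 10, -math.inf, math.inf, True))   # now prints 1
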
πŸ‘︎ 3
πŸ’¬︎
πŸ“…︎ Jul 24 2019
🚨︎ report
Alpha-beta pruning worst case?

Hi there,

I've been reading Knuth's analysis of alpha-beta pruning, and in section 7 he asserts that "given any finite tree, it is possible to find a sequence of values for the terminal positions so that the alpha-beta procedure will examine every node of the tree". He then goes on to say "there are game trees with distinct terminal values for which the alpha-beta procedure will always find some cutoffs no matter how the branches are permuted".

Aren't these two statements contradictory? I'm pretty sure I'm just being really stupid, so I'd appreciate any help. Thanks in advance!

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/imgoingintobattle
πŸ“…︎ Sep 05 2018
🚨︎ report
Dynamic Depth adjustment with alpha beta pruning

I have an assignment to make a bot for playing the game of Othello. It is going to compete with the bots of other students and we are going to get graded on the number of wins we get (also by how many coins we win).

The bot has to make a decision in 2s and it gets disqualified for that game if it takes longer than that.

Since the branching factor throughout the game might vary, I am confused about how to dynamically adjust the depth so that I can explore up to that level within the time frame, while also making sure I am maximising the number of states I explore in that time.

Anything that I can read or explore is welcome; direct answers with links to explanations would be best!
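
One standard approach (a hedged sketch, not from the thread) is iterative deepening: search to depth 1, then 2, then 3, and keep the best move from the deepest search that finished inside a safety margin of the 2 s budget. Because an alpha-beta tree grows roughly geometrically with depth, the repeated shallow searches are cheap, and their results can even seed move ordering for the next iteration. `search_best_move(state, depth)` below is a hypothetical stand-in for whatever fixed-depth alpha-beta routine the assignment already has:

    import time

    TIME_LIMIT = 2.0       # seconds allowed per move
    SAFETY_MARGIN = 0.15   # stop early so the final iteration never overruns

    def choose_move(state, search_best_move, max_depth=64):
        """Iterative deepening around an existing fixed-depth search.
        `search_best_move(state, depth)` returns (move, score) for a full
        alpha-beta search to the given depth."""
        deadline = time.monotonic() + TIME_LIMIT - SAFETY_MARGIN
        best_move = None
        for depth in range(1, max_depth + 1):
            started = time.monotonic()
            if started >= deadline:
                break
            move, _score = search_best_move(state, depth)
            best_move = move
            elapsed = time.monotonic() - started
            # If the next depth will likely blow the budget (branching factor ~b
            # means roughly b times the work), stop and keep the deepest result.
            if time.monotonic() + 3 * elapsed > deadline:
                break
        return best_move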

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/ishan_srivastava
πŸ“…︎ Apr 06 2019
🚨︎ report
ELI5:Neural Network vs Alpha-beta pruning

What are the key differences between an 'Artificial Neural Network' and 'Alpha-beta pruning', in terms of how they function?

What are the pros and cons for each?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/StubbornWaffle
πŸ“…︎ Jan 14 2018
🚨︎ report
Is Alpha-Beta pruning flawed when naively applied to chess engines?

Hello All, I'm a beginner programmer working on making a chess engine in C. I've experimented with my program quite a bit and have seen some odd things occur as a result of AB pruning. I just played a game against the engine in which the engine decided to capture a king pawn with a queen while the pawn was protected by the king. This shouldn't happen when I set the minimax depth to 4 but in hindsight -- it's exactly what should occur: the computer sees that in one of his possible moves it can capture a pawn and attack the king, which makes this move better than all other moves at depth 1. With AB pruning, the tree is immediately cut, meaning that only variations that start with giving up a queen for a pawn are searched.

Another problem that comes up has to do with sacrifices: a variation that ends in mate but starts off with a queen sacrifice won't be searched beyond the first move.

I've tried googling this issue but found nothing on the subject. Does anyone have any experience with this? I know that the use of AB pruning is widely used in engines -- and they don't make those silly moves. I would appreciate any insight!

πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/policemank
πŸ“…︎ Aug 17 2014
🚨︎ report
Why is my minmax (negamax, with alpha-beta pruning) algorithm so slow?

I'm trying to implement an AI for the board game Pentago in Haskell. I have previously done this in Python (but lost the code) and if I remember correctly my Python implementation was faster. In Python I was searching at least 4 or 5 plys deep in a reasonable amount of time, but this Haskell implementation takes very long to reach 3 plys. Maybe my memory is wrong? Whatever, the case, I'm hoping to speed up the following implementation of negamax with alpha-beta pruning:

negamaxScore :: Int -> Space -> Board -> Int
negamaxScore depth color = abPrune depth color (-(8^8)) (8^8)
    where abPrune depth color a b board
              | depth == 0 = let s = scoreBoard board in if color == White then s else negate s
              | otherwise = (\(a,_,_) -> a) $ foldl' f (-(8^8), a, b) (nub $ possibleMoves color board)
              where f :: (Int, Int, Int) -> Board -> (Int, Int, Int)
                    f x@(bestVal, a, b) board = if a >= b then x
                                                else let val = abPrune (depth-1) (otherColor color) (negate b) (negate a) board
                                                         bestVal' = max bestVal val
                                                         a' = max a val
                                                     in (bestVal', a', b)

I would appreciate any suggestions on how I can improve the style or performance of this code.

Relevant Links:

http://en.wikipedia.org/wiki/Negamax#NegaMax_with_Alpha_Beta_Pruning

http://en.wikipedia.org/wiki/Pentago

Full code: https://github.com/DevJac/haskell-pentago/blob/master/src/Main.hs

πŸ‘︎ 10
πŸ’¬︎
πŸ‘€︎ u/Buttons840
πŸ“…︎ Dec 14 2014
🚨︎ report
Need help understanding alpha beta pruning algorithms :

I was planning on making a game of checkers for my Computer Science Final (I'm in grade 11), and when searching around for some algorithms to make my AI, I kept on coming across alpha beta pruning algorithms. I've tried to wrap my head around these, but goddamn am I confused about them.

  • thanks in advance
πŸ‘︎ 17
πŸ’¬︎
πŸ“…︎ May 31 2016
🚨︎ report
Confusion about alpha-beta pruning in <Why Functional Programming Matters>

I have been reading this paper recently, and I found that the implementation of alpha-beta pruning is different from the implementation of prune n.

The minmax tree is defined as follows:

	moves :: Position -> [Position]
	reptree f a = Node a (map (reptree f) (f a))
	gametree p = reptree moves p

	maximize (Node n Nil) = n
	maximize (Node n sub) = max (map minimize sub)
	minimize (Node n Nil) = n
	minimize (Node n sub) = min (map maximize sub)

	static :: Position -> Number
	evaluate = maximize . maptree static . gametree

Implementation of prune n (only go n step down the search tree) is elegant:

	prune 0 (Node a x) = Node a Nil
	prune (n+1) (Node a x) = Node a (map (prune n) x)

	evaluate5 = maximize . maptree static . prune 5 . gametree

But the implementation of alpha-beta pruning has to mess up with the implementation of maximize (and minimize as well)

	maximize = max . maximize'
	minimize = min . minimize'
	maximize' (Node n Nil) = Cons n Nil
	maximize' (Node n l) = map (min . minimize') l = map min (map minimize' l)
		= mapmin (map minimize' l) 
	mapmin (Cons nums rest) = Cons (min nums) (omit (min nums) rest)
	
	omit pot Nil = Nil
	omit pot (Cons nums rest)
		| minleq nums pot = omit pot rest
		| otherwise = Cons (min nums) (omit (min nums) rest)
		
	minleq Nil pot = False
	minleq (Cons n rest) pot 
		| n <= pot = True
		| otherwise = minleq rest pot
		
	evaluateAB = max . maximize' . maptree static . prune 8 . gametree

I think the alpha-beta pruning is not as elegant as prune n. Isn't the idea of functional programming not to mess with your old functions? Can alpha-beta pruning be defined in a compositional way, just like prune n?

And in omit, min nums seems to be calculated inefficiently, because each time it has to go through the whole list. In C or Python it would be easy: just keep a variable holding the minimum of nums and update it while iterating.

πŸ‘︎ 14
πŸ’¬︎
πŸ‘€︎ u/Sherlockhlt
πŸ“…︎ Nov 17 2013
🚨︎ report
The AI for our board game, Twistago, part 2: the Normal AI (minimax, alpha-beta pruning, multiple players)

> This is the second part of a three-part series in which I explain how the artificial intelligence works in my latest game, Twistago. In case you missed the first part, you can catch up on it here.
>
> As you may recall, the Easy AI works by applying a value function to the end state resulting from each possible move, then picks the move that gives the highest possible value. The main problem with this is that the AI doesn't look ahead: sometimes it should make a suboptimal move now, in order to get a higher gain in the future. This "higher gain" can either be a gain in the literal sense, or the avoidance of a loss (for instance, being sunk into the black hole by an opponent).

The article builds up to the idea of minimax and goes on to explain alpha-beta pruning. Finally, it considers how to extend these to games with more than 2 players, which is an interesting area of research where the AI community doesn't seem to have clear answers yet.

>> Full article here. <<

πŸ‘︎ 5
πŸ’¬︎
πŸ‘€︎ u/thomastc
πŸ“…︎ Jul 03 2016
🚨︎ report
