Visualization for nash equilibria - literally

Hello, to start off: I know that game theory, or even algorithmic game theory, is not necessarily "math", but do hear my question out. (Algorithmic game theory is math; see the edit below.) For a project this term I must show graphically what these theorems say and design graphics that communicate them better. I know it is easier to encode this information in set notation, but that isn't the most intuitive thing for someone coming into the field without mathematical maturity.

Having said that, would you know of any good geometric interpretations of Nash (or other) equilibria for market design? This may not be very specific, but if there is a graphic related to equilibria that just made things "click together" for you, I would be grateful if you pointed me to it!

Edit: I have been corrected that algorithmic game theory is in fact math.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/range_et
πŸ“…︎ Mar 14 2021
🚨︎ report
Confused regarding mixed strategy nash equilibria
πŸ‘︎ 30
πŸ’¬︎
πŸ‘€︎ u/notmulder
πŸ“…︎ Feb 23 2021
🚨︎ report
Nash Equilibria vs Symmetry

There's something that's been bugging me for a long time about the concept of a Nash Equilibrium.

Consider the game called "Battle of the Sexes". In case anyone here is unfamiliar with it, there are two players called the "man" and the "woman". They would both like to do something together, but have different preferences as to what that something should be. The man would prefer to go to a sports event, and the woman would prefer to go to the opera. The utility payoffs for the game are: if they both go to the sporting event the man gets 2 and the woman gets 1, if they both go to the opera the woman gets 2 and the man gets 1, and if they do different things both score 0.

To someone who hasn't studied game theory, assuming the players don't communicate with each other beforehand but might have some insight into each other's probable behavior, there are three plausible outcomes: they both choose S, they both choose O, or they both flip fair coins and hope they wind up together.

To someone who has studied game theory, there are 3 Nash equilibria, but they are slightly different. They both may choose S, or they both may choose O, or they both do something like rolling a die, each choosing his or her preferred activity 2/3 of the time.

I understand the argument for the mixed Nash equilibrium: if M thinks W is going to pick S 50% of the time (or anything more than 1/3 of the time), he should pick S 100% of the time, and similarly for W.

Here's my problem: the argument that a rational player would assume the other player will make the same move he would make in their situation, and that he should therefore pick the symmetric solution, makes sense to me. The argument that the 2/3-1/3 mix is internally consistent seems like it doesn't really have any practical significance. And the 1/2-1/2 solution is better for both players than the 2/3-1/3 one, since they meet up 1/2 of the time instead of 4/9.
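A minimal Python sketch that checks these numbers, using the payoffs stated above (2 for your preferred activity together, 1 for the other's, 0 apart):

    # Battle of the Sexes with the payoffs from the post.
    def man_payoff(p_man_S, p_woman_S):
        return 2 * p_man_S * p_woman_S + 1 * (1 - p_man_S) * (1 - p_woman_S)

    def woman_payoff(p_man_S, p_woman_S):
        return 1 * p_man_S * p_woman_S + 2 * (1 - p_man_S) * (1 - p_woman_S)

    def meet_prob(p_man_S, p_woman_S):
        return p_man_S * p_woman_S + (1 - p_man_S) * (1 - p_woman_S)

    # In the mixed equilibrium the man plays S with 2/3 and the woman with 1/3.
    # The woman's 1/3 makes the man exactly indifferent between S and O:
    print(man_payoff(1.0, 1/3), man_payoff(0.0, 1/3))    # 0.666..., 0.666...
    # Meeting probability: 4/9 in the mixed equilibrium vs 1/2 with fair coins,
    # and both players do better with the coins (3/4 each vs 2/3 each):
    print(meet_prob(2/3, 1/3), meet_prob(0.5, 0.5))      # 0.444..., 0.5
    print(man_payoff(0.5, 0.5), woman_payoff(0.5, 0.5))  # 0.75, 0.75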

Any thoughts?

πŸ‘︎ 8
πŸ’¬︎
πŸ‘€︎ u/gmweinberg
πŸ“…︎ Feb 01 2021
🚨︎ report
Finding the set of Nash equilibria when the payoffs vary

Consider the following game:

             f           o
    f     x1, x2       0, 0
    o      0, 0        1, 2

I need to find the set of all Nash equilibria for the game depending on the values of x1 and x2. My initial thought is to construct a bunch of matrices that vary the values of x1 and x2 and see what happens. However, is there a more efficient way to tackle this problem? If so, how? I would appreciate any advice.
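One shortcut: the (o, o) cell never depends on x1 or x2, so only (f, f) and the mixed profile need a case analysis on the signs of x1 and x2. A minimal Python sketch of the pure-strategy case check (the sample values of x1 and x2 are arbitrary):

    # (o, o) gives (1, 2) and any unilateral deviation gives 0, so it is always
    # a pure NE. (f, f) gives (x1, x2) and deviations give 0, so it is a pure NE
    # exactly when x1 >= 0 and x2 >= 0.
    def pure_nash(x1, x2):
        eqs = []
        payoff = {("f", "f"): (x1, x2), ("f", "o"): (0, 0),
                  ("o", "f"): (0, 0),   ("o", "o"): (1, 2)}
        for r in ("f", "o"):
            for c in ("f", "o"):
                ur, uc = payoff[(r, c)]
                other_r = "o" if r == "f" else "f"
                other_c = "o" if c == "f" else "f"
                if ur >= payoff[(other_r, c)][0] and uc >= payoff[(r, other_c)][1]:
                    eqs.append((r, c))
        return eqs

    for x1, x2 in [(2, 3), (-1, 3), (2, -1), (-1, -1), (0, 0)]:
        print((x1, x2), pure_nash(x1, x2))
    # e.g. (2, 3) -> [('f', 'f'), ('o', 'o')] and (-1, -1) -> [('o', 'o')];
    # when x1 > 0 and x2 > 0 there is also a mixed equilibrium (not checked here).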

πŸ‘︎ 3
πŸ’¬︎
πŸ“…︎ Mar 10 2021
🚨︎ report
What, if any, subgame perfect Nash equilibria exist for this game?
  1. Each of the following questions pertains to a treatment of a Signaling game. In this game, the Proposer sends a signal of either Beer or Quiche, each of which is probabilistically associated with the Proposer being either Strong or Weak. The Responder responds with Flee or Fight based on this signal.

The payoffs for Treatment 1 of the Signaling game are given as follows:

Treatment 1: 6 rounds, Probability of Strong = 0.67

Payoff matrix (Proposer, Responder):

                        Flee             Fight
    Beer (Strong)       $2.00, $1.25     $1.20, $0.75
    Quiche (Strong)     $1.00, $1.25     $0.20, $0.75
    Beer (Weak)         $1.00, $0.75     $0.20, $1.25
    Quiche (Weak)       $2.00, $0.75     $1.20, $1.25

  1. (10 points) Draw the extensive form of this game.
  2. (10 points) What, if any, subgame perfect Nash equilibria exist for this game?
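As a warm-up for part 2, a minimal Python sketch of the Responder's best reply against a hypothetical pooling-on-Beer strategy, using the stated prior P(Strong) = 0.67 and the Responder payoffs from the table (the pooling profile itself is an assumption, not given in the problem):

    # If both Proposer types pool on Beer, the posterior after seeing Beer
    # equals the prior, so the Responder just compares expected payoffs.
    p_strong = 0.67

    # Responder payoffs from the table: (signal, type) -> {action: payoff}
    responder = {
        ("Beer", "Strong"): {"Flee": 1.25, "Fight": 0.75},
        ("Beer", "Weak"):   {"Flee": 0.75, "Fight": 1.25},
    }

    for action in ("Flee", "Fight"):
        ev = (p_strong * responder[("Beer", "Strong")][action]
              + (1 - p_strong) * responder[("Beer", "Weak")][action])
        print(action, round(ev, 4))
    # Flee 1.085, Fight 0.915 -> Flee is the Responder's best reply to pooling on Beer.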
πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/ekamemrinkosovar
πŸ“…︎ Feb 17 2021
🚨︎ report
Does anyone know of the pure Nash Equilibria for the Centipede Game?
πŸ‘︎ 12
πŸ’¬︎
πŸ‘€︎ u/danisdatman01
πŸ“…︎ Apr 16 2020
🚨︎ report
Nash Equilibria in second-price sealed-bid auctions

I'm trying to understand how Nash equilibria can be found in auctions.

I understand that bidding one's own valuation is a (weakly) dominant strategy for every player.

I want to understand the following example, which should be a Nash equilibrium when n ≥ 2 players bid on one object and player 2 wins the auction:

(b1, . . . , bn) = (v2, v1, 0, . . . , 0)

I think I am misunderstanding the notation, because my understanding at present is that b1 is the bid of player 1, which is higher than all other bids, and that v2 (player 2's valuation) is higher than all other players' valuations, including player 1's; in which case, how would player 2 win the auction?
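On the usual textbook reading (an assumption here), valuations are ordered v1 ≥ v2 ≥ … ≥ vn, so b1 = v2 means player 1 bids player 2's valuation while player 2 bids v1, the highest bid. A minimal Python sketch with hypothetical valuations that brute-force checks the profile for profitable unilateral deviations:

    # Second-price sealed-bid auction: the highest bidder wins and pays the
    # second-highest bid. Valuations below are hypothetical, ordered v1 >= v2 >= ...
    def payoffs(valuations, bids):
        # ties broken in favour of the lower index, as in many textbook treatments
        winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
        price = sorted(bids, reverse=True)[1]
        return [valuations[i] - price if i == winner else 0.0 for i in range(len(bids))]

    def is_nash(valuations, bids, grid):
        for i in range(len(bids)):
            base = payoffs(valuations, bids)[i]
            for b in grid:                      # check unilateral deviations on a grid
                dev = bids[:i] + [b] + bids[i + 1:]
                if payoffs(valuations, dev)[i] > base + 1e-9:
                    return False
        return True

    v = [10.0, 7.0, 5.0, 2.0]                  # hypothetical valuations
    b = [v[1], v[0], 0.0, 0.0]                 # the profile from the post: (v2, v1, 0, 0)
    print(is_nash(v, b, grid=[x * 0.5 for x in range(0, 30)]))   # True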

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/Qwerty1571
πŸ“…︎ Jan 18 2020
🚨︎ report
Question about Nash Equilibria in non-atomic congestion games (routing games)

In a non-atomic congestion game, each player is assumed to be infinitesimally small. As per Krichene et al., a single-player set is a null set, i.e. its mass is zero. Hence, the joint strategy is unaffected if a single player changes its strategy unilaterally. Given this, a single player changing its strategy will not affect the costs of the other players. In that case,

change in sum of costs = change in cost of the agent which changed its strategy unilaterally

I was wondering, then, why the sum of the individual players' costs is not taken to be the potential function of the game. Any insight is greatly appreciated.
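For intuition, a minimal Python sketch using the classic two-link Pigou network (an assumed example, not from the post): in nonatomic routing the equilibrium minimizes the Beckmann potential Phi(x) = sum_e integral_0^{x_e} l_e(s) ds, which in general has a different minimizer than the total cost C(x) = sum_e x_e * l_e(x_e):

    # Pigou's two-link network: unit mass of traffic, link 1 has constant
    # latency 1, link 2 has latency x; x = fraction of traffic on link 2.
    # Total cost:         C(x)   = (1 - x) * 1 + x * x
    # Beckmann potential: Phi(x) = (1 - x) * 1 + x**2 / 2   (integral of each latency)
    def total_cost(x):
        return (1 - x) * 1 + x * x

    def potential(x):
        return (1 - x) * 1 + x ** 2 / 2

    grid = [i / 1000 for i in range(1001)]
    x_eq = min(grid, key=potential)      # -> 1.0: all traffic on link 2, the equilibrium
    x_opt = min(grid, key=total_cost)    # -> 0.5: the social optimum
    print(x_eq, x_opt, total_cost(x_eq), total_cost(x_opt))   # 1.0 0.5 1.0 0.75
    # The equilibrium minimizes Phi, not the plain sum of costs, which is one way
    # to see that the sum of costs cannot serve as the potential of the game.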

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/Busy_Stranger
πŸ“…︎ Nov 15 2019
🚨︎ report
Having trouble with finding Nash equilibria in a sequential game

Hey everyone, I'm having trouble finding Nash equilibria in a sequential game for my study. I kinda understand how to find a Nash equilibrium in a simultaneous game but can't find any explanation on how to do this in a sequential game. The question I have to answer is:

Two people need to select a movie by alternately eliminating movies until only one remains. First, person 1 eliminates a movie. Then person 2 eliminates a movie. The movie that has not been eliminated is selected in the end, and the two people go to see that movie. Suppose there are three possible movies, A, B, and C. Person 1 receives a payoff of 3, 2 and 1 from movies A, B and C, respectively. Person 2 receives a payoff of 2, 1 and 0 from movies C, B and A, respectively.

Question: Find all Nash equilibria of this game. Does the game have any Nash equilibrium that is not a subgame perfect equilibrium?

The subgame perfect equilibrium I found gives payoffs (2,1): player 1 eliminates C, player 2 eliminates A, and B remains. From what I understand, a Nash equilibrium is an equilibrium where both players make their choices with the other player's choice in mind. I can't see how there could be a different outcome in this sequential game than the subgame perfect one.

I would really appreciate it if someone can help me out a bit.
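A minimal Python sketch (payoffs taken from the question above) that enumerates player 2's contingent strategies and checks which profiles are Nash equilibria, so they can be compared with the backward-induction outcome:

    from itertools import product

    movies = ["A", "B", "C"]
    u1 = {"A": 3, "B": 2, "C": 1}   # person 1's payoff from the surviving movie
    u2 = {"A": 0, "B": 1, "C": 2}   # person 2's payoff from the surviving movie

    def survivor(first_elim, strategy2):
        # strategy2 maps person 1's eliminated movie -> person 2's eliminated movie
        second_elim = strategy2[first_elim]
        return next(m for m in movies if m not in (first_elim, second_elim))

    # person 2's strategies: one elimination choice for each possible move of person 1
    choices = {m: [x for x in movies if x != m] for m in movies}
    strategies2 = [dict(zip(movies, picks))
                   for picks in product(*(choices[m] for m in movies))]

    nash = []
    for s1 in movies:
        for s2 in strategies2:
            out = survivor(s1, s2)
            ok1 = all(u1[out] >= u1[survivor(d, s2)] for d in movies)
            ok2 = all(u2[out] >= u2[next(m for m in movies if m not in (s1, d))]
                      for d in choices[s1])
            if ok1 and ok2:
                nash.append((s1, s2, out))

    print(len(nash), {out for _, _, out in nash})
    # -> 2 equilibria, both ending at movie B: person 1 eliminates C, person 2
    # eliminates A after C. They differ only in person 2's off-path plan after A;
    # the one that eliminates C there (leaving B, payoff 1 instead of 2 for
    # person 2) is a Nash equilibrium but not subgame perfect.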

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/_Victator
πŸ“…︎ Nov 07 2019
🚨︎ report
[Research] RaBVIt-SG, an algorithm for solving Feedback Nash equilibria in Multiplayers Stochastic Differential Games researchgate.net/publicat…
πŸ‘︎ 12
πŸ’¬︎
πŸ‘€︎ u/tatitomate
πŸ“…︎ Jul 19 2019
🚨︎ report
Nash equilibria with 3 players

Hi,

I was wondering if anyone could help me with this question:

https://preview.redd.it/uuku0sd0goe41.png?width=616&format=png&auto=webp&s=085d23a69669eb72e20d6b770c1ca2cec4b6a60c

Thank you in advance!

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/L_Cage
πŸ“…︎ Feb 03 2020
🚨︎ report
The Creation of Nash Equilibria in Bitcoin - Jon Gulson - Medium medium.com/@jongulson/the…
πŸ‘︎ 15
πŸ’¬︎
πŸ‘€︎ u/ArtofBlocks
πŸ“…︎ Jul 20 2019
🚨︎ report
The Stabilising of Nash Equilibria in Bitcoin - Jon Gulson - Medium medium.com/@jongulson/the…
πŸ‘︎ 11
πŸ’¬︎
πŸ‘€︎ u/ArtofBlocks
πŸ“…︎ Jul 22 2019
🚨︎ report
[Game theory] Why are all dominant strategy equilibria also nash equilibria?

A dominant strategy equilibrium is the scenario in which each player has an optimal strategy that is independent of the state of the game and of the other players' expected actions.

A Nash equilibrium is the scenario in which each player knows the strategies of all the other players and no player has an incentive to unilaterally depart from their own strategy.

If my understanding is correct, I feel like all the information is there to extrapolate that all dominant strategy equilibria are nash equilibria, but I can't quite string the words together. Can anyone help? This source has confirmed that the statement is true: https://policonomics.com/lp-game-theory2-dominant-strategy/
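For reference, the standard one-line argument written out in LaTeX (notation mine, not from the linked source): a dominant strategy is a best response to every opponent profile, so in particular it is a best response to the profile of the other players' dominant strategies, which is exactly the Nash condition.

    % s_i^* is player i's dominant strategy, s_{-i}^* the other players' dominant strategies.
    \text{Dominance:}\quad \forall s_{-i},\ \forall s_i:\quad
        u_i(s_i^*, s_{-i}) \;\ge\; u_i(s_i, s_{-i}).
    \text{Specializing to } s_{-i} = s_{-i}^*:\quad
        u_i(s_i^*, s_{-i}^*) \;\ge\; u_i(s_i, s_{-i}^*) \quad \forall s_i,
    \text{which is the Nash-equilibrium condition, holding for every player } i.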

πŸ‘︎ 20
πŸ’¬︎
πŸ‘€︎ u/Xefoxmusic
πŸ“…︎ Jan 10 2019
🚨︎ report
Game of Thrones clip relating to multiple Nash equilibria

>Power resides where men believe it resides.

Why do swordsmen obey the king? Because men believe that swordsmen obey the king. If you think all the other swordsmen will obey the king, it is in your interest to do so as well. If a game has Nash equilibria of (A,X) and (B,Y) and we are in the first, the reason is because the players believe we are in the first.

Imagine we are in an equilibrium where everyone expects that everyone will obey the king's trueborn son after the king dies. But, after the king's death there is doubt concerning whether the person claiming to be the trueborn son really is such. Suddenly, there is no agreement over which Nash equilibrium we are in and war could break out as different swordsmen obey different so-called kings.

πŸ‘︎ 6
πŸ’¬︎
πŸ‘€︎ u/sargon66
πŸ“…︎ Sep 17 2019
🚨︎ report
Multiple pure Nash equilibria

Hello,

I have been looking through books but I could not find a solid approach to apply when facing this problem.

In some games it is possible to have multiple Nash equilibria. In my case it is not easy to pick the "most profitable" one for the players, because it is a competitive game and when one player gains, the other loses. So how can I pick between the multiple Nash equilibria I obtained for my game?

Thanks!

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/HF54
πŸ“…︎ Oct 15 2019
🚨︎ report
[R] [1708.08819] Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields <-- SotA on LSUN and celebA; seems to solve mode collapse issue arxiv.org/abs/1708.08819
πŸ‘︎ 82
πŸ’¬︎
πŸ‘€︎ u/evc123
πŸ“…︎ Aug 30 2017
🚨︎ report
Nash Equilibria for mixed/pure strategies in incomplete Information game

I have the following assignment and am quite confused about it.

The text was deliberately written in a more complicated way than necessary, so that it is not too easy.

As this is basically the last and only graded assignment, any help would be greatly appreciated.

At first glance, the game looks quite similar to the Card Game (Osborne, p. 316).

The situation is as follows:

  • Nature decides if Player 1 is of type H or L
  • This move cannot be observed by Player 2
  • Player 1 can declare she is of type L
  • In this case she has to pay 1$ to Player 2 // resulting in (-1,1)
  • If P1 claims to be type H, P2 can either play either C (concede) or N (not-concede).
  • If P2 plays C, he has to pay 1$ to P1 // resulting in (1,-1)
  • If P2 plays N, he gets $4 if P1 is of type L // (-4,4)
  • If P1 is of type H, P2 has to pay $4 // (4,-4)
  1. Determine Nash Equilibria in pure and mixed strategies
  2. How does the game change if P2 can observe Nature's move? What NE would you expect?
  3. Find a use case/ real world example for the described model.

As this is all the information about the game, we have the following strategies:

P1 can {declare L, claim H} abbreviated with "d" and "c"

P2 has {C, N}

The Game in extensive Form:

                   Nature
                   /\
                H /  \ L
                 /    \
     (-1,1)__d__/  P1  \__d__ (-1,1)
               /        \
            c /          \ c
             /            \
            /      P2      \
           O ============== O 
          /\                /\
      C  /  \ N         C  /  \  N
        /    \            /    \
     (1,-1)  (4,-4)   (1,-1)   (-4,4)

We can see it is a zero-sum game with complete but imperfect information (the players know the strategies and outcomes, but P2 does not observe all the prior moves).

While determining NE through the use of Normal form would be quite simple, afaik I am not allowed to do that here?

     ---------------------------
               |   C   |   N   |
     ---------------------------     
        H  | c | (1,-1)  (4,-4)
     ---------------------------
           | d | (-1,1)  (-1,1)  
        L  ---------------------
           | c | (1,-1)  (-4,4)
     ---------------------------

I omitted (d|H) for player 1, as the strategy is strictly dominated by c, and therefore never played.
With this, it is possible to solve (2), we can dire

... keep reading on reddit ➑

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/MerityKasteen
πŸ“…︎ Feb 03 2019
🚨︎ report
IOTA Nash equilibria simulations explained (Equilibria in the Tangle part 2) blog.iota.org/equilibria-…
πŸ‘︎ 109
πŸ’¬︎
πŸ‘€︎ u/meet_laugh
πŸ“…︎ Dec 26 2017
🚨︎ report
Game Theory: Can asymmetric games have symmetric Nash equilibria?

I know that symmetric games can have asymmetric equilibria, but I can't find any information on asymmetric games. Can they have symmetric Nash equilibria?

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/QuixoticaKJH
πŸ“…︎ Mar 31 2019
🚨︎ report
Are Nash Equilibria usually the best strategies? Are they even usually good strategies?

I work in deep learning research, and was recently reading a paper about generative adversarial networks, or GANs, which frame the problem of learning to sample from an arbitrary distribution as a two player game.

The paper centers its analysis on finding Nash Equilibria and suggests that many of the techniques used to train GANs are, in essence, trying to modify the problem to increase the likelihood that nearby Nash Equilibria exist.

I understand what a Nash Equilibrium is, and I can see how they would be a useful formalism for studying games, but what I struggle to find a clear answer to is why we should want to find them, and if they are even good strategies at all.

The paper points out some reasons why they might be useful in this specific context since if you're training the networks with gradient descent, you could (theoretically) end up with a situation in which the strategies chase each other endlessly around in strategy space and other such weirdness.

But more broadly, I remember reading a book a while back that dealt with the iterated Prisoner's Dilemma, and I think two players following what they called the "ALLD" (defect every time) strategy would be in equilibrium, right? But beyond the fact that neither player has anything to gain from changing strategies, it doesn't seem like such a state would have any other redeeming attributes, since neither player's score would be particularly good.
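A minimal Python sketch of that point in the one-shot version of the game (standard payoffs assumed; the iterated ALLD-vs-ALLD equilibrium has the same flavour):

    # One-shot Prisoner's Dilemma: (Defect, Defect) is the unique Nash equilibrium
    # even though (Cooperate, Cooperate) gives both players a higher score.
    payoff = {  # (row action, col action) -> (row payoff, col payoff)
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    }

    def is_nash(row, col):
        u_row, u_col = payoff[(row, col)]
        row_ok = all(payoff[(r, col)][0] <= u_row for r in "CD")
        col_ok = all(payoff[(row, c)][1] <= u_col for c in "CD")
        return row_ok and col_ok

    print([(r, c) for r in "CD" for c in "CD" if is_nash(r, c)])   # [('D', 'D')]
    # (C, C) pays (3, 3) and Pareto-dominates (D, D)'s (1, 1): being a Nash
    # equilibrium says nothing about the strategies being good for the players.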

I suspect the answer will be something of the form "that's because the Prisoner's Dilemma is a game of class X while GANs are games of class Y, and the Nash equilibria are always good in class Y", but if someone could provide me with some intuition, or perhaps even pointers to information suited to a total neophyte like myself, that would be great!

πŸ‘︎ 7
πŸ’¬︎
πŸ‘€︎ u/PURELY_TO_VOTE
πŸ“…︎ Oct 26 2017
🚨︎ report
Review of "Inadequate Equilibria," Eliezer Yudkowsky's extended essay on efficient markets, expertise, contrarianism, and suboptimal-but-persistent Nash equilibria slatestarcodex.com/2017/1…
πŸ‘︎ 17
πŸ’¬︎
πŸ‘€︎ u/envatted_love
πŸ“…︎ Feb 06 2018
🚨︎ report
Does this game have 2 Nash Equilibria?

Sorry to ask such a basic question, and sorry if this is the wrong subreddit, but I need this because I have an exam tomorrow and the professor only answers emails until 18:00 :/

I think there is a mistake in the solutions in the script.

So here is the game:

            Left     Right
    Up      5;5      -5;-4
    Down    8;-5      1;1

So in my opinion only (1;1) is the Nash equilibrium, as it's the only possible outcome of this game. The book says (5;5) is also a Nash equilibrium. But how would you even arrive at Up/Left? The row player (Up/Down) has a dominant strategy in playing Down. So if the column player (Left/Right) knows this, his only choice is to play Right. He would never play Left, as he would get -5 instead of 1 if he played Left...

So how is it a Nash equilibrium?
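A quick Python check of the matrix exactly as written above (if the book's matrix differs, the conclusion may too):

    # Pure Nash equilibria of the game as stated in the post.
    payoff = {  # (row, col) -> (row payoff, col payoff)
        ("Up", "Left"): (5, 5),    ("Up", "Right"): (-5, -4),
        ("Down", "Left"): (8, -5), ("Down", "Right"): (1, 1),
    }
    rows, cols = ("Up", "Down"), ("Left", "Right")

    def is_nash(r, c):
        ur, uc = payoff[(r, c)]
        return (all(payoff[(r2, c)][0] <= ur for r2 in rows)
                and all(payoff[(r, c2)][1] <= uc for c2 in cols))

    print([(r, c) for r in rows for c in cols if is_nash(r, c)])
    # [('Down', 'Right')] -- (Up, Left) fails because the row player gains by
    # deviating to Down (8 > 5), matching the reasoning in the post.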

πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/ohRyZze
πŸ“…︎ Dec 18 2017
🚨︎ report
Game Theory: Don't count on players getting themselves to Nash equilibria quantamagazine.org/in-gam…
πŸ‘︎ 2
πŸ’¬︎
πŸ“…︎ Mar 28 2018
🚨︎ report
Nash Equilibria and Schelling Points lesswrong.com/lw/dc7/nash…
πŸ‘︎ 8
πŸ’¬︎
πŸ“…︎ Oct 24 2015
🚨︎ report
Nash equilibria of two-person zero-sum games with random additive payoff noise

Nash equilibria can get nasty. Even in this nicest of settings, the set of equilibria generally forms a convex polytope, and it's a pain to deal with that instead of a single point. But if you only have empirical observations of payoffs with (continuous, real) random noise in them, I felt that the nastiness almost surely ought not to occur. Is that actually the case? And if so, does it make sense to be glad that there's randomness? Maybe even introduce randomness to a case where there's none and look at the expected equilibrium?

If only pure equilibria were allowed, it would suffice for the payoff matrix M to have distinct entries (which a.s. occurs) for unique equilibrium (if any), because the game has a single value and there can only be one entry in M where it is found. Here it's easy to see how randomness gets rid of any ambiguities. To deal with mixed equilibria, I tried the following:

The entries of M are a.s. algebraically independent over the rationals, so assume that they are. Suppose that x and y are equilibrium strategies, and the value of the game is v = x^(T)My. Let v = x'^(T)M'y' be the same, except zero elements of x and y removed. The "best response condition" says that M'y' = v and x'^(T)M' = v, i.e. every pure strategy that x and y make use of is a best response against the equilibrium strategy (e.g. in rock-paper-scissors it's optimal to always go "scissors" if your opponent randomizes; if it weren't, you'd improve your probability of winning by never going "scissors").

Suppose wlog that M' is tall or square and consider M'y' = v (otherwise you'd consider M'^(T)x' = v instead). There's the additional constraint e^T y' = 1 for the probability distribution y', where e is a vector of all ones. If M' is not square, then even with free variables y' and v there are too many constraints and no solutions (algebraic independence), so only the square case is possible. If M' is square, then v = 1/(e^(T)M'^(-1)e) = 1/sum(M'^(-1)). If two different square submatrices M' of M cannot have equal v, then there is only one solution because (like with the pure case) the game has a single value v and there can be only one M' that produces it. Given (M', v) pair the solution is x' = vM'^(-T)e, y' = vM'^(-1)e.
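A quick numerical sanity check of those formulas in Python, using a hand-picked completely mixed 2x2 matrix rather than random noise (the matrix is my own example):

    # For a completely mixed 2x2 zero-sum game, compare the value and strategies
    # from the formulas above, v = 1/sum(M^-1), x = v * M^-T e, y = v * M^-1 e,
    # against the indifference conditions.
    import numpy as np

    M = np.array([[2.0, -1.0],
                  [-1.0, 1.0]])        # no saddle point, so the equilibrium is fully mixed
    Minv = np.linalg.inv(M)
    e = np.ones(2)

    v = 1.0 / Minv.sum()
    x = v * Minv.T @ e                 # row player's mixed strategy
    y = v * Minv @ e                   # column player's mixed strategy

    print(v, x, y)                     # 0.2, [0.4 0.6], [0.4 0.6]
    # Indifference check: every pure strategy in the support earns exactly v.
    print(M @ y, x @ M)                # both approximately [0.2 0.2]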

Given two arbitrary distinct square submatrices A and B of M, does algebraic independence of M's elements imply that sum(A^(-1)) β‰  sum(B^(-1)), though? Both sides are rational expressions and you can trivially get a "neat" (owww) polynomial that y

... keep reading on reddit ➑

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/Coffee2theorems
πŸ“…︎ Nov 22 2015
🚨︎ report
Would anyone be willing to have a look at my strategic game and let me know if my Nash equilibria is correct?

Please let me know, I would just need some quick feedback from more experienced game theorists! Thank you.

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/Bostima
πŸ“…︎ Nov 24 2015
🚨︎ report
A python library for computing 2 player Nash equilibria (a single dependency: numpy). github.com/drvinceknight/…
πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/drvinceknight
πŸ“…︎ Nov 06 2016
🚨︎ report
Distance metrics for approximate nash equilibria

I have a program that attempts to find approximate nash equilibria in 2-player games in situations where the exact equilibria are unknown.

I'm attempting to quantify and measure how close a given approximate solution is to the real solution.

I've been using a metric that considers how much the payoff for player 1 in the approximate equilibrium differs from the payoff when player 2 best-responds to player 1's approximate equilibrium strategy, and the same for player 1 best responding to player 2.

In rock paper scissors for example, if in the approximate nash, player 1 played (0.34, 0.32, 0.33) and player 2 played (0.33, 0.34, 0.32) in (R,P,S) space my metric would be as follows.

EV of P1 approx nash vs p2 approx nash = .34 * .33 * 0 + .34 * .34 * -1 + .34 * .32 * 1 + .32 * .33 * 1 + .32 * .34 * 0 + .32 * .32 * -1 + .33 * .33 * -1 + .33 * .34 * 1 + .33 * .32 * 0 = -0.0003

Best response for P2 vs P1 = 100% rock (or paper), which pays P1 -.01.

Best response for P1 vs P2 = 100% paper which pays P1 .01.

So my distance would be based on sqrt((-0.01 - (-0.0003))^2 + (0.01 - (-0.0003))^2).

In my application (poker) this is a nice metric because it is based on the maximum your opponent might gain by deviating from the approximate nash strategy.

But I wanted to ask if there is a different widely used metric that might be more appropriate for testing my solution algorithm for convergence.
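One widely used summary in the game-solving and poker literature is the sum of the two best-response gaps, often called NashConv (half of it is sometimes reported as exploitability); it is zero exactly at a Nash equilibrium. A minimal Python sketch with the RPS numbers from the post:

    import numpy as np

    # Row player's payoff matrix for rock-paper-scissors (zero-sum).
    A = np.array([[ 0, -1,  1],
                  [ 1,  0, -1],
                  [-1,  1,  0]], dtype=float)

    # Strategies kept exactly as in the post (note they sum to 0.99, not 1).
    x = np.array([0.34, 0.32, 0.33])   # P1's approximate strategy
    y = np.array([0.33, 0.34, 0.32])   # P2's approximate strategy

    ev = x @ A @ y                      # about -0.0003, P1's value of the approximate profile
    br_value_p1 = max(A @ y)            # P1's best-response value vs y (about +0.01)
    br_value_p2 = max(-(x @ A))         # P2's best-response value vs x (about +0.01 for P2)

    # Sum of what each player could gain by best responding; 0 at a Nash equilibrium.
    nash_conv = (br_value_p1 - ev) + (br_value_p2 - (-ev))
    print(ev, br_value_p1, br_value_p2, nash_conv)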

πŸ‘︎ 3
πŸ’¬︎
πŸ‘€︎ u/asuth
πŸ“…︎ Mar 19 2014
🚨︎ report
How to find mixed strategy nash equilibria in a three person game?

Hi, here's the question in question: http://i.imgur.com/WeCVpT1.png

I've found the pure nash equilibria but I also need to find all nash equilibria in which two players play a pure strategy and the third plays a mixed strategy. I've no idea how to do this and my textbooks aren't helping. Any help would be appreciated, thanks.
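Since the actual payoffs are in the linked image, here is only a generic Python sketch (the payoff array is a placeholder): the usual recipe is that the mixing player must be indifferent between the actions in her support, and the two pure players must each be best responding to that mixture.

    import numpy as np

    # U[i][a1][a2][a3] = player i's payoff; replace with the game from the image.
    U = np.zeros((3, 2, 2, 2))

    def is_equilibrium(a1, a2, p, tol=1e-9):
        """Players 1 and 2 play pure actions a1, a2; player 3 mixes with prob p on action 0."""
        y3 = np.array([p, 1 - p])                 # player 3's mixture
        u3 = U[2, a1, a2, :]
        # player 3: actions in the support must be payoff-equal, and the mixture
        # must not be beaten by any pure action
        if tol < p < 1 - tol and abs(u3[0] - u3[1]) > tol:
            return False
        if u3 @ y3 < u3.max() - tol:
            return False
        # players 1 and 2: expected payoff of their action vs any pure deviation
        ev1 = U[0, :, a2, :] @ y3                 # indexed by player 1's action
        ev2 = U[1, a1, :, :] @ y3                 # indexed by player 2's action
        return ev1[a1] >= ev1.max() - tol and ev2[a2] >= ev2.max() - tol

    # Example call with the placeholder payoffs (trivially an equilibrium here):
    print(is_equilibrium(a1=0, a2=1, p=0.5))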

πŸ‘︎ 2
πŸ’¬︎
πŸ‘€︎ u/OGofRivia
πŸ“…︎ Nov 07 2015
🚨︎ report
Nash Equilibria and Schelling Points - Lesswrong lesswrong.com/lw/dc7/nash…
πŸ‘︎ 4
πŸ’¬︎
πŸ‘€︎ u/Faceh
πŸ“…︎ Jun 19 2015
🚨︎ report
What Is The Difference, In Simple Terms, Between Nash Equilibria and Subgame Perfect Nash Equilibria?
πŸ‘︎ 9
πŸ’¬︎
πŸ‘€︎ u/_Iro_
πŸ“…︎ Mar 30 2020
🚨︎ report
