Monday, November 30, 2020

Who has an advantage in Risk, attacker or defender?

Let's first assume that we're talking about the frequent case where the attacker is rolling 3 dice, while the defender can only roll 2 dice, but ties go to the defender. More dice is better. Winning ties is better. But which of those wins out? Let's see.

One thing that's tricky here is that we compare the highest attacker die to the highest defender die, and the second highest attacker die to the lowest defender die. So we have to consider the joint probability distribution of the two attacker dice and the two defender dice. We can treat the attacker and defender separately, though, since they are independent.


Let's start by working with the defender, and looking at the CDF of the better roll of 2d6. Recall the CDF $F_X(x) = P(X \leq x)$. Let's say $D_1$ is the higher die of the defender, with $D_2$ being the lower die. Let's say that $X$ is the result of rolling 1d6.

\begin{align} F_{D_1}(d) = P(D_1 \leq d) = P(X \leq d)^2 \end{align}

That is, the probability that the highest die is less than or equal to $d$ is the probability that both dice are less than or equal to $d$.

\begin{align} F_{D_1}(d) = \left ( \frac{d}{6} \right)^2 \end{align}

What about $F_{D_2}(d)$? Similar to above, we can say that

\begin{align} F_{D_2}(d) &= P(D_2 \leq d) \\ &= 1 - P(D_2 > d) \\ &= 1 - P(X > d)^2 \\ &= 1 - \left ( \frac{6-(d+1)+1}{6} \right)^2 \\ &= 1 - \left ( \frac{6-d}{6} \right)^2 \\ &= \frac{6^2}{6^2} - \frac{6^2-12d+d^2}{6^2} \\ &= \frac{6^2-6^2+12d-d^2}{6^2} \\ &= \frac{12d-d^2}{6^2} \\ &= \frac{d}{6}\cdot \frac{12-d}{6} . \end{align}

Those make sense, though we should check that they work out at, say, $d=1$ and $d=6$. But those are only the marginal distributions; $D_1$ and $D_2$ are not independent. Given that the highest die comes out to be $d_1$, what's the distribution of $D_2$? Or, what is $P(D_2 = d_2 | D_1 = d_1)$? Well, it's certainly 0 if $d_2 > d_1$.


This seems to be a hard one.


It may just be easier to calculate the joint probability directly.

\begin{align} P(D_1=d_1, D_2=d_2) = \begin{cases} 0 & d_1 < d_2 \\ P(d_1\text{ and }d_2\text{ on 2d6}) & d_1 = d_2 \\ P(d_1\text{ and }d_2\text{ on 2d6}) & d_1 > d_2 \end{cases} \end{align}

Let's drill down more. In the $d_1=d_2$ case, it's just the probability of getting that value on both dice when rolling 2d6.

\begin{align} P(d_1\text{ on 1d6}) &= \frac{1}{6}\\ P(d_1\text{ on both of 2d6}) &= P(d_1\text{ on 1d6})^2 \\ &=\left ( \frac{1}{6} \right ) ^2 \\ &=\frac{1}{36} \\ P(D_1=d_1, D_2=d_2) & = \frac{1}{36}, \quad d_1=d_2 \end{align}

Okay, now the $d_1 > d_2$ case. If we have two different values, there's two ways we could get them. Let's say we have two dice $A$ and $B$. We roll $d_1$ on $A$ and $d_2$ on $B$, or $d_1$ on $B$ and $d_2$ on $A$. The probability of rolling $d_1$ on a particular die is $1/6$, similarly for $d_2$. Thus, the probability of getting $D_1=d_1$ and $D_2=d_2$ is twice the probability of rolling $d_1$ and then $d_2$.

\begin{align} P(D_1=d_1, D_2=d_2) & = 2\cdot\frac{1}{6}\cdot\frac{1}{6}, \quad d_1>d_2\\ & = 2\cdot\frac{1}{36}, \quad d_1>d_2\\ & = \frac{1}{18}, \quad d_1>d_2 \end{align} \begin{align} P(D_1=d_1, D_2=d_2) = \begin{cases} 0 & d_1 < d_2 \\ \frac{1}{36} & d_1 = d_2 \\ \frac{1}{18} & d_1 > d_2 \end{cases} \end{align}

This makes some sense, and we can see that the probabilities are going to add up to one, though we should probably check more explicitly. You can think of this as rolling two dice in order. Then, if the second one came out larger, we switch the order. Thus, all the probabilities for $d_1 < d_2$ went from $1/36$ to 0, and all the probabilities for $d_1 > d_2$ went from $1/36$ to $2/36=1/18$. I think, ultimately, this last explanation is probably the way to think about it.
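As a quick check, here's a minimal Python sketch (variable names are mine) that enumerates all 36 equally likely ordered rolls and confirms both the piecewise probabilities above and that they sum to 1:

    from fractions import Fraction
    from itertools import product

    # Enumerate all 36 equally likely ordered rolls of 2d6.
    joint = {}
    for a, b in product(range(1, 7), repeat=2):
        d1, d2 = max(a, b), min(a, b)
        joint[(d1, d2)] = joint.get((d1, d2), Fraction(0)) + Fraction(1, 36)

    assert sum(joint.values()) == 1
    assert all(p == Fraction(1, 36) for (d1, d2), p in joint.items() if d1 == d2)
    assert all(p == Fraction(1, 18) for (d1, d2), p in joint.items() if d1 > d2)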


So let's take that kind of thinking and apply it to the attacker. Here we have three dice, but only use two. If we use this algorithmic approach we can conceive of rolling three dice, and then reordering them to be from largest to smallest. With two dice, we reordered, or we didn't. With three dice, there are more cases. Let's assume that we have three distinct results on our rolls. Then there are $3! = 3 \cdot 2 \cdot 1 = 6$ ways to order them. Thus,

\begin{align} P(A_1=a_1, A_2=a_2, A_3=a_3) &= 3! \cdot \left ( \frac{1}{6} \right ) ^3\\ &= 6 \cdot \frac{1}{6^3}\\ &= \frac{1}{6^2}\\ &= \frac{1}{36}. \end{align}


Interestingly, we only care about two of the values. So we want to sum up for all possible $a_3$. However, we've assumed here that $a_1 > a_2 > a_3$. Thus, the number of cases to sum is limited. The number of cases where $a_3 < a_2$ is equal to $a_2 - 1$.

\begin{align} P(A_1=a_1, A_2=a_2) &= \sum_{a_3 < a_2} P(A_1=a_1, A_2=a_2, A_3=a_3) \\ &= (a_2-1) \cdot \frac{1}{36} \\ &= \frac{a_2-1}{36} \end{align}

Oops, we left one case out above. We were assuming $a_3 < a_2$, but that's more restrictive than needed. We only really need to assume $a_3 \leq a_2$ (otherwise, the outcome won't occur at all and the probability is zero). Thus, we can fix our equation as follows.

\begin{align} P(A_1=a_1, A_2=a_2) &= \sum_{a_3 \leq a_2} P(A_1=a_1, A_2=a_2, A_3=a_3) \\ &= a_2 \cdot \frac{1}{36} \\ &= \frac{a_2}{36} \end{align}

Oops, now I think I've done some double counting. Because if $a_1 > a_2 = a_3$ then there aren't $3!=6$ orderings. There are only $(3!)/(2!) = 6/2 = 3$ ways. Thus,

\begin{align} P(A_1=a_1, A_2=a_2, A_3=a_2) &= 3 \cdot \frac{1}{6^3}\\ &= \frac{1/2}{36}. \end{align}

So

\begin{align} P(A_1=a_1, A_2=a_2) &= P(A_1=a_1, A_2=a_2, A_3 < a_2) + P(A_1=a_1, A_2=a_2, A_3=a_2) \\ &= \sum_{a_3 < a_2} P(A_1=a_1, A_2=a_2, A_3=a_3) + P(A_1=a_1, A_2=a_2, A_3=a_2) \\ &= (a_2-1) \cdot \frac{1}{36} + \frac{1/2}{36}\\ &= \frac{a_2-1/2}{36} \end{align}

We now have more cases to consider. As before, we need to consider the all equal case $a_1=a_2=a_3$, which, similar to before, occurs with probability $1/6^3 = 1/216$. We also have to consider all the cases where two of the dice are equal, but not to the third. We considered the $a_1 > a_2=a_3$ case above. But we also have $a_1=a_2 > a_3$. Again, we only have 3 orderings, so

\begin{align} P(A_1=a_1, A_2=a_1, A_3 < a_1) &= 3 \cdot \left ( \frac{1}{6} \right ) ^3\\ &= \frac{1/2}{36} . \end{align}

Note: I'm writing this weird fraction to keep a common denominator of 36. Perhaps we should really be working with a denominator of 72. Again we combine to get the probability independent of $a_3$.

\begin{align} P(A_1=a_1, A_2=a_1) &= P(A_1=a_1, A_2=a_1, A_3 < a_1) + P(A_1=a_1, A_2=a_1, A_3=a_1) \\ &= \sum_{a_3 < a_1} P(A_1=a_1, A_2=a_1, A_3=a_3) + P(A_1=a_1, A_2=a_1, A_3=a_1) \\ &= (a_1-1) \cdot \frac{1/2}{36} + \frac{1}{216}\\ &= (a_1-1) \cdot \frac{3}{6^3} + \frac{1}{6^3}\\ &= \frac{3\cdot (a_1-1) + 1}{6^3} \\ &= \frac{3 a_1-2}{6^3} \end{align}

Do these all add up to 1?

\begin{align} \sum_{a_1} P(A_1=a_1, A_2=a_1) + \sum_{a_1 > a_2} P(A_1 = a_1, A_2 = a_2) &= 1 \\ \sum_{a_1=1}^6 \frac{3 a_1 - 2}{6^3} + \sum_{a_1=2}^6 \sum_{a_2=1}^{a_1-1} \frac{a_2-1/2}{36} &= 1 \end{align}

Recall that,

\begin{align} \sum_{k=1}^n k = \frac{n(n+1)}{2}. \end{align}

Thus, continuing on, we have as follows.

\begin{align} \frac{3 \cdot \frac{6(6+1)}{2} - 2\cdot 6}{6^3} + \sum_{a_1=2}^6 \left (\frac{(a_1-1)a_1}{2}-\frac{1}{2}\cdot (a_1-1)\right) \cdot \frac{1}{36} &= 1 \\ \frac{51}{6^3} + \sum_{a_1=2}^6 \frac{a_1^2-2a_1+1}{2} \cdot \frac{1}{36} &= 1 \end{align}

Now we have the sum of a squared term, which is given by,

\begin{align} \sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}. \end{align} \begin{align} \frac{51}{6^3} + \frac{\left(\frac{6(6+1)(12+1)}{6}-1\right)-2 \cdot \left(\frac{6(6+1)}{2}-1\right) + 5}{2} \cdot \frac{1}{36} &= 1 \\ \frac{51}{6^3} + \frac{\left((6+1)(12+1)-1\right)-2 \cdot \left(3(6+1)-1\right)+5}{2} \cdot \frac{1}{36} &= 1 \\ \frac{51}{6^3} + \frac{90-2 \cdot 20+5}{2} \cdot \frac{1}{36} &= 1 \\ \frac{51}{6^3} + \frac{90-40+5}{2} \cdot \frac{1}{36} &= 1 \\ \frac{51}{6^3} + \frac{55}{2 \cdot 36} &= 1 \\ \frac{3 \cdot 17}{6^3} + \frac{55}{2 \cdot 36} &=1 \\ \frac{17}{2 \cdot 6^2} + \frac{55}{2 \cdot 36} &=1 \\ \frac{17+55}{2 \cdot 6^2} &=1 \\ \frac{72}{72} &=1 \end{align}

Wow, the first time through, this check failed, so we'd made a big mistake somewhere! Happily, it's fixed now and everything adds up.


Note, that the first time I did this, I made a few mistakes. First, I copied the wrong expression in for one of the terms, then I made some algebra and arithmetic mistakes. That's why it's helpful to do these kinds of checks. It is also possible that I could have made such a mistake deriving the expressions themselves instead of in the check combining them.


Let's summarize our result for the attacker.

\begin{align} P(A_1=a_1, A_2=a_2) = \begin{cases} 0 & a_1 < a_2 \\ \frac{3a_1 - 2}{6^3} & a_1 = a_2 \\ \frac{a_2 - 1/2}{36} & a_1 > a_2 \end{cases} \end{align}

Recall the result for the defender.

\begin{align} P(D_1=d_1, D_2=d_2) = \begin{cases} 0 & d_1 < d_2 \\ \frac{1}{36} & d_1 = d_2 \\ \frac{1}{18} & d_1 > d_2 \end{cases} \end{align}



Also, I could use a computer to help me evaluate the expression to confirm that my mistake was in the evaluation and not in deriving the probabilities. A computer simulation could also be helpful, but less conclusive. I draw the distinction as this. You can use a computer to evaluate or to simulate. When you have a computer evaluate, you're using it as a glorified calculator or a plotting tool. Maybe you use it to solve some transcendental equations. If you use a computer to simulate, then you aren't doing much thinking, you're just setting up the problem, running one or more experiments, and hoping the result tells you something both correct and meaningful. Here, though, we can also brute force all the combinations. This is still using the computer like a calculator, but not bothering to simplify before chugging away.
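For example, here's a minimal Python sketch (names are mine) that brute-forces the attacker's joint distribution and checks it against the summary above; the defender check works the same way:

    from fractions import Fraction
    from itertools import product

    # Tally the joint distribution of the two best dice out of 3d6.
    joint = {}
    for roll in product(range(1, 7), repeat=3):
        a1, a2, _ = sorted(roll, reverse=True)
        joint[(a1, a2)] = joint.get((a1, a2), Fraction(0)) + Fraction(1, 6**3)

    # Compare against the piecewise formulas; note (a2 - 1/2)/36 = (2*a2 - 1)/72.
    for (a1, a2), p in joint.items():
        expected = Fraction(3*a1 - 2, 6**3) if a1 == a2 else Fraction(2*a2 - 1, 72)
        assert p == expected
    assert sum(joint.values()) == 1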


With the probability distributions for both the attacker's and the defender's dice rolls, we can do the comparison and see what the outcome of a round of combat is. In this case there are three outcomes,

1) attacker loses two units,

2) defender loses two units, or

3) attacker and defender each lose one unit.

These cases come from the results of our die rolls.

\begin{align} P(\text{attacker loses two units}) &= P(A_1 \leq D_1, A_2 \leq D_2) \\ P(\text{defender loses two units}) &= P(A_1 > D_1, A_2 > D_2) \\ P(\text{both lose one unit}) &= P(A_1 \leq D_1, A_2 > D_2) + P(A_1 > D_1, A_2 \leq D_2) \end{align}

Note the use of $\leq$, corresponding to the fact that ties go to the defender. Also note that there are two ways that both can lose one unit: attacker can win with the top die and lose with the lower die, and vice versa.  We can compute these by looking at all the cases.

\begin{align} P(A_1 \leq D_1, A_2 \leq D_2) &= \sum_{d_1=1}^6 \sum_{d_2=1}^{d_1} P(A_1 \leq d_1, A_2 \leq d_2) \cdot P(D_1 = d_1, D_2 = d_2) \end{align} \begin{align} P(A_1 \leq d_1, A_2 \leq d_2) = \sum_{a_1=1}^{d_1} \sum_{a_2=1}^{d_2} P(A_1 = a_1, A_2 = a_2) \end{align} \begin{align} P(A_1 \leq D_1, A_2 \leq D_2) &= \sum_{d_1=1}^6 \sum_{d_2=1}^{d_1} \sum_{a_1=1}^{d_1} \sum_{a_2=1}^{d_2} P(A_1 = a_1, A_2 = a_2) \cdot P(D_1 = d_1, D_2 = d_2) \end{align}


Because our expressions for $P(A_1=a_1, A_2=a_2)$ and $P(D_1 = d_1, D_2 = d_2)$ depend on whether $a_1=a_2$ and whether $d_1=d_2$, respectively, we can break up the sum to look at the four cases.

\begin{align} 1: & \quad a_1=a_2 \quad d_1=d_2 \\ 2: & \quad a_1=a_2 \quad d_1>d_2 \\ 3: & \quad a_1>a_2 \quad d_1=d_2 \\ 4: & \quad a_1>a_2 \quad d_1>d_2 \end{align}

We can label the summations related to these cases as $s_1$--$s_4$.

\begin{align} P(A_1 \leq D_1, A_2 \leq D_2) &= s_1 + s_2 + s_3 + s_4 \end{align}

In many of the cases, collapsing some terms to be equal simplifies the summation.

\begin{align} s_1 &= \sum_{d_1=1}^6 \sum_{d_2=d_1}^{d_1} \sum_{a_1=1}^{d_1} \sum_{a_2=a_1}^{a_1} P(A_1 = a_1, A_2 = a_2) \cdot P(D_1 = d_1, D_2 = d_2) \\ &=\sum_{d_1=1}^6 \sum_{a_1=1}^{d_1} P(A_1 = a_1, A_2 = a_1) \cdot P(D_1 = d_1, D_2 = d_1) \\ &=\sum_{d_1=1}^6 \sum_{a_1=1}^{d_1} \left (\frac{3a_1-2}{6^3} \right) \cdot \frac{1}{36} \\ &=\sum_{d_1=1}^6\frac{3 \frac{d_1 \cdot (d_1+1)}{2}-2d_1}{6^5}\\ &=\sum_{d_1=1}^6\frac{\frac{3}{2}d_1^2 -\frac{1}{2}d_1}{6^5}\\ &= \frac{1}{6^5} \cdot \left ( \frac{3}{2}\cdot \frac{6 \cdot (6+1) \cdot (2\cdot 6 +1 )}{6} - \frac{1}{2} \cdot \frac{6 \cdot (6+1)}{2} \right ) \\ &=\frac{6\cdot 7 \cdot 13 - 6 \cdot 7 }{2^2 \cdot 6^5}\\ &=\frac{6\cdot 7 \cdot 2 \cdot 6 }{2^2 \cdot 6^5}\\ &= \frac{7}{2 \cdot 6^3}\\ &\approx 0.0162 \end{align}

Now on to the next term.

\begin{align} s_2 &= \sum_{d_1=1}^6 \sum_{d_2=1}^{d_1-1} \sum_{a_1=1}^{d_1} \sum_{a_2=a_1}^{a_1} P(A_1 = a_1, A_2 = a_2) \cdot P(D_1 = d_1, D_2 = d_2) \\ &=\sum_{d_1=1}^6 \sum_{d_2=1}^{d_1-1} \sum_{a_1=1}^{d_1} P(A_1 = a_1, A_2 = a_1) \cdot P(D_1 = d_1, D_2 = d_2) \\ &=\sum_{d_1=1}^6 \sum_{d_2=1}^{d_1-1} \sum_{a_1=1}^{d_1} \left (\frac{3a_1-2}{6^3} \right) \cdot \frac{1}{18} \\ &=\sum_{d_1=1}^6 \sum_{d_2=1}^{d_1-1}\frac{3 \frac{d_1 \cdot (d_1+1)}{2}-2d_1}{3 \cdot 6^4}\\ &=\sum_{d_1=1}^6 (d_1-1) \cdot \frac{3 d_1^2 -d_1}{6^5}\\ &=\sum_{d_1=1}^6 \frac{3 d_1^3 -4d_1^2+d_1}{6^5}\\ \end{align}

To solve this part, we need to know what the sum of a sequence of cubed terms is.

\begin{align} \sum_{k=1}^n k^3 = \frac{n^2(n+1)^2}{2^2}. \end{align}

I will avoid proving the above, as I did with the previous summations. This is not one I had known existed, but it was easy to find. A proof by induction should work here easily as well.

\begin{align} s_2 &= \frac{3 \cdot\frac{6^2 \cdot 7^2}{2^2} - 4\cdot\frac{6 \cdot 7 \cdot 13}{6} + \frac{6\cdot7}{2}}{6^5}\\ &= \frac{3 \cdot 3^2\cdot7^2 - 4\cdot7\cdot13+3\cdot7}{6^5}\\ &=\frac{2^2\cdot5\cdot7^2}{6^5}\\ &\approx 0.126 \end{align}


Alright, I'm trying to make a point, but this is getting too tiresome. We haven't even completed one of the scenarios! Let's just write a program to run all the cases (there aren't that many, only $6^5=7776$) and then be done with it.


The construction is this: we'll run 5 loops, one for each of the 5 dice, each taking the values from 1 to 6 to cover all possible cases. The first three loops correspond to the attacker's dice, the last two to the defender's dice. Then we sort the values and compare to see which of the three scenarios we end up in. We'll initialize three counters to zero before the loops and use them to keep track of how many times each case occurs. After the loops complete, we can check that they sum to the required 7776 cases as a basic check that we didn't skip any outcomes. From the counts, we can divide by 7776 to get the probabilities of each case.
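Here's a minimal Python sketch of that construction (using itertools.product in place of the five explicit loops; names are mine):

    from itertools import product

    attacker_loses_two = defender_loses_two = both_lose_one = 0
    for a1, a2, a3, d1, d2 in product(range(1, 7), repeat=5):
        attack = sorted((a1, a2, a3), reverse=True)  # attacker's dice, high to low
        defend = sorted((d1, d2), reverse=True)      # defender's dice, high to low
        # Compare the top two attacker dice to the defender's; ties go to the defender.
        wins = sum(a > d for a, d in zip(attack, defend))
        if wins == 2:
            defender_loses_two += 1
        elif wins == 0:
            attacker_loses_two += 1
        else:
            both_lose_one += 1

    total = attacker_loses_two + defender_loses_two + both_lose_one
    assert total == 6**5  # all 7776 cases accounted for
    print(attacker_loses_two, defender_loses_two, both_lose_one)  # 2275 2890 2611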


\begin{align} P(\text{attacker loses two units}) &= \frac{2275}{7776} \approx 0.293 \\ P(\text{defender loses two units}) &= \frac{2890}{7776} \approx 0.372 \\ P(\text{both lose one unit}) &= \frac{2611}{7776} \approx 0.336 \end{align}


As we can see, all three outcomes are quite likely, but the probability that the defender loses two units is the highest, which gives the edge to the attacker.


Note that while I have used a computer to obtain this result (and indeed to compute several values and generate many plots in the past), we have not performed a simulation. That is, we are not limited here by the randomness of the computer and the number of trials run. On the other hand, we've used the same technique as when we first looked at the distribution of 2d6: counting all the cases. This can be an exhaustive, tedious method, but in cases where we can make the computer do it for us, it is often the most efficient path to a solution. It doesn't necessarily give us a lot of insight here, but neither did all the analysis we did above.

Monday, November 23, 2020

The Haunt Roll in Betrayal Games

I'll get to the Betrayal games in a minute, but let me share some background for how I'm approaching it.

There was a recent question on BoardGameGeek that asked about a mechanic for finding a spy amidst a group of senators. The proposal was to use a bag with 11 tokens, 2 of which are spy tokens, with the remaining 9 being senators. Both spy tokens need to be drawn to find the spy.

I would like the average resolution to be about 6-7 attempts, hopefully landing in the neighborhood of 4-9 attempts, beyond some extreme flukes.

First, let me say that I love the framing here in terms of the desired average and extremes. We can evaluate the mechanic much better with design criteria like those described above.


As to the problem, we can simplify it by turning it around, which avoids having to consider the combinatorics of order. This approach is similar to how I've solved the game length in High Society. Let's analyze the problem as if the order of the chits is known or determined ahead of time (even though it isn't). The question then becomes, what is the probability that the second spy chit is in round $r$ counting from the front? That's the same as looking for the first spy chit from the back of the line. For the last round, $r=11$, that's easy,

\begin{align} P(r=11) = \frac{2}{11}, \end{align}

as there are 2 spies out of 11 total tokens. Let's count this as round $n=1$ from the back.

\begin{align} P(n=1) = \frac{2}{11} \end{align}

For earlier rounds, we must have drawn all senators "first" (again, we're starting at the back). The probability of drawing a senator in round $n$ from the back is based on the number of tokens that haven't been drawn yet. If we've yet to draw a spy, in round $n$ there are $11-(n-1)=12-n$ tokens remaining. Two of them are spies, so $12-n-2 = 10-n$ are senators. Thus the probability of drawing a senator in round $n$ (from the back) given no previous spies is as follows.

\begin{align} P(\text{senator in round } n | \text{ no spies in rounds} < n) & = \frac{11 - (n-1) - 2}{11 - (n-1)} \\ &= \frac{10-n}{12-n} \end{align}

So first $\frac{9}{11}$, then $\frac{8}{10}$, and so on. Similarly, the probability of drawing the "first" spy in round $n$ from the back is,

\begin{align} P(\text{spy in round } n | \text{ no spies in rounds} < n) & = \frac{2}{11 - (n-1)} \\ &= \frac{2}{12-n}. \end{align}


We can combine the above in order to get the probability of finding a spy in round $n$. To be able to write the probabilities in a more compact way, such that we can fit our equations on one line, let's make use of the following events.

\begin{align} S_n &= \text{senator in round } n \\ Y_n &= \text{spy in round } n \end{align}

We can rewrite the above probabilities using these events.

\begin{align} P\left(S_n \:\middle\vert\: \bigcap_{i<n} S_i\right) &= P(\text{senator in round } n \mid \text{no spies in rounds} < n)\\ P\left(Y_n \:\middle\vert\: \bigcap_{i<n} S_i\right) &= P(\text{spy in round } n \mid \text{no spies in rounds} < n) \end{align}

Here $\bigcap_{i<n} S_i$ refers to the intersection of all events $S_i$ when $i<n$. Basically, that means $S_1$, $S_2$, $\ldots$, $S_{n-2}$, and $S_{n-1}$ all occur. Note that because senators and spies are mutually exclusive and the only options,

\begin{align} P(Y_n) = 1- P(S_n). \end{align}


Using this more compact notation, we can combine our previous equations to get the total probability of drawing the first spy in round $n$ from the back. (Strictly, the left-hand side below is $P\left(Y_n \cap \bigcap_{i<n} S_i\right)$, the probability that round $n$ yields the first spy; we'll write it as $P(Y_n)$ for brevity.)

\begin{align} P(Y_n) &= P\left(Y_n \:\middle\vert\: \bigcap_{i<n} S_i\right) \cdot P\left(\bigcap_{i < n} S_i\right) \end{align}

Recall that $P(S_i)$ depends on what was previously drawn, meaning that the $S_i$ are not independent. Because of this, we cannot simply break up $P\left(\bigcap_{i < n} S_i\right)$ as a product of the probabilities.

\begin{align} P\left(\bigcap_{i < n} S_i\right) &\neq \prod_{i=1}^{n-1} P(S_i) \end{align}

Instead, we use the conditional probabilities that we found earlier.

\begin{align} P(Y_n) &=P\left(Y_n \:\middle\vert\: \bigcap_{i<n} S_i\right) \cdot\prod_{i=1}^{n-1} P\left(S_i \:\middle\vert\: \bigcap_{k<i} S_k\right) \\ &= \frac{2}{12-n}\cdot\frac{9}{11} \cdot \frac{8}{10} \cdot \ldots \cdot \frac{10-(n-1)}{12-(n-1)} \\ &= \frac{2 \cdot 9 \cdot 8 \cdot \ldots \cdot (10-(n-1))}{11 \cdot 10 \cdot \ldots \cdot (12-n)} \\ &= 2 \cdot \frac{9!}{11!} \cdot \frac{(12-(n+1))!}{(10-n)!} \end{align}

At this point, we can turn things back around to count from the front. We know that $n=1$ is equivalent to $r=11$ and $n=11$ is equivalent to $r=1$. From this we can write the equations defining the relationship between the number of rounds from the front, $r$, and the number of rounds from the back, $n$.

\begin{align} n = 12-r \end{align}

Thus the probability of finding the spy chit in round $r$ is as follows.

\begin{align} P(\text{spy revealed in round } r) &= P(Y_{12-r}) \\ &= 2 \cdot \frac{9!}{11!} \cdot \frac{(12-(12-r+1))!}{(10-(12-r))!}\\ &= 2 \cdot \frac{9!}{11!} \cdot \frac{(r-1)!}{(r-2)!} \\ &= 2 \cdot \frac{9!}{11!} \cdot (r-1) \\ &= \frac{2}{11\cdot 10} \cdot (r-1) \end{align}

The result of this equation is shown in Figure 1, which shows a linearly increasing probability as each new token is drawn, starting with a probability of 0 in round 1. This makes sense, as the second spy token must be drawn to actually find the spy. A numerical check confirms that the probabilities across all values of $r$ sum to 1. As clearly shown in the plot, this favors the last rounds, which does not match the desired properties.
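That numerical check is easy to run by enumerating where the two spy tokens can land in the stack; here's a minimal Python sketch (names are mine):

    from fractions import Fraction
    from itertools import combinations

    # All placements of the 2 spy tokens among 11 positions are equally likely.
    pairs = list(combinations(range(1, 12), 2))
    found_in_round = [Fraction(0)] * 12
    for _, second in pairs:
        found_in_round[second] += Fraction(1, len(pairs))  # found when the later token comes up

    assert sum(found_in_round) == 1
    for r in range(1, 12):
        assert found_in_round[r] == Fraction(2 * (r - 1), 11 * 10)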


Figure 1: Probability of finding the spy


Now, let's turn to the haunt roll mechanics found in the various Betrayal games, including Betrayal at House on the Hill, Betrayal at Baldur's Gate, Betrayal Legacy, and Betrayal at Mystery Mansion. We'll actually set the legacy version aside, as the legacy aspects of the game can influence the roll.


In the original game (Betrayal at House on the Hill), each time an omen is drawn the active player rolls 6 dice, and if the result is less than the number of omen cards then the haunt starts (the example in the rulebook actually says equal to or less than the number of omen cards, which I'll assume is an error). The dice in all Betrayal games are the same: custom six-sided dice with two sides each of 0, 1, and 2. This means the probability mass function (pmf) of each die is $1/3$ for the values 0, 1, and 2, and $0$ elsewhere. We'll denote the result of one of the Betrayal dice as a random variable $D$ with pmf $f_D(d)$.

\begin{align} f_D(d) = \begin{cases} \frac{1}{3} \quad & d \in \{0, 1, 2\} \\ 0 \quad &\text{else} \end{cases} \end{align}

We can get the pmf of the entire haunt roll of six dice, $f_H(h)$, by convolving the pmf of each die with itself enough times to account for the six dice that a player rolls.

\begin{align} f_H(h) = f_D(h) * f_D(h) * f_D(h) * f_D(h) * f_D(h) * f_D(h) \end{align}

Because convolution is associative, we don't have to worry about the order of the convolutions. The resulting pmf is shown in Figure 2. We get what we expect after convolving a reasonable number (even if it is only six here) of independent identically distributed (i.i.d.) random variables: a roughly Gaussian, or normal-looking, distribution. (This is due to the Central Limit Theorem. There are probably some comments I should be making about whether them being identically distributed matters here, but I'll omit them for ease and brevity.) We expect a minimum value of 0 and a maximum value of 12, each corresponding to the extreme values on all dice. These are possible, but with low probability. The distribution is also symmetric.
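The repeated convolution is easy to carry out numerically; here's a minimal Python sketch (assuming numpy is available; variable names are mine):

    import numpy as np

    die = np.array([1, 1, 1]) / 3   # pmf of one die over the faces 0, 1, 2
    pmf = np.array([1.0])           # pmf of a sum of zero dice
    for _ in range(6):              # fold in one die at a time
        pmf = np.convolve(pmf, die)
    # pmf[h] is now P(H = h) for h = 0..12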


Figure 2: Probability mass function of haunt roll


By computing the cumulative distribution function (CDF) of the haunt roll result, $F_H(h)$, we can easily compute the probability that the haunt starts in any given round. Finding the CDF is a straightforward computation based on the pmf.

\begin{align} F_H(h) &= P(H \leq h) \\ &= \sum_{i=0}^h f_H(i) \end{align}
Figure 3: Cumulative distribution function of haunt roll


Note that here, the minimum possible value on a haunt roll is 0. The resulting CDF is plotted in Figure 3. Because the haunt starts whenever the haunt roll is less than the number of omens drawn, the probability of a haunt being triggered when making a haunt roll is equal to the CDF evaluated at one less than the number of omens, $o$.

\begin{align} P(\text{haunt starts} \mid o \text{ omens}) = F_H(o-1) \end{align}

The next step is to assemble the probability distribution of the number of omens drawn before the haunt starts. We use the distribution of the haunt roll to do this, noting that we only continue drawing omens if the haunt has not yet started. Therefore, the probability that the number of omens when the haunt starts, $O$, is 1 is equal to the probability that we roll less than 1 on the first haunt roll.

\begin{align} f_O(1) &=P(O=1)\\ &= F_H(1-1)\\ & = F_H(0)\\ &=f_H(0)\\ &=f_D(0)^6\\ &= \left( \frac{1}{3} \right)^6 \\ &\approx 0.00137 \end{align}

In this case, as shown above, the probability shakes out to the probability of rolling 0 on all six haunt dice, which occurs with a probability of about 0.00137.


When looking at the probability that $O>1$, we must consider the result of all previous haunt rolls. That is, we only make a haunt roll with $o$ omens, where $o>1$, if all previous haunt rolls failed to start the haunt.

\begin{align} P(\text{make haunt roll with $o$ omens}) &= 1 - P(O < o)\\ &= 1- F_O(o-1)\\ &= 1 - \sum_{i=0}^{o-1}f_O(i) \end{align}

This gives us an equation to compute the distribution of $O$ term by term, using each term to compute the next.

\begin{align} f_O(o) &= \left ( 1 - \sum_{i=0}^{o-1}f_O(i) \right) \cdot F_H(o-1) \end{align}
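Iterating this recursion numerically is straightforward. A minimal self-contained Python sketch (again assuming numpy, and repeating the convolution from before; names are mine):

    import numpy as np

    die = np.array([1, 1, 1]) / 3
    pmf = np.array([1.0])
    for _ in range(6):
        pmf = np.convolve(pmf, die)
    F_H = np.cumsum(pmf)                # F_H[h] = P(H <= h) for h = 0..12

    f_O = {}
    not_started = 1.0                   # probability no haunt has started yet
    for o in range(1, 14):              # by 13 omens a roll of 12 is always less
        p_start = F_H[o - 1]            # F_H(o - 1)
        f_O[o] = not_started * p_start  # haunt starts exactly on this roll
        not_started *= 1 - p_start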


Now that we've gone all the way through this exercise, we can turn to the other versions of the game. Betrayal at Baldur's Gate and Betrayal at Mystery Mansion both function similarly (though the latter uses the term clue instead of omen, it works the same way). Instead of rolling a constant number of dice and adjusting the target number, the number of dice rolled is equal to the number of omen cards drawn and a constant target number is used. In Baldur's Gate, if the haunt roll is 6 or higher the haunt starts, whereas Mystery Mansion uses a threshold of 5 or higher.


To compute the distribution of each haunt roll, we can use a similar method as above, but varying the number of die pmfs that are convolved. The resulting pmfs and CDFs are plotted in Figures 4 and 5.


Figure 4: Probability mass function of haunt roll by omen

Figure 5: Cumulative distribution function of haunt roll by omen


We can reuse the same equation to find the pmf and CDF of the number of omens drawn before the haunt starts for the three games, which are plotted in Figures 6 and 7. We can note a couple of differences. Most evident is that each iteration on the game tends to make the haunt start sooner, as shown most clearly in the CDFs. Second, the possibility of a very early haunt in the original game, with only 1 or 2 omens drawn, has been completely eliminated in Baldur's Gate and Mystery Mansion. For the later games, rolling one or two dice is insufficient to get a sum of 5 or 6 to trigger the haunt. On the other hand, these games have a non-zero probability of an arbitrarily long game according to the probabilities shown. Mystery Mansion has a special rule that the haunt always starts after the 9th omen (clue) card is drawn.



Figure 6: Probability of triggering the haunt when drawing current omen


Figure 7: Cumulative probability of triggering the haunt


We could modify the dice so that they naturally provide both an upper and lower bound, as originally desired for the spy search mechanism, by using a similar mechanism to the later Betrayal games but having a non-zero minimum value on the dice. Doing this sets a maximum number of turns needed for the search, equal to the ratio between the target sum and the minimum value on the dice.


Recall that we want between 4 and 9 searches, averaging 6-7. Let's use a minimum value of 1 on the dice and decide on the other values to include to achieve the properties above. With a minimum of 1 and a max of 9 rounds, the target sum must be 9. To prevent 3 searches from being successful, the maximum value on the dice must be 2; that is, the dice show only 1s and 2s. This gives the distribution functions shown in Figures 8 and 9. While these center more on 6 searches, they demonstrate the feasibility of this type of mechanism. The particular type of dice and target numbers can be adjusted to get the desired probability properties.


Figure 8: Probability of finding the spy in the current round (pmf)

Figure 9: Cumulative probability of finding the spy by round (CDF)

Monday, October 19, 2020

Simplifying Combat

Many "dudes on a map'' games, involve combat mini-games that mostly consist of taking a break from the normal game, and taking turns rolling lots of dice, losing some units, and repeating until only one player has units left. Sometimes you may be able to target specific units (e.g. Buck Rogers), play a card that influences the dice rolling that round (e.g. Star Wars: Rebellion), or decide whether to retreat or give up (e.g. War of the Ring).

This is similar to combat in many RPGs, or skill challenges in D&D 4th edition.


However, some games (e.g. Bottom of the 9th) don't offer any choices in between rounds of rolling dice. My contention is that such a game is likely bad design, simply based on the math. That is, you could replace any decision-free process with a single-step process with equivalent probabilities. Even for games with choices in between rounds of combat, the value of having such a choice should outweigh the cost of game time and complexity.


Let's look at a theoretical game. Suppose in this game when you move a number of units into an enemy territory you engage in rounds of combat. Each round you roll one die for each attacker and destroy one defender with probability $p$. Similarly, your opponent rolls one die for each defender and destroys one attacker with the same probability $p$. Each round is simultaneous; that is, remove destroyed units at the end of each round. If only one player has units at the end of a round, then stop. Suppose you start with $A$ attackers, and your opponent starts with $D$ defenders.


The combat will end with the attacker remaining with 1 to $A$ units, the defender remaining with 1 to $D$ units, or both armies destroyed. If we could compute the probability of each of these results, then we could simply use the result instead of all the dice rolling.


What's tricky is that there's a probability of $(1-p)^{A+D}$ that no units are destroyed in a round. This comes from all $A+D$ units missing, each of which occurs with probability $1-p$, since $P(\text{hit}) + P(\text{miss}) = 1$. That means that the combat could go on for a long time. In fact, it's not guaranteed to ever end. So, while it's relatively easy to show the probability distribution of the state of combat after $r$ rounds, the end of the combat is not so clear.


Well, let's start by looking at what happens in round $r$ and see what the distribution is first. Let's denote $a_r$ and $d_r$ as the number of attackers and defenders, respectively, remaining at the end of round $r$. Let's also just say that $a_0 = A$ and $d_0 = D$.


Actually, let's look at the simple case of one attacking unit and one defending unit. After the first round of combat, there are four possible outcomes: both units are destroyed, the defender is destroyed, the attacker is destroyed, or neither is destroyed. In the first three outcomes the combat ends. In the last, another round commences with the same possible outcomes and the same probabilities, since the combat starts in the same state. The probabilities are,

\begin{align} P(\text{both destroyed in first round}) &= p^2 \\ P(\text{attacker destroyed in first round}) &= p\cdot (1-p) \\ P(\text{defender destroyed in first round}) &= p\cdot (1-p) \\ P(\text{neither destroyed in first round}) &= (1-p)^2 . \end{align}

Then, the probability of what happens in the second round is the same as the first, multiplied by the probability that you get to the second round.

\begin{align} P(\text{both destroyed in second round}) &= p^2\cdot (1-p)^2 \\ P(\text{attacker destroyed in second round}) &= p\cdot (1-p)^3 \\ P(\text{defender destroyed in second round}) &= p\cdot (1-p)^3 \\ P(\text{neither destroyed in second round}) &= (1-p)^4 . \end{align}

The probability that the combat ends with both destroyed is equal to the probability that it happens in the first round, plus the probability that it happens in the second round, and so on.

\begin{align} &P(\text{both destroyed at end of combat}) = \sum_{r=1}^\infty P(\text{both destroyed in round } r)\\ &= \sum_{r=1}^\infty P(\text{both destroyed in a round}) \cdot P(\text{get to round } r)\\ &= \sum_{r=1}^\infty p^2 \cdot (1-p)^{2(r-1)} \\ &= p^2 \sum_{r=1}^\infty \left((1-p)^2\right)^{r-1} \\ &= p^2 \sum_{r=0}^\infty \left((1-p)^2\right)^r \end{align}

Recall that, for $|a| < 1$ (honestly, right now I'm just relying on Wikipedia, but I'm pretty sure I have derived or will derive this),

\begin{align} \sum_{i=0}^\infty a^i &= \frac{1}{1-a} \end{align}

Thus,

\begin{align} P(\text{both destroyed at end of combat}) &= p^2 \sum_{r=0}^\infty \left((1-p)^2\right)^r \\ &= p^2 \cdot \frac{1}{1 - (1-p)^2} \end{align}

Similarly,

\begin{align} P(\text{attacker destroyed at end of combat}) &= p \cdot (1-p) \cdot \frac{1}{1 - (1-p)^2} \\ P(\text{defender destroyed at end of combat}) &= p \cdot (1-p) \cdot \frac{1}{1 - (1-p)^2} \end{align}


As a check, do these probabilities add up to one? Let's find out.

\begin{align} &P(\text{both destroyed at end of combat}) \\ &+ P(\text{attacker destroyed at end of combat}) \\ &+ P(\text{defender destroyed at end of combat}) = \\ &=p^2 \cdot \frac{1}{1 - (1-p)^2} + p \cdot (1-p) \cdot \frac{1}{1 - (1-p)^2} + p \cdot (1-p) \cdot \frac{1}{1 - (1-p)^2} \\ &=p \cdot (p + 2\cdot (1-p)) \frac{1}{1 - (1-p)^2} \\ &=p \cdot (p + 2\cdot (1-p)) \frac{1}{1 - (1-2p+p^2)} \\ &=p \cdot (p + 2\cdot (1-p)) \frac{1}{2p-p^2} \\ &=p \cdot (p + 2\cdot (1-p)) \frac{1}{p\cdot(2-p)} \\ &=(p + 2\cdot (1-p)) \frac{1}{2-p} \\ &=(p + 2-2p) \frac{1}{2-p} \\ &=(2-p) \frac{1}{2-p} \\ &=1 \end{align}

So we're correct. (Initially I had $(1-p)^2$ in the numerator because I had $r$ instead of $r-1$ in the early equations above. As indicated by my errata, these checks are important.)


We used an infinite series above, but we could instead argue that we can simply ignore the case where neither are destroyed in a round since we only care about the end state. Getting such a result simply delays the final outcome. Thus, we take the probabilities of the remaining three cases and divide by their sum to get the probability of ending in the corresponding states.

\begin{align} P(\text{both destroyed at end of combat}) &= \frac{P(\text{both destroyed in first round})}{P(\text{something destroyed in first round})} \\ P(\text{attacker destroyed at end of combat}) &= \frac{P(\text{attacker destroyed in first round})}{P(\text{something destroyed in first round})} \\ P(\text{defender destroyed at end of combat}) &= \frac{P(\text{defender destroyed in first round})}{P(\text{something destroyed in first round})} \end{align}

Where $P(\text{something destroyed in first round})$ is the sum of the three first-round probabilities above.

\begin{align} &\begin{split} P(\text{something destroyed in first round}) &= P(\text{both destroyed in first round}) \\ &+ P(\text{attacker destroyed in first round}) \\ &+ P(\text{defender destroyed in first round}) \end{split} \\ &= p^2 + p\cdot (1-p) + p\cdot (1-p) \\ &= p^2 +2\cdot p - 2\cdot p^2 \\ &= 2 \cdot p - p^2 \\ &= p \cdot (2-p) \end{align}

Plugging in, we find that,

\begin{align} P(\text{both destroyed at end of combat}) &= \frac{p^2}{p \cdot (2-p)} \\ P(\text{attacker destroyed at end of combat}) &= \frac{p\cdot (1-p)}{p \cdot (2-p)} \\ P(\text{defender destroyed at end of combat}) &= \frac{p\cdot (1-p)}{p \cdot (2-p)} \end{align}

These are the same probabilities that we calculated earlier. Note that the two different versions of the equations are written a little differently, but $1-(1-p)^2 = p\cdot (2-p)$.
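As one more sanity check, the two closed forms agree numerically; here's a minimal Python sketch (the value of $p$ is just an example):

    p = 1 / 6  # arbitrary example hit probability

    series_form = p**2 / (1 - (1 - p) ** 2)  # from the geometric series
    renorm_form = p**2 / (p * (2 - p))       # from ignoring no-effect rounds
    assert abs(series_form - renorm_form) < 1e-12

    # The three end states still sum to 1.
    one_side = p * (1 - p) / (p * (2 - p))
    assert abs(series_form + 2 * one_side - 1) < 1e-12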


That's enough for now. I'll later expand this same idea to more complicated combat mechanics.

Monday, October 12, 2020

Errata: Are you a Cylon in Battlestar Galactica?

There's an error in my previous analysis of the probability of me being a Cylon if you've gotten to look at one of my loyalty cards randomly and it was a Human card. We seemingly innocently transformed the previous scenario into the condition that I have one or more human cards, which is not quite the same thing. While it is true that I have one or more human cards, you have additional information: that a randomly selected card was human. Actually, even if you made the statement about having one or more Cylon cards, we'd still be in the same situation, as all Cylon detection abilities either allow you to look at all Loyalty cards or one randomly (see this handy chart: https://boardgamegeek.com/filepage/41329/crisis-card-chart). If there were an ability that allowed the target to select the card to be observed, the analysis would hold.

To see what's happening here, let's look at a three player game to make the number of cases simpler. You, Chelsea, and I are playing and you get to look at one of our loyalty cards. All of us have two cards, and you know you're human, so there's one Cylon card among the four loyalty cards that Chelsea and I have. I could either have the Cylon card or not; without loss of generality, we can say that if I do, it's my "first" card. You randomly get to see one of my cards. Table 1 shows all the cases, which occur with equal probability.


\begin{align*} \begin{array}{c|c|c|c|c} \text{Scenario \#} & \text{Loyalty card 1} & \text{Loyalty card 2} & \text{Card \# observed} & \text{Card observed} \\ \hline 1 & \text{Cylon} & \text{Human} & 1 & \text{Cylon} \\ 2 & \text{Cylon} & \text{Human} & 2 & \text{Human} \\ 3 & \text{Human} & \text{Human} & 1 & \text{Human} \\ 4 & \text{Human} & \text{Human} & 2 & \text{Human} \\ \end{array} \end{align*}

Table 1: Cylon Detection in Battlestar Galactica


As we can see here, the probability of me being a Cylon given that you randomly looked at one of my cards, which was human, is $1/3$. There are three cases where you see a human card, one of which involves me having a Cylon card.
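The case count is small enough to verify by enumeration; here's a minimal Python sketch (the structure and names are mine):

    from fractions import Fraction
    from itertools import product

    seen_human = cylon_and_seen_human = Fraction(0)
    # Cards 0 and 1 are mine, 2 and 3 are Chelsea's; each is equally likely the Cylon.
    for cylon_card, observed in product(range(4), range(2)):
        weight = Fraction(1, 4) * Fraction(1, 2)  # Cylon placement x which of my cards you see
        if observed != cylon_card:                # you saw a human card
            seen_human += weight
            if cylon_card in (0, 1):              # and I hold the Cylon card
                cylon_and_seen_human += weight

    print(cylon_and_seen_human / seen_human)      # 1/3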


Let's go back to the five player case. The differentiating factor is that the probability of me getting one Cylon card is not $1/2$, but instead $24/56=3/7\approx0.429$ as we computed before, and the probability of me getting two human cards is $30/56=15/28\approx0.536$. We don't care about the two Cylon card case, as we're only looking at the scenarios where you see a human card. The probability of that happening is as follows.

\begin{align} P(\text{see human}) &= \underbrace{0.5}_\text{Chance to see each of my cards.} \cdot P(\text{1 Cylon}) + P(\text{human})\\ &=0.5 \cdot \frac{24}{56} + \frac{30}{56} \\ &=\frac{12+30}{56} \\ &=\frac{3}{4}\\ &=0.75 \end{align}

The probability that I am Cylon, given that you see a human card from me uses this probability in the denominator.

\begin{align} P(\text{Cylon} | \text{see human}) &= \frac{P(\text{1 Cylon} \cap \text{see human})}{P(\text{see human})} \\ &= \frac{0.5 \cdot P(\text{1 Cylon})}{P(\text{see human})} \\ &= \frac{0.5 \cdot \frac{24}{56}}{\frac{3}{4}}\\ &=\frac{12}{56} \cdot \frac{4}{3}\\ &=\frac{4}{14}\\ &=\frac{2}{7} \end{align}

This is the same probability as if we saw that the first loyalty card was human before the sleeper agent phase.
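The five player numbers can be double-checked the same way; a minimal Python sketch using the probabilities quoted above:

    from fractions import Fraction

    p_one_cylon = Fraction(24, 56)  # from the earlier analysis
    p_all_human = Fraction(30, 56)

    see_human = Fraction(1, 2) * p_one_cylon + p_all_human
    assert see_human == Fraction(3, 4)
    assert (Fraction(1, 2) * p_one_cylon) / see_human == Fraction(2, 7)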

Monday, October 5, 2020

Isn't it better to use a dice deck in Catan?

First of all, better is a value judgement. Let's examine the properties and you can evaluate whether you like them better or not.

The idea of a dice deck in Catan is that you have a deck of 36 cards, where instead of rolling 2d6 you flip over a card and it gives you the number for resource generation. (See here, although I'll ignore the events and the New Year card.) The values on the cards match the distribution of 2d6, such that the first draw has the same probability as dice. However, if you draw the deck to exhaustion, then you're guaranteed to get exactly five results of "6".


Let's start with a question about using dice: if the probability of rolling a six is $5/36$, then do we expect to roll 5 sixes out of 36 rolls? First, let me say that in probability, expect is a word with particular meaning. It refers to the average, or mean value, of something. We write it as an operator using a special capital E. It being an operator means that we write it to the left and it operates on whatever is to the right of it. So when we write $\mathbb{E} X$, we're taking the expectation of the random variable $X$. I'm writing with special typesetting tools (well, just $\LaTeX$), but if you're writing with pencil and paper, on a blackboard, or on an internet forum, you may not have access to such special symbols. Then just a capital E will do.


So our question, reformulated, is whether the expected value of the number of occurrences of the number six across 36 rolls is 5 (that is, $36 \cdot P(\text{rolling a six on 2d6})$), or some other value. To find out, let's calculate the expected value of the number of occurrences of the number six across 36 rolls of 2d6. Wow, that's long winded. Let's assign it a variable name, $Y$.

\begin{align} Y = \text{number of results equal to six across 36 rolls of 2d6}. \end{align}

How do we count this? Let's assign names to these 36 rolls. We'll call the results of the 36 2d6 rolls $X_1$—$X_{36}$.

As we've been saying, we know from our previous analysis of 2d6 that

\begin{align} P(X_i = 6) = \frac{5}{36} \quad i \in \{1,2,\cdots,36\}. \end{align}

Recall that the probability of something not happening is equal to 1 minus the probability of it happening, or

\begin{align} P(\text{not some thing}) = 1 - P(\text{some thing}). \end{align}

So

\begin{align} P(X_i \neq 6) = 1 - P(X_i = 6) = 1 - \frac{5}{36} = \frac{31}{36}. \end{align}

Let's define the random variables $Y_1$—$Y_{36}$ to be 1 if $X_i=6$ and 0 if $X_i \neq 6$. We write this as,

\begin{align} Y_i = \begin{cases} 1 & X_i = 6 \\ 0 & X_i \neq 6 \end{cases} \quad i \in \{1,2,\cdots,36\}. \end{align}

From this definition, we can see that the sum of these $Y_i$ is the count $Y$ we are looking for.

\begin{align} Y = \sum^{36}_{i=1} Y_i \end{align}

What, then, is the expected number of sixes, $\mathbb{E}Y$? Expectation is a linear operator. That means a few things that we'll go into elsewhere, but what's important here is that we can distribute the expectation across the different terms $Y_i$ in the summation (notably, this doesn't even require the $Y_i$ to be independent). Basically, we can move it from the left to the right of the summation, like this

\begin{align} \mathbb{E}Y = \mathbb{E} \sum^{36}_{i=1} Y_i = \sum^{36}_{i=1} \mathbb{E}Y_i. \end{align}

As mentioned above, the expected value of a random variable is the average so,

\begin{align} \mathbb{E}Y_i &= \sum^1_{y=0} y \cdot P(Y_i = y). \end{align}

Note that the summation above is from 0 to 1 because those are the only two values possible for $Y_i$. Using the probabilities of $X_i = 6$ and $X_i \neq 6$ above, we can find

\begin{align} \mathbb{E}Y_i &= 0 \cdot P(Y_i = 0) + 1 \cdot P(Y_i = 1)\\ \mathbb{E}Y_i &= 0 \cdot P(X_i \neq 6) + 1 \cdot P(X_i = 6)\\ \mathbb{E}Y_i &= 0 \cdot \frac{31}{36} + 1 \cdot \frac{5}{36}\\ \mathbb{E}Y_i &= \frac{5}{36}. \end{align}

Plugging this back into our equation for $\mathbb{E}Y$ above yields,

\begin{align} \mathbb{E}Y &= \sum^{36}_{i=1}\frac{5}{36} \\ \mathbb{E}Y &= 36 \cdot\frac{5}{36} \\ \mathbb{E}Y &= 5 . \end{align}

So we do expect to get 5 sixes when rolling 2d6 36 times. Note that above, we changed from summing $5/36$ 36 times to multiplying $5/36$ by 36. That's because those are equivalent. That's what multiplication is! We can do this whenever we are summing a constant value.


The same thing also holds for all the other results here, not just six.


We can also ask, what's the distribution of $Y$? Or, what's the probability of getting 5 sixes, or 4, or 3, or however many? We could try to follow the approaches we've used to find the distribution of the sums of dice. But we have 36 rolls here, that doesn't sound like a fun calculation. On the other hand, each $Y_i$ has a simple distribution, so maybe it's not so bad.


Fortunately, this is a problem with an existing solution that has been well studied: the binomial distribution. We've used it before, but this time we'll derive what the distribution of $Y$ is. It's probably not that bad. Let's change our notation a little bit, though. The binomial distribution is a parameterized distribution, which means it takes parameters. Basically, there's a bunch of different versions of the distribution. A normal (I mean, regular) single die has a uniform distribution, but the particulars of the uniform distribution depend on the number of faces of the die as a parameter. Similarly, a binomial distribution, written as $Y \sim B(n, p)$, takes two parameters: the number of trials, $n$, and the probability of the success of each one, $p$. The number of trials is the number of times that we roll the die. Here we'll use 36 trials to match the 36 cards in the dice deck. The probability, $p$, depends on which face we're focused on. For a result of six, the probability of getting that value in each trial, which we call a success, is $5/36$.


So then in each trial we either get a success, with probability $p$, or failure with probability $1-p$. We're interested in the count of the successes. The extremes are relatively easy to calculate. The probability that we get $n$ successes in $n$ trials is the probability that we succeed on every roll.

\begin{align} f_Y(n) = p^n \end{align}

Similarly, the probability that we get no successes is the probability that we fail on every roll.

\begin{align} f_Y(0) = (1-p)^n \end{align}

Now, let's consider some result in between, where we get $k$ successes. This could happen by the first $k$ rolls succeeding and the last $n-k$ rolls failing.

\begin{align} p^k \cdot (1-p)^{n-k} \end{align}

However, we could also fail the first $n-k$ rolls, and succeed in the last $k$.

\begin{align} (1-p)^{n-k} \cdot p^k \end{align}

Note that these occur with the same probability. We can construct several different distinct orders of success and failure that result in $k$ successes. All of them share this same $p^k \cdot (1-p)^{n-k}$ term, but we need to multiply this by the number of ways that we can construct a sequence with $k$ successes in $n$ trials. We need the number of ways to select $k$ elements out of a set of size $n$. We know that this is $\binom{n}{k} = \frac{n!}{k! (n-k)!}$. Thus, we can write the pmf of $Y$ as follows.

\begin{align} f_Y(k) &= \binom{n}{k} \cdot p^k \cdot (1-p)^{n-k} \\ f_Y(k) &= \binom{36}{k} \cdot \left(\frac{5}{36}\right)^k \cdot \left(1-\frac{5}{36}\right )^{36-k} \end{align}
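This pmf is simple to evaluate numerically; a minimal Python sketch:

    from math import comb

    n = 36
    p = 5 / 36  # probability of rolling a six on 2d6
    pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    # pmf[k] is the probability of exactly k sixes across the 36 rolls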

By replacing $p$ with the probabilities for each valid result on 2d6, we can get the distributions for the number of times that each of those results appears across a sequence of 36 rolls. These distributions are plotted in Fig. 1. Note that these random variables are not independent. We analyzed each separately, but it's clear that an excess of one value decreases the probability of getting a large number of another value.


Figure 1: Probabilities of rolling 2d6 

We can contrast that with the certainty of a dice deck, expressed in terms of the probability mass functions shown in Fig. 2. Here we know the number of times that each value appears, and so each value has a probability of 1 at the number of times that it occurs in the deck. While these values correspond to both the average and peak values of the distribution of 2d6, the deck does not capture the sizable variation. This is particularly evident for the results that come up more rarely. It's almost as likely to get no twos across 36 rolls as to get 1 two. At the other extreme, if we consider the number of sevens, we see that the probability of getting exactly the expected number of sevens is significantly lower than for twos. Getting 4 or 7 of them, instead of 6, may seem close to the average and within desired variation, but neither is possible with the dice deck.


Figure 2: Probabilities using a dice deck


There are a couple of extreme cases where what we get compared to what we want may break down with the dice deck. First, if near the end of the game enough of the deck remains that it's pretty clear that it will not get reshuffled, but certain values have already been exhausted, the benefits of building certain settlements and cities are quite different than with dice. With dice you can build on a twelve and hope that it pays out, but with the dice deck you may know that it never will. The other extreme case is when the deck does get reshuffled very near the end of the game. Through most of the game you've been playing with the knowledge that the deck ensures that the rolls across several turns produce a set distribution. However, if the game ends shortly after a reshuffle, you now have very similar probability properties to those of dice, where there's the possibility that something more unexpected may come up.


By modifying the simple dice deck that we considered, you can adjust the properties between these two extremes. For example, you could reshuffle before reaching the end of the deck. By adjusting the time of reshuffling, you can slide on a spectrum between our pure example and using dice. Of course, if you shuffle after every draw you get the same probabilities as dice. In that case, dice are better, because they're so much faster to use.


So far, we've covered differences between dice and a deck of cards which basically boil down to this: a deck has memory. What we draw on one turn influences the next, because the state of the deck is changed. If you draw a seven, it's less likely you'll draw another until the deck is reshuffled. On the other hand, each roll of the dice is independent. There's no influence of one roll on the next.


There's another fundamental difference between the two: the deck is shuffled at the beginning. This means that the result of each draw is predetermined (unless you believe the entire deck can be in a state of superposition). If you make a poor choice, you can wallow in regret, as you knew that if you had made a different choice you could have had a better outcome. Not so with dice. Following the butterfly effect, any decisions you make before a roll influence that roll. This is not in a controllable way, but just that minute differences in the timing or action of a die roll may yield a different result. Thus, no second guessing is necessary, as if you had decided to do something different, the dice may still have been against you.

Monday, September 21, 2020

The Hunt in War of the Ring

First, let's review the rules for the Hunt roll in War of the Ring. When the Fellowship moves (before the Mordor track), the Shadow player rolls a number of dice equal to the number of dice showing an Eye in the Hunt Box, up to a maximum of five. Each roll of a $6+$ is a success. Every previous move of the Fellowship that turn (each of which placed a die in the Hunt Box) gives a $+1$ to each die roll (an unmodified 1 still misses). There are also three conditions that can grant up to three re-rolls. If there's at least one success, then a tile is drawn to determine damage.

 Let's first find the probability of a successful hunt, given the following three parameters. 
\begin{align} e & \in \{0, 1, 2, 3, 4, 5\} \qquad &&\text{Number of eyes in the Hunt Box} \\ r &\in \{0, 1, 2, 3\} \qquad &&\text{Number of rerolls} \\ n & \in \{1, 2, 3, 4, 5\} \qquad &&\text{Number of Fellowship moves this turn} \end{align} 
The probability of succeeding on each die during the first move is $1/6$, as the Shadow player needs to roll a 6. We could look at the pmf of the number of successes, but instead we can compute the probability of at least one success as the complement of the probability of failure. To fail, all dice must have failed to roll a success. Each die here fails with probability $1-1/6=5/6$. For $e$ dice with no re-rolls, this is,
\begin{align} P(\text{failed hunt}) = \left ( \frac{5}{6} \right) ^e . \end{align} 
In this framework, incorporating re-rolls is relatively simple as well. To fail with re-rolls, first the main roll is failed, then the re-rolls must fail. This means a total of $e+r$ failures are rolled, as long as the number of dice is at least the number of re-rolls. We could restrict $r \leq e$, or we could say that when $r> e$ we look for $2e$ failed dice instead. Ignoring this special case (equivalently, treating $r$ as the effective number of re-rolls), the probability of a failed first hunt updates as follows.
\begin{align} P(\text{failed hunt}) = \left ( \frac{5}{6} \right) ^{e+r} \end{align} 
To incorporate the effect of which move this is, we just look at the probability of failing each roll. Instead of $5/6$, we can write this as,
\begin{align} \frac{6-n}{6}. \end{align} 
This brings us to a comprehensive equation, using the fact that $P(\text{successful hunt}) = 1 - P(\text{failed hunt})$.
\begin{align} P(\text{successful hunt}) = 1 - \left ( \frac{6-n}{6}\right) ^{e+r} \end{align} 
The result of this equation is plotted in Figures 1–3. 

Figure 1: Probabilities of a successful first Hunt


Figure 2: Probabilities of a successful Hunt by number of moves


Figure 3: Probabilities of a successful Hunt by number of dice

Now that we have the probabilities, there are a number of questions about possible choices in the game that we can answer. First, is it preferable to have another eye in the Hunt Box, or to move a Nazgul in order to get a re-roll? In terms of a successful hunt, there's no difference between the two. However, an aspect of the rules we haven't covered here is that if an Eye Tile is drawn, then the Hunt Damage is equal to the number of successes rolled. Thus, we prefer eyes to re-rolls. However, we have to consider the cost and how each is obtained. The Shadow player can assign eyes to the Hunt Box before the action die roll (with some limits), but may roll more. Shadow players should consider the possibility of getting more eyes than desired and leaving few actions. There are also cases when the Shadow player rolls fewer eyes than expected. In such cases, it makes sense to use action dice to obtain one or more re-rolls. This is especially attractive if Nazgul were going to be moved anyway, and one can incidentally be placed with the Fellowship. At worst, this costs one action die (the same), as well as the position of one Nazgul. Army movement costs vary more. Some cases will have an army near the Fellowship (possibly with a Nazgul as well). Each army die allows two armies to move, so in some sense the cost is half of that of assigning an eye. However, this likely leaves the army in an otherwise undesirable position (even if it is just one unit), so half of an additional army movement would be needed to put it back into position.

Another question, which the Free Peoples player may be faced with, is whether it is better to move again this turn or to wait until next turn. There are a couple of considerations here. First, the distribution of the number of eyes rolled on the next turn, which also depends on a choice that the Shadow player has yet to make. We'll set this aside here, and focus on what the Free Peoples player expects. Let's say there are usually two eyes in the Hunt Box with no re-rolls. However, this turn there are four eyes. Using Figure 3, we can compare the probability of a successful hunt on the first move this turn (against four dice) with that of a second move on the next turn against two dice. We can see that a first move against four dice actually has a lower probability of success (though not by much). Thus, while moving against four dice sounds much worse, because of the diminishing returns of additional dice, it's not as bad as we may think.
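As a quick numerical check of that comparison, here's a minimal Python sketch of the success formula above (the function name is my own):

    def p_successful_hunt(e, r, n):
        """P(at least one success) with e eyes and r re-rolls on move n of the turn."""
        return 1 - ((6 - n) / 6) ** (e + r)

    print(p_successful_hunt(4, 0, 1))  # first move vs. four eyes: ~0.518
    print(p_successful_hunt(2, 0, 2))  # second move vs. two eyes: ~0.556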

Monday, September 14, 2020

Game length scaling by player count

That is, how to tell if the game box is lying by looking at the rules (it almost always is, but still...).

My basic framework here is that for many games, the length is simply the product of the number of turns and the amount of time taken per turn. This obviously doesn't work for games without turns, such as real-time games like Space Cadets Dice Duel.  There also may be overhead that doesn't scale the same way,  e.g. once per day activities in Robinson Crusoe, or once per round activities in Agricola.

Many games are the same length to first order, independent of player count: Battlestar Galactica, Carcassonne, Pandemic, Kingdomino. These games have a roughly fixed number of total turns. Some of these may have strong second-order terms affecting game length, for example Pandemic, which tends to last more turns with more players as it's harder to get cards into a single hand. Other games could take more time per turn with more players due to group discussion, which may be play-group dependent. These games can be identified by the fact that there's some shared resource (e.g. cards or tiles) that advances the players towards the end (Battlestar Galactica, Pandemic), or ends the game when it runs out (Carcassonne, Kingdomino*).


Let me expand my thoughts on Kingdomino. First, for 2-player games I'm focused on the "Mighty Duel" 7x7 variant (which I've almost always played). This alone makes the 2-player and 4-player games take the same amount of time, at least to first order. If instead we looked at 2-player 5x5 games, we'd expect them to be roughly half the length of a 4-player game. Second, we should probably break up the two aspects of the game: selecting tiles and placing them. In Mighty Duel and 3-4 player games, all tiles/dominoes are available to choose at some point. Also, each round three tiles are chosen, and the fourth is forced on whoever is left. Thus, this part of the game is the same whether there are 2, 3, or 4 players. The question is whether placing takes a different amount of time. Perhaps it's slightly different in a 3-player game, but I often think of placement as incidental to your choice of tile in the next round (as it happens at the same time). I can see how 4-player games might be expected to take more time, but I don't think the increase would be linear.


Many games are proportional to the number of players to first order: Ticket to Ride, Splendor, Agricola. These games have a roughly fixed number of turns per player. Some of these games (e.g. Agricola) are honest and state that the play time is a certain number of minutes per player. Others, like Ticket to Ride, do not.  These games can be identified by an explicit number of rounds (Agricola), the end of the game triggered by a single player's resources depleting (Ticket to Ride), or games which are a race to a victory condition (Splendor).


Sometimes, though, a game doesn't fit neatly into one of the above categories, as some elements are adjusted by player count. First, let's consider Bohnanza. There's a shared deck of cards, and the game ends as soon as the players exhaust it three times (see more below). At first, we'd think that this would be a fixed-length game; however, things get more complicated when we consider the variant rules below, which allow the full 7 players.


Bohnanza, 45 minutes

104 normal cards

expansion: +50 cards: 154

3 players: -4 cards: 150 cards

4-5 players: -4, -24 cards: 126 cards

6-7 players: -4, -6 cards: 144 cards

3 players: only two times through the deck. 3 bean fields

4-7 players: 3 times, 2 bean fields

new rules: flip 2 cards, draw 1 card per player per turn (original: draw three cards)

https://www.riograndegames.com/wp-content/uploads/2013/02/Bohnanza-Rules.pdf


There are three things coming into play here.  First, the number of cards in the deck varies with player count.  Second, with 3 players you only go through the deck twice and each player has three bean fields instead of two.  Third, the number of cards drawn per turn is a function of the number of players.


Let's calculate the number of turns it takes to get through the deck the first time. Under the new rules above, each turn consumes the two flipped cards plus one drawn card per player, so a turn uses $2+p$ cards with $p$ players.

3: 150 / (2+3) = 30

4: 126 / (2+4) = 21

5: 126 / (2+5) = 18

6: 144 / (2+6) = 18

7: 144 / (2+7) = 16


If subsequent times through the deck involve no card removal (almost certainly not true), then we can calculate the following total number of turns.

3 players: 60 turns

4 players: 63 turns

5 players: 54 turns

6 players: 54 turns

7 players: 48 turns
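A short sketch tying the above together (assuming, per the notes, that each turn consumes $2+p$ cards, that 3-player games make two passes through the deck while the rest make three, and, unrealistically, that later passes are full-size):

```python
deck_size = {3: 150, 4: 126, 5: 126, 6: 144, 7: 144}
passes = {3: 2, 4: 3, 5: 3, 6: 3, 7: 3}

for p in sorted(deck_size):
    turns_per_pass = deck_size[p] // (2 + p)  # 2 flipped + p drawn per turn
    total_turns = turns_per_pass * passes[p]
    print(f"{p} players: {turns_per_pass} turns per pass, {total_turns} total")
```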


Practically, each subsequent time through the deck takes much less time than the first. Some of this may just be players getting into the flow of the game, but there are two structural reasons as well. First, cards are essentially removed from play as they are sold and converted to coins. Second, a number of bean cards are still in players' fields as they go through the deck the second and third time. Let's look at the total number of bean fields at each player count.


3 players: 9 fields

4 players: 8 fields

5 players: 10 fields

6 players: 12 fields

7 players: 14 fields


This effect would further decrease the number of turns in games with more players relative to games with fewer. However, with more players there are more possible negotiation considerations. It's more likely that there are multiple other players who have something you want, and similarly more likely that there are multiple players who want something you have. Three-player games can go very quickly, as players get committed to what they are going for, and you have more need to manage the large number of plantings you'll do. But in a 7-player game, your turn comes around much more rarely, so trading is more important. While I do not have data to quantify it, I suspect that this at least equalizes the length of the game and probably even pushes games with more players to be longer. If there's any data on this, it'd be interesting to see.


As another example, let's look at Century: Spice Road.  Here what's interesting is that the game ends when one player gets a certain number of point cards, but that number depends on the number of players in the game.


Century: Spice Road, 30-45 minutes

2-3 players: 6 cards

4-5 players: 5 cards


Thus, if each card takes "x" turns to obtain, then the relative game length is as follows.

2 players: 12x

3 players: 18x

4 players: 20x

5 players: 25x


As we can see, we'd expect a 5-player game to take more than twice as long as a 2-player game. This ignores the fact that obtaining the 6th point card likely takes fewer turns than the first, as you've already built up an engine. This clearly indicates that the 30-45 minute range is incorrect, as the maximum should be closer to twice the minimum.
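The relative-length arithmetic above is simple enough to jot down directly (a sketch, in units of $x$ turns per point card):

```python
cards_to_end = {2: 6, 3: 6, 4: 5, 5: 5}  # point cards that trigger game end
for players, cards in cards_to_end.items():
    print(f"{players} players: {players * cards}x")
```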


Some games get shorter with more players: Sushi Go Party!, at least to first order (given negligible overhead for scoring more players), actually deals fewer cards per player at higher player counts.


Sushi Go Party!

2-3: 10 cards each

4-5: 9 cards each

6-7: 8 cards each

8: 7 cards each


There are a host of other considerations in game length that I haven't touched on here, many of which have clear implications as to whether they increase or decrease game length, but how to quantify that effect is less clear.

Monday, August 10, 2020

Why run for President in Battlestar Galactica?

Because it's always better when you're in control.

Let's say that it's in the first half of the game, before the sleeper agent phase. It's a 5-player game and you think that everyone is human. Someone else is President. Why would you want to spend the action and cards necessary to make yourself President instead? Because one of you might become a Cylon.

Let's say all players are going to get one more loyalty card. So there are five cards, two of which are Cylon cards. The probability that one (or more) of you turns Cylon is $1 - P(\text{neither Cylon})$. $P(\text{neither Cylon}) = \frac{3}{5}\cdot\frac{2}{4}$. That is, one of you gets a human card ($3/5$), and then the other also gets one of the remaining human cards ($2/4$). Thus,
\begin{align}
P(\text{one or more of you become Cylon}) &= 1 - \frac{3}{5}\cdot\frac{2}{4} \\
&= 1 - \frac{3}{10} \\
&= 0.7.
\end{align}

Don't forget about the case where both of you end up as Cylons. Then resources are "wasted", and maybe you look suspicious. But maybe that's okay, because it's a good cover for depleted resources and allows you to stay hidden. But let's leave this out and just assume that it's not valuable to switch in that case.

\begin{align}
\begin{split}
P(\text{one of you become Cylon}) &= P(\text{one or more of you become Cylon}) \\
& \phantom{= } - P(\text{both of you become Cylon})
\end{split}\\
\end{align}
\begin{align}
P(\text{one of you become Cylon}) &= 0.7 - \frac{2}{5}\cdot\frac{1}{4} \\
&= 0.7 - 0.1 \\
&= 0.6
\end{align}
Thus, it is more likely than not that you and the current President are going to end up on opposite teams, so you'd be better off having more power. The remaining question is whether or not you think the benefit is worth the cost.
Let's assign this a shorter variable, $p_0$, for use later on.
\begin{align}
p_0 &= P(\text{one of you become Cylon}) \\
&= 0.6.
\end{align}
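As a quick sanity check, we can enumerate the ten equally likely ways the two Cylon cards can land among the five players (a sketch; position 0 is you, position 1 is the President):

```python
from itertools import permutations

# The 10 distinct deals of 2 Cylon (C) and 3 human (H) cards to 5 players.
deals = set(permutations("CCHHH"))

one_or_more = sum("C" in (d[0], d[1]) for d in deals) / len(deals)
exactly_one = sum((d[0] == "C") != (d[1] == "C") for d in deals) / len(deals)
print(one_or_more, exactly_one)  # 0.7 0.6
```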

I'll be honest: I used to think it was inefficient to squabble over the Presidency unless you knew the President was a Cylon. But then a forum post about the importance of control in the game got me thinking about this case. It's interesting to me how the game is structured in a way that encourages loyal humans to do selfish things even without explicit personal goals (a la Dead of Winter). This is an interesting emergent property of the rules.

We've considered the case when there aren't any Cylons out; now let's look at having one Cylon out before the sleeper agent phase. There are three sub-cases: you're the Cylon, the President is the Cylon, or someone else is the Cylon. We've already tacitly assumed above that if you or the President is or becomes a Cylon, it's worth it to take the Presidency. So those cases are taken care of. If someone else is a Cylon, then we're similar to the above analysis, but with a smaller probability that one of you becomes a Cylon in the sleeper agent phase. In this case only one Cylon card remains among the five cards to be dealt.
\begin{align}
P(\text{one of you become a Cylon}) &= 1 - P(\text{both of you stay human}) \\
&= 1 - \frac{4}{5}\cdot\frac{3}{4}\\
&= 1 - \frac{3}{5} \\
&= \frac{2}{5} = 0.4
\end{align}
Here the probability of one of you becoming a Cylon is much lower. One interesting case to consider here is if the existing Cylon gets the second Cylon loyalty card. In this case, the Cylon can then give this card to one of the humans. This is another opportunity for you or the current President to become a Cylon, and thus end up on opposite teams. Assuming this happens, and not knowing anything about the Cylon's strategy in selecting a target, there is a probability of $2/4 = 0.5$ that the Cylon selects you or the President out of the four humans. However, the President is arguably a higher-value target because of the powers of the office. For the rest of the analysis, we'll assume that a Cylon with two loyalty cards does give the second card away, with equal probability to all players. The resulting increase in the probability that one of you becomes a Cylon, $\Delta p = \Delta P(\text{one of you becomes a Cylon})$, is as follows.
\begin{align}
\Delta p &= P(\text{Cylon with two cards}) \cdot P(\text{Cylon gives card to one of you}) \\
&= \frac{1}{5} \cdot \frac{2}{4} \\
&= \frac{1}{10} = 0.1
\end{align}
Thus, our corrected probability for one of you becoming a Cylon is,
\begin{align}
P(\text{one of you become a Cylon}) &= 0.4 + 0.1 \\
&= 0.5.
\end{align}
As before, let's assign this a shorter variable, $p_1$, for use later on.
\begin{align}
p_1 &= P(\text{one of you become a Cylon}) \\
&= 0.5.
\end{align}

Now, let's consider the case when two Cylons are out, which I'd argue doesn't depend on when in the game it occurs. (I choose my words carefully here. I would argue this if pressed, but I'm not going to because I'm lazy.) There are four sub-cases: you are a Cylon, the President is a Cylon, both of you are Cylons, neither of you are Cylons. In the first two cases, it makes sense to get the Presidency, as you and the President are on opposite teams. If neither of you are Cylons, it doesn't make sense (unless the President isn't using the office well). If both of you are Cylons, I'd say in general it doesn't make sense, unless the President is already suspected and you can keep the office on your team by taking it.

What is the confidence (i.e. assessed probability) you must have that a third party is a Cylon to make it not worthwhile to go for the presidency? As a break-even point we could assign a probability of 0.5. A better approach would be to have values assigned for the benefit of taking the presidency from the opposing team as well as the cost of moving the presidency. Determining those values is beyond the scope of this analysis, so we'll stick with a probability threshold of 0.5, meaning we're looking for the point at which it's more likely than not that you have incentive to take the presidency. This critical point occurs according to the following equation.
\begin{align}
0.5 &= p_0 \cdot P(\text{no Cylons}) + p_1 \cdot P(\text{one Cylon}) + 0 \cdot P(\text{two Cylons})\\
&= 0.6 \cdot P(\text{no Cylons}) + 0.5 \cdot P(\text{one Cylon})
\end{align}
I included the two-Cylon case to point out that the no-Cylon and one-Cylon cases are not the only possibilities; there are no $1-p$ type tricks here in general. Now, if we do assume that there is at most one other Cylon, then the condition becomes easier to conceptualize, and it simplifies as follows.
\begin{align}
0.5 &= 0.6 \cdot (1 - P(\text{one Cylon})) + 0.5 \cdot P(\text{one Cylon}) \\
&= 0.6 + (-0.6 + 0.5) \cdot P(\text{one Cylon}) \\
&= 0.6 - 0.1 \cdot P(\text{one Cylon}) \\
P(\text{one Cylon}) &= \frac{0.6-0.5}{0.1} \\
&= 1
\end{align}
Here, as long as we assume there is at most one other Cylon, the break-even point is only reached when we're certain that one Cylon is out: for any lesser belief, we're more likely than not to end up on opposite teams from the President, even though we are both presently human, and thus have reason to take the Presidency.

As a reference it may be useful to know the probability of each of those cases (and the previous cases we've considered). If you have a human card, and you're sure that the President is also human, you know there are two Cylon cards and six human cards left. Three of those cards have been dealt, while five remain for later.
\begin{align}
P(\text{all humans}) &= \frac{6}{8} \cdot \frac{5}{7} \cdot \frac{4}{6} \approx 0.357 \\
P(\text{one cylon}) &= \frac{6}{8} \cdot \frac{5}{7} \cdot \frac{2}{6} \cdot\underbrace{3}_{{\text{3 ways to assign Cylon}}} \approx 0.536 \\
P(\text{two cylons}) &= \frac{6}{8} \cdot \frac{2}{7} \cdot \frac{1}{6} \cdot \underbrace{3}_{\binom{3}{2}\text{ ways to assign Cylons}} \approx 0.107
\end{align}
Check, does that add up to one? Yes. We can combine this to find the probability that you and the President will end up on opposite teams, given that you start off as human (without any assumptions about the other players).
\begin{align}
p &= p_0 \cdot P(\text{no Cylons}) + p_1 \cdot P(\text{one Cylon}) + 0 \cdot P(\text{two Cylons})\\
&\approx 0.6 \cdot 0.357 + 0.5 \cdot 0.536 + 0 \cdot 0.107 \\
p&\approx0.482
\end{align}
Thus, without any information pertaining to the loyalty of the other players, you're (just barely) more likely than not to stay on the same team as the President. However, perhaps counter-intuitively, if you feel you can trust everyone, you're more incentivized to go for the presidency yourself.

And what if you're not certain that the President is human? What if all you know is that you're human? Given that, there's a probability of $P(HH|H) = 7/9$ that the President is also human, and a probability of $P(CH|H) = 2/9$ that the President is a Cylon. Here $P(HH|H)$ refers to the probability that both the President and you are human given that you are human. Similarly, $P(CH|H)$ is the probability that the President is (now) a Cylon and you are (now) human given that you are human. We can express the probability that you and the President end up on opposite teams given that you are human, $P(O|H)$, as follows. Here $O$ refers to the event that you and the President end up on opposite teams. We're still focusing on opposite teams, and ignoring possible benefits to the Cylons if resources are expended moving the presidency.
\begin{align}
P(O | H) = P(O | HH) \cdot P(HH | H) + P(O | CH) \cdot P(CH | H)
\end{align}
We previously computed $P(O|HH)$, which is the $p$ immediately above.

Now let's find $P(O|CH)$, the probability that you and the President end up on opposite teams given that the President is a Cylon before the sleeper agent phase while you are human. There are similar cases to before. There are two ways that the two of you can remain on opposite teams. First, another player can receive the remaining Cylon card. Since there are five players and five loyalty cards, only one of which is a Cylon card, everyone has the same probability of receiving it. Thus, another player receives the Cylon card with probability $3/5$. Second, the President can receive the Cylon card (probability $1/5$) and give it to another player aside from you. Assuming the President gives it out randomly, the probability that this happens and you remain human is $\frac{1}{5}\cdot\frac{3}{4}$.
\begin{align}
P(O|CH) &= \frac{3}{5} + \frac{1}{5}\cdot\frac{3}{4} \\
&=\frac{12+3}{20}\\
&=0.75
\end{align}
Pulling everything together, we get the following.
\begin{align}
P(O | H) &\approx 0.482 \cdot \frac{7}{9} + 0.75 \cdot \frac{2}{9}\\
&\approx 0.542
\end{align}

This is much closer to 0.5, and in fact now slightly above it. So given that you're human before the sleeper agent phase, you're slightly more likely than not to end up on the opposite side from the President, given all of the assumptions that we've made. However, once you get some information about how the other people are playing, you can throw out most of the probabilities regarding the current state of the game, and only pay attention to those that affect the future of the game. Individual behavior can be much more revealing than statistics.
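As a closing sanity check, here's a sketch that reproduces the chain of numbers above under the same modeling assumptions (the helper name `p_cylons_out` is mine):

```python
from math import comb

p0 = 0.6                    # P(opposite teams | both human, no Cylons out)
p1 = 2/5 + (1/5) * (2/4)    # direct deal plus a pass from a two-card Cylon

def p_cylons_out(k: int) -> float:
    # P(k Cylon cards among the other three initial deals), given you and
    # the President are human: hypergeometric, 2 Cylons in 8 cards, 3 drawn.
    return comb(2, k) * comb(6, 3 - k) / comb(8, 3)

p = p0 * p_cylons_out(0) + p1 * p_cylons_out(1)   # P(O | HH)
p_o_given_h = p * 7/9 + 0.75 * 2/9                # P(O | H)
print(round(p1, 3), round(p, 3), round(p_o_given_h, 3))  # 0.5 0.482 0.542
```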

Monday, August 3, 2020

Is it possible an event is never drawn?

A deck of cards with values in a 2d3 distribution (so a single card with a value of 2, two value 3 cards, etc.) and a 10th reshuffle card. On a reshuffle card, draw a replacement card until you get a value (i.e. not the reshuffle) card. I was going to add another card that has a specific in-game event, and was trying to work out how likely it was that card would appear before the game ends (about 50 draws, not tied to the cards): I think it is possible, albeit unlikely, that it may not appear at all (although the longer the game goes on, the less likely that is, from my understanding of probabilities).

—paraphrased from here on BGG

This can be solved with a Markov chain, which is a way to model sequences of probabilistic events, where the probabilities involved can be expressed in terms of the state after the previous step in the sequence. In this case, the state represents what's still in the deck. We'll have a different state for each possible number of cards remaining in the deck: 11 (only possible before the first turn), 10, 9, ..., and 1. The last we'll use as a special state, not to denote that one card is left, but rather that an event has been drawn. For simplicity, I'll not keep track of how many events have occurred (although we could expand the model to include that).


So we need a vector representing the probabilities of the initial state, which is known to be 11 cards. This gives the state vector, $\mathbf{s}$.

\begin{align} \mathbf{s} = \begin{bmatrix} 1\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0 \end{bmatrix} \end{align}

Here each subsequent entry in the vector represents a different state, and the values are the probabilities of being in each state.


We also need the transition matrix. That is, given a state, what are the probabilities of moving to each other state? Consider when there are $c$ cards left. There are three or four relevant possibilities, depending on how you count.

  1. You could draw a number card, leaving $c-1$ cards, which occurs with probability $(c-2)/c$.
  2. You could draw a shuffle card (probability $1/c$) and then draw a number card (probability $9/10$, as we can ignore drawing the shuffle card again), leaving 10 cards, with probability $1/c \cdot 9/10$.
  3. You could draw an event card, with probability $1/c$, and end up in the special event state (by our reckoning).
  4. You could draw a shuffle card (probability $1/c$) and then draw the event card (probability $1/10$), and end up in the special event state with probability $1/c \cdot 1/10$.

We can combine 3 & 4 to find the total probability of getting an event, which is $1/c + 1/c\cdot1/10 = 1/c \cdot 11/10$.


For the first round the probabilities are a little bit special, as we have 11 cards. Thus, the shuffle card doesn't do anything but delay the game. The two possible outcomes are drawing a number card (probability $9/10$) and drawing an event card (probability $1/10$).


Putting this together, we have the transition matrix, $\mathbf{P}$. Each column represents the initial state, and each row represents the next state, in both cases starting with 11 remaining cards, descending to 2, and ending with the special event state.

\begin{align} \mathbf{P}& = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{9}{10} & \frac{9}{10}\cdot\frac{1}{10} & \frac{9}{10}\cdot\frac{1}{9} & \frac{9}{10}\cdot\frac{1}{8} & \frac{9}{10}\cdot\frac{1}{7} & \frac{9}{10}\cdot\frac{1}{6} & \frac{9}{10}\cdot\frac{1}{5} & \frac{9}{10}\cdot\frac{1}{4} & \frac{9}{10}\cdot\frac{1}{3} & \frac{9}{10}\cdot\frac{1}{2} & 0 \\ 0 & \frac{8}{10} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{7}{9} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{6}{8} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{5}{7} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{4}{6} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{3}{5} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{2}{4} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{3} & 0 & 0 \\ \frac{1}{10} & \frac{11}{10}\cdot\frac{1}{10} & \frac{11}{10}\cdot\frac{1}{9} & \frac{11}{10}\cdot\frac{1}{8} & \frac{11}{10}\cdot\frac{1}{7} & \frac{11}{10}\cdot\frac{1}{6} & \frac{11}{10}\cdot\frac{1}{5} & \frac{11}{10}\cdot\frac{1}{4} & \frac{11}{10}\cdot\frac{1}{3} & \frac{11}{10}\cdot\frac{1}{2} & 1 \\ \end{bmatrix} \end{align}

Note a check that we can do: each column must sum to 1. That is, from any given state, the probability of transitioning to some state is 1. This works because all possible states are incorporated in the matrix.


To find the probability of each state in round $r$, we just compute $\mathbf{P}^r \mathbf{s}$. I've done this for 1–100 rounds below. It's quite rare for a game of 50 rounds to have no events. Also note that after about 10 rounds we seem to have reached a stable condition, where the probabilities of being in each of the states aren't changing much (relative to the total probability of not having had an event yet).
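Here's a sketch of that computation (building $\mathbf{P}$ column by column as described above, with state 0 being 11 cards left, states 1–9 being 10 down to 2 cards left, and state 10 the absorbing event state):

```python
import numpy as np

P = np.zeros((11, 11))
P[1, 0] = 9 / 10                      # 11 cards: draw a number card
P[10, 0] = 1 / 10                     # 11 cards: draw the event card
for j in range(1, 10):
    c = 11 - j                        # cards left in this column's state
    if c > 2:
        P[j + 1, j] = (c - 2) / c     # draw a number card directly
    P[1, j] = (1 / c) * (9 / 10)      # reshuffle, then a number card
    P[10, j] = (1 / c) * (11 / 10)    # event, directly or after a reshuffle
P[10, 10] = 1                         # the event state is absorbing

assert np.allclose(P.sum(axis=0), 1)  # each column sums to 1

s = np.zeros(11)
s[0] = 1                              # the game starts with 11 cards
for r in (1, 10, 50):
    state = np.linalg.matrix_power(P, r) @ s
    print(r, 1 - state[10])           # P(no event yet) after r rounds
```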


Figure 1: Probabilities of having some and no events


Figure 2: Probabilities of having no events, log scale


We can expand this solution to not only look at the probability of whether or not an event has occurred, but instead to look at the probability of an event in any given round. The most straightforward application of the previous method would be to (nearly) multiply the number of states by the number of events possible (i.e. the number of rounds). This keeps track of however many events we may be interested in, just as we've kept track of the binary state of whether any events have occurred. However, this is very inefficient and quickly leads to very large matrices. Instead, we can take note of the fact that if we look at the current round, the only thing that affects whether we draw an event is whether it is still in the deck, not how many times we've drawn an event previously. Thus, we can merely (roughly) double the number of states, with about half corresponding to the cases where the event card is still in the deck, and the rest corresponding to the cases where the event card has already been drawn. The first state is again one with 11 cards left, which is equivalent to the case when there is 1 card left. If one card is left, it must be the reshuffle card, which, when drawn at the beginning of the next round, leaves 11 cards just as at the start of the game. The next 9 states are when 10–2 cards remain in the deck, including the event card. The final 9 states are when 10–2 cards are left in the deck, but the event card is not in the deck.



The probabilities are much as before, except that we have now broken up cases 3 & 4 from above, and we consider the transitions between the various numbers of cards left in the deck without the event card. The probability of decreasing by one card in the deck, with the event card absent both before and after, is the probability of not drawing the reshuffle card, $(c-1)/c$. Also note that when there are two cards left, there's a probability of $0.5$ that only the reshuffle card remains afterwards. This is equivalent to the initial state of having 11 cards, as previously mentioned, which is why the probability of moving from either of the two 2-card states to the 11-card state is $1/2$.


From any of these states, we can compute the probability of drawing the event card. This is equal to a sum of elements of the matrix we just constructed. We need to sum the elements which correspond to moving from a state with the event card in the deck to one without it. This includes the cases when the reshuffle card is drawn. To this we must add the cases when the event card is initially out of the deck, the reshuffle card is drawn, and finally the event card is drawn. That is, the entire 11th row counts. We must also consider the $1/2$ probability that we go from 2 cards (with the event card) to 11 cards left, as this really represents drawing the event card and leaving only the reshuffle card in the deck. Using this we can construct a matrix, $\mathbf{G}$, with all of these probabilities as its elements, such that when we multiply it by any probabilistic state vector, we get the probability of drawing an event card from that state.


Using these three we can find the probability of drawing an event in any round.

\begin{align} P(\text{event in round } r) = \mathbf{G} \mathbf{P}^r \mathbf{s} \end{align}

The result of this equation is plotted in Figure 3. We see a perhaps somewhat unexpectedly consistent result that appears to be exactly equal to 0.1. But is it? Does this continue forever? We can certainly compute the probability for any practical number of rounds easily, and any deviation from exactly 0.1 in our computations appears to be due to computer rounding error.


Figure 3: Probabilities of drawing an event in current round


So let's prove it. First, we'll quickly dispense with the simple case when there are 11 (or 1) cards left in the deck. Here we must end by drawing a non-reshuffle card, of which there are ten, only one being an event. Thus the probability of drawing an event given that there are 11 cards left is $1/10$.

\begin{align} P(\text{event} | C = 11) = 0.1 \end{align}

Note a slight change in notation here. After writing the above equation, it became obvious that the number of cards left is really a random variable, hence the switch to a capital $C$. The rest of the cases rely on some observations about the relationship between having the event card in the deck and not, when there are a given number of cards, $c$, left in the deck. That is, given $C$, what is the probability that the event card is still in the deck?

\begin{align} P(\text{event in deck} | C = c) = ? \end{align}

Let's consider how we get to having $c$ cards left at the beginning of a round. For this to happen, in the immediately preceding rounds, the $11-c$ cards no longer in the deck must have been drawn. What is the probability of doing that while leaving the event card in the deck? Keep in mind that this is conditioned on not drawing the reshuffle card in any of these rounds.

\begin{align} P(\text{draw value} | \text{don't shuffle}, C=c) = \frac{c-2}{c-1} \end{align}

We need to multiply one such term for each draw, from $11$ cards left down to $c+1$.

\begin{align} P(\text{event in deck} | C = c) &= \prod_{i=c+1}^{11} \frac{i-2}{i-1} \\ \require{cancel}&= \frac{\cancel{9}}{10} \cdot \frac{\cancel{8}}{\cancel{9}} \cdot \frac{\cancel{7}}{\cancel{8}}\cdots \frac{\cancel{c}}{\cancel{c+1}} \cdot \frac{c-1}{\cancel{c}} \\ &= \frac{c-1}{10} \end{align}

The probability of drawing an event depends on whether or not the event card is still in the deck. These probabilities largely follow from our initial analysis.

\begin{align} P(\text{event} | \text{event in deck}, C = c) &= \underbrace{\frac{1}{c}}_{{\text{draw event directly}}} + \underbrace{\frac{1}{c}\cdot\frac{1}{10}}_{{\text{draw reshuffle then event}}} \\ P(\text{event} | \text{event not in deck}, C = c) &= \underbrace{\frac{1}{c}\cdot\frac{1}{10}}_{{\text{draw reshuffle then event}}} \end{align}

When the event is not in the deck, the only way to draw it again is to first draw the reshuffle. Putting these together we can find the following.

\begin{multline} P(\text{event} ) = P(\text{event} | \text{event in deck}) \cdot P(\text{event in deck}) \\+ P(\text{event} | \text{event not in deck}) \cdot \left ( 1 - P(\text{event in deck} ) \right ) \end{multline}

In our case (conditioning on $C=c$), we can find what we're looking for.

\begin{align} P(\text{event} | C = c) &= \left (\frac{1}{c} + \frac{1}{c}\cdot\frac{1}{10} \right ) \cdot \frac{c-1}{10} + \frac{1}{c}\cdot\frac{1}{10} \cdot \left ( 1 - \frac{c-1}{10} \right ) \\ &= \frac{1}{c} \cdot \frac{11}{10} \cdot \frac{c-1}{10} + \frac{1}{c} \cdot \frac{1}{10} \cdot \frac{11-c}{10} \\ \require{cancel}&= \frac{11\cancel{c}}{100\cancel{c}} - \cancel{\frac{11}{100c}} + \cancel{\frac{11}{100c}} - \frac{\cancel{c}}{100\cancel{c}} \\ &=\frac{11}{100} - \frac{1}{100} \\ &= \frac{10}{100} \\ &= \frac{1}{10} \end{align}

Here we see that, indeed, regardless of how many cards there are in the deck, the probability of drawing an event always works out to be exactly 0.1. Two effects exactly cancel each other out. On the one hand, when there are fewer cards, it's more likely to draw an event card that is still in the deck. On the other hand, with fewer cards it's less likely that the event card is still in the deck.


There is yet a simpler way to see that the probability of drawing an event in every round is 0.1. Consider the symmetry of the situation. Every round ends with a non-reshuffle card being drawn. There is no difference among the 10 non-reshuffle cards that impacts the probability of being drawn. Therefore, the probability that each is drawn in any round must be 0.1. You may ask whether drawing the event card affects the probability of it being drawn in a subsequent round. It does; that effect shows up in certain conditional probabilities.
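For anyone who prefers an empirical check of that symmetry argument, a quick Monte Carlo simulation of the deck (a sketch of my reading of the setup) gives per-round event probabilities that hover right around 0.1:

```python
import random

def event_frequencies(rounds=20, trials=100_000):
    full_deck = ["event"] + ["value"] * 9 + ["reshuffle"]
    hits = [0] * rounds
    for _ in range(trials):
        deck = full_deck[:]
        random.shuffle(deck)
        for r in range(rounds):
            card = deck.pop()
            while card == "reshuffle":
                deck = full_deck[:]   # everything shuffles back in
                random.shuffle(deck)
                card = deck.pop()
            if card == "event":
                hits[r] += 1
    return [h / trials for h in hits]

print(event_frequencies())  # each entry should be close to 0.1
```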