Independence Day

3 November 1999

    Today is Independence Day.  No, I'm not writing about the Fourth of July; after all, it's November.  Rather, today is the day we learn about a concept in probability theory known as independence.  This is a notion that is central to lots of calculations in gambling games and we will use it over and over again.  In particular, I'll use this idea to help derive the house edge for the Pass Line in Craps.  Then, next month, I'm going to return to Cardano's dice question as I promised I would do (see my article Cardano's Gaff Lives On that was posted on this site in June 1999).

    Two events A and B are said to be independent provided it is true that

P(A & B) = P(A)P(B) (1)
Why do we care?  Well, if A and B are independent, then equality (1) provides us with a very easy way to calculate P(A & B).  "Sure," you're thinking, "if (1) is the definition of independence, then in order to see if two events are independent you have to know P(A & B) to start with.  So, what good is it?"  You know, you've got a point.  In fact, let me be very candid with you.  If the only method you have to determine the independence of two events is to use relation (1), then in such an instance independence is a totally useless idea!  So what's the story?

    Let me begin by convincing you that independence is not always easy to spot.  Similar sounding situations can be vastly different when it comes to independence.  Recall the sample space for the single roll of a pair of dice:

Sample Space S (each pair written as (red die, white die)):

(1,1) (1,2) (1,3) (1,4) (1,5) (1,6)
(2,1) (2,2) (2,3) (2,4) (2,5) (2,6)
(3,1) (3,2) (3,3) (3,4) (3,5) (3,6)
(4,1) (4,2) (4,3) (4,4) (4,5) (4,6)
(5,1) (5,2) (5,3) (5,4) (5,5) (5,6)
(6,1) (6,2) (6,3) (6,4) (6,5) (6,6)

It might be helpful for you to think of the dice as being different colors, say a red die and a white die, matching the (red, white) pairs above.  Let A represent the event that the red die is even and let B represent the event that the sum of the two dice is 7.  The second, fourth, and sixth rows above represent event A and, of course, the diagonal running from the lower left to the upper right represents B.  The probabilities are P(A) = 1/2 and P(B) = 1/6.  Notice that the event A & B = {(6,1), (4,3), (2,5)} so that P(A & B) = 3/36 or 1/12.  Thus we see that
P(A & B) = 1/12 = 1/2 x 1/6 = P(A)P(B) (2)
So according to (1) A and B are independent events.  On the other hand, suppose we keep A the same but change B to be the event that the sum of the two dice is 6.  Now P(B) = 5/36 and A & B = {(4,2), (2,4)} so that P(A & B) = 2/36 = 1/18.  Now we have
P(A & B) = 1/18 = 4/72 < 5/72 = 1/2 x 5/36 = P(A)P(B) (3)
which means that A and B are not independent.  Both situations appeared similar, yet one produced independence and the other did not.  It's enough to drive you to drink!
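
If you would rather let a computer do the counting, here is a minimal Python sketch (my illustration, not part of the original column) that enumerates the 36 outcomes with exact fractions and tests definition (1) for both versions of B:

```python
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))    # all 36 (red, white) outcomes

def prob(event):
    # Uniform model: each of the 36 outcomes is equally likely.
    return Fraction(len(event), len(S))

A = {s for s in S if s[0] % 2 == 0}        # event A: the red die is even

for total in (7, 6):
    B = {s for s in S if sum(s) == total}  # event B: the dice total `total`
    print(total, prob(A & B), prob(A) * prob(B),
          prob(A & B) == prob(A) * prob(B))
# prints: 7 1/12 1/12 True    (independent, as in (2))
#         6 1/18 5/72 False   (not independent, as in (3))
```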

    Okay, so we have to be careful.  Is there ever a situation where we can be certain that two events are independent?  Yes, indeed.  There is a principle in probability theory that I like to call the Independence Metaprinciple.  It is a principle that is seldom, if ever, stated in books on probability theory, yet is always invoked when possible.  It involves the notion of causality.  Specifically, we say that two events A and B are causally independent provided the occurrence or nonoccurrence of one of the events has no influence on whether the other event occurs or not.  An example might be helpful here.  Suppose I flip a coin and get a tail.  Now I flip the coin again.  Does the fact that I got a tail on the first flip in any way influence the result that I will get on the second flip?  I hope it is clear to you that the answer is no.  I submit that to believe otherwise one would have to suppose that the coin possesses both a memory and a will, hardly a credible supposition.

    The Independence Metaprinciple is as follows:

If two events A and B are causally independent, then they are independent in the sense of  (1).

In other words, if I am absolutely convinced that the occurrence or nonoccurrence of event A has no causal effect on whether or not event B occurs, then I can calculate the probability of the event A & B by simply multiplying the probability of A times the probability of B.

    Here is a simple example to illustrate how independence is used.  Suppose I roll a pair of dice and then reroll the dice.  What is the probability that I roll a total of five on the first roll and a total of seven on the second roll?  One way to do this would be to create a sample space that pairs each of the 36 outcomes on the first roll with each of the 36 outcomes on the second roll, a sample space containing 36 x 36 = 1296 outcomes, and then count up how many of these outcomes consist of a pair adding to 5 coupled with a pair adding to 7.  Sound like fun?  Here's the easy way.  I do not believe the first roll of the dice has any causal effect on the second roll of the dice.  If I define A to mean that a 5 is rolled on the first roll and B to mean that a 7 is rolled on the second roll, then A and B are causally independent.  The event A & B is just the event that a 5 is rolled on the first roll and a 7 is rolled on the second roll.  Referring to the dice sample space S above, it is easy to see that P(A) = 1/9 and P(B) = 1/6.  Hence, using the Independence Metaprinciple we have

P(A & B) = P(A)P(B) = 1/9 x 1/6 = 1/54 (4)
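Of course, nobody wants to list 1296 outcomes by hand, but a computer doesn't mind.  Here is a short Python sketch that actually builds that big sample space and confirms (4) by brute-force counting:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # 36 outcomes for one roll
pairs = list(product(rolls, repeat=2))        # 36 x 36 = 1296 two-roll outcomes

# Count the outcomes where the first roll totals 5 and the second totals 7.
hits = [(r1, r2) for r1, r2 in pairs if sum(r1) == 5 and sum(r2) == 7]
print(Fraction(len(hits), len(pairs)))        # 1/54, agreeing with (4)
```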
    How do we know that this principle is sound?  Well, I can't prove it to you, which is why I used the prefix 'meta'.  I can, however, make a very plausible argument on its behalf.  Recall my August article on proposition bets (An Ear Full of Cider, this site, August 1999).  In that article I introduced you to the notion of conditional probability and the notation P(A | B).  This meant the probability of event A occurring given that event B had occurred.  In the August article I showed that
P(A & B) = P(A | B)P(B) (5)
Now if the events A and B are causally independent, wouldn't you believe that the probability of A occurring given that B has occurred would just be the probability of A occurring?  After all, B has no effect on whether A occurs or not.  In symbols, I am saying that in this case P(A) = P(A | B).  Using this in (5) we obtain
P(A & B) = P(A)P(B) (6)
which is just relation (1) above.

    You have heard writer after writer tell you that the house edge for the Pass Line in Craps is 1.41%, but they never seem to get around to making the calculation.  That's because you need to have some mathematical tools under your belt to do so.  The good news for you is that you now have those tools.  Let's do it.

    I am assuming that you know the way the Pass Line works.  I want to do a sample calculation for you and then I'll make a table that has all of the information necessary to make the house edge calculation.  The sample calculation will address the following question:  What is the probability of establishing a point of 5 and then making the 5?  Let A be the event of rolling a 5 on the comeout roll.  Let B be the event of rerolling the 5 on a subsequent roll before rolling a 7.  Clearly, referring to the dice sample space S above, the probability of A occurring is 1/9.  What about B?  B is actually a conditional probability.  There are only two dice totals that affect B, the 5 and the 7; all others are ignored.  So the probability of B is actually the conditional probability that a 5 is rolled given that the outcome is either 5 or 7.  Since there are 6 ways to roll a 7 and 4 ways to roll a 5, there are 6 + 4 = 10 ways the established point of 5 can be settled, 4 of which constitute making the 5.  Hence, P(B) = 4/10 = 2/5.  Here comes the good part.  The events A and B are causally independent.  A & B is the event of establishing 5 as a point and then making it, so it is the probability of A & B we seek.  Easy!  Using the Independence Metaprinciple we have

P(A & B) = P(A)P(B) = 1/9 x 2/5 = 2/45 (7)
By the way, the probability of not making a 5 is 3/5 (do you see why?) so the probability of establishing a point of 5 and not making it is 1/9 x 3/5 = 3/45.
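
If you want to check this arithmetic by machine, here is a small Python sketch of the point-5 calculation; the dictionary ways is just a convenient encoding of the counts from sample space S:

```python
from fractions import Fraction

# ways[t] = number of ways two dice total t (e.g. ways[7] == 6, ways[5] == 4)
ways = {t: 6 - abs(t - 7) for t in range(2, 13)}

p_comeout_5 = Fraction(ways[5], 36)              # P(A) = 4/36 = 1/9
p_make_5 = Fraction(ways[5], ways[5] + ways[7])  # P(B) = 4/10 = 2/5

print(p_comeout_5 * p_make_5)                    # 2/45, agreeing with (7)
print(p_comeout_5 * (1 - p_make_5))              # 1/15, i.e. 3/45
```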

    Here is the table I promised.  The symbol DNA stands for 'Does Not Apply'.  Also, all of the probabilities involved can be expressed using a common denominator of 1980 so I'll convert them in the table.
 

Dice Total    Comeout      Prob. Point   Prob. Point    Product   Prob. of    Prob. of
              Probability  Made          Not Made                 Win         Loss
----------------------------------------------------------------------------------------
7 or 11       8/36         DNA           DNA            DNA       440/1980    DNA
2, 3, or 12   4/36         DNA           DNA            DNA       DNA         220/1980
4             1/12         1/3           DNA            1/36      55/1980     DNA
4             1/12         DNA           2/3            2/36      DNA         110/1980
5             1/9          2/5           DNA            2/45      88/1980     DNA
5             1/9          DNA           3/5            3/45      DNA         132/1980
6             5/36         5/11          DNA            25/396    125/1980    DNA
6             5/36         DNA           6/11           30/396    DNA         150/1980
8             5/36         5/11          DNA            25/396    125/1980    DNA
8             5/36         DNA           6/11           30/396    DNA         150/1980
9             1/9          2/5           DNA            2/45      88/1980     DNA
9             1/9          DNA           3/5            3/45      DNA         132/1980
10            1/12         1/3           DNA            1/36      55/1980     DNA
10            1/12         DNA           2/3            2/36      DNA         110/1980
----------------------------------------------------------------------------------------
Totals                                                            976/1980    1004/1980

Figure 1
Pass Line Probabilities
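
The whole table can be generated by looping over the six possible points.  Here is a minimal Python sketch that rebuilds the win and loss columns of Figure 1 and totals them:

```python
from fractions import Fraction

ways = {t: 6 - abs(t - 7) for t in range(2, 13)}     # ways to roll each total

p_win = Fraction(ways[7] + ways[11], 36)             # natural 7 or 11: 8/36
p_lose = Fraction(ways[2] + ways[3] + ways[12], 36)  # craps 2, 3, or 12: 4/36

for point in (4, 5, 6, 8, 9, 10):
    p_point = Fraction(ways[point], 36)                    # comeout column
    p_make = Fraction(ways[point], ways[point] + ways[7])  # point-made column
    p_win += p_point * p_make
    p_lose += p_point * (1 - p_make)

print(p_win, p_lose)   # 244/495 and 251/495, i.e. 976/1980 and 1004/1980
```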

So, the probability of winning a Pass Line bet is 976/1980 and the probability of losing a Pass Line bet is 1004/1980.  Since the payoff is even money on this bet the expected return to the player is just

exp return = (+1) x 976/1980 + (-1) x 1004/1980
           = -28/1980 ≈ -1.41% (8)
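
And if you'd like to skip the long division, here is a one-line check of (8):

```python
from fractions import Fraction

# Even-money payoff: win 1 unit with prob. 976/1980, lose 1 with prob. 1004/1980.
ev = (+1) * Fraction(976, 1980) + (-1) * Fraction(1004, 1980)
print(ev, float(ev))   # -7/495, about -0.0141, i.e. a 1.41% house edge
```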
    So, there you have it.  See how easy this was using the notion of independence?  What if you had to construct a sample space to analyze this wager?  Not a pleasant thought, is it?

    Next month I'll tell you a bit more about independence and show you how to answer the dice question raised by Cardano.  In the meantime, see if you can figure out the house edge for the Don't Pass bar 12 bet in Craps.  You can use the probabilities in Figure 1 except that now the 12 becomes a push and the 2 and 3 are winners, that is, the second line in the table changes.  Your answer should be just a tad smaller than the edge for the Pass Line.  See you next month.

Donald Catlin

Don Catlin is a retired professor of mathematics and statistics from the University of Massachusetts. His original research area was in Stochastic Estimation applied to submarine navigation problems, but he has spent the last several years doing gaming analysis for gaming developers and writing about gaming. He is the author of The Lottery Book: The Truth Behind the Numbers, published by Bonus Books.

Books by Donald Catlin:

Lottery Book: The Truth Behind the Numbers