How Many Cases Must You Count to Detect Bias or Preference?

6 December 2001

What if you have a coin you suspect is biased? That is, the chances may not be precisely 50-50 that a flip will yield heads or tails. You might care to know how many times you'd have to toss the coin and count the results before you'd be confident that the measured frequencies were acceptably close to the true but unknown probabilities.

Statisticians describe such situations rigorously by quantifying the degree of confidence sought, and the margin accepted, in the conclusions. For instance, under some conditions you might be satisfied being 95 percent sure that your results were within 1 percent of the actual value. For alternate purposes, 90 percent confidence and a 5 percent margin might do, or 99 percent certainty and 0.1 percent tolerance might be needed. Usually, more crucial consequences demand greater confidence, while smaller anticipated biases entail narrower tolerances. As you may guess, more exacting criteria require more cases to be counted.

I'll give some examples showing the number of trials needed to detect bias subject to some typical constraints. For solid citizens who think I just make all this stuff up, or who want to check my arithmetic, the figures are based on the Laplace-Gauss theorem, applied under the assumption that the results are binomially distributed.
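
In symbols, that amounts to the standard normal-approximation sample-size formula

$$
n \;\ge\; \frac{z^2\,p(1-p)}{E^2} \;=\; \left(\frac{z}{2E}\right)^2 \quad\text{when } p = \tfrac{1}{2},
$$

where z is the two-sided critical value for the chosen confidence level (roughly 1.96 for 95 percent and 2.58 for 99 percent) and E is the tolerance, both expressed as proportions. The formula is standard; the notation here is my shorthand, not the column's.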

Pretend you expect enough bias that you'd be happy knowing the chance of heads or tails to within 1 percent. For 95 percent confidence that the measured frequencies will be no more than this far from the true probabilities, you'd have to run at least 9,600 tests. To be 99 percent sure that your counts were within 1 percent of the actual probabilities, you'd need at least 16,600 trials.
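
As a quick check on those two figures, here is a minimal Python sketch (mine, not the column's) that applies the sample-size formula above, taking the worst case p = 1/2:

```python
from math import ceil
from statistics import NormalDist

def required_trials(confidence, tolerance, p=0.5):
    """Flips needed so the measured frequency lands within `tolerance` of the
    true probability `p` at the stated confidence (normal approximation)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    return ceil(z * z * p * (1 - p) / tolerance ** 2)

print(required_trials(0.95, 0.01))  # 9604  -- the column rounds to 9,600
print(required_trials(0.99, 0.01))  # 16588 -- the column quotes 16,600
```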

As an illustration, suppose you tally 9,600 flips and get 4,608 heads. The frequency is 4,608 divided by 9,600, which equals 48 percent. You'd be 95 percent certain that the true probability of heads was within 1 percent of this value - between 47 and 49 percent. If, instead, you counted 16,600 flips and got 7,885 heads, your measured frequency would be 7,885 divided by 16,600 or 47.5 percent. You'd now have 99 percent confidence that the true probability of heads was between 46.5 and 48.5 percent.
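
The column uses a flat plus-or-minus 1 percent band around each measured frequency. Recomputing the half-width from the observed counts themselves, under the same normal approximation, lands essentially on that same 1 percent; a short sketch (again mine, with an assumed function name):

```python
from math import sqrt
from statistics import NormalDist

def measured_interval(heads, flips, confidence):
    """Observed frequency and its normal-approximation confidence interval."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    freq = heads / flips
    half = z * sqrt(freq * (1 - freq) / flips)
    return freq, freq - half, freq + half

for heads, flips, conf in [(4_608, 9_600, 0.95), (7_885, 16_600, 0.99)]:
    freq, lo, hi = measured_interval(heads, flips, conf)
    print(f"{freq:.1%} measured, roughly {lo:.1%} to {hi:.1%} at {conf:.0%} confidence")
# 48.0% measured, roughly 47.0% to 49.0% at 95% confidence
# 47.5% measured, roughly 46.5% to 48.5% at 99% confidence
```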

When you're concerned about small amounts of bias, you may want to shrink the tolerance band around the frequency obtained by counting. Perhaps you want your measurement to be within 0.1 percent of the true value. If 95 percent confidence was adequate for your purposes, you'd have to run 960,000 tests; a 99 percent degree of confidence would take 1,660,000 flips. There are also some trade-offs between confidence and tolerance. With 9,000,000 tosses of the coin, you'd have 95 percent confidence your results were within 0.033 percent of the true probability and 99 percent confidence you were no more than 0.043 percent up or down.
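
That last trade-off comes from running the same formula in reverse: fix the number of tosses and solve for the tolerance. A brief sketch under the same assumptions (the function name is mine):

```python
from math import sqrt
from statistics import NormalDist

def tolerance_for(flips, confidence, p=0.5):
    """Half-width of the confidence band achievable with a given number of flips."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * sqrt(p * (1 - p) / flips)

print(f"{tolerance_for(9_000_000, 0.95):.3%}")  # 0.033%
print(f"{tolerance_for(9_000_000, 0.99):.3%}")  # 0.043%
```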

To impress civilians that things you learn in the casino apply to real life, consider the imbroglio besetting the 2000 US election. Did the voters favor - that is, were they biased toward - Al Gore or George Bush? The situation isn't identical to that of the coin toss, but a similar approach yields interesting insights.

Assume 9,000,000 votes were tallied in an imaginary state, with candidates H and T (heads and tails, if you like) receiving 4,499,100 and 4,500,900 votes, respectively, so T wins by 1,800 votes. In percentages, H received 4,499,100/9,000,000 or 49.99 percent, and T scored 4,500,900/9,000,000 or 50.01 percent. Unknown errors arose in the tallies owing to the counting procedures. Can election officials reliably ascertain the will of the people from this fictional count? The model suggests they can be 99 percent sure H actually got between 49.947 and 50.033 percent, and T between 49.967 and 50.053 percent. Were 95 percent confidence adequate, they could still only narrow the intervals to put H between 49.957 and 50.023 percent of the vote, and T between 49.977 and 50.043 percent. Even at the lower level of confidence, the ranges overlap and the will of the majority would be statistically indeterminate.
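
Under the same coin-flip model (my re-creation of the column's arithmetic, not an official recount procedure), the overlap is easy to reproduce:

```python
from math import sqrt
from statistics import NormalDist

def vote_share_interval(votes, total, confidence):
    """Vote share and its confidence band, using the worst-case p = 0.5 half-width."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    share = votes / total
    half = z * sqrt(0.25 / total)
    return share - half, share + half

total = 9_000_000
for confidence in (0.99, 0.95):
    h_lo, h_hi = vote_share_interval(4_499_100, total, confidence)
    t_lo, t_hi = vote_share_interval(4_500_900, total, confidence)
    print(f"{confidence:.0%}: H {h_lo:.3%} to {h_hi:.3%}, T {t_lo:.3%} to {t_hi:.3%}, "
          f"overlap: {h_hi > t_lo}")
# 99%: H 49.947% to 50.033%, T 49.967% to 50.053%, overlap: True
# 95%: H 49.957% to 50.023%, T 49.977% to 50.043%, overlap: True
```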

There are other ways of interpreting such data. For instance, in the election, counting uncertainties could be estimated and used to calculate margin of error in the final figures. Or, lawyers could muster mathematicians to argue authoritatively over chads, providing the poet, Sumner A Ingmark, pause to pine:

Politicians often go,
Where statisticians falter,
Blithely, armed with truths they know,
Or, when it suits them, alter.

Alan Krigman

Alan Krigman was a weekly syndicated newspaper gaming columnist and Editor & Publisher of Winning Ways, a monthly newsletter for casino aficionados. His columns focused on gambling probability and statistics. He passed away in October, 2013.