CasinoCityTimes.com

Articles in this Series
Best of Donald Catlin

Taking Advantage of an Advantage: Part 3 – Kelly Betting

3 April 2011

            In last month’s article we discovered that when employing proportional betting, choosing the fraction f of our stake that we should risk when playing a positive game is tricky.  In particular we noted that if f is a large number (between 0 and 1), virtual ruin is almost certain.  Even lowering f to 0.3 was not sufficient to ensure that we would not experience ruin.  Is there any way to choose f that makes sense?

            The development that follows is not as rigorous as that presented by Kelly [3] and other practitioners.  It is less technical and more intuitive but does, I believe, convey the idea accurately.

            We noted that by trying to optimize our expected return over all possible paths (see last month’s article for the definition of path), we included many paths that would terminate with our being unable to continue betting.  In these instances the longer we play, the worse things get.  The approach we now take is suggested by some of the discussion at the end of last month’s article.  If the game is positive, we want to be able to play it for a long time.  In such a scenario the law of large numbers (commonly referred to in naive terms as the law of averages) says that for a large number of trials, the ratio of wins to the number of trials will be close to the win probability with a high likelihood.  In symbols, w/n will be close to p.  Similarly, l/n will be close to q.
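The law of large numbers is easy to see in action with a short simulation.  Here is a minimal Python sketch, using p = 0.51 as the assumed win probability for the running 2% edge example (w denotes the simulated number of wins, l the number of losses):

```python
import random

random.seed(1)       # fixed seed so the run is repeatable
p = 0.51             # assumed win probability (running 2% edge example)
n = 1_000_000        # number of trials

w = sum(random.random() < p for _ in range(n))   # number of wins
l = n - w                                        # number of losses
print(w / n, l / n)  # both ratios land very close to p and q = 1 - p
```

With a million trials the ratio w/n typically differs from p only in the third decimal place.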

            Another way of saying this is to say that np will be close to w and nq will be close to l.  How close?  You’ll have to get out your old probability book for that one; I am going to skip that issue, though it is a real one (see [1], [4], and [6]).  Recall that last month we derived the expression

                                                S_n = (1 + f)^w (1 – f)^l S_0        (1)

For large n we then have

                                                S_n ~ (1 + f)^(np) (1 – f)^(nq) S_0     (2)

where ~ stands for approximately equal.  We can easily rewrite (2) as

                                                S_n ~ [(1 + f)^p (1 – f)^q]^n S_0     (3)

Defining the function G by

                                                G(x) = (1 + x)^p (1 – x)^q, 0 ≤ x ≤ 1     (4)

we can rewrite (3) as

                                                S_n ~ [G(f)]^n S_0                    (5)

            The approximation indicated in expression (5) makes it clear that the right-hand side of this approximation determines how S_n propagates.  In particular, if G(f) is a number smaller than 1, then the right-hand side of (5) will get smaller and smaller as n increases.  Similarly, if G(f) is larger than 1, then the right-hand side of (5) will increase as n increases.  Because this expression approximates S_n, we can draw a similar conclusion regarding S_n and the choice of f.
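To see this numerically, here is a short Python sketch of G(f), again using the assumed example values p = 0.51 and q = 0.49: a small f gives G(f) just above 1 (slow growth over many bets), while f = 0.3 gives G(f) below 1 (decay toward ruin):

```python
p, q = 0.51, 0.49          # assumed example win/loss probabilities

def G(f):
    # Growth factor per bet, expression (4)
    return (1 + f) ** p * (1 - f) ** q

S0 = 100.0                 # starting stake
n = 1000                   # number of bets
for f in (0.02, 0.3):
    # G(0.02) > 1, so the stake grows; G(0.3) < 1, so it collapses
    print(f, G(f), G(f) ** n * S0)
```

After 1,000 bets the f = 0.02 stake has grown, while the f = 0.3 stake is vanishingly small, matching last month's simulations.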

            Notice that from expression (4) we see that G(0) = 1 and G(1) = 0.  What about values of f between 0 and 1?  Here is where a bit of calculus is handy.  The derivative of G, written G’(x), is given by the expression

                                    G’(x) = [(p – q – x)/(1 – x^2)]G(x)      (6)

For those of you who have had calculus, I leave the derivation of (6) to you as an exercise; the rest of you will just have to take my word for it.  It is the interpretation of this formula that is important here. 

If one were to draw a graph of G on the interval from 0 to 1, the formula given in (6) would give you the slope of the tangent line to the resulting curve.  Notice that G’(0) = p – q, which we have been assuming is positive.  That means that the function G(x) is increasing as x increases from 0.  We know that G(1) = 0, so G must reach a maximum at some point to the right of 0.  At such a point the tangent line to the curve would be a horizontal line and thus have a slope of 0.  Also G(x) would be positive at such a point; in fact it would be greater than 1.  Hence, since x is less than 1, 1 – x^2 would also be positive.  The only way to make the slope 0 at such a point would be to take p – q – x = 0 or, in other words, to set x = p – q = e.

            Here then is the Kelly Criterion.  Simply set f = e.  In words, the fraction of our stake that we should risk is equal to the advantage that we have.  In our running example this would be 2%.  Now we see why we had such difficulty last month.  The values of f that we chose were just too large.  If we set f = e, then G(e) is greater than 1 and is the maximum value that we can take for G.
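One can confirm the criterion numerically without any calculus.  The sketch below (again with the assumed values p = 0.51, q = 0.49) scans a fine grid of f values and locates the maximum of G, which lands right at f = p – q = 0.02:

```python
p, q = 0.51, 0.49          # assumed example probabilities; edge e = p - q = 0.02

def G(f):
    # Growth factor per bet, expression (4)
    return (1 + f) ** p * (1 - f) ** q

# Evaluate G on a grid of f values in [0, 1) and pick the maximizer.
best_f = max((i / 10_000 for i in range(10_000)), key=G)
print(best_f, G(best_f))   # the maximizer sits at the edge, p - q
```

The grid search finds best_f = 0.02, in agreement with the calculus argument above.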

            A few words are in order here. If our stake is $100 and our edge e = 0.02, our first bet should be $2.  Assuming we win that bet, our stake is $102 but our next bet can’t be $2.04 since casinos don’t deal in pennies.  What we have to do is round off our theoretical bet to the nearest dollar so our second bet would still be $2.  When our stake gets to $126, our next theoretical bet would be $2.52 so we would round this up to $3.  And so on.
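The rounding rule just described is easy to encode.  Here is a minimal sketch; the helper name kelly_bet is mine, and Python's built-in round is used for the nearest-dollar rounding (note that round breaks exact .5 ties toward the even dollar, which never comes up in these examples):

```python
e = 0.02   # edge; under the Kelly Criterion we set f = e

def kelly_bet(stake):
    # Theoretical bet is stake * e; round to the nearest whole dollar.
    return round(stake * e)

print(kelly_bet(100))   # $2.00 theoretical -> bet $2
print(kelly_bet(102))   # $2.04 theoretical -> still $2
print(kelly_bet(126))   # $2.52 theoretical -> bet $3
```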

            As I indicated earlier the above is not a rigorous derivation of Kelly betting.  What I tried to do here is give a plausible and easily understood explanation of Kelly betting.  I have provided references for more sophisticated arguments if you wish to follow up on them.

            Finally, for you calculus buffs out there, you might like to try to derive an expression for the second derivative of G.  The answer is

                                    G’’(x) = [(e^2 – 1)/(1 – x^2)^2]G(x)        (7)

Using this expression you can then show that G is concave down on the interval from 0 to 1.  Have fun!  See you next month.
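For the numerically inclined, expression (7) can also be checked against a central second difference.  This short Python sketch (same assumed p and q) does that at a few points and confirms that G’’ is negative throughout the interval:

```python
p, q = 0.51, 0.49          # assumed example probabilities
e = p - q                  # edge

def G(x):
    # Growth factor, expression (4)
    return (1 + x) ** p * (1 - x) ** q

def G2(x):
    # Closed form for the second derivative, expression (7)
    return (e ** 2 - 1) / (1 - x ** 2) ** 2 * G(x)

h = 1e-5
for x in (0.1, 0.5, 0.9):
    # Central second difference approximates G''(x)
    numeric = (G(x + h) - 2 * G(x) + G(x - h)) / h ** 2
    print(x, G2(x), numeric)   # both negative: G is concave down
```

Since e^2 – 1 < 0 while (1 – x^2)^2 and G(x) are positive on (0, 1), the sign of (7) is negative everywhere on the interval, which is the concavity claim.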

References

References [1], [2], [4], and [6] are all found in the book Finding the Edge published by The Institute for the Study of Gambling and Commercial Gaming at the University of Nevada, Reno. Editors are Olaf Vancura, Judy A. Cornelius, and William R. Eadington.

[1] Browne, Sid (2000), Can You Do Better Than Kelly in the Short Run?, pp 215-231

[2] Griffin, Peter and Thorp, E.O. (2000), Blackjack: Betting the Klondike's Free Ride, pp 215-272

[3] Kelly, J.L. (1956), A New Interpretation of Information Rate, Bell System Technical Journal, July 1956, pp 917-926

[4] Leib, John (2000), Limitations on Kelly or The Ubiquitous "n approaches infinity," pp 233-253

[5] Thorp, E.O., (1962), Beat the Dealer: A Winning Strategy for the Game of Twenty One, Blaisdell Publishing Company, New York, page 89

[6] Thorp, E.O. (2000), The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market, pp 163-213


Don Catlin can be reached at 711cat@comcast.net

Donald Catlin

Don Catlin is a retired professor of mathematics and statistics from the University of Massachusetts. His original research area was in Stochastic Estimation applied to submarine navigation problems, but he has spent the last several years doing gaming analysis for gaming developers and writing about gaming. He is the author of The Lottery Book: The Truth Behind the Numbers, published by Bonus Books.

Books by Donald Catlin:

Lottery Book: The Truth Behind the Numbers