Hacker News

Kelly Can't Fail

21 hours ago/95 comments/win-vector.com
16 hours ago by pcthrowaway

Note that you need to be able to infinitely divide your stake for this to work out for you all the time.

For example, if the deck has 26 red cards on top, you'd end up dwindling your initial $1.00 stake to 0.000000134 before riding it back up to 9.08
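That worst case is easy to reproduce with exact arithmetic; here is a minimal sketch (the helper name and structure are mine, not from the article) that plays the Kelly rule of betting |r−b|/(r+b) on the majority colour against a deck with all 26 reds on top:

```python
from fractions import Fraction
from math import comb

def kelly_run(deck):
    """Play the red/black Kelly strategy over a given deck, exactly.
    deck: list of 'R'/'B'. Bet |r-b|/(r+b) of bankroll on the majority colour.
    Returns (final bankroll, minimum bankroll seen) as Fractions of a $1 stake."""
    bankroll = Fraction(1)
    low = bankroll
    r, b = deck.count('R'), deck.count('B')
    for card in deck:
        bet = bankroll * Fraction(abs(r - b), r + b)
        guess = 'R' if r >= b else 'B'
        bankroll += bet if card == guess else -bet
        if card == 'R': r -= 1
        else: b -= 1
        low = min(low, bankroll)
    return bankroll, low

# Worst case for divisibility: all 26 red cards on top.
final, low = kelly_run(['R'] * 26 + ['B'] * 26)
print(float(low))    # the stake bottoms out around 1.35e-07 dollars
print(float(final))  # then rides back up to 2^52 / C(52, 26), about 9.0813
```

The minimum is exactly 2^26 / C(52, 26) of the stake, after which the 26 known black cards double it 26 times.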

16 hours ago by boothby

If you start out with a $1e12 stake, you're able to avoid catastrophic rounding errors even in the worst case. There's probably a life lesson here.

14 hours ago by cbsks

My simulation shows that with a 52 card deck, if you round the bet to the nearest $.01 you will need to start with $35,522.08 to win a total of $293,601.28.

If you start with $35,522.07 or less, you will lose it all after 26 incorrect cards.

12 hours ago by boothby

Nearest rounding does seem like a mistake here. Rounding down is quite safe: rather than lose it all, you end up with at least 2^26 pennies.
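A sketch of the round-down variant in whole pennies (the function and scenario are mine): because the floored bet is always strictly smaller than an integer bankroll while both colours remain, you can never be wiped out, and once only one colour is left the known cards double whatever survives.

```python
def kelly_pennies(deck, stake):
    """Kelly on a red/black deck with bets rounded DOWN to whole pennies.
    stake: starting bankroll in pennies. Returns final bankroll in pennies."""
    r, b = deck.count('R'), deck.count('B')
    for card in deck:
        bet = stake * abs(r - b) // (r + b)  # floor keeps bet strictly below stake
        guess = 'R' if r >= b else 'B'
        stake += bet if card == guess else -bet
        if card == 'R': r -= 1
        else: b -= 1
    return stake

worst = ['R'] * 26 + ['B'] * 26   # 26 losing bets on black, then 26 sure wins
for start in (100, 10_000, 3_552_207):   # $1, $100, $35,522.07
    print(start, kelly_pennies(worst, start))
```

Even starting from $1, the worst-case deck grinds you down to a single penny and then pays out 2^26 pennies.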

13 hours ago by undefined
[deleted]
16 hours ago by fragmede

Is the lesson: choose to be born to wealthy parents?

3 hours ago by mannykannot

It would really help if your parents know someone who can and will take the other side in this game.

8 hours ago by croes

It's easier to make money if you already have money

3 hours ago by renewiltord

A popular view is that having wealthy parents gives one a great advantage. Another popular view is that working extraordinarily hard for money is a waste of one's life even if one gets the money. But the two are only consistent if one believes that one's own life is the optimization target. If I live a life of misery so that my children live a life of prosperity, that would strike me as a phenomenal result.

So another reading is "choose to give your children wealthy parents".

15 hours ago by darkerside

Or is it to choose appropriate betting amounts based on your capacity for risk?

16 hours ago by jmount

Very good point. I did some experiments and the system is very sensitive to any sort of quantization or rounding of bets. You get the expected value in about the right place, but the variance goes up quickly. So in addition to your important case, things are a bit dicey in general.

5 hours ago by nyeah

It's a good point. I think it affects the realism of the model. When the stake is very low, finding a penny on the street gives an astronomical improvement in the end results. At the high end, it's possible the counterparty might run out of money.

14 hours ago by tgma

Yup, the dual would be saying Martingale can't fail with infinite money.

14 minutes ago by aidenn0

It's not, because there is a finite amount of money with which this can't fail, which is never the case for Martingale. Martingale is actually likely to bankrupt you against a casino that is much better staked than you, even if you have a small advantage.

6 hours ago by PaulHoule

When I was a teen I discovered that I could always guess more than half the cards right using card counting to determine what color is more common in the deck. I programmed my

https://en.wikipedia.org/wiki/TRS-80_Model_100

to simulate it and it never failed. Recently I thought about it again and wrote a Python script that tried it 30 million times and... it never failed.

I've been thinking about what to do with it and came up with the options of (i) a prop bet and (ii) a magic trick, neither of which seemed that promising.

As a prop bet I can offer $1000 to somebody's $10, which is not the route to great prop bet profits; I also worry that if I make a mistake or get cheated somehow I could be out a lot of money. (Now that I think of it, maybe it is better if I re-organize it as a parlay bet.)

As a magic trick it is just too slow paced. I developed a patter to the effect that "Parapsychologists were never able to reliably demonstrate precognition with their fancy Zener cards, but I just developed a protocol where you can prove it every time!" but came to the conclusion that it was not entertaining enough. Going through a deck takes a while and doesn't seem like a miracle, and you would have to do it 7 times in a row to exclude the null hypothesis at p=0.01. Maybe somebody with more showmanship could do it, but I gave up.
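A quick sketch of that experiment (my code, not the original TRS-80 or Python version): guessing the majority colour of the remaining deck provably gets at least 26 of the 52 guesses right, because every draw that shrinks the red/black imbalance is a correct guess and exactly half of the 52 draws shrink it; any tie-guess that happens to land pushes you above half.

```python
import random

def majority_guess_correct(deck):
    """Count correct guesses when always guessing the colour that is
    more common in the remaining deck (guess red on ties)."""
    r, b = deck.count('R'), deck.count('B')
    correct = 0
    for card in deck:
        guess = 'R' if r >= b else 'B'
        correct += (card == guess)
        if card == 'R': r -= 1
        else: b -= 1
    return correct

rng = random.Random(0)
deck = ['R'] * 26 + ['B'] * 26
results = []
for _ in range(10_000):
    rng.shuffle(deck)
    results.append(majority_guess_correct(deck))
print(min(results), sum(results) / len(results))  # never below 26; average a few cards higher
```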

an hour ago by jdhwosnhw

That reminds me of my favorite algorithm, which can find the majority element in a list with any number of distinct entries while using O(N) time and O(1) space (provided a majority element exists). I sometimes pose deriving this algorithm as a puzzle for people; no one has ever solved it (nor could I).

https://en.m.wikipedia.org/wiki/Boyer%E2%80%93Moore_majority...
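For reference, the Boyer–Moore majority vote algorithm linked above fits in a few lines; a minimal sketch (variable names mine):

```python
def majority_candidate(xs):
    """Boyer-Moore majority vote: one pass, O(1) extra space.
    Returns the majority element if one exists (> len(xs)/2 occurrences);
    otherwise the result is arbitrary and must be verified by a second pass."""
    candidate, count = None, 0
    for x in xs:
        if count == 0:
            candidate, count = x, 1       # adopt a new candidate
        elif x == candidate:
            count += 1                    # a vote for the candidate
        else:
            count -= 1                    # a vote against cancels one for
    return candidate

votes = ['a', 'b', 'a', 'c', 'a', 'a', 'b']
print(majority_candidate(votes))  # 'a' (4 of 7 entries)
```

The trick is that each non-candidate element cancels one candidate vote, so a true majority can never be cancelled away entirely.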

24 minutes ago by barapa

That is really cool

10 hours ago by lordnacho

Interesting side note on Kelly:

In probability theory, Proebsting's paradox is an argument that appears to show that the Kelly criterion can lead to ruin. Although it can be resolved mathematically, it raises some interesting issues about the practical application of Kelly, especially in investing. It was named and first discussed by Edward O. Thorp in 2008.[1] The paradox was named for Todd Proebsting, its creator.

https://en.wikipedia.org/wiki/Proebsting%27s_paradox

10 hours ago by dominicrose

Quoting the same page: One easy way to dismiss the paradox is to note that Kelly assumes that probabilities do not change.

That's good to know. Kelly is good if you know the probabilities AND they don't change.

If you don't know or if they can change, I expect the right approach has to be more complex than the Kelly one.

3 hours ago by cubefox

In particular, then the right approach has to be more risk averse than Kelly would recommend. In reality, most probabilities can only be estimated, while the objective probabilities (e.g. the actual long run success rate) may well be different and lead to ruin. That's also what makes the title "Kelly can't fail" more wrong than right in my opinion.

10 hours ago by ilya_m

Beautiful, thanks for sharing it!

I think the portfolio argument is an unnecessary detour though. There's a two-line proof by induction.

1. The payoff in the base case of (0,1) or (1,0) is 2.

2. If we are at (r,b), r >= b, have $X, and stake (r-b)/(r+b) on red, the payoff if we draw red and win is X * (1+(r-b)/(r+b)) * 2^(r+b-1) / (r+b-1 choose r-1) = X * 2^(r+b) * r / ((r+b) * (r+b-1 choose r-1)) = X * 2^(r+b) / (r+b choose r).

Similarly, if we draw black and lose, the payoff is X * (1-(r-b)/(r+b)) * 2^(r+b-1) / (r+b-1 choose r) = X * 2^(r+b) * b / ((r+b) * (r+b-1 choose r)) = X * 2^(r+b) / (r+b choose r). QED
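The induction can be machine-checked in exact arithmetic; a sketch (assuming, as in the article, a bet of |r−b|/(r+b) on the majority colour) that asserts X * 2^(r+b) / (r+b choose r) is invariant along every line of play:

```python
from fractions import Fraction
from math import comb
import random

def check_invariant(deck):
    """Track wealth X under the Kelly strategy and verify the induction:
    X * 2^(r+b) / C(r+b, r) is constant at every state (r red, b black left)."""
    r, b = deck.count('R'), deck.count('B')
    X = Fraction(1)
    invariant = X * 2**(r + b) / comb(r + b, r)
    for card in deck:
        win = (card == 'R') == (r >= b)   # did the majority-colour bet land?
        X += X * Fraction(abs(r - b), r + b) * (1 if win else -1)
        if card == 'R': r -= 1
        else: b -= 1
        assert X * 2**(r + b) / comb(r + b, r) == invariant
    return X

rng = random.Random(1)
deck = ['R'] * 26 + ['B'] * 26
for _ in range(100):
    rng.shuffle(deck)
    assert check_invariant(deck) == Fraction(2**52, comb(52, 26))
print(float(Fraction(2**52, comb(52, 26))))  # ≈ 9.0813, the zero-variance payoff
```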

16 hours ago by fancy_pantser

A very similar card game is played by deciding when to stop flipping cards from a deck where red is $1 and black is -$1, as described in Timothy Falcon's quantitative-finance interview book (problem #14). Gwern describes it and also writes code to prove out an optimal stopping strategy: https://gwern.net/problem-14

15 hours ago by snthpy

Nice!

Only quibble I have is that black should be +$1 and red -$1 to follow standard finance conventions, i.e. be in the "black" or the "red".

16 hours ago by jmount

That is a nice game and writeup.

16 hours ago by JohnMakin

The Kelly criterion is one of my favorite game theory concepts. It is used heavily in bankroll management by professional gamblers, particularly poker players. It is a good way to understand how you can manage your finances and stakes in a way that lets you climb steadily forward without risking too much, or any ruin, but it is frequently misapplied in that space. The problem is that Kelly deals with binary results, and it is often applied in situations where the results are not binary (a criterion for applying it), so you can see skewed results that look almost right but not quite, depending on how you view the math.
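For reference, the binary-outcome case the comment describes has a simple closed form: with win probability p, loss probability q = 1 − p, and net odds b received on a win, the optimal fraction is f* = (bp − q) / b. A minimal sketch (function name mine):

```python
def kelly_fraction(p, b):
    """Classic binary Kelly: p = win probability, b = net odds received
    on a win (stake 1, win b). Returns the fraction of bankroll to bet,
    clipped at 0 when the bet has negative edge."""
    q = 1 - p
    return max(0.0, (b * p - q) / b)

print(round(kelly_fraction(0.6, 1.0), 4))  # 0.2: even-money bet, 60% win rate -> stake 20%
```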

16 hours ago by amluto

> particularly poker players

The Kelly criterion seems excellent for many forms of gambling, but poker seems like it could be an exception: in poker, you're playing against other players, so the utility of a given distribution of chips seems like it ought to be more complicated than just the number of chips you have.

(I'm not a poker player.)

2 hours ago by fernandopj

Chris "Jesus" Ferguson "proved" an application of this theory back in ~2009 [1]. He was at the time promoting Full Tilt and committed to turning a $1 bankroll into $10,000 by applying a basic strategy of never using more than a low % of his bankroll in any one tournament or cash game session.

So, if your skill makes your sessions +EV, then by limiting your losses and using the fact that in poker the strongest hands or better tourney positions give you a huge ROI, it becomes just a matter of time and discipline to build a good bankroll.

Just remember that for the better part of this challenge he was averaging US$ 0.14/hour, and it took more than 9 months.

[1] https://www.thehendonmob.com/poker_tips/starting_from_zero_b...

12 hours ago by tempestn

It's used for bankroll management (basically deciding what stakes to play) rather than for sizing bets within a particular game.

15 hours ago by undefined
[deleted]
13 hours ago by peter_retief

Could this work with roulette betting on color? Seems like you could spend a lot of time not winning or losing

13 hours ago by plorkyeran

Roulette results are uncorrelated and you have the exact same chance of winning each time, so the Kelly criterion isn't applicable. Betting on a color has a negative edge and you don't have the option of taking the house's side, so it just tells you the obvious thing: that you should bet zero.

10 hours ago by dmurray

> exact same chance of winning each time, so the Kelly criterion isn't applicable.

Actually, the main assumption that leads to the Kelly criterion is that you will have future opportunities to bet with the same edge, not constrained by the amount.

For example, if you knew this was your last profitable betting opportunity, to maximise your expected value you should bet your entire stake.

I'm slightly surprised it leads to such a nice result for this game. I don't see a claim that this is the optimal strategy for maximizing EV; zero variance is great, but having more money is also great.

Of course you are right about roulette and, if you are playing standard casino roulette against the house, the optimal strategy is not to play. But that's not because bets are uncorrelated, it's because they are all negative value.

10 hours ago by Tepix

What makes 0 better than the other numbers?

16 hours ago by bloodyplonker22

You are right that the Kelly criterion deals with binary results. This won't work for poker. In poker, we use expected value because wins and losses are not binary: the amounts won and lost vary. Once you figure out your approximate EV, you use a variance calculator in addition to that (example: https://www.primedope.com/poker-variance-calculator/) to see how likely you are to be winning, and by how much, over a certain number of hands in the long run.

18 hours ago by barbegal

It would have been a better demo if reduced to more manageable numbers, e.g. a deck of 2 black and 2 red cards.

Turn 1: r = b, so no bet.

Turn 2: bet 1/3 on whichever card wasn't revealed in turn 1.

Turn 3: either you were wrong on turn 2, in which case you have 2/3 of your stake but know the colour of the next two cards, so you can double your stake on each to end up with 4/3 after turn 3; or you were right, in which case you have 4/3 of your stake but one card of each colour left, so you don't bet this turn.

Turn 4: you know the colour of the final card, so you double your money to 8/3 of your original stake.

And then the exercise for the reader is to prove optimality (which is fairly straightforward, though I don't believe there is a short proof).
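This walkthrough can be confirmed by brute force; a small sketch (helper name mine) that runs the Kelly rule over all six orderings of a 2-red/2-black deck:

```python
from fractions import Fraction
from itertools import permutations

def kelly(deck):
    """Kelly bet |r-b|/(r+b) on the majority colour, exact arithmetic."""
    r, b = deck.count('R'), deck.count('B')
    X = Fraction(1)
    for card in deck:
        win = (card == 'R') == (r >= b)   # bet lands iff the card is the majority colour
        X += X * Fraction(abs(r - b), r + b) * (1 if win else -1)
        if card == 'R': r -= 1
        else: b -= 1
    return X

for deck in sorted(set(permutations('RRBB'))):
    print(''.join(deck), kelly(list(deck)))  # every ordering pays exactly 8/3
```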

18 hours ago by libraryofbabel

Yes. Although four cards has only one nontrivial branch, on turn 3. So, start out with the four cards example, and then show tree diagrams for the 5 and 6 cards cases (still manageable numbers) to build intuition for induction to the general case.

18 hours ago by stevage

Agreed, I could follow the general argument but not enough to be convinced about why the result is exactly the same regardless of the order of cards.

16 hours ago by undefined
[deleted]
19 hours ago by hawkjo

Very cool to see no variance in the outcome. But that also makes it feel like there should be a strategy with better expected return due to the unique problem structure. Do we know if the Kelly strategy is optimal here?

15 hours ago by travisjungroth

I have a feeling it's the highest EV. I tried a strategy of flipping all the cards until there's only one color left and then betting it all every time. Ran a million trials and got 9.08.

I was thinking these are very different strategies, but they're not exactly. The Kelly strategy does the same thing when there's only one color left. The difference is this strategy does nothing before that point.

Still, they feel like limit cases. Betting it all with only one color left is the only right move, so the question is what you do before that. Nothing and Kelly seem like the only good strategies.

13 hours ago by foota

Ah, but these aren't the same. The Kelly strategy has zero variance, whereas this strategy likely has very high variance.

It would be interesting to do the math and show why they're equal. It seems like you should be able to make the same sort of portfolio probability argument.

12 hours ago by foota

To start, your minimum return is 2x, and depending on how many cards of a single color are left at the end, you get a return of 2^N. You could take the summation of those N card returns, times the probability of each, and that must come out to 9.08 on average.

I guess the number of possible arrangements of cards with N of one color remaining is... N! * 2 * (52 - N)! * (26 choose N)?

Ah, yes this works, you can see it here: https://www.wolframalpha.com/input?i=%28summation+of+N%21+*+....

That is: the summation of N! * (52 - N)! * (26 choose N) * 2^N / 52! from N=0 to 26. (For some reason the * 2 for the different suits was over-counting, so I removed it; not sure why. Also it seems like it should be from 1 to 26, but that doesn't give the right answer either, so something is whack.)
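A different counting, which I find easier to verify (it conditions on where the last run starts, so its terms don't match the formula above): the probability that exactly N cards of one colour remain when the other colour runs out is 2 * C(51-N, 25) / C(52, 26), since the last N cards must share a colour and position 52-N must be the opposite colour. Summing P(N) * 2^N recovers the 9.08:

```python
from fractions import Fraction
from math import comb

# P(N) = 2 * C(51-N, 25) / C(52, 26): exactly N cards of one colour remain
# when the other colour is exhausted (factor 2 for the two colour choices).
total = comb(52, 26)
ev = sum(Fraction(2 * comb(51 - N, 25), total) * 2**N for N in range(1, 27))
print(float(ev))  # 2^52 / C(52, 26) ≈ 9.0813, matching the simulations
```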

6 hours ago by travisjungroth

Of course they're not the same. They have the same EV and the strategies do the same thing in a condition that always happens: there's only one color left. The variance is wildly different.

18 hours ago by rahimnathwani

  Do we know if the Kelly strategy is optimal here?
What do you mean by optimal? Do you mean you're willing to risk going bankrupt, if it means a higher expected value?
16 hours ago by scotty79

Surely there's some space between risking going bankrupt and risking getting less than the 9.08 return guaranteed by the Kelly strategy.

If you are willing to take some risk in exchange for the possibility of a higher payout, just bet a bit more than Kelly recommends. That's your "optimal" strategy for the amount of risk you are willing to take. I imagine its expected return is the same as Kelly's, and calculating its variance is left as an exercise for the reader.

10 hours ago by rahimnathwani

  I imagine its expected return is the same as Kelly's
Given two options with the same expected return, most people would prefer the lower variance.

Accepting higher variance with no increase in expected return has a name: gambling.

9 hours ago by OscarCunningham

In this game, all strategies have the same expected value, so long as they follow the rule 'if the remaining deck is all the same colour, then you should bet everything you have on that colour'.
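That rule's sufficiency can be checked exhaustively on the 2-red/2-black deck from the comment above: any fixed fraction bet on red, combined with going all-in once the deck is monochrome, gives an EV of exactly 8/3, though only the Kelly fraction is variance-free. (The helper and the choice of test fractions are mine.)

```python
from fractions import Fraction
from itertools import permutations

def play(deck, f):
    """Bet a fixed fraction f of bankroll on red while both colours remain;
    go all-in on the known colour once the deck is monochrome."""
    r, b = deck.count('R'), deck.count('B')
    X = Fraction(1)
    for card in deck:
        if r and b:
            X += X * f * (1 if card == 'R' else -1)
        else:
            X *= 2  # the colour of every remaining card is known
        if card == 'R': r -= 1
        else: b -= 1
    return X

decks = sorted(set(permutations('RRBB')))
for f in (Fraction(0), Fraction(1, 4), Fraction(1, 2)):
    ev = sum(play(d, f) for d in decks) / len(decks)
    print(f, ev)  # the average over all six orderings is 8/3 for every f
```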

19 hours ago by jmount

The book claims it is optimal for a set of strategies they called "sensible." I didn't think the argument flowed as well as the zero variance part of the proof, so I didn't work it in. I think the source also hinted at a game-theory proof as they called the sub-strategies in the portfolio "pure strategies."
