
I Wrote A Variance Simulation



I basically got into a big argument with someone about how poker over time is more skill than luck. I told him the more hands you play, the less luck there is and the more skill there is. He pretty much understood that, but maintained that there are so many total poker players that at least one must be getting a LOT luckier than average, and at least one must be getting a TON unluckier than average. I disagreed with him and was trying to prove my point, so I decided to use some math theory and simulations to begin to do that.

I wrote this little C++ program to show him. I'm using coin flips to stand in for luck, since you can't really simulate all the different situations that come up in poker, but if you think about it, all we really care about is that the set percentages hold up to their end of the deal. If anyone is interested, here's the source code:

// Kevin LaMar
// 02-01-2007
// Coinflip Simulation

#include <iostream>
#include <cstdlib>
#include <ctime>
#include <conio.h>

using namespace std;

int main()
{
    int i, j;
    int coin;
    int players;
    int flips;
    int heads, tails;
    int playerheads = 0, playertails = 0;
    int mostheads = 0;
    int mosttails = 0;

    srand((unsigned)time(0));

    cout << "Enter the number of players: ";
    cin >> players;
    cout << "Enter the number of flips per player: ";
    cin >> flips;

    for (i = 0; i < players; i++) {
        heads = 0;
        tails = 0;
        if (players >= 100) {
            if ((i % (players / 100)) == 0) {
                system("cls");
                cout << "Progress: " << (i / (players / 100)) << '%'; // rough percentage complete
            } // End if
        } // End if
        else {
            system("cls");
            cout << "Progress: Flipping for Player #" << i;
        } // End else
        for (j = 0; j < flips; j++) {
            coin = rand() % 2;        // 0 = tails, 1 = heads
            heads += coin;
            tails = flips - heads;    // final value is correct after the last flip
        } // End for
        if (heads > mostheads) {
            mostheads = heads;
            playerheads = i + 1;
        } // End if
        if (tails > mosttails) {
            mosttails = tails;
            playertails = i + 1;
        } // End if
    } // End for

    system("cls");
    cout << "The most heads flipped was " << mostheads << " by Player #" << playerheads;
    cout << "\nThe most tails flipped was " << mosttails << " by Player #" << playertails;
    cout << "\n\nPress any key to exit...";
    getch();

    return 0;
} // End main

and you can download the .exe to run the simulations at: http://www.smalltimepoker.com/cpp/coinflip.exe

I basically found that the number of players who share the same sample size doesn't really matter in determining how far from even luck the luckiest or unluckiest person will go. Only the sample size itself really seems to matter. This should mean that the luckiest poker players in the long term should be only very slightly luckier than the unluckiest, and that luck should have little to no effect on total winnings over time. Anyone else able to draw some other conclusions from this?

Edit: Lame, the codebox totally screwed up my spacing/indenting in the code. If you want the .cpp, it's at http://www.smalltimepoker.com/cpp/coinflip.cpp


A random walk is when you flip a coin and add 1 if you get heads and subtract 1 if you get tails. You flip the coin a lot of times and see what final number you end up with, so the total randomly drifts up or down over a finite number of coin flips.
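Here's a minimal sketch of that in code (my own illustration, not from the thread; it uses C++11's <random> instead of rand(), and the 50,000-flip count is just an assumed example):

// A minimal random-walk sketch: start at 0, add 1 for heads,
// subtract 1 for tails, and report where you end up.
#include <iostream>
#include <random>

int main() {
    const int flips = 50000;                          // assumed example size
    std::mt19937 gen(std::random_device{}());         // seeded PRNG
    std::uniform_int_distribution<int> coin(0, 1);    // fair coin: 0 or 1

    int position = 0;
    for (int i = 0; i < flips; ++i)
        position += coin(gen) ? 1 : -1;               // heads = +1, tails = -1

    std::cout << "Final position after " << flips << " flips: " << position << '\n';
    return 0;
}

Run it a few times and the final position is almost always tiny compared to the number of flips, which is the whole point.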


Bubba... I'll look at the code, but it's certain that if you give 100 people a coin to flip 50,000 times each and sort by the number of heads flipped, there would be a decent gap between the most and the least. You can even quantify the likelihood that the span will be greater than some x. So I tend to agree with your friend, if I'm understanding this.

Now... if you are looking at the rate or percentage of heads tossed, yeah, that will tend toward 50% once you've flipped a lot. That has been discussed a ton: the difference between rates and accumulated totals.
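One way to put a number on that "likelihood the span exceeds some x" is a quick Monte Carlo sketch (my own illustration, not the thread's program; the trial count and the threshold of 500 are just assumed examples):

// Rough Monte Carlo sketch: estimate how often the gap between the most
// and fewest heads among 100 players (50,000 flips each) exceeds x.
#include <iostream>
#include <random>
#include <algorithm>

int main() {
    const int players = 100, flips = 50000, trials = 200;  // assumed sizes
    const int x = 500;                                      // example threshold
    std::mt19937 gen(std::random_device{}());
    std::binomial_distribution<int> headsDist(flips, 0.5);  // heads for one player

    int exceed = 0;
    for (int t = 0; t < trials; ++t) {
        int most = 0, fewest = flips;
        for (int p = 0; p < players; ++p) {
            int heads = headsDist(gen);
            most = std::max(most, heads);
            fewest = std::min(fewest, heads);
        }
        if (most - fewest > x) ++exceed;                    // span exceeded x
    }
    std::cout << "Estimated P(span > " << x << ") is about "
              << double(exceed) / trials << '\n';
    return 0;
}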


Have you used the program? The biggest deviation with 100 people flipping 50,000 times each is very minimal. Try inputting a variety of different simulations into the program and observe the results. I found that we only see a relatively large deviation when the number of flips is small, and that the number of total people participating hardly makes any impact at all.

Quote: Have you used the program? The biggest deviation with 100 people flipping 50,000 times each is very minimal. Try inputting a variety of different simulations into the program and observe the results. I found that we only see a relatively large deviation when the number of flips is small, and that the number of total people participating hardly makes any impact at all.
RNGs are generally uniform and not truly random.

The standard deviation of 50k flips is SQRT(50,000 * 0.5 * 0.5), and the number of heads approximates a Normal distribution. So what is one standard deviation... Sqrt(12,500), right? That's about 112, so 3 standard deviations = ~335.

Wow, that is pretty small. Hmm... did I do something wrong?
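A few lines to check that arithmetic (my own sketch, assuming a fair coin and 50,000 flips):

// Quick check of the figures above: sqrt(N * p * (1 - p)) for N = 50,000, p = 0.5.
#include <iostream>
#include <cmath>

int main() {
    const double n = 50000.0, p = 0.5;
    const double sd = std::sqrt(n * p * (1.0 - p));                              // ~111.8
    std::cout << "1 SD = " << sd << " heads, 3 SD = " << 3.0 * sd << " heads\n"; // ~335
    return 0;
}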

RNGs are fine. We were talking about online poker anyway, which uses RNGs, so I don't see how you could argue about the use of an RNG here. I don't see how you could possibly say using a coin and recording results is any different than an RNG unless you start adding in environmental factors, etc.

Also, I'm not familiar with any of the standard deviation talk, please explain further.

Edit: I found a nice essay on RNGs and how they work here: http://www.random.org/essay.html

My program uses a pseudo-RNG that can be predicted if you know how the generator is seeded (by the timestamp in my program). However, it really doesn't matter that we can predict my random number generator, as all we really care about is that it mimics real randomness, which it does.
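To see what "predictable if you know the seed" means in practice, here's a tiny sketch (my own illustration, not part of the original program; the seed value 12345 is arbitrary):

// A pseudo-RNG seeded with the same value reproduces the exact same
// "random" sequence, which is why it is predictable in principle.
#include <iostream>
#include <cstdlib>

int main() {
    for (int run = 0; run < 2; ++run) {
        std::srand(12345);                 // fixed seed instead of time(0)
        std::cout << "Run " << run + 1 << ": ";
        for (int i = 0; i < 10; ++i)
            std::cout << std::rand() % 2;  // same coin-flip sequence both runs
        std::cout << '\n';
    }
    return 0;
}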


Actuary is correct: an RNG by definition generates 'random' numbers that are uniformly distributed across a number space. You are testing the RNG, not luck. Luck is a human attribute. If you want to test luck, then you need a group of people flipping a coin with something at stake. You would also need to somehow split the group into generally lucky versus generally unlucky.

Quote: RNGs are fine. We were talking about online poker anyway, which uses RNGs, so I don't see how you could argue about the use of an RNG here. I don't see how you could possibly say using a coin and recording results is any different than an RNG unless you start adding in environmental factors, etc. However, it really doesn't matter that we can predict my random number generator, as all we really care about is that it mimics real randomness, which it does.
Truly random and programmed random are not the same thing. Whether it impacts the heads/tails experiment: probably not much. Online poker usually uses multiple cues/seeds... cycles... so the "randomness" is more random... I think...

Coin flipping is a Binomial distribution. Binomial probabilities can be approximated closely with the Normal distribution, given the right parameters, and 50k flips is more than enough. So the variance of the Binomial is N*p*(1-p), where N is the number of flips and p is the probability of success.

I guess I don't understand what you're getting at, Actuary. If it helps you to explain, I'm not a math guy, I'm a programming guy, so I don't understand a lot of the terms you're using.

Quote: I guess I don't understand what you're getting at, Actuary. If it helps you to explain, I'm not a math guy, I'm a programming guy, so I don't understand a lot of the terms you're using.
I set out to show that there was a fairly wide gap between the highest and lowest. Then, I was stabbed in the back by MATH. It showed that over 99% of the time the lowest and highest flippers will be separated by less than 700 heads.

Stndv is just shorthand for standard deviation, which is the square root of variance, which is a measure of the deviation about the average. Certain known distributions, like the ones you get by flipping a coin, have what are known as "population variances" that one can calculate. That was the N*p*(1-p).

I liked programming.

Thanks, that helped.

Also, when running 100 players with 50,000 flips each using the program, I never got a single player flipping heads or tails more than 25,250 times. I ran it like 10 times. Why does standard deviation allow for it to deviate to 25,700? How does the math work to show that? Which figure is more correct? Why?


setting the terms:

luck
1. the force that seems to operate for good or ill in a person's life, as in shaping circumstances, events, or opportunities: With my luck I'll probably get pneumonia.
2. good fortune; advantage or success, considered as the result of chance: He had no luck finding work.

ran·dom
1. proceeding, made, or occurring without definite aim, reason, or pattern: the random selection of numbers.
2. Statistics. of or characterizing a process of selection in which each item of a set has an equal probability of being chosen.

random number generator
A program routine that produces a random number. Random numbers are created easily in a computer, since there are many random events that take place, such as the duration between keystrokes. Only a few milliseconds' difference is enough to seed a random number generation routine with a different starting number each time. Once seeded, an algorithm computes different numbers throughout the session. The numbers that are created must be distributed evenly over a certain range, and they cannot be predictable (the next number cannot be determined from the last).
Any programmed test eliminates the human factor. You were arguing luck vs. skill in poker, not the distribution of events in a closed computer simulation.

If two human subjects play erratic poker, betting by feel rather than by mathematical/experiential analysis of hands, the luckier one would consistently get their money in for big hands and 'avoid' hands where they could have lost a lot. In this context, 'avoid' is not a conscious decision, but being lucky enough to have decided to lay the cards down.

My friend and I were assuming every player played optimally and that their play styles had no bearing whatsoever on their results.


Many people talk about luck without believing in luck. When someone is on a heater, people say 'he is such a luckbox' and then slot the results into statistical variance: 'he was not REALLY lucky; see how the bell curve allows for extreme points at either end? He is the point that proves the curve works.'

The millions of hands played online will all conform to expected values. The RNG assures it: it generates rigorously random sequences that are guaranteed to strive toward a uniform distribution. However, consider one player choosing to sit at table X, pushing all in 3 times, and winning all 3 to go from $50 to $400. All hands by all players will populate the space defined by the bell curve; the lucky player will get an unusual number of hands that pay off, or that fall outside the standard deviation.

We tell these people, 'you have not played enough hands; variance will get you eventually.' We really like the idea of luck while simultaneously doing everything we can to disprove it.

Quote: Skill will reduce your chances of losing a hand. Luck can turn that advantage on its head.
This guy nailed it.
Quote: Then, I was stabbed in the back by MATH.
Gaussians get very thin very fast, and since random walks after a large number of steps are basically Gaussian, I'm not surprised that this gets quite thin (i.e., peaks sharply around the expected value). Probability is fun, isn't it?
Quote: Probability is fun, isn't it?
It depends. The probabilities discussed in this thread are interesting. Discussing and proving Chebyshev's inequality along with its applications is not. Thursday wasn't a great day.

If we were to graph the results, let's say we had the x-axis labeled as the number of flips and the y-axis labeled as the maximum variance from the norm during those flips, would the graph look similar to this?

[attached graph: image049.jpg]

How could we then add the number of total people participating with that amount of flips to the graph, to show how it affects the curve?
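Here's a sketch of how the data points for that kind of graph could be generated (my own illustration; the group size and the list of flip counts are just assumed examples):

// For several flip counts, simulate a group of players and record the
// largest deviation from the expected number of heads, in counts and in %.
#include <iostream>
#include <random>
#include <cmath>
#include <algorithm>

int main() {
    const int players = 100;                              // assumed group size
    const int flipCounts[] = {100, 1000, 10000, 50000};   // assumed x-axis values
    std::mt19937 gen(std::random_device{}());

    for (int flips : flipCounts) {
        std::binomial_distribution<int> headsDist(flips, 0.5);
        double maxDev = 0.0;
        for (int p = 0; p < players; ++p)
            maxDev = std::max(maxDev, std::fabs(headsDist(gen) - flips / 2.0));
        std::cout << flips << " flips: max deviation " << maxDev
                  << " heads (" << 100.0 * maxDev / flips << "% of the flips)\n";
    }
    return 0;
}

The deviation measured in heads grows as the flips go up, while the same deviation measured as a percentage shrinks, which is the distinction discussed below.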

Quote: It depends. The probabilities discussed in this thread are interesting. Discussing and proving Chebyshev's inequality along with its applications is not. Thursday wasn't a great day.
Ohohhhhh... Now you're getting me excited...

Bubba,

1 standard deviation of 50,000 flips was equal to ~111, or the square root of 50,000 * 0.5 * 0.5.

99.7% of all outcomes of a random variable (like the number of heads flipped out of 50,000 tosses) will fall within +/- 3 standard deviations of the mean. So that means 99.7% of the time the number of heads flipped will fall between 25,000 - 333 and 25,000 + 333, or 24,667 and 25,333.
About 68% of the results will fall within +/- 1 standard deviation, or 24,889 and 25,111.
About 95% of the results will fall within +/- 2 standard deviations, or 24,778 and 25,222.

Your graph shape is not correct. Well, again, it depends on what you are trying to measure. The more times you flip, the more likely you are to be further from the expected number of heads, but the closer in percentage terms you are expected to be.

Arbitrary example to make the point:
10 flips, 3 heads. 30% heads. Expect 50%. 3 heads, expect 5.
1,000,000 flips, 490,000 heads. 49% heads. Expect 50%. 490,000 heads, expect 500,000.

So, yeah, more flips gets you closer to the expected rate, but often not closer to the expected nominal value.
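A small deterministic sketch of that last point (my own illustration; the flip counts are arbitrary examples, fair coin assumed):

// The standard deviation in heads grows with more flips, but as a
// percentage of the flips it shrinks: sqrt(N * 0.5 * 0.5) vs. that over N.
#include <iostream>
#include <cmath>

int main() {
    for (long long n : {10LL, 1000LL, 1000000LL}) {       // arbitrary flip counts
        const double sd = std::sqrt(n * 0.25);            // sqrt(N * p * (1-p)), p = 0.5
        std::cout << n << " flips: 1 SD = " << sd
                  << " heads (" << 100.0 * sd / n << "% of the flips)\n";
    }
    return 0;
}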


I should change the y-axis of my theoretical graph to be the maximum deviation from the expected value expressed as a percentage; then the graph would make sense.

