A well-written program should tell you, inevitably with some margin of error, the number of times out of, say, 1,000 that you would expect the coin to land heads on the fourth flip after three heads, and the number of times you would expect tails. The answer will not be 500.
If you make it 10,000 instead of 1,000, the margin of error will be smaller.
But then the math gets weird because, get this, if you get the same result twice in a given 50-50 repetition, the odds of the third result also being the same are greater than 50-50.
Flips a coin a million times and keeps track of how many times in a row the same side has come up. Every time it flips 3 in a row, it records the subsequent flip as a “headsAfter3” or a “tailsAfter3”.
Press “run” in the top left to do a run of 1,000,000 flips.
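The original program isn’t shown here, but a minimal sketch of the logic as described above might look like this in Python (the function name `simulate` and the counter names are illustrative, not taken from the actual code):

```python
import random

def simulate(n_flips=1_000_000, streak_len=3):
    """Flip a fair coin n_flips times. Whenever the same side has just
    come up streak_len times in a row, record what the next flip is."""
    heads_after_3 = 0
    tails_after_3 = 0
    streak_side = None
    streak_count = 0

    for _ in range(n_flips):
        flip = random.choice(("H", "T"))

        # If we are coming off a streak of `streak_len` identical results,
        # tally this flip before updating the streak tracker.
        if streak_count >= streak_len:
            if flip == "H":
                heads_after_3 += 1
            else:
                tails_after_3 += 1

        # Update the running streak.
        if flip == streak_side:
            streak_count += 1
        else:
            streak_side = flip
            streak_count = 1

    return heads_after_3, tails_after_3

if __name__ == "__main__":
    h, t = simulate()
    print(f"headsAfter3: {h}, tailsAfter3: {t}")
```

One design choice in this sketch: the `streak_count >= streak_len` check means a run of, say, five heads tallies the flips after the third, fourth, and fifth head. If the real program only counts the flip immediately after exactly three in a row, the check would be `==` instead.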
I have no idea what that means. I don’t remember ever specifying that, because I don’t even know what the hell it could possibly mean. Why is it incorrect just because it doesn’t meet some previously unspecified criterion that I never heard of?
I think I get what you’re asking for, right, okay.
I’ll do it that way. I don’t think it will make a difference, but I’ll do it that way.
When you flip a normal coin, it’s probably a coin that’s been flipped before. So if you flip 3 in a row, those aren’t the first 3 flips that coin has ever experienced in its life; they’re part of a really long sequence of flips. It’s very silly that you think this matters.