dvrabel on 29/9/2007 at 22:18
When we reduce the incredibly complex physical system that is coin flipping to the simple mathematical model:
P(head) = 0.5
P(tail) = 0.5
then we have a problem that can be analyzed purely with the mathematics of probability.
Chade on 30/9/2007 at 01:20
Quote Posted by RocketMan
Why is it that as you consecutively flip heads the probability of such occurrences diminishes exponentially? It's not a question I expect anybody to answer with "because of this law, stupid". It's just a question I feel we haven't fully answered yet.
It is a question we have fully answered, though ... I am uncertain why you would believe otherwise.
Secondly, we do have a good understanding of why thermodynamic behavior occurs. Well, that is to say,
I don't have a good understanding of it, but there are many people who do. It is a consequence of the symmetries, conservation laws, random perturbations (etc etc...), of the laws governing our universe.
Martek on 30/9/2007 at 02:49
Flipping all heads or all tails is no more or less likely than flipping any particular sequence of heads and tails.
For example, in 8 flips, it is more likely to flip 4 heads and 4 tails than it is to flip 8 heads or 8 tails. But it is just as likely to flip 4 heads and 4 tails in one specific order as it is to flip 8 heads or 8 tails.
In other words, if we use 1 for heads and 0 for tails, then we can flip 4 heads and 4 tails in many different sequences:
11110000 10101010 11001100 10010110, etc.
You are just as likely to flip 11110000 as to flip 11111111.
So yes it is more likely to flip a particular quantity of heads or tails than some other quantity of heads or tails. But the specific sequence they get flipped in is just as likely for any combination. It's just that more combinations of them contain the same quantity of heads or tails.
It's like a binary number. You can have many combinations of 4 bits on and four bits off. But only one specific combination equals 240.
Same goes for dice. It is more likely to roll a 7 than snake eyes, but any specific way of rolling a 7 has the same odds as rolling snake eyes (treating each die as independent, say "Die A" and "Die B": a "Die A 3" plus a "Die B 4" is just as likely as a "Die A 1" plus a "Die B 1", or as a "Die A 4" plus a "Die B 3").
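[Editor's note: Martek's counting argument is easy to check by brute force. Here is a quick illustration in Python (not from the thread); it enumerates all 8-flip sequences and shows that every specific sequence is equally likely, while many more sequences contain 4 heads than 8.]

```python
from itertools import product

# Enumerate all 2^8 equally likely sequences of 8 coin flips (1 = heads, 0 = tails).
seqs = list(product([0, 1], repeat=8))

# Every specific sequence has the same probability, 1/256 ...
p_any_sequence = 1 / len(seqs)

# ... but many more sequences contain exactly 4 heads than contain 8 heads.
four_heads = sum(1 for s in seqs if sum(s) == 4)   # 70 such sequences
eight_heads = sum(1 for s in seqs if sum(s) == 8)  # only 1 such sequence

print(four_heads, eight_heads)       # 70 1
print(four_heads * p_any_sequence)   # P(exactly 4 heads) = 70/256
```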
Martek
RocketMan on 30/9/2007 at 16:20
Quote Posted by Chade
It is a question we have fully answered, though ... I am uncertain why you would believe otherwise.
Secondly, we do have a good understanding of why thermodynamic behavior occurs. Well, that is to say,
I don't have a good understanding of it, but there are many people who do. It is a consequence of the symmetries, conservation laws, random perturbations (etc etc...), of the laws governing our universe.
I think the word random is itself something we don't understand. We know its implications and its properties, but what is "random"? When we try to make a random number (as far as I'm aware) we can't. We have to seed some sort of equation to make a pseudo-random number. In other words, we can't help but incorporate intelligent design into our own manifestations of randomness, which sort of destroys the notion of what random is (being devoid of all design or order)... this is just an example that parallels the confusion I have surrounding entropy. It too acts in a completely "random" way so as to degenerate orderly systems into chaotic ones.
Martek:
Thanks for the alternate viewpoint. I hadn't pictured it quite that way. You are essentially saying that the most orderly outcomes have the fewest permutations and the least orderly have the most, and that entropy is therefore just a consequence of there being more disorderly states than orderly ones? I had been trying to incorporate that into my view but you presented it more effectively.
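[Editor's note: the "more disorderly states than orderly ones" point can be made concrete with binomial coefficients. A hypothetical Python sketch, treating a "system" of n coin flips as the toy model: each head count is a macrostate, each specific sequence a microstate.]

```python
from math import comb

n = 100  # a toy "system" of 100 coin flips

# Number of microstates (specific sequences) for each macrostate (head count k).
# The perfectly ordered macrostates (all heads, or all tails) have exactly one
# sequence each, while the balanced macrostate has comb(n, n/2) of them.
ordered = comb(n, 0)        # 1
balanced = comb(n, n // 2)  # about 1.0e29

print(ordered, balanced)
```

So a "disordered" half-and-half outcome isn't favored flip by flip; it simply has vastly more ways to happen.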
I'd like to offer a thought experiment with no particular bias:
Imagine you are an omnipresent being looking down on the universe in its entirety from the beginning of time to the end of time. To simplify the scenario let's stick with coin tosses, as they only have 2 outcomes. If you could look at every coin toss in the history of the universe and count them all up... what might the distribution look like? Would it be skewed severely? Would it be fairly even? If by the end of the universe the results are highly skewed, what might you say to account for this behaviour? If even, what can be said for the symmetry you see? If the universe had been abruptly terminated at any random point in time prior to its true end, would the distribution look any different? Would you be catching the universe with its pants down before it could even out the coin count, or would it never skew enough to really matter anyway? Assume that somewhere there are people flipping coins for as long as the universe exists.
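[Editor's note: one way to poke at this thought experiment is to simulate it. A Python sketch (illustrative, not from the thread): the fraction of heads converges to 1/2 as the count grows (the law of large numbers), but the raw excess of heads over tails does not shrink to zero; it is typically on the order of the square root of the number of tosses.]

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

n = 1_000_000
heads = sum(random.randint(0, 1) for _ in range(n))
tails = n - heads

fraction = heads / n     # converges toward 0.5 as n grows
excess = heads - tails   # typically on the order of sqrt(n), not 0

print(fraction, excess)
```

So "catching the universe with its pants down" at any stopping point would show a near-even fraction, even though nothing ever actively evens the raw count.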
Chade on 30/9/2007 at 22:42
Quote Posted by RocketMan
I think the word random is itself something we don't understand. We know its implications and its properties, but what is "random"? When we try to make a random number (as far as I'm aware) we can't. We have to seed some sort of equation to make a pseudo-random number. In other words, we can't help but incorporate intelligent design into our own manifestations of randomness, which sort of destroys the notion of what random is (being devoid of all design or order)... this is just an example that parallels the confusion I have surrounding entropy. It too acts in a completely "random" way so as to degenerate orderly systems into chaotic ones.
Fair enough, but in most cases, "random" is quite a pedestrian concept really. It just refers to the stuff we don't bother taking into account (either we don't know it, or it's too uneconomical to predict, etc etc) when we predict the behavior of a system.
In quantum mechanics, randomness appears to be an integral part of the way a system behaves.
Either way, it's not true to say that we can't build a random number generator. In fact, we build one every time we shuffle a deck of cards or flip a coin.
Computers, on the other hand, are really crappy places to generate a random number, because we have such a high level of control over what they do. We could build specialized hardware capable of generating a random number, but for many reasons we have generally decided not to do that and gone with pseudo-random number generators instead.
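[Editor's note: Chade's distinction can be illustrated in Python (an editorial example, not from the thread). A seeded pseudo-random generator is fully deterministic, same seed, same stream, whereas `os.urandom` asks the operating system for entropy gathered from unpredictable physical sources.]

```python
import os
import random

# A pseudo-random generator is deterministic: the same seed yields the same stream.
a = random.Random(42)
b = random.Random(42)
run1 = [a.randint(0, 1) for _ in range(10)]
run2 = [b.randint(0, 1) for _ in range(10)]
assert run1 == run2  # identical by design

# os.urandom draws on OS-collected entropy (interrupt timings, hardware noise, ...),
# which is about as close to "stepping back and letting nature decide" as a
# general-purpose computer gets.
print(os.urandom(8).hex())
```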
RocketMan on 1/10/2007 at 02:35
You're right about the random number generator thing... I thought of a couple of ways myself of making random numbers (if we really wanted to do so). Still, they rely on stepping back and letting some natural phenomenon decide the course of events.
I thought of something else... kinda System Shock-y. In SS2 we have those hack puzzles where we have to connect 3 nodes. Every now and again the hack strength is 50%... depends on our cyber skills etc. But when that happens, has anybody tried clicking a node (1 of 3) and, if it lights up, clicking the next one somewhere else (some useless node) in anticipation of the next node not being lit, and then returning to the original string if node 2 did indeed turn dark? I was wondering today: if one person used this strategy of relying on history to select which node to click (1 of the 3 you are trying to line up, or some "discard" node), and another used the strategy of treating all trials equally and just clicking each of the 3 nodes in turn without consideration of history, who would have more successful hacks? That goes for any other experiment really: roulette, coins, whatever. I'd be really interested to see which strategy wins more often.
Chade on 1/10/2007 at 10:42
Well, here's some Scilab code which does the test (assuming I interpreted you right).
//Toss coins, count the number of heads (1's) we get
//Original strategy: count every toss
//Modified strategy: only count tosses if the toss beforehand was tails
//how many coin tosses are we doing?
xcount = 1000;
//toss all the coins
x = int(rand(1, xcount)*2);
//line up each toss (x2) with the toss before it (x1) (why? see below)
x1 = [0 x(1:xcount-1)];
x2 = x;
// only include toss in modified strategy if the previous toss was 0
yincluded = (1-x1);
ycount = sum(yincluded);
// a counted head occurs where the previous toss was 0 (tails) and this toss is 1 (heads)
y = yincluded .* x2;
//results of the original strategy
sum(x)/xcount
//results of modified strategy
sum(y)/ycount
Whack it in a .sce file and execute it within scilab and you'll see that there is no difference between strategies. (Scilab is a free download, btw)
RocketMan on 1/10/2007 at 15:28
This gives you more or less the same results both ways? I feel kinda sad. Maybe a more sophisticated test that compares the entire history rather than the previous coin toss? I might try to write something up in C if I have the time.
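[Editor's note: a sketch of the stronger test RocketMan describes, in Python rather than C (hypothetical code, not from the thread). The "history" strategy only counts a toss when the entire running history shows more tails than heads, i.e. when heads look "due". Because each toss is independent of the past, both strategies still see heads about half the time.]

```python
import random

random.seed(7)  # reproducible run

n = 1_000_000
tosses = [random.randint(0, 1) for _ in range(n)]  # 1 = heads, 0 = tails

# Strategy A: treat every trial equally and count every toss.
rate_a = sum(tosses) / n

# Strategy B: only count a toss when the full history so far
# contains more tails than heads (heads look "due").
counted = 0
heads_b = 0
lead = 0  # running (heads - tails)
for t in tosses:
    if lead < 0:  # more tails than heads so far
        counted += 1
        heads_b += t
    lead += 1 if t else -1

rate_b = heads_b / counted
print(rate_a, rate_b)  # both hover around 0.5
```

The conditioning changes nothing because the coin has no memory; the history-based bettor just samples a subset of identically distributed tosses.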
Brian The Dog on 1/10/2007 at 17:44
In answer to the original questions:
1) As Fafhrd says, c denotes the speed of light in a vacuum. Physicists denote the ratio of the speed of light in a substance (v) to the speed of light in a vacuum (c) by the refractive index of the material.
i.e. n = c/v
For water, n is approximately 4/3. Just to cover myself, all speeds quoted above are the phase velocities of the EM signal.
In practice, n is a function of wavelength, so many materials chromatically disperse light at physical boundaries. The best example of this is a prism.
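[Editor's note: plugging the numbers in, a minimal Python illustration of the n = c/v relation above (the function name is my own, not from the thread).]

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def phase_velocity(n: float) -> float:
    """Phase velocity v = c / n in a medium of refractive index n."""
    return C / n

v_water = phase_velocity(4 / 3)  # roughly 2.25e8 m/s for water
print(v_water)
```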
2) For your probabilities example, suppose we have 2 possible outcomes, A and B. Thus p(A) + p(B) = 1. You are right in saying that probabilities multiply, i.e.
p(AA) = p(A) * p(A)
providing that the two results are independent. The reason why some outcomes look more likely than others is simply that there are more possible sequences that can give that outcome. For instance, compare the probability of getting 4 heads in four coin tosses with the probability of getting exactly 2 heads:
Case A: p(4 heads) = p(H) * p(H) * p(H) * p(H) = 0.5^4
Case B: p(2 heads) = p(H) * p(H) * p(T) * p(T)
+ p(H) * p(T) * p(H) * p(T)
+ p(H) * p(T) * p(T) * p(H)
+ p(T) * p(H) * p(H) * p(T)
+ p(T) * p(H) * p(T) * p(H)
+ p(T) * p(T) * p(H) * p(H) = 6 * (0.5 ^ 4)
Counting sequences this way gives the binomial distribution; the Central Limit Theorem then tells us that, as the number of tosses grows, the sum of these uniformly distributed outcomes approaches a Normal distribution.
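[Editor's note: Brian's enumeration can be verified mechanically. An illustrative Python check, listing all 16 equally likely 4-toss sequences and recovering the 1 * 0.5^4 and 6 * 0.5^4 figures above.]

```python
from itertools import product
from math import comb

# All 2^4 = 16 equally likely sequences of 4 tosses (1 = H, 0 = T).
seqs = list(product([0, 1], repeat=4))

p_4_heads = sum(1 for s in seqs if sum(s) == 4) / 16  # Case A: 1 * 0.5^4
p_2_heads = sum(1 for s in seqs if sum(s) == 2) / 16  # Case B: 6 * 0.5^4

assert p_4_heads == 0.5 ** 4
assert p_2_heads == comb(4, 2) * 0.5 ** 4  # comb(4, 2) = 6 sequences, as listed

print(p_4_heads, p_2_heads)
```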
Hope this helps (Brian is wagging his tail, he doesn't often get to think about this stuff).
RocketMan on 1/10/2007 at 18:56
Sort of the same thing that goes on in the rows of pascal's triangle.
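[Editor's note: RocketMan's observation checks out; row n of Pascal's triangle gives exactly the number of n-toss sequences with each head count. A small Python illustration (not from the thread).]

```python
from itertools import product
from math import comb

n = 8

# Row n of Pascal's triangle: the binomial coefficients comb(n, k).
pascal_row = [comb(n, k) for k in range(n + 1)]

# Count the n-toss sequences containing exactly k heads, for each k.
counts = [0] * (n + 1)
for s in product([0, 1], repeat=n):
    counts[sum(s)] += 1

assert counts == pascal_row
print(pascal_row)  # [1, 8, 28, 56, 70, 56, 28, 8, 1]
```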