|
The problem here is that "0 exactly" or "1 exactly" doesn't tell you whether the event is impossible or certain, so using that language really doesn't help. That's why I was making the point of writing 1 (almost sure) or 1 (sure) before, so people actually understand what you're saying there.
The probability of randomly selecting any particular real number in [0,1] on a given try is 0 (but not impossible). The probability of eventually randomly selecting 0.548 is 1 exactly. (Is it almost sure, or sure?)
The last part there I'm not sure about; it's basically the same question as before anyway.
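A compact way to pin down the distinction being argued over here (standard probability-textbook terminology, not from any post in this thread):

```latex
% On a probability space (\Omega, \mathcal{F}, P):
E \text{ is \textbf{sure}} \iff E = \Omega
  \quad (\text{no possible outcome lies outside } E), \\
E \text{ is \textbf{almost sure}} \iff P(E) = 1
  \quad (\text{outcomes outside } E \text{ may exist, but form a null set}). \\
\text{Example: for } X \sim \mathrm{Unif}[0,1],\quad
P(X \neq 0.548) = 1 \text{ yet } \{X = 0.548\} \neq \emptyset,
\text{ so the event is almost sure but not sure.}
```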
|
i think that we can denote 1 (almost sure) as p(x) = 1 - epsilon and 1 (sure) as p(x) = epsilon, where epsilon is an infinitely small number; it would be kinda more mathematical. As for the question involving the probability of selecting a particular number a from the range [0,1] in an infinite number of tries (let's say a continuum), just take a 2d uniform distribution and calc the integrals -_-' , so it'd be int int f(x,y) dx dy, where x = a to a, y = 0 to 1, where a is our number.
|
On July 18 2013 23:32 Reason wrote: The probability of randomly selecting any particular real number in [0,1] is 0 (but not impossible). The probability of eventually randomly selecting 0.548 is 1 exactly. (Is it almost sure, or sure?)
The last part there I'm not sure about, it's basically the same question as before anyway. I'd say the probability of eventually randomly selecting 0.548 is 1 (almost sure), because just like before, it's conceivable that it will never select that number, since it's random.
|
On July 18 2013 23:46 uzyszkodnik wrote: i think that we can denote 1(almost sure) as p(x)=1-epsilon and 1(sure) as p(x)=epsilon, where epsilon is an infinitely small number... My understanding is that the only infinitely small number (i.e. infinitesimal) over the real numbers is zero, which means that epsilon is exactly zero.
Also, I think you mean for the second equation to be p(x)=1, not p(x)=epsilon, since that would imply that 1(sure) has p(x) ~=~ 0.
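The claim that epsilon must be exactly zero follows from the Archimedean property of the reals; a quick sketch of the standard argument:

```latex
\text{Let } \varepsilon \in \mathbb{R} \text{ with } 0 \le \varepsilon < \tfrac{1}{n} \text{ for all } n \in \mathbb{N}. \\
\text{If } \varepsilon > 0, \text{ the Archimedean property yields an } n \text{ with } n\varepsilon > 1,
\text{ i.e. } \varepsilon > \tfrac{1}{n}, \text{ a contradiction.} \\
\text{Hence } \varepsilon = 0: \text{ the reals contain no positive infinitesimal, and } 1 - \varepsilon \text{ is just } 1.
```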
|
On July 18 2013 23:46 Tobberoth wrote: I'd say the probability of eventually randomly selecting 0.548 is 1 (almost sure), because just like before, it's conceivable that it will never select that number, since it's random.

mmmm that was my guess too... although...
...was this guy onto anything or not? (about the flipping heads thing)
On July 18 2013 20:58 uzyszkodnik wrote: @Rassy Strong law of large numbers says that such a trial won't happen even once.
|
On July 18 2013 23:48 Shiori wrote: My understanding is that the only infinitely small number (i.e. infinitesimal) over the real numbers is zero, which means that epsilon is exactly zero. Also, I think you mean for the second equation to be p(x)=1, not p(x)=epsilon, since that would imply that 1(sure) has p(x) ~=~ 0.
yeah sorry
sure: p(x) = 1; almost sure: p(x) = 1 - epsilon; impossible: p(x) = 0; almost impossible: p(x) = epsilon
as for the second part,
http://mathb.in/8653
hence the probability of selecting number a in a continuum (uncountably infinite) number of tries = 0; I assumed that the probability density function is f(x,y) = 1.
If you think that's wrong, please talk the math to me.
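For what it's worth, the integral described in the post (the mathb.in link itself may no longer resolve) evaluates to zero directly, assuming as stated a uniform density f(x,y) = 1 on the unit square:

```latex
P(X = a) \;=\; \int_{0}^{1}\!\!\int_{a}^{a} f(x,y)\, dx\, dy
        \;=\; \int_{0}^{1} 0 \, dy \;=\; 0,
\qquad \text{since } \int_{a}^{a} 1 \, dx = 0 \text{ (an interval of zero width).}
```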
|
On July 18 2013 22:53 paralleluniverse wrote: On July 18 2013 21:07 Rassy wrote: On July 18 2013 19:42 paralleluniverse wrote: On July 18 2013 09:10 DoubleReed wrote: On July 18 2013 08:43 yOngKIN wrote: On July 18 2013 07:42 DoubleReed wrote: What are you guys talking about?
Finite vs Infinite shouldn't matter for those problems. Uncountable and countable are the only restrictions on such sets. There is no additivity of an uncountable number of sets.
If A and B are disjoint (and measurable), Measure(A) + Measure(B) = Measure(A U B) You can also do this for many sets. Sum(Measure(An)) = Measure(Union(An)) where n is finite (just stretching the previous statement to multiple sets). So {An} is a finite sequence of disjoint, measurable sets. You can also do this if n is a countably infinite set, so {An} is a countably infinite sequence of disjoint, measurable sets. But you can't do that if n is an uncountably infinite set. That doesn't work. You can't pretend that it does, and there are plenty of easy exceptions.
I'm confused because it seems like you should be differentiating between countable and uncountable, rather than finite and infinite. This is measure theory. So just use measure theory. measurable only in terms of our limited human knowledge of math and physics Uhh... no. Measurable as in Lebesgue Measure. Yep. You are completely correct. One thing I've learned is that people like to debate math on the internet and get it all wrong (like this guy), when in fact math is almost never up for debate. Especially when it's well-established, hundreds-of-years-old math, like measure theory. Math debates are often the most futile because people substitute their intuition with unrelenting zeal in place of mathematical rigor. Particularly in topics like measure theory, which produces counter-intuitive results for those who haven't learned the subject. In a math "debate", a general rule that I observe is the following: If you're debating about technical details, then you're talking to a crank (and probably getting nowhere). If you're debating about philosophy, it's not necessarily apparent that you're talking to a crank. Appeals to ignorance are also very common, as you've just experienced. For example, people love saying that we don't understand infinity. There are some things in math that we don't understand; infinity is not one of them. Infinity is a rigorously defined and well-understood concept. With knowledge from a high school or 1st or 2nd year math course, pretty much any perceived problem or hole in our human knowledge of math that one would think of (other than famous unsolved problems) isn't actually a problem or a hole, but rather a personal lack of knowledge in math. On July 18 2013 07:42 DoubleReed wrote: What are you guys talking about?
Finite vs Infinite shouldn't matter for those problems. Uncountable and countable are the only restrictions on such sets. There is no additivity of an uncountable number of sets.
If A and B are disjoint (and measurable), Measure(A) + Measure(B) = Measure(A U B) You can also do this for many sets. Sum(Measure(An)) = Measure(Union(An)) where n is finite (just stretching the previous statement to multiple sets). So {An} is a finite sequence of disjoint, measurable sets. You can also do this if n is a countably infinite set, so {An} is a countably infinite sequence of disjoint, measurable sets. But you can't do that if n is an uncountably infinite set. That doesn't work. You can't pretend that it does, and there are plenty of easy exceptions.
I'm confused because it seems like you should be differentiating between countable and uncountable, rather than finite and infinite. This is measure theory. So just use measure theory. This is correct. The distinction is between countable and uncountable. As an example to back up your fact that "Sum(Measure(An)) = Measure(Union(An))" doesn't work when n is an element of an uncountable set, we can use the uncountable set [0,infinity) and set An as the independent events "a Brownian motion hits 3 at time n". Then the LHS = 0 and the RHS = 1. Also, no I don't understand what they're arguing about either. But in a probably futile attempt to resolve it, let me state the following fact: If every real number in [0,1] has equal probability of selection, then the probability of randomly selecting any particular number between [0,1] (e.g. 0.548 exactly), is 0 exactly. Not "approximately 0", or "infinitesimally close to 0", or "1/infinity", or "approaches 0", or "0 in the limit", or whatever. It's simply 0. Now I have a question to paralleluniverse, who seems to be very sure in his statements. Now you pick a number between 0 and 1 an infinite amount of times, what are the odds to pick 0.548 exactly at least once? If it is 0 exactly as he says, then the answer should be 0. However, if it is infinitesimally close to 0, then the odds of picking this number at least once would be 1. No?
I'll think about it. I'd say that it'd still be zero, but I'm not entirely sure. I kinda feel like this (maybe) touches on transfinite arithmetic, but division of aleph_0 by aleph_1 or vice versa is undefined, so I'm stuck. I guess you could define it as zero, but I have no idea what that would do.
http://math.stackexchange.com/questions/146844/how-to-divide-aleph-numbers
These guys seem to be having a tonne of trouble making sense of cardinal division, even when it's uncountable and countable rather than uncountable and uncountable.
Fucking transfinite how does it work
|
On July 18 2013 21:07 Rassy wrote: Now you pick a number between 0 and 1 an infinite amount of times, what are the odds to pick 0.548 exactly at least once? If it is 0 exactly as he says, then the answer should be 0. However, if it is infinitesimally close to 0, then the odds of picking this number at least once would be 1. No?

For those of you quick enough, you may have noticed that I edited out my first response to this question. That's because it was incorrect. The correct solution is 0.
If U_n are independent uniform random variables on [0,1] for n = 1, 2, ..., infinity, then the probability that U_n is eventually 0.548 is equal to 0. This is because P(U_n = 0.548 eventually) = 1 - P(U_n != 0.548 infinitely often), and by the 2nd Borel-Cantelli lemma, P(U_n != 0.548 infinitely often) = 1.
So even if you select (countably many) infinite random numbers that are uniform on [0,1], you still won't get 0.548 exactly. Interestingly, if you have a Brownian motion, which is in some sense like randomly selecting a normal random variable at every time instance in [0,1], then the probability that it's equal to 0.548 at any particular time t, is 0. But the probability that it will equal 0.548 exactly, infinitely often, has probability 1. Note that this doesn't contradict the solution above, because here we are on an uncountable set.
The Borel-Cantelli lemma also says that if we let E_n = "getting all heads in infinite flips on trial n" (a trial is one string of infinite flips), then P(E_n infinitely often) = 0. This is contrary to claims above saying that if you have infinite trials of infinite flips, then you'll get infinitely many trials of all heads. Those claims are wrong.
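Nothing about infinite sequences can be checked by simulation, but a small finite-sample sketch (hypothetical Python, not from the thread) illustrates the per-draw distinction the argument rests on: an interval of positive length is hit at its expected rate, while a single exact value is essentially never hit.

```python
import random

random.seed(0)  # fixed seed so the experiment is reproducible
N = 1_000_000

in_interval = 0  # hits on [0.5, 0.6], an event of probability 0.1
exact_hits = 0   # hits on the single point 0.548, an event of probability 0

for _ in range(N):
    x = random.random()  # uniform draw from [0, 1)
    if 0.5 <= x <= 0.6:
        in_interval += 1
    if x == 0.548:
        exact_hits += 1

print(in_interval / N)  # close to 0.1
print(exact_hits)       # essentially always 0
```

Note the analogy is imperfect: with 64-bit floats each draw actually has a tiny positive chance (roughly 2^-53) of equalling 0.548 exactly, whereas for true real numbers the exact-hit probability is exactly 0.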
|
On July 18 2013 22:55 Shiori wrote: This is really what I was trying to say, in a roundabout way of using examples. I was very poor at communicating it, because probability isn't really my focus in math, and because I'm nothing more than a (competent, I like to think) undergraduate, so thank you very much for making this post (and same with DoubleReed). While I probably didn't know enough to attempt to convince wherebugsgo in a precise fashion, I find these sorts of debates really helpful for learning aspects of math that I don't usually work with, because there's the opportunity to have someone criticize perceived weaknesses in an argument. I am very relieved to know that I wasn't wrong about the probability actually being 0 over uncountably infinite possibilities. Thanks muchly. Are you a mathematician, by the way? May I ask what your specialty is?

Yes, in the sense that I have a math degree and a job in math (not academic). I specialize in probability theory and statistics.
|
On July 19 2013 00:23 paralleluniverse wrote: For those of you quick enough, you may have noticed that I edited out my first response to this question. That's because it was incorrect. The correct solution is 0.
If U_n are independent uniform random variables on [0,1] for n = 1, 2, ..., infinity, then the probability that U_n is eventually 0.548 is equal to 0. This is because P(U_n = 0.548 eventually) = 1 - P(U_n != 0.548 infinitely often), and by the 2nd Borel-Cantelli lemma, P(U_n != 0.548 infinitely often) = 1. So even if you select (countably many) infinite random numbers that are uniform on [0,1], you still won't get 0.548 exactly. Interestingly, if you replace "uniform" with "normal", and you randomly selected for every time instance in [0,1] (which is uncountable), then you will get 0.548 exactly, infinitely often with probability 1. That's because it becomes a Brownian motion, which is known to have this property. So the countability (or uncountability) of the number of draws is very important. I don't know without doing some work what happens if you select uniform random variables on [0,1] for each time instance in [0,1].

Awesome! Thanks very much for your aid in resolving this dispute ><.
The Borel-Cantelli lemma also says that if we let E_n = "getting all heads in infinite flips on trial n" (a trial is one string of infinite flips), then P(E_n infinitely often) = 0. This is contrary to claims above, saying that if you have infinite trials of infinite flips, then you'll get infinitely many trials of all heads. Those claims are wrong. Holy shit my intuition was actually correct. I redacted the argument in an edit because I thought my reasoning was wrong (and I'm pretty sure it was).
I have a related question, though. If we do a countably infinite number of trials (a trial is one string of infinite flips) what is the probability that we will get a string of all heads at least once? Maybe that's a silly question but I just want to be absolutely sure, haha, since a lot of this stuff is kinda mind-boggling in ways.
|
On July 19 2013 00:34 Shiori wrote: I have a related question, though. If we do a countably infinite number of trials (a trial is one string of infinite flips), what is the probability that we will get a string of all heads at least once? Maybe that's a silly question but I just want to be absolutely sure, haha, since a lot of this stuff is kinda mind-boggling in ways.

This is the same question as the one I answered, but with the events "U_n = 0.548" replaced with the events "trial n is all heads".
Since both these events have probability 0, the answer is 0 using the same argument as above.
Lesson: You can't get zero probability events (like selecting a particular real number at random, or flipping infinite heads) occurring for sure, even if you repeat it (countably) infinitely many times.
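The lesson above can be sketched numerically. A simulation can only ever approximate the measure-theoretic statement, but it shows the pattern: the chance of landing in an interval tracks the interval's length, and a single point is the width-0 case. This is a rough Python illustration (the helper name is mine, not from the thread):

```python
import random

random.seed(0)

def hit_probability(width, trials=100_000):
    """Estimate the chance that a uniform draw on [0,1] lands in
    the interval [0.548, 0.548 + width) by brute-force sampling."""
    hits = sum(1 for _ in range(trials)
               if 0.548 <= random.random() < 0.548 + width)
    return hits / trials

# The estimate tracks the interval's length; at width 0 the interval is a
# single point, so the probability is exactly 0 -- and repeating the
# experiment any (countable) number of times doesn't change that.
for width in [0.1, 0.01, 0.001, 0.0]:
    print(width, hit_probability(width))
```

Note that `hit_probability(0.0)` is exactly 0 by construction: no float satisfies `0.548 <= x < 0.548`.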
|
On July 19 2013 00:40 paralleluniverse wrote:Show nested quote +On July 19 2013 00:34 Shiori wrote:On July 19 2013 00:23 paralleluniverse wrote:On July 18 2013 21:07 Rassy wrote:On July 18 2013 19:42 paralleluniverse wrote:On July 18 2013 09:10 DoubleReed wrote:On July 18 2013 08:43 yOngKIN wrote:On July 18 2013 07:42 DoubleReed wrote: What are you guys talking about?
Finite vs Infinite shouldn't matter for those problems. Uncountable and countable are the only restrictions on such sets. There is no additivity of an uncountable number of sets.
If A and B are disjoint (and measurable), Measure(A) + Measure(B) = Measure(A U B) You can also do this for many sets. Sum(Measure(An)) = Measure(Union(An)) where n is finite (just stretching the previous statement to multiple sets). So {An} is a finite sequence of disjoint, measurable sets. You can also do this if n is a countably infinite set, so {An} is a countably infinite sequence of disjoint, measurable sets. But you can't do that if n is an uncountably infinite set. That doesn't work. You can't pretend that it does, and there are plenty of easy exceptions.
I'm confused because it seems like you should be differentiating between countable and uncountable, rather than finite and infinite. This is measure theory. So just use measure theory. measurable only in terms of our limited human knowledge of math and physics Uhh... no. Measurable as in Lebesgue Measure. Yep. You are completely correct. One thing I've learned is that people like to debate math on the internet and get it all wrong (like this guy), when in fact math is almost never up for debate. Especially when it's well-established, hundreds of years old math, like measure theory. Math debates are often the most futile because people substitute their intuition with unrelenting zeal in place of mathematical rigor. Particularly in topics like measure theory, which produces counter-intuitive results to those who haven't learned the subject. In a math "debate", a general rule that I observe is the following: If you're debating about technical details, then you're talking to a crank (and probably getting nowhere). If you're debating about philosophy, it's not necessarily apparent that you're talking to a crank. Appeals to ignorance are also very common, as you've just experienced. For example, people love saying that we don't understand infinity. There are some things in math that we don't understand, infinity is not one of them. Infinity is a rigorously defined and well-understood concept. With knowledge from a high school or 1st or 2nd year math course, pretty much any perceived problems or hole in our human knowledge of math that one would think of (other than famous unsolved problems), isn't actually a problem nor a hole, but rather a personal lack of knowledge in math. On July 18 2013 07:42 DoubleReed wrote: What are you guys talking about?
Finite vs Infinite shouldn't matter for those problems. Uncountable and countable are the only restrictions on such sets. There is no additivity of an uncountable number of sets.
If A and B are disjoint (and measurable), Measure(A) + Measure(B) = Measure(A U B) You can also do this for many sets. Sum(Measure(An)) = Measure(Union(An)) where n is finite (just stretching the previous statement to multiple sets). So {An} is a finite sequence of disjoint, measurable sets. You can also do this if n is a countably infinite set, so {An} is a countably infinite sequence of disjoint, measurable sets. But you can't do that if n is an uncountably infinite set. That doesn't work. You can't pretend that it does, and there are plenty of easy exceptions.
I'm confused because it seems like you should be differentiating between countable and uncountable, rather than finite and infinite. This is measure theory. So just use measure theory. This is correct. The distinction is between countable and uncountable. As an example to back up your fact that "Sum(Measure(An)) = Measure(Union(An))" doesn't work when n is an element of an uncountable set, we can use the uncountable set [0,infinity) and set An as the independent events "a Brownian motion hits 3 at time n". Then the LHS = 0 and the RHS = 1. Also, no I don't understand what they're arguing about either. But in a probably futile attempt to resolve it, let me state the following fact: If every real number in [0,1] has equal probability of selection, then the probability of randomly selecting any particular number between [0,1] (e.g. 0.548 exactly), is 0 exactly. Not "approximately 0", or "infinitesimally close to 0", or "1/infinity", or "approaches 0", or "0 in the limit", or whatever. It's simply 0. Now I have a question to paralleluniverse who seems to be very sure in his statements. If every real number in [0,1] has equal probability of selection, then the probability of randomly selecting any particular number between [0,1] (e.g. 0.548 exactly), is 0 exactly. Not "approximately 0", or "infinitesimally close to 0", or "1/infinity", or "approaches 0", or "0 in the limit", or whatever. It's simply 0. Now you pick a number between 0 and 1 an infinite amount of times, what are the odds to pick 0.548 exactly at least once? If it is 0 exactly as he says, then the answer should be 0. However if it is infinitesimally close to 0, then the odds of picking this number at least once would be 1. No? For those of you quick enough, you may have noticed that I edited out my first response to this question. That's because it was incorrect. The correct solution is 0. 
If U_n are independent uniform random variables on [0,1] for n = 1, 2, ..., infinity, then the probability that U_n is eventually 0.548 is equal to 0. This is because P(U_n = 0.548 eventually) = 1 - P(U_n != 0.548 infinitely often), and by the 2nd Borel-Cantelli lemma, P(U_n != 0.548 infinitely often) = 1. So even if you select (countably many) infinite random numbers that are uniform on [0,1], you still won't get 0.548 exactly. Interestingly, if you replace "uniform" with "normal", and you randomly selected for every time instance in [0,1] (which is uncountable), then you will get 0.548 exactly, infinitely often with probability 1. That's because it becomes a Brownian motion, which is known to have this property. So the countability (or uncountability) in the number of draws is very important. I don't know without having to do some work what happens if you select uniform random variables on [0,1] for each time instance in [0,1]. Awesome! Thanks very much for your aid in resolving this dispute ><. The Borel-Cantelli lemma also says that if we let E_n = "getting all heads in infinite flips on trial n" (a trial is one string of infinite flips), then P(E_n infinitely often) = 0. This is contrary to claims above, saying that if you have infinite trials of infinite flips, then you'll get infinitely many trials of all heads. Those claims are wrong. Holy shit my intuition was actually correct. I redacted the argument in an edit because I thought my reasoning was wrong (and I'm pretty sure it was). I have a related question, though. If we do a countably infinite number of trials (a trial is one string of infinite flips) what is the probability that we will get a string of all heads at least once? Maybe that's a silly question but I just want to be absolutely sure, haha, since a lot of this stuff is kinda mind-boggling in ways. This is the same question as the one I answered, but with the events "U_n = 0.548" replaced with the events "trial n is all heads". 
Since both these events have probability 0, the answer is 0 using the same argument as above. Lesson: You can't get zero probability events occurring for sure, even if you repeat it (countably) infinitely many times. Okay thanks very much. That's what I initially thought but I was kinda getting confused since probability/statistics have been, thus far, basically pushed to the side during my degree for the sake of satiating my desire to sample different fields than just math. After this thread, though, I'm definitely looking forward to taking the required statistics/probability courses in the future a lot more than I was beforehand, haha.
Thanks ^.^
|
On July 18 2013 19:42 paralleluniverse wrote:Show nested quote +On July 18 2013 09:10 DoubleReed wrote:On July 18 2013 08:43 yOngKIN wrote:On July 18 2013 07:42 DoubleReed wrote: What are you guys talking about?
Finite vs Infinite shouldn't matter for those problems. Uncountable and countable are the only restrictions on such sets. There is no additivity of an uncountable number of sets.
If A and B are disjoint (and measurable), Measure(A) + Measure(B) = Measure(A U B) You can also do this for many sets. Sum(Measure(An)) = Measure(Union(An)) where n is finite (just stretching the previous statement to multiple sets). So {An} is a finite sequence of disjoint, measurable sets. You can also do this if n is a countably infinite set, so {An} is a countably infinite sequence of disjoint, measurable sets. But you can't do that if n is an uncountably infinite set. That doesn't work. You can't pretend that it does, and there are plenty of easy exceptions.
I'm confused because it seems like you should be differentiating between countable and uncountable, rather than finite and infinite. This is measure theory. So just use measure theory. measurable only in terms of our limited human knowledge of math and physics Uhh... no. Measurable as in Lebesgue Measure. Yep. You are completely correct. One thing I've learned is that people like to debate math on the internet and get it all wrong (like this guy), when in fact math is almost never up for debate. Especially when it's well-established, hundreds of years old math, like measure theory. Math debates are often the most futile because people substitute their intuition with unrelenting zeal in place of mathematical rigor. Particularly in topics like measure theory, which produces counter-intuitive results to those who haven't learned the subject. In a math "debate", a general rule that I observe is the following: If you're debating about technical details, then you're talking to a crank (and probably getting nowhere). If you're debating about philosophy, it's not necessarily apparent that you're talking to a crank. Appeals to ignorance are also very common, as you've just experienced. For example, people love saying that we don't understand infinity. There are some things in math that we don't understand, infinity is not one of them. Infinity is a rigorously defined and well-understood concept. With knowledge from a high school or 1st or 2nd year math course, pretty much any perceived problems or hole in our human knowledge of math that one would think of (other than famous unsolved problems), isn't actually a problem nor a hole, but rather a personal lack of knowledge in math. Show nested quote +On July 18 2013 07:42 DoubleReed wrote: What are you guys talking about?
Finite vs Infinite shouldn't matter for those problems. Uncountable and countable are the only restrictions on such sets. There is no additivity of an uncountable number of sets.
If A and B are disjoint (and measurable), Measure(A) + Measure(B) = Measure(A U B) You can also do this for many sets. Sum(Measure(An)) = Measure(Union(An)) where n is finite (just stretching the previous statement to multiple sets). So {An} is a finite sequence of disjoint, measurable sets. You can also do this if n is a countably infinite set, so {An} is a countably infinite sequence of disjoint, measurable sets. But you can't do that if n is an uncountably infinite set. That doesn't work. You can't pretend that it does, and there are plenty of easy exceptions.
I'm confused because it seems like you should be differentiating between countable and uncountable, rather than finite and infinite. This is measure theory. So just use measure theory. This is correct. The distinction is between countable and uncountable. As an example to back up your fact that "Sum(Measure(An)) = Measure(Union(An))" doesn't work when n is an element of an uncountable set, we can use the uncountable set [0,infinity) and set An as the independent events "a Brownian motion hits 3 at time n". Then the LHS = 0 and the RHS = 1. Also, no I don't understand what they're arguing about either. But in a probably futile attempt to resolve it, let me state the following fact: If every real number in [0,1] has equal probability of selection, then the probability of randomly selecting any particular number between [0,1] (e.g. 0.548 exactly), is 0 exactly. Not "approximately 0", or "infinitesimally close to 0", or "1/infinity", or "approaches 0", or "0 in the limit", or whatever. It's simply 0.
I'm going to ignore the fact that you started this post off by being an ass, because I really want to understand this assertion that both you and Shiori have made.
Let me start this by saying that I am a programmer, not a mathematician, so I am approaching this from a logical perspective with limited mathematical knowledge.
Logically, if you have an equal probability of selecting any number from an infinite selection, selecting one number should have a probability of 1/infinity. I am not denying that it is mathematically provable to be 0, but in order for me to believe you that the probability is 0 and not 1/infinity and hence invalidate my intuition I need to see solid evidence that it is not 1/infinity.
On July 19 2013 00:23 paralleluniverse wrote:For those of you quick enough, you may have noticed that I edited out my first response to this question. That's because it was incorrect. The correct solution is 0. If U_n are independent uniform random variables on [0,1] for n = 1, 2, ..., infinity, then the probability that U_n is eventually 0.548 is equal to 0. This is because P(U_n = 0.548 eventually) = 1 - P(U_n != 0.548 infinitely often), and by the 2nd Borel-Cantelli lemma, P(U_n != 0.548 infinitely often) = 1.
As far as I can tell your solution does not disprove P(U_n = 0.548 eventually) = 1/infinity, since the 2nd Borel-Cantelli lemma obtains a probability of 1 using a limit, so it could equally be represented as 1 - 1/infinity (or if this is the case http://en.wikipedia.org/wiki/0.999... as Tobberoth posted earlier, 1 could always be represented as 1 - 1/infinity). Therefore it would be equally true to say P(U_n != 0.548 infinitely often) = 1 - 1/infinity, hence P(U_n = 0.548 eventually) = 1 - (1 - 1/infinity) = 1/infinity.
If the above is correct, you could say that the answer is 0 and (1/infinity, infinitesimally small, approaching 0, whatever) but you could not say it is 0 and not infinitesimally small, and you must both concede that you made an incorrect assertion. If it's not correct, please tell me what I am missing or what I have done incorrectly in my reasoning.
|
Myrddraal: allow me to help. This stuff is pretty counter-intuitive but also really cool.
"Infinitesimally small" is kind of a weird and vague intuitive idea. It's not a rigorous thing as far as I know. Like when people say "approaching 0," this doesn't actually make sense. Try writing something like this down. It's just not the way limits work. Limits equal things.
Another reason why this doesn't work is because of the countable/uncountable thing that I was trying to get people to understand. This is not a problem of finite and infinite. This is a problem of countable and uncountable.
Example:
Rather than look at the probability of picking a single number between 0 and 1, let's look at the probability of picking a rational number between 0 and 1. There are certainly infinite rational numbers, each with 1/infinity chance of picking it. So obviously we have absolutely no idea, if you look at it this way. It just doesn't make sense. There's infinite rational numbers and infinite irrational numbers. What do we do?
This is why you can't think of it this way. Here's the correct way: the rational numbers are countable (you can align the rational numbers with the natural numbers in one-to-one correspondence). All countable sets have measure zero. So the probability is zero. The irrational numbers are uncountable. There's way way way more irrational numbers than rational numbers.
If you were to actually pick a random real number between 0 and 1, it's going to be an irrational number.
Hope that helps and blows your mind.
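The "all countable sets have measure zero" step above can be made concrete with the standard covering argument: list the set (countability is exactly what lets you list it), then cover its n-th point with an interval of length eps/2^(n+1), so the whole set fits inside a cover of total length below eps, for any eps you like. A rough Python sketch (the helper names are mine):

```python
from fractions import Fraction

def rationals_in_unit_interval(max_denominator):
    """List the rationals in [0,1] with denominator up to max_denominator --
    a finite chunk of the countable enumeration of all rationals."""
    return sorted({Fraction(p, q)
                   for q in range(1, max_denominator + 1)
                   for p in range(0, q + 1)})

def total_cover_length(n_points, eps):
    """Cover the n-th point of a countable set with an interval of length
    eps / 2**(n + 1). The lengths sum to eps * (1/2 + 1/4 + ...) < eps,
    no matter how many points get covered."""
    return sum(eps / 2 ** (n + 1) for n in range(n_points))

rats = rationals_in_unit_interval(30)
print(len(rats), "rationals covered with total length <=",
      total_cover_length(len(rats), 1e-6))
```

Since eps is arbitrary, the measure of the rationals is squeezed down to 0, which is why a uniform draw lands on an irrational with probability 1.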
|
DoubleReed explained it better.
The point about limits is good too. Lim f(x) = L for x-->a doesn't mean that the limit approaches L. It means that f(x) approaches L at a. To say that a limit approaches something is like saying the limit of the limit :p.
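The "limits equal things" point can be unpacked with the epsilon-N definition: lim 1/x = 0 as x -> infinity means that for every eps > 0 there is a threshold N past which 1/x stays within eps of 0. The limit itself is 0; only x does any approaching. A quick sketch (the function name is mine):

```python
def threshold_for(eps):
    """For lim 1/x = 0 as x -> infinity: past N = 1/eps, every value
    of 1/x sits within eps of the limit 0."""
    return 1.0 / eps

for eps in [0.1, 1e-3, 1e-9]:
    N = threshold_for(eps)
    x = 10 * N  # any point beyond the threshold works
    assert abs(1.0 / x - 0.0) < eps
    print(f"eps={eps}: 1/x is within eps of 0 for all x > {N}")
```

No matter how small eps gets, a finite threshold exists, which is the precise content of the limit being exactly 0 rather than "approaching" it.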
|
On July 19 2013 12:11 DoubleReed wrote:Myrddraal: allow me to help. This stuff is pretty counter-intuitive but also really cool. "Infinitesimally small" is kind of a weird and vague intuitive idea. It's not a rigorous thing as far as I know. Like when people say "approaching 0," this doesn't actually make sense. Try writing something like this down. It's just not the way limits work. Limits equal things. Show nested quote +I think you may need to elaborate on this, Wikipedia seems to disagree: "In mathematics, a limit is the value that a function or sequence "approaches" as the input or index approaches some value." http://en.wikipedia.org/wiki/Limit_(mathematics) I'm not trying to deny that a limit with an input approaching infinity can be considered equal, but saying that the value does not "approach 0" seems wrong given the above definition. Does the definition change when the input is infinite? Another reason why this doesn't work is because of the countable/uncountable thing that I was trying to get people to understand. This is not a problem of finite and infinite. This is a problem of countable and uncountable. Example: Rather than look at the probability of picking a single number between 0 and 1, let's look at the probability of picking a rational number between 0 and 1. There are certainly infinite rational numbers, each with 1/infinity chance of picking it. So obviously we have absolutely no idea, if you look at it this way. It just doesn't make sense. There's infinite rational numbers and infinite irrational numbers. What do we do? This is why you can't think of it this way. Here's the correct way: the rational numbers are countable (you can align the rational numbers with the natural numbers in one-to-one correspondence). All countable sets have measure zero. So the probability is zero. The irrational numbers are uncountable. There's way way way more irrational numbers than rational numbers. 
If you were to actually pick a random real number between 0 and 1, it's going to be an irrational number. Hope that helps and blows your mind.
I'm pretty sure I understand what you are saying here, but I still don't understand how it disproves a 1/infinity probability.
Are you saying that 1/infinity can not be represented by rational numbers since they must be countable hence the probability is 0?
In which case I would argue that saying that the probability is still 1/infinity is conceptually correct, even if it cannot be represented with a real number other than 0.
Or are you saying that since there are uncountably infinite possibilities that 1/infinity does not represent the probability?
In which case I would ask what prevents the probability from being 1/(uncountable infinity)?
|
I check this thread from time to time just to see if somebody managed to revive it.
|
No. I'm saying I have no idea what 1/infinity means. It may be intuitive but it doesn't actually mean anything.
[Edit: If you mean Lim 1/x as x -> infinity then this equals zero. Try writing it down. It doesn't approach zero. It equals zero. Limits don't approach things. Limits equal things. The x approaches infinity in the limit, but the limits themselves don't approach stuff.]
But I forgot something. There's actually a super duper easy way to see that the probability is exactly zero.
Let's look at the probability of picking a random number on [0,1] that it lands on the interval [0.47,0.53]. Well it's 6%, right? Because the length of the interval is 0.06. So let's forget that whole measure thing and just look at lengths of intervals. What's the generalized way to find the length of an interval?
Length[a,b] = b - a. Simple.
Okay. How does this relate to the probability of picking a single number? Well, a single number can be expressed as a closed interval! What's the probability of picking a number on the interval [0.47,0.47]? Well it's just the length of the interval. Which is 0.47 - 0.47 = 0.
No calculus. No infinity. No countability. Just subtraction. That's how we like it.
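The interval-length rule above is easy to put in code (a trivial sketch; the function name is mine):

```python
def interval_probability(a, b):
    """Probability that a uniform draw on [0,1] lands in [a, b]:
    it's just the length of the interval, b - a."""
    assert 0.0 <= a <= b <= 1.0
    return b - a

print(interval_probability(0.47, 0.53))  # about 0.06, the 6% from the example
print(interval_probability(0.47, 0.47))  # exactly 0.0: a point has length 0
```

The degenerate interval [0.47, 0.47] gets length 0 by plain subtraction, with no limits or infinities involved.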
|
On July 19 2013 13:29 DoubleReed wrote: No. I'm saying I have no idea what 1/infinity means. It may be intuitive but it doesn't actually mean anything.
[Edit: If you mean Lim 1/x as x -> infinity then this equals zero. Try writing it down. It doesn't approach zero. It equals zero. Limits don't approach things. Limits equal things. The x approaches infinity in the limit, but the limits themselves don't approach stuff.]
Sorry if I am being difficult, but it seems like you are getting caught up on the language I am using rather than what I am actually intending. I am assuming we can agree on the language used in the first statement on limits from Wikipedia: In mathematics, a limit is the value that a function or sequence "approaches" as the input or index approaches some value. I will do my best not to deviate from this language in any way to describe what I am trying to say.
Yes, a limit equals something, but when calculating the limit of 1/x as the input x approaches infinity, the value of the limit L will approach 0. Now, my understanding (perhaps here is where you can fill me in and it will make sense to me) is that it is technically impossible for x to actually "reach" infinity, so it is technically impossible for the limit to "reach" 0, though instead it gets so close so as to make practically no difference. I would argue that while it makes practically no difference, and mathematically we don't run into problems treating them as equal (they are provably equal), if we were to define the difference, the clearest way to describe it would be infinitesimal.
I think what I am trying to say pretty much is that to me, the concept of 1/infinity or something infinitesimal is effectively equal to 0 in almost every way. Except that I think the separate definition would be useful in terms of theoretical probability to be able to effectively describe the difference between something that is impossible and almost impossible.
But I forgot something. There's actually a super duper easy way to see that the probability is exactly zero.
Let's look at the probability of picking a random number on [0,1] that it lands on the interval [0.47,0.53]. Well it's 6%, right? Because the length of the interval is 0.06. So let's forget that whole measure thing and just look at lengths of intervals. What's the generalized way to find the length of an interval?
Length[a,b] = b - a. Simple.
Okay. How does this relate to the probability of picking a single number? Well, a single number can be expressed as a closed interval! What's the probability of picking a number on the interval [0.47,0.47]? Well it's just the length of the interval. Which is 0.47 - 0.47 = 0.
No calculus. No infinity. No countability. Just subtraction. That's how we like it.
That is a nice concise solution, though as I have asserted multiple times, I don't doubt that it is mathematically provable to be 0; I am interested in how it is provable not to be 1/infinity or infinitesimal.
|
On July 19 2013 13:29 DoubleReed wrote: No. I'm saying I have no idea what 1/infinity means. It may be intuitive but it doesn't actually mean anything.
[Edit: If you mean Lim 1/x as x -> infinity then this equals zero. Try writing it down. It doesn't approach zero. It equals zero. Limits don't approach things. Limits equal things. The x approaches infinity in the limit, but the limits themselves don't approach stuff.]
But I forgot something. There's actually a super duper easy way to see that the probability is exactly zero.
Let's look at the probability of picking a random number on [0,1] that it lands on the interval [0.47,0.53]. Well it's 6%, right? Because the length of the interval is 0.06. So let's forget that whole measure thing and just look at lengths of intervals. What's the generalized way to find the length of an interval?
Length[a,b] = b - a. Simple.
Okay. How does this relate to the probability of picking a single number? Well, a single number can be expressed as a closed interval! What's the probability of picking a number on the interval [0.47,0.47]? Well it's just the length of the interval. Which is 0.47 - 0.47 = 0.
No calculus. No infinity. No countability. Just subtraction. That's how we like it. So if you have a theoretical one-sided die, the odds of getting 1 are 0, because 1-1 = 0? I don't think this closed interval thing works.
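One way to see where the one-sided die objection goes wrong: the length rule only applies to a continuous uniform draw, while a die is a discrete distribution that carries its probability in point masses. A sketch of the distinction (names are mine, not from the thread):

```python
# A one-sided die is a discrete distribution: each outcome carries a
# point mass, so probabilities are looked up, not computed as lengths.
one_sided_die = {1: 1.0}  # one face, all the mass on it

def discrete_probability(outcome):
    return one_sided_die.get(outcome, 0.0)

# A continuous uniform draw on [0,1] assigns interval lengths instead,
# so any single point gets the length of a degenerate interval.
def point_probability(a):
    return a - a  # length of [a, a]

print(discrete_probability(1))   # 1.0: the die always shows its one face
print(point_probability(0.548))  # 0.0: a single point has zero length
```

So both answers are right in their own setting: the die gives 1 with probability 1, and the continuous draw hits any fixed point with probability 0.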
|