|
|
On September 12 2012 02:20 paralleluniverse wrote:Show nested quote +On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 million. I take it that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility, like KwarK suggests. In other words: The poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution The exact standard error is found by considering the variance of X/n, where X is Hypergeometric. This variance is (p(1-p)/n)*((N-n)/(N-1)); the latter factor is called the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of the poll is then sqrt(p(1-p)*(1/f - 1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is much larger than n (like 311.5M versus 1000), the finite population correction (N-n)/(N-1) is virtually 1, so the standard error is essentially sqrt(p(1-p)/n), and the size of the sample relative to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra, here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5. If we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. No, I am not talking about the statistical effect in itself. 
I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population, and therefore fewer distinct population groups with a real effect on the result of the election.
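For anyone who wants to check the standard errors quoted above, here is a minimal Python sketch (assuming simple random sampling and p = 0.5, exactly as in the numerical example) that evaluates the exact formula sqrt((p(1-p)/n)*((N-n)/(N-1))) for North Carolina and for the whole US:

import math

def poll_standard_error(p, n, N):
    # Exact standard error of the sample proportion X/n when X is
    # Hypergeometric, i.e. sampling without replacement from N people:
    # sqrt( p*(1-p)/n * (N-n)/(N-1) )
    return math.sqrt(p * (1 - p) / n * (N - n) / (N - 1))

p, n = 0.5, 1000                           # roughly 50/50 race, 1000 respondents
print(poll_standard_error(p, n, 9.7e6))    # North Carolina: ~0.0158106 (1.58106%)
print(poll_standard_error(p, n, 311.6e6))  # whole US:       ~0.0158113 (1.58113%)

Dropping the finite population correction altogether gives sqrt(0.25/1000) ≈ 1.5811% in both cases, which is the whole point: with n = 1000, the population size barely moves the standard error.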
|
On September 12 2012 02:01 NonCorporeal wrote:I just wanted to say that I am sorry if anyone was offended by my comments about Europe. It was never my intention to insult our European members, I just felt that a few members whose names shall not be mentioned were "harassing" me, and as what typically happens in heated debates like this, I did what most people tend to do, retaliate. Hopefully we can put our disagreements yesterday aside and we can all work on being more civil in our debates and discussions in the future. Anyway, let's get back on topic, shall we? Show nested quote +On September 11 2012 08:25 Savio wrote:On September 11 2012 08:24 ziggurat wrote:On September 11 2012 06:53 KwarK wrote: Your right to a tool that allows you to more easily kill people is far more important than your right for social acceptance of your love? How can you possibly have a "right to social acceptance"? Wouldn't that basically mean that you have a right to have others agree with your views? Conservatives generally don't agree with these "rights" that require others to do something for you. So a "right" to get health care, or a "right" to an education are not consistent with typical conservative values. Calling those things "rights" really cheapens the concept of a right until it finally just means "something good that I want". I would prefer that we keep the 2 separate. Indeed, I'd say that is the number one problem with many people today; they think they are entitled to everything, when they haven't done anything to earn these things. Do you believe that applies to the issue being quoted here? Do you believe you have the right to get married to a woman if you both wish to do so and are consenting? If so, what have you done to earn that right that you think you are entitled to?
I know you want to make a general "lazy people just want things given to them" point, but regarding the issue at hand, what do straight people do to earn the right to get married that gays don't do? Or is asking you that too much like nailing you down on specifics?
|
On September 12 2012 02:30 radiatoren wrote:Show nested quote +On September 12 2012 02:20 paralleluniverse wrote:On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions. I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distributionThe exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. 
No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogenous population and therefore a smaller amount of different population groups with a real effect on the result of the election. My understanding of the sample selection process is that the pollsters tinker with the sample to make it representative of the population. These aren't just straight surveys of random people. There are a lot of adjustments involved.
|
On September 12 2012 02:30 radiatoren wrote:Show nested quote +On September 12 2012 02:20 paralleluniverse wrote:On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions. I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distributionThe exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. 
No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogenous population and therefore a smaller amount of different population groups with a real effect on the result of the election. No that's not what you said.
What you said is:
However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions.
In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. 1000 is more than enough for a population of 9 million. And continuing to use a sample of 1000 for a population of 300 million makes virtually 0 difference to the accuracy of the poll compared to the case of 9 million.
Accuracy of poll where 1000 sampled out of 9 million = Accuracy of poll where 1000 sampled out of 300 million.
If 1000 is as you say "decent" for NC, it is equally "decent" for all of the US.
And what has "homogeneity" (whatever you mean by this) and bias got to do with anything? The sample proportion X/n is an unbiased estimator, so to show bias you would need to show that their sampling scheme (probably a telephone book) is biased in an appreciable way that affects the proportion being estimated.
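For a rough sense of scale, using the usual normal-approximation rule of thumb, the 95% margin of error is about 1.96 standard errors: 1.96 × 1.58% ≈ ±3.1 percentage points on a sample of 1000. A 49-48 split like the WashPo/ABC result is therefore well within the poll's own margin of error, whether the population is 9 million or 300 million.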
|
On September 12 2012 02:34 xDaunt wrote:Show nested quote +On September 12 2012 02:30 radiatoren wrote:On September 12 2012 02:20 paralleluniverse wrote:On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions. I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distributionThe exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. 
For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogenous population and therefore a smaller amount of different population groups with a real effect on the result of the election. My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved. Adjustments or assumptions. I believe they are taking selected random people to combat the problem. But the adjustment made is different for each poll-provider, making it more or less useless to compare data from two different providers of polls.
|
On September 12 2012 02:34 xDaunt wrote:Show nested quote +On September 12 2012 02:30 radiatoren wrote:On September 12 2012 02:20 paralleluniverse wrote:On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions. I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distributionThe exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. 
For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogenous population and therefore a smaller amount of different population groups with a real effect on the result of the election. My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved. If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), where the weights given to the responses are adjusted to make the sample representative of some known characteristic of the population, then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.
Also, as far as I'm aware, polls don't do this. BLS surveys do.
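As a concrete (and entirely hypothetical) illustration of what post-stratification does, here is a short Python sketch in the spirit of the Gelman paper linked above: respondents are reweighted so that each party-ID cell matches an assumed population share. The sample, the cell shares, and the resulting numbers are made up for illustration only, not taken from any actual poll.

from collections import Counter

# Toy sample of five respondents (hypothetical data).
sample = [
    {"party": "D", "candidate": "Obama"},
    {"party": "D", "candidate": "Obama"},
    {"party": "R", "candidate": "Romney"},
    {"party": "I", "candidate": "Obama"},
    {"party": "I", "candidate": "Romney"},
]

# Assumed "known" population shares for each party-ID cell (illustrative only).
population_share = {"D": 0.33, "R": 0.30, "I": 0.37}

# Post-stratification: weight each respondent by
# (population share of their cell) / (sample share of their cell),
# which forces the weighted party-ID mix to match the assumed population.
n = len(sample)
cell_counts = Counter(r["party"] for r in sample)
weights = [population_share[r["party"]] / (cell_counts[r["party"]] / n) for r in sample]

# Weighted estimate of Obama's share of the vote.
obama_share = sum(w for r, w in zip(sample, weights) if r["candidate"] == "Obama") / sum(weights)
print(round(obama_share, 3))  # 0.515, versus 0.6 unweighted

In this toy example the raw sample has Obama at 60%, but forcing the party-ID mix back to the assumed population shares pulls the estimate down to about 51.5%. That is the kind of "adjustment" being argued about above; whether a given pollster actually does it is a separate question.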
|
On September 12 2012 02:44 paralleluniverse wrote:Show nested quote +On September 12 2012 02:34 xDaunt wrote:On September 12 2012 02:30 radiatoren wrote:On September 12 2012 02:20 paralleluniverse wrote:On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions. I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distributionThe exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. 
For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogenous population and therefore a smaller amount of different population groups with a real effect on the result of the election. My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved. If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias. I don't know what it's called. It's been ten years since I have opened a stats for econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is that these pollsters are doing is likely not producing accurate results.
|
On September 12 2012 02:36 paralleluniverse wrote:Show nested quote +On September 12 2012 02:30 radiatoren wrote:On September 12 2012 02:20 paralleluniverse wrote:On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions. I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distributionThe exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. 
For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogenous population and therefore a smaller amount of different population groups with a real effect on the result of the election. No that's not what you said. What you said is: Show nested quote +However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions.
In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. 1000 is more than enough for a population of 9 million. And continuing to use a sample of 1000 for a population of 300 million makes virtually 0 difference to the accuracy of the poll compared to the case of 9 million. Accuracy of poll where 1000 sampled out of 9 million = Accuracy of poll where 1000 sampled out of 300 million. If 1000 is as you say "decent" for NC, it is equally "decent" for all of the US. And what has "homogeneity" (whatever you mean by this) and bias got to do with anything? The proportion estimator n/N is unbiased, so to show bias you would need to show that their sampling scheme (probably a telephone book) is biased in an appreciable way that affects the variable proportion being estimated. We are not talking statistical bias. You are correct on the math. I am trying to say that the data gets "adjusted" by the poll-provider. Skin-colour/sex/age/job/religion/place of birth influence how people vote.
|
On September 12 2012 02:48 xDaunt wrote:Show nested quote +On September 12 2012 02:44 paralleluniverse wrote:On September 12 2012 02:34 xDaunt wrote:On September 12 2012 02:30 radiatoren wrote:On September 12 2012 02:20 paralleluniverse wrote:On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions. I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distributionThe exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. 
For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogenous population and therefore a smaller amount of different population groups with a real effect on the result of the election. My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved. If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias. I don't know what it's called. It's been ten years since I have opened a stats for econometrics book. However, I am aware that is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is that these pollsters are doing is likely not producing accurate results. I'm pretty sure you're talking about post stratification. And I'm pretty sure that pollsters don't do this, because it's complicated.
The BLS does it: http://bls.gov/osmr/pdf/st930500.pdf
But even if they did, it's not invalid. It generally makes estimates more accurate, not less.
|
On September 12 2012 02:51 radiatoren wrote:Show nested quote +On September 12 2012 02:36 paralleluniverse wrote:On September 12 2012 02:30 radiatoren wrote:On September 12 2012 02:20 paralleluniverse wrote:On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions. I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distributionThe exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. 
For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogenous population and therefore a smaller amount of different population groups with a real effect on the result of the election. No that's not what you said. What you said is: However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions.
In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. 1000 is more than enough for a population of 9 million. And continuing to use a sample of 1000 for a population of 300 million makes virtually 0 difference to the accuracy of the poll compared to the case of 9 million. Accuracy of poll where 1000 sampled out of 9 million = Accuracy of poll where 1000 sampled out of 300 million. If 1000 is as you say "decent" for NC, it is equally "decent" for all of the US. And what has "homogeneity" (whatever you mean by this) and bias got to do with anything? The proportion estimator n/N is unbiased, so to show bias you would need to show that their sampling scheme (probably a telephone book) is biased in an appreciable way that affects the variable proportion being estimated. We are not talking statistical bias. You are correct on the math. I am trying to say that the data gets "adjusted" by the poll-provider. Skin-colour/sex/age/job/religion/place of birth influence how people vote. I'm yet to be convinced that pollsters do this.
But as I've said, this is post stratification and it's perfectly valid. Here's what the BLS says about it.
Post-stratification estimation is a technique used in sample surveys to improve efficiency of estimators. Survey weights are adjusted to force the estimated numbers of units in each of a set of estimation cells to be equal to known population totals. The resulting weights are then used in forming estimates of means or totals of variables collected in the survey. For example, in a household survey the estimation cells may be based on age/race/sex categories of individuals and the known totals may come from the most recent population census. Although the variance of a post-stratified estimator can be computed over all possible sample configurations, inferences made conditionally on the achieved sample configuration are desirable. Theory and a simulation study using data from the U.S. Current Population Survey are presented to study both the conditional bias and variance of the post-stratified estimator of a total. The linearization, balanced repeated replication, and jackknife variance estimators are also examined to determine whether they appropriately estimate the conditional variance. Source: http://bls.gov/osmr/pdf/st930500.pdf
|
On September 12 2012 02:33 KwarK wrote:Show nested quote +On September 12 2012 02:01 NonCorporeal wrote:I just wanted to say that I am sorry if anyone was offended by my comments about Europe. It was never my intention to insult our European members, I just felt that a few members whose names shall not be mentioned were "harassing" me, and as what typically happens in heated debates like this, I did what most people tend to do, retaliate. Hopefully we can put our disagreements yesterday aside and we can all work on being more civil in our debates and discussions in the future. Anyway, let's get back on topic, shall we? On September 11 2012 08:25 Savio wrote:On September 11 2012 08:24 ziggurat wrote:On September 11 2012 06:53 KwarK wrote: Your right to a tool that allows you to more easily kill people is far more important than your right for social acceptance of your love? How can you possibly have a "right to social acceptance"? Wouldn't that basically mean that you have a right to have others agree with your views? Conservatives generally don't agree with these "rights" that require others to do something for you. So a "right" to get health care, or a "right" to an education are not consistent with typical conservative values. Calling those things "rights" really cheapens the concept of a right until it finally just means "something good that I want". I would prefer that we keep the 2 separate. Indeed, I'd say that is the number one problem with many people today; they think they are entitled to everything, when they haven't done anything to earn these things. Do you believe that applies to the issue being quoted here? Do you believe you have the right to get married to a woman if you both wish to do so and are consenting? If so, what have you done to earn that right that you think you are entitled to? I know you want to make a general "lazy people just want things given to them" point but regarding the issue at hand, what do straight people do to earn the right to get married to that gays don't do? Or is asking you that too much like nailing you down on specifics?
No, the government shouldn't be able to stop two consenting adults from getting married. That wasn't what Savio was referring to, though. If I'm not mistaken, Savio was referring to the idea that people have a "right" to not be offended; hence why he said "how can you possibly have a right to social acceptance," which is indeed ridiculous and goes against the entire idea of freedom of speech. I was then expanding upon ziggurat's statement by bringing the entitlement crowd into the picture.
|
On September 12 2012 02:34 xDaunt wrote:Show nested quote +On September 12 2012 02:30 radiatoren wrote:On September 12 2012 02:20 paralleluniverse wrote:On September 12 2012 01:53 radiatoren wrote:On September 12 2012 00:41 xDaunt wrote:On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." There is something to be said about removing biases introduced by the people you choose to poll. However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions. I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here. The size of the population makes no difference. I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distributionThe exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference. The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll. If you don't believe my algebra here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. 
For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged. No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogenous population and therefore a smaller amount of different population groups with a real effect on the result of the election. My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved. I imagine if that were the case, they wouldn't ask 10% more Democrats than Republicans; that doesn't add up, especially since America is a country with twice as many conservatives as liberals.
User was temp banned for this post.
|
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)
|
On September 12 2012 03:32 sc2superfan101 wrote: gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.) Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?
|
On September 12 2012 00:41 xDaunt wrote:Show nested quote +On September 12 2012 00:35 KwarK wrote:On September 12 2012 00:16 xDaunt wrote:Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know. I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me. Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it. I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters." Pollsters don't have access to a person's actual party registration as far as I know. Anecdotally, a much larger amount of Republicans I know tend to self-identify as Independents than Democrats who self-identify as Independents.
If they include lean-R and lean-D in with the self-identified partisans, and it totals to about 45% of each, then that's fairly accurate.
|
On September 12 2012 03:35 NonCorporeal wrote:Show nested quote +On September 12 2012 03:32 sc2superfan101 wrote: gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.) Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different? Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians who try to enact laws permitting them.
also, for whatever reason, Catholics are more likely to support gay-marriage and abortion than other Christians... God, but i hope that changes soon...
|
On September 12 2012 03:32 NonCorporeal wrote:
On September 12 2012 02:34 xDaunt wrote:
My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys taken of random people. There are a lot of adjustments involved.
I imagine if that were the case, they wouldn't ask 10% more Democrats than Republicans; that doesn't add up, especially since America is a country with twice as many conservatives as liberals.
I'm always puzzled by the huge number of conservative news organizations and people willing to use those opinion-laden "stories" as evidence of some nonexistent or overblown trend.
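As a rough illustration of how much the assumed party-ID mix matters, here is a minimal Python sketch. All counts and support rates below are invented for the example (they are not taken from the WashPo/ABC poll or any real survey); it simply reweights a hypothetical D+10 sample to an assumed even party split.

```python
# Hypothetical illustration of how the assumed party-ID mix moves a topline number.
# Counts and support rates below are invented for the sketch, not from any real poll.

sample = {
    # party: (respondents, share of that party supporting the incumbent)
    "Democrat":    (330, 0.92),
    "Republican":  (230, 0.06),
    "Independent": (370, 0.48),
    "Other":       (70,  0.45),
}
total = sum(n for n, _ in sample.values())  # 1000 respondents in this made-up sample

def topline(party_weights):
    """Weighted share supporting the incumbent, given a relative weight per party."""
    num = sum(party_weights[p] * n * s for p, (n, s) in sample.items())
    den = sum(party_weights[p] * n for p, (n, _) in sample.items())
    return num / den

# Unweighted: every respondent counts equally (this sample is D+10 over R).
unweighted = topline({p: 1.0 for p in sample})

# Reweighted to an assumed electorate of 33% D, 33% R, 29% I, 5% other.
target = {"Democrat": 0.33, "Republican": 0.33, "Independent": 0.29, "Other": 0.05}
weights = {p: target[p] / (n / total) for p, (n, _) in sample.items()}
reweighted = topline(weights)

print(f"unweighted topline: {unweighted:.1%}")
print(f"reweighted topline: {reweighted:.1%}")
```

With these assumed numbers, the reweighted topline comes out roughly four points below the unweighted one, which is the scale of effect being argued about above: the answer depends heavily on what party mix you believe the electorate will have.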
|
On September 12 2012 03:51 sc2superfan101 wrote:
On September 12 2012 03:35 NonCorporeal wrote:
On September 12 2012 03:32 sc2superfan101 wrote:
Gotta say, I think Catholics might break right this time and help put Romney over the top. Would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.).
Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?
Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians that try to enact those laws. Also, for whatever reason, Catholics are more likely to support gay marriage and abortion than other Christians... God, but I hope that changes soon...
No, this is totally wrong. From http://en.wikipedia.org/wiki/Opposition_to_legal_abortion:
Before the Roe v. Wade decision, the right-to-life movement in the U.S. consisted of lawyers, politicians, and doctors, almost all of whom were Catholic. The only coordinated opposition to abortion during the early 1970s came from the United States Conference of Catholic Bishops and the Family Life Bureau, also a Catholic organization. Mobilization of a wide-scale pro-life movement among Catholics began quickly after the Roe v. Wade decision with the creation of the National Right to Life Committee (NRLC). The NRLC also organized non-Catholics, eventually becoming the largest pro-life organization in the United States. Connie Paige has been quoted as having said that, "[t]he Roman Catholic Church created the right-to-life movement. Without the church, the movement would not exist as such today."[15]
Much of the pro-life movement in the United States and around the world finds support in the Roman Catholic Church, Christian right, the Lutheran Church-Missouri Synod and the Wisconsin Evangelical Lutheran Synod, the Church of England, the Anglican Church in North America, the Eastern Orthodox Church, and The Church of Jesus Christ of Latter-day Saints (LDS).[31][32][33][34] However, the pro-life teachings of these denominations vary considerably. The Eastern Orthodox Church and Roman Catholic Church consider abortion to be immoral in all cases, but permit acts[citation needed] which indirectly result in the death of the fetus in the case where the mother's life is threatened. In Pope John Paul II's Letter to Families he simply stated the Roman Catholic Church's view on abortion and euthanasia: "Laws which legitimize the direct killing of innocent human beings through abortion or euthanasia are in complete opposition to the inviolable right to life proper to every individual; they thus deny the equality of everyone before the law."
|
On September 12 2012 02:53 paralleluniverse wrote:
On September 12 2012 02:48 xDaunt wrote:
On September 12 2012 02:44 paralleluniverse wrote:
On September 12 2012 02:34 xDaunt wrote:
My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys taken of random people. There are a lot of adjustments involved.
If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.
I don't know what it's called. It's been ten years since I have opened a stats for econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is that these pollsters are doing is likely not producing accurate results.
I'm pretty sure you're talking about post stratification. And I'm pretty sure that pollsters don't do this, because it's complicated. The BLS does it: http://bls.gov/osmr/pdf/st930500.pdf But even if they did, it's not invalid. It generally makes estimates more accurate, not less.
Most major polling firms in the US do use it, correcting for ethnic group, age group, and gender, mainly to correct for voters that are notoriously hard to reach. For example, Gallup does the following:
Samples are weighted by gender, age, race, Hispanic ethnicity, education, region, adults in the household, and phone status (cell phone only/landline only/both, cell phone mostly, and having an unlisted landline number). Demographic weighting targets are based on the March 2011 Current Population Survey figures for the aged 18 and older non-institutionalized population living in U.S. telephone households. All reported margins of sampling error include the computed design effects for weighting and sample design.
That said, I agree with your overall position on polling: weighting samples is not problematic if you account for it correctly. Obviously some polls are outliers, but overall, the general trend in polls can be very telling. Additionally, Nate Silver over at 538 has what seems like a very solid prediction model that incorporates all somewhat reliable polls, and it did a good job last time around too.
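To make the mechanics concrete, here is a minimal post-stratification sketch in Python. The age cells, target shares, and responses are invented for illustration (a real poll, as the Gallup description above notes, weights on many more variables and takes its targets from the Current Population Survey). It also computes Kish's approximation for the design effect due to unequal weights, which is one common way the "computed design effects for weighting" end up widening the reported margin of error.

```python
import math
from collections import Counter

# Hypothetical respondents: (age_group, answer) pairs; counts invented for the sketch.
# Suppose young voters are under-sampled relative to the population.
respondents = (
    [("18-29", 1)] * 60 + [("18-29", 0)] * 40 +    # 100 young respondents, 60% support
    [("30-64", 1)] * 280 + [("30-64", 0)] * 320 +  # 600 middle-aged, ~47% support
    [("65+", 1)] * 130 + [("65+", 0)] * 170        # 300 older, ~43% support
)

# Assumed population targets for the age cells (made up for this example; a real
# poll would take these from something like the Current Population Survey).
targets = {"18-29": 0.22, "30-64": 0.55, "65+": 0.23}

n = len(respondents)
sample_share = {cell: cnt / n for cell, cnt in Counter(c for c, _ in respondents).items()}

# Cell weight = population share / sample share, then attach one weight per respondent.
cell_weight = {cell: targets[cell] / sample_share[cell] for cell in targets}
weights = [cell_weight[cell] for cell, _ in respondents]

# Weighted estimate of support.
p_hat = sum(w * y for w, (_, y) in zip(weights, respondents)) / sum(weights)

# Kish design effect from unequal weights, and the weighting-adjusted margin of error.
deff = n * sum(w * w for w in weights) / sum(weights) ** 2
n_eff = n / deff
moe = 1.96 * math.sqrt(0.25 / n_eff)   # worst case p = 0.5

print(f"unweighted support: {sum(y for _, y in respondents) / n:.1%}")
print(f"weighted support:   {p_hat:.1%}")
print(f"design effect {deff:.2f}, effective n {n_eff:.0f}, margin of error ±{moe:.1%}")
```

With these invented numbers, weighting shifts the estimate by about two points and inflates the margin of error from roughly ±3.1% (1,000 respondents with equal weights) to about ±3.3%: the weighting trades a little precision for less bias, which is the point being made above.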
|
On September 12 2012 03:51 sc2superfan101 wrote:
Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians that try to enact those laws. Also, for whatever reason, Catholics are more likely to support gay marriage and abortion than other Christians... God, but I hope that changes soon...
There are two types of Catholics: a fairly liberal, fairly tolerant left wing, and a more conservative wing. Biden and Ryan are both Catholics, just from opposite wings of the church.
|