President Obama Re-Elected - Page 488

Hey guys! We'll be closing this thread shortly, but we will make an American politics megathread where we can continue the discussions in here.

The new thread can be found here: http://www.teamliquid.net/forum/viewmessage.php?topic_id=383301
radiatoren
Joined March 2010
Denmark, 1907 Posts
September 11 2012 17:30 GMT
#9741
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a follow-up to the conversation about polling bias and why I wouldn't trust the polls right now (wait until the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% Democrat, 23% Republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents, you can try to glean something from it.

I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of Republicans in polls. I highlighted the WashPo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to Democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered Republican voters has outgrown registered Democrats. Plus, it has always been the case that registered Republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said about removing biases introduced by the people you choose to poll.

However, ~1000 people is too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states of about 76 million.

I take it that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what kills the credibility, as KwarK suggests.

In other words: the poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of X/n, where X is hypergeometric. This variance is (p(1-p)/n)*((N-n)/(N-1)); the latter factor is called the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt((p(1-p)/n)*((N-n)/(N-1))) = sqrt((p(1-p)/n)*(1-f)*N/(N-1)), where f = n/N is the sampling fraction. From this it's obvious that when n is around 1000, f is virtually 0 whether N is 9.7M or 311.5M, so the correction factor is virtually 1 and the size of the population makes virtually no difference to the standard error of the poll.

If you don't believe my algebra, here's a numerical example.
In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5. If we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged.

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore a smaller number of distinct population groups with a real effect on the result of the election.
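
As a quick sanity check of the figures in the post quoted above, here is a minimal Python sketch (not part of the original thread) that computes the standard error of a sample proportion with and without the finite population correction, using the same p = 0.5, n = 1000, and the North Carolina and US population sizes cited there.

```python
import math

def poll_standard_error(p, n, N=None):
    """Standard error of a sample proportion.

    Without N, this is the usual sqrt(p(1-p)/n); with a population size N,
    the finite population correction (N-n)/(N-1) is applied.
    """
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))
    return se

p, n = 0.5, 1000
for label, N in [("North Carolina", 9_700_000), ("United States", 311_600_000)]:
    print(f"{label}: SE = {poll_standard_error(p, n, N):.5%}")
print(f"No correction: SE = {poll_standard_error(p, n):.5%}")
# All three values come out around 1.581%: once n is fixed at 1000, the
# population size is essentially irrelevant, which is the quoted post's point.
```
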
KwarK
Joined July 2006
United States, 43316 Posts
September 11 2012 17:33 GMT
#9742
On September 12 2012 02:01 NonCorporeal wrote:
I just wanted to say that I am sorry if anyone was offended by my comments about Europe. It was never my intention to insult our European members; I just felt that a few members, whose names shall not be mentioned, were "harassing" me, and as typically happens in heated debates like this, I did what most people tend to do: retaliate. Hopefully we can put yesterday's disagreements aside and all work on being more civil in our debates and discussions in the future.

Anyway, let's get back on topic, shall we?

On September 11 2012 08:25 Savio wrote:
On September 11 2012 08:24 ziggurat wrote:
On September 11 2012 06:53 KwarK wrote:
Your right to a tool that allows you to more easily kill people is far more important than your right for social acceptance of your love?

How can you possibly have a "right to social acceptance"? Wouldn't that basically mean that you have a right to have others agree with your views?

Conservatives generally don't agree with these "rights" that require others to do something for you. So a "right" to get health care, or a "right" to an education are not consistent with typical conservative values.



Calling those things "rights" really cheapens the concept of a right until it finally just means "something good that I want". I would prefer that we keep the two separate.

Indeed, I'd say that is the number one problem with many people today; they think they are entitled to everything, when they haven't done anything to earn these things.

Do you believe that applies to the issue being quoted here? Do you believe you have the right to get married to a woman if you both wish to do so and are consenting? If so, what have you done to earn that right that you think you are entitled to?

I know you want to make a general "lazy people just want things given to them" point, but regarding the issue at hand, what do straight people do to earn the right to get married that gays don't do? Or is asking you that too much like nailing you down on specifics?
xDaunt
Joined March 2010
United States, 17988 Posts
September 11 2012 17:34 GMT
#9743
On September 12 2012 02:30 radiatoren wrote:
No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore a smaller number of distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.
paralleluniverse
Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:40:35
September 11 2012 17:36 GMT
#9744
On September 12 2012 02:30 radiatoren wrote:
No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore a smaller number of distinct population groups with a real effect on the result of the election.

No that's not what you said.

What you said is:
However, ~1000 people is too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states of about 76 million.

In other words: the poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

1000 is more than enough for a population of 9 million. And continuing to use a sample of 1000 for a population of 300 million makes virtually 0 difference to the accuracy of the poll compared to the case of 9 million.

Accuracy of poll where 1000 sampled out of 9 million = Accuracy of poll where 1000 sampled out of 300 million.

If 1000 is as you say "decent" for NC, it is equally "decent" for all of the US.

And what has "homogeneity" (whatever you mean by this) and bias got to do with anything? The sample proportion X/n is an unbiased estimator of the population proportion, so to show bias you would need to show that their sampling scheme (probably a telephone book) is biased in an appreciable way that affects the proportion being estimated.
radiatoren
Joined March 2010
Denmark, 1907 Posts
September 11 2012 17:42 GMT
#9745
On September 12 2012 02:34 xDaunt wrote:
My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

Adjustments or assumptions. I believe they are taking selected random people to combat the problem. But the adjustment made is different for each poll provider, making it more or less useless to compare data from two different poll providers.
paralleluniverse
Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:50:15
September 11 2012 17:44 GMT
#9746
On September 12 2012 02:34 xDaunt wrote:
My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), where the weights given to the responses are adjusted to make the sample representative of some known characteristic of the population, then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.

Also, as far as I'm aware, polls don't do this. BLS surveys do.
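
To make the technique concrete, here is a small hypothetical Python sketch of the party-ID reweighting the posters are arguing about, in the spirit of the post-stratification described in the linked Gelman paper. The sample party-ID split (37% independent, 33% Democrat, 23% Republican, 7% other/don't know) comes from the WashPo/ABC poll discussed above; the per-cell candidate support and the "population" party-ID shares are invented purely for illustration.

```python
# Hypothetical post-stratification by party ID (cell-level form: a weighted
# average of per-cell support, using population cell shares as the weights).
sample_share = {"Dem": 0.33, "Rep": 0.23, "Ind": 0.37, "Other": 0.07}       # from the poll
assumed_pop_share = {"Dem": 0.31, "Rep": 0.30, "Ind": 0.33, "Other": 0.06}  # invented
support_in_cell = {"Dem": 0.92, "Rep": 0.06, "Ind": 0.46, "Other": 0.40}    # invented

def topline(cell_share, support):
    """Overall support as a cell-share-weighted average of per-cell support."""
    return sum(cell_share[c] * support[c] for c in cell_share)

print(f"Unweighted topline:      {topline(sample_share, support_in_cell):.1%}")
print(f"Post-stratified topline: {topline(assumed_pop_share, support_in_cell):.1%}")
# Shifting weight from Democrats/independents toward Republicans pulls the
# topline down by a few points -- the kind of adjustment being debated above.
```

Whether such an adjustment helps depends on whether the assumed population party-ID shares are actually known, which is the point of contention in the posts above.
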
xDaunt
Joined March 2010
United States, 17988 Posts
September 11 2012 17:48 GMT
#9747
On September 12 2012 02:44 paralleluniverse wrote:
If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.

I don't know what it's called. It's been ten years since I have opened a stats-for-econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is these pollsters are doing is likely not producing accurate results.
radiatoren
Joined March 2010
Denmark, 1907 Posts
September 11 2012 17:51 GMT
#9748
On September 12 2012 02:36 paralleluniverse wrote:
1000 is more than enough for a population of 9 million. And continuing to use a sample of 1000 for a population of 300 million makes virtually 0 difference to the accuracy of the poll compared to the case of 9 million.

Accuracy of poll where 1000 sampled out of 9 million = Accuracy of poll where 1000 sampled out of 300 million.

If 1000 is as you say "decent" for NC, it is equally "decent" for all of the US.

And what has "homogeneity" (whatever you mean by this) and bias got to do with anything? The sample proportion X/n is an unbiased estimator of the population proportion, so to show bias you would need to show that their sampling scheme (probably a telephone book) is biased in an appreciable way that affects the proportion being estimated.

We are not talking about statistical bias. You are correct on the math. I am trying to say that the data gets "adjusted" by the poll provider. Skin colour, sex, age, job, religion, and place of birth all influence how people vote.
paralleluniverse
Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:53:51
September 11 2012 17:53 GMT
#9749
On September 12 2012 02:48 xDaunt wrote:
I don't know what it's called. It's been ten years since I have opened a stats-for-econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is these pollsters are doing is likely not producing accurate results.

I'm pretty sure you're talking about post stratification. And I'm pretty sure that pollsters don't do this, because it's complicated.

The BLS does it: http://bls.gov/osmr/pdf/st930500.pdf

But even if they did, it's not invalid. It generally makes estimates more accurate, not less.
paralleluniverse
Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:56:56
September 11 2012 17:55 GMT
#9750
On September 12 2012 02:51 radiatoren wrote:
We are not talking about statistical bias. You are correct on the math. I am trying to say that the data gets "adjusted" by the poll provider. Skin colour, sex, age, job, religion, and place of birth all influence how people vote.

I'm yet to be convinced that pollsters do this.

But as I've said, this is post stratification and it's perfectly valid. Here's what the BLS says about it.

Post-stratification estimation is a technique used in sample surveys to improve efficiency of estimators. Survey weights are adjusted to force the estimated numbers of units in each of a set of estimation cells to be equal to known population totals. The resulting weights are then used in forming estimates of means or totals of variables collected in the survey. For example, in a household survey the estimation cells may be based on age/race/sex categories of individuals and the known totals may come from the most recent population census. Although the variance of a post-stratified estimator can be computed over all possible sample configurations, inferences made conditionally on the achieved sample configuration are desirable. Theory and a simulation study using data from the U.S. Current Population Survey are presented to study both the conditional bias and variance of the post-stratified estimator of a total. The linearization, balanced repeated replication, and jackknife variance estimators are also examined to determine whether they appropriately estimate the conditional variance.

Source:
http://bls.gov/osmr/pdf/st930500.pdf
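
The weight adjustment the BLS abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration (all numbers invented): each respondent has a design weight, weights are rescaled within each estimation cell so the weighted counts match assumed "known" population totals, and the adjusted weights are then used to estimate a mean.

```python
from collections import defaultdict

# Hypothetical respondents: (estimation cell, design weight, response of interest).
respondents = [
    ("18-34", 1.0, 1), ("18-34", 1.0, 0),
    ("35-64", 1.0, 1), ("35-64", 1.0, 1), ("35-64", 1.0, 0),
    ("65+",   1.0, 0),
]

# Assumed "known" population totals per cell (e.g. from a census) -- invented.
population_totals = {"18-34": 300, "35-64": 450, "65+": 250}

# Post-stratification: rescale weights within each cell so the weighted
# number of units in the cell equals the known population total.
cell_weight_sum = defaultdict(float)
for cell, w, _ in respondents:
    cell_weight_sum[cell] += w

adjusted = [(cell, w * population_totals[cell] / cell_weight_sum[cell], y)
            for cell, w, y in respondents]

# The adjusted weights are then used for estimates of means or totals.
estimate = sum(w * y for _, w, y in adjusted) / sum(w for _, w, _ in adjusted)
print(f"Post-stratified estimate of the mean response: {estimate:.3f}")
```
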
NonCorporeal
Joined August 2012
United States, 106 Posts
September 11 2012 18:28 GMT
#9751
On September 12 2012 02:33 KwarK wrote:
Do you believe that applies to the issue being quoted here? Do you believe you have the right to get married to a woman if you both wish to do so and are consenting? If so, what have you done to earn that right that you think you are entitled to?

I know you want to make a general "lazy people just want things given to them" point, but regarding the issue at hand, what do straight people do to earn the right to get married that gays don't do? Or is asking you that too much like nailing you down on specifics?


No, the government shouldn't be able to stop two consenting adults from getting married. That wasn't what Savio was referring to, though. If I'm not mistaken, Savio was referring to the idea that people have a "right" to not be offended; hence why he said "how can you possibly have a right to social acceptance," which is indeed ridiculous and goes against the entire idea of freedom of speech. I was then expanding upon ziggurat's statement by bringing the entitlement crowd into the picture.
NonCorporeal
Joined August 2012
United States, 106 Posts
September 11 2012 18:32 GMT
#9752
On September 12 2012 02:34 xDaunt wrote:
Show nested quote +
On September 12 2012 02:30 radiatoren wrote:
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it.

I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said about removing biases introduced by the people you choose to poll.

However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions.

I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests

In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll.

If you don't believe my algebra here's a numerical example.
In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged.

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore a smaller number of distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

I imagine if that were the case, they wouldn't ask 10% more Democrats than Republicans; that doesn't add up, especially since America is a country with twice as many conservatives as liberals.

User was temp banned for this post.
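As a quick numerical check on the standard-error calculation quoted above, here is a minimal Python sketch, assuming simple random sampling without replacement with p = 0.5 and n = 1000; the function name se_without_replacement is just an illustrative label, not part of any pollster's methodology.

from math import sqrt

def se_without_replacement(p, n, N):
    # Standard error of a sample proportion under sampling without replacement:
    # sqrt(p*(1-p)/n) times the finite population correction sqrt((N-n)/(N-1)).
    return sqrt(p * (1 - p) / n) * sqrt((N - n) / (N - 1))

p, n = 0.5, 1000
print(se_without_replacement(p, n, 9_700_000))    # ~0.015811 for North Carolina
print(se_without_replacement(p, n, 311_600_000))  # ~0.015811 for the whole US

The two results differ only around the sixth decimal place, which is the point being made above: with n fixed, the population size barely moves the standard error.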
sc2superfan101
Profile Blog Joined February 2012
3583 Posts
September 11 2012 18:32 GMT
#9753
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)
My fake plants died because I did not pretend to water them.
NonCorporeal
Profile Joined August 2012
United States106 Posts
September 11 2012 18:35 GMT
#9754
On September 12 2012 03:32 sc2superfan101 wrote:
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)

Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?
Signet
Profile Joined March 2007
United States1718 Posts
September 11 2012 18:42 GMT
#9755
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it.

I don't know whether there's a concerted effort to kick out biased polls; I just know that there has been a systemic undersampling of republicans in polls. I highlighted the WashPo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks saying that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

Pollsters don't have access to a person's actual party registration as far as I know. Anecdotally, far more of the Republicans I know self-identify as Independents than of the Democrats I know.

If they include lean-R and lean-D in with the self-identified partisans, and it totals to about 45% each, then that's fairly accurate.
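To make the lean-R/lean-D point concrete, here is a toy calculation; the lean shares among independents are purely hypothetical and only illustrate how a 33/23/37 D/R/I sample composition can still correspond to roughly 45% on each side once leaners are assigned.

# Hypothetical reallocation of independents by the party they lean toward.
dem, rep, ind = 0.33, 0.23, 0.37      # sample composition from the WashPo/ABC poll
lean_dem, lean_rep = 0.32, 0.57       # made-up lean shares among independents

print(f"D + lean-D: {dem + ind * lean_dem:.1%}")  # roughly 45%
print(f"R + lean-R: {rep + ind * lean_rep:.1%}")  # roughly 44%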
sc2superfan101
Profile Blog Joined February 2012
3583 Posts
September 11 2012 18:51 GMT
#9756
On September 12 2012 03:35 NonCorporeal wrote:
On September 12 2012 03:32 sc2superfan101 wrote:
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)

Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?

Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians that try to enact those laws.

also, for whatever reason, Catholics are more likely to support gay-marriage and abortion than other Christians... God, but i hope that changes soon...
My fake plants died because I did not pretend to water them.
aksfjh
Profile Joined November 2010
United States4853 Posts
September 11 2012 18:55 GMT
#9757
On September 12 2012 03:32 NonCorporeal wrote:
On September 12 2012 02:34 xDaunt wrote:
On September 12 2012 02:30 radiatoren wrote:
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it.

I don't know whether there's a concerted effort to kick out biased polls; I just know that there has been a systemic undersampling of republicans in polls. I highlighted the WashPo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks saying that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said about removing biases introduced by the people you choose to poll.

However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states, with about 76 million.

I take it that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what kills the credibility, as KwarK suggests.

In other words: the poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of X/n, where X is hypergeometric. This variance is (p(1-p)/n)*((N-n)/(N-1)); the latter factor is called the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt(p(1-p)*(1/f - 1)/(N-1)), where f = n/N is the sampling fraction. From this, it's clear that when N is large (like 311.5M) relative to n, the factor (1/f - 1)/(N-1) is approximately 1/n, so for a fixed sample size the population size N makes virtually no difference to the standard error of the poll.

If you don't believe my algebra, here's a numerical example.
In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5. If we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged.

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore a smaller number of distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

I imagine if that were the case, they wouldn't ask 10% more Democrats than Republicans; that doesn't add up, especially since America is a country with twice as many conservatives as liberals.

I'm always puzzled by the huge number of conservative news organizations and people willing to use those opinion-laden "stories" as evidence of some nonexistent or overblown trend.
farvacola
Profile Blog Joined January 2011
United States18840 Posts
Last Edited: 2012-09-11 18:56:16
September 11 2012 18:55 GMT
#9758
On September 12 2012 03:51 sc2superfan101 wrote:
On September 12 2012 03:35 NonCorporeal wrote:
On September 12 2012 03:32 sc2superfan101 wrote:
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)

Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?

Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians that try to enact those laws.

also, for whatever reason, Catholics are more likely to support gay-marriage and abortion than other Christians... God, but i hope that changes soon...

No, this is totally wrong. From http://en.wikipedia.org/wiki/Opposition_to_legal_abortion
Before the Roe v. Wade decision, the right-to-life movement in the U.S. consisted of lawyers, politicians, and doctors, almost all of whom were Catholic. The only coordinated opposition to abortion during the early 1970s came from the United States Conference of Catholic Bishops and the Family Life Bureau, also a Catholic organization. Mobilization of a wide-scale pro-life movement among Catholics began quickly after the Roe v. Wade decision with the creation of the National Right to Life Committee (NRLC). The NRLC also organized non-Catholics, eventually becoming the largest pro-life organization in the United States. Connie Paige has been quoted as having said that, "[t]he Roman Catholic Church created the right-to-life movement. Without the church, the movement would not exist as such today."[15]


Much of the pro-life movement in the United States and around the world finds support in the Roman Catholic Church, Christian right, the Lutheran Church-Missouri Synod and the Wisconsin Evangelical Lutheran Synod, the Church of England, the Anglican Church in North America, the Eastern Orthodox Church, and The Church of Jesus Christ of Latter-day Saints (LDS).[31][32][33][34] However, the pro-life teachings of these denominations vary considerably. The Eastern Orthodox Church and Roman Catholic Church consider abortion to be immoral in all cases, but permit acts[citation needed] which indirectly result in the death of the fetus in the case where the mother's life is threatened. In Pope John Paul II's Letter to Families he simply stated the Roman Catholic Church's view on abortion and euthanasia: "Laws which legitimize the direct killing of innocent human beings through abortion or euthanasia are in complete opposition to the inviolable right to life proper to every individual; they thus deny the equality of everyone before the law."
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
Derez
Profile Blog Joined January 2011
Netherlands6068 Posts
Last Edited: 2012-09-11 18:58:57
September 11 2012 18:56 GMT
#9759
On September 12 2012 02:53 paralleluniverse wrote:
On September 12 2012 02:48 xDaunt wrote:
On September 12 2012 02:44 paralleluniverse wrote:
On September 12 2012 02:34 xDaunt wrote:
On September 12 2012 02:30 radiatoren wrote:
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it.

I don't know whether there's a concerted effort to kick out biased polls; I just know that there has been a systemic undersampling of republicans in polls. I highlighted the WashPo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks saying that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said about removing biases introduced by the people you choose to poll.

However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states, with about 76 million.

I take it that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what kills the credibility, as KwarK suggests.

In other words: the poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of X/n, where X is hypergeometric. This variance is (p(1-p)/n)*((N-n)/(N-1)); the latter factor is called the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt(p(1-p)*(1/f - 1)/(N-1)), where f = n/N is the sampling fraction. From this, it's clear that when N is large (like 311.5M) relative to n, the factor (1/f - 1)/(N-1) is approximately 1/n, so for a fixed sample size the population size N makes virtually no difference to the standard error of the poll.

If you don't believe my algebra, here's a numerical example.
In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5. If we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged.

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore a smaller number of distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.

I don't know what it's called. It's been ten years since I last opened a stats-for-econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is that these pollsters are doing is likely not producing accurate results.

I'm pretty sure you're talking about post stratification. And I'm pretty sure that pollsters don't do this, because it's complicated.

The BLS does it: http://bls.gov/osmr/pdf/st930500.pdf

But even if they did, it's not invalid. It generally makes estimates more accurate, not less.

Most major polling firms in the US use it and adjust for ethnic group, age group, and gender, mainly to correct for voters who are notoriously hard to reach. For example, Gallup does the following:

Samples are weighted by gender, age, race, Hispanic ethnicity, education, region, adults in the household, and phone status (cell phone only/landline only/both, cell phone mostly, and having an unlisted landline number). Demographic weighting targets are based on the March 2011 Current Population Survey figures for the aged 18 and older non-institutionalized population living in U.S. telephone households. All reported margins of sampling error include the computed design effects for weighting and sample design.


That said, I agree with your position on polling overall, and weighting samples is not problematic if you account for it correctly. Obviously some polls are outliers, but overall the general trend in polls can be very telling. Additionally, Nate Silver over at 538 has what seems like a very solid prediction model that incorporates all somewhat reliable polls, and it did a good job last time around too.
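As a rough illustration of the kind of weighting Gallup describes above, here is a minimal post-stratification sketch in Python; the age cells, cell counts, and population shares below are made-up stand-ins, not Gallup's actual targets, which rake over many variables at once.

# Post-stratification sketch: reweight sample cells to known population shares,
# then take the weighted average of the cell-level results. All numbers are invented.
sample = {
    # cell: (respondents in cell, share of them supporting candidate A)
    "18-29": (150, 0.60),
    "30-49": (300, 0.52),
    "50-64": (300, 0.48),
    "65+":   (250, 0.44),
}
population_share = {"18-29": 0.21, "30-49": 0.34, "50-64": 0.26, "65+": 0.19}

n_total = sum(n for n, _ in sample.values())
unweighted = sum(n * p for n, p in sample.values()) / n_total
weighted = sum(population_share[cell] * p for cell, (_, p) in sample.items())

print(f"unweighted estimate:      {unweighted:.3f}")  # 0.500
print(f"post-stratified estimate: {weighted:.3f}")    # 0.511

The gap between the two numbers is the kind of adjustment that then shows up in the reported margin of error as a design effect, as the Gallup note above mentions.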
Jaaaaasper
Profile Blog Joined April 2012
United States10225 Posts
September 11 2012 18:57 GMT
#9760
On September 12 2012 03:51 sc2superfan101 wrote:
On September 12 2012 03:35 NonCorporeal wrote:
On September 12 2012 03:32 sc2superfan101 wrote:
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)

Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?

Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians that try to enact those laws.

also, for whatever reason, Catholics are more likely to support gay-marriage and abortion than other Christians... God, but i hope that changes soon...

There are two types of Catholics: a fairly liberal, fairly tolerant left wing, and a more conservative wing. Biden and Ryan are both Catholics, just from opposite wings of the church.
Hey do you want to hear a joke? Chinese production value. | I thought he had a aegis- Ayesee | When did 7ing mad last have a good game, 2012?