President Obama Re-Elected - Page 488

Hey guys! We'll be closing this thread shortly, but we will make an American politics megathread where we can continue the discussion.

The new thread can be found here: http://www.teamliquid.net/forum/viewmessage.php?topic_id=383301
radiatoren
Profile Blog Joined March 2010
Denmark, 1907 Posts
September 11 2012 17:30 GMT
#9741
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait until the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question, which shows the composition of the sample: 37% independents, 33% Democrats, 23% Republicans, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents, you can try to glean something from it.

I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of Republicans in polls. I highlighted the WashPo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to Democrats? I'm trying to find it, but I'm pretty sure I just saw an article in the past couple of weeks saying that the number of registered Republican voters has outgrown registered Democrats. Plus, it has always been the case that registered Republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said for removing biases introduced by the people you choose to poll.

However, ~1000 people is too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states at about 76 million.

I take it that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what kills the credibility, as KwarK suggests.

In other words: the poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of X/n, where X is hypergeometric. This variance is (p(1-p)/n)*((N-n)/(N-1)); the latter factor is called the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt(p(1-p)*(1/f - 1)/(N-1)), where f = n/N is the sampling fraction. From this it's clear that when N is large (like 311.5M) and n is fixed, the factor (1/f - 1)/(N-1) = (N-n)/(n(N-1)) is approximately 1/n, so the standard error is essentially sqrt(p(1-p)/n): the size of the population makes virtually no difference to the standard error of the poll.

If you don't believe my algebra, here's a numerical example. In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5. If we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged.
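The quoted calculation is easy to verify; below is a minimal sketch (Python, illustrative only, using the same p, n and N as above) of the standard error with and without the finite population correction:

```python
# Standard error of a sample proportion X/n under simple random sampling
# without replacement (X hypergeometric), with the finite population correction.
import math

def poll_standard_error(p, n, N):
    fpc = (N - n) / (N - 1)              # finite population correction
    return math.sqrt(p * (1 - p) / n * fpc)

p, n = 0.5, 1000
print(poll_standard_error(p, n, 9_700_000))    # ~0.0158106  (North Carolina)
print(poll_standard_error(p, n, 311_600_000))  # ~0.0158114  (whole US)
print(math.sqrt(p * (1 - p) / n))              # ~0.0158114  (correction dropped)
```

The last two values are essentially identical, which is the point of the quoted argument: for a fixed n, the population size N barely matters.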

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population you get a more homogeneous population, and therefore a smaller number of distinct population groups with a real effect on the result of the election.
Repeat before me
KwarK
Profile Blog Joined July 2006
United States, 42663 Posts
September 11 2012 17:33 GMT
#9742
On September 12 2012 02:01 NonCorporeal wrote:
I just wanted to say that I am sorry if anyone was offended by my comments about Europe. It was never my intention to insult our European members; I just felt that a few members, whose names shall not be mentioned, were "harassing" me, and, as typically happens in heated debates like this, I did what most people tend to do: retaliate. Hopefully we can put yesterday's disagreements aside and all work on being more civil in our debates and discussions in the future.

Anyway, let's get back on topic, shall we?

On September 11 2012 08:25 Savio wrote:
On September 11 2012 08:24 ziggurat wrote:
On September 11 2012 06:53 KwarK wrote:
Your right to a tool that allows you to more easily kill people is far more important than your right for social acceptance of your love?

How can you possibly have a "right to social acceptance"? Wouldn't that basically mean that you have a right to have others agree with your views?

Conservatives generally don't agree with these "rights" that require others to do something for you. So a "right" to get health care, or a "right" to an education are not consistent with typical conservative values.



Calling those things "rights" really cheapens the concept of a right until it finally just means "something good that I want". I would prefer that we keep the 2 separate.

Indeed, I'd say that is the number one problem with many people today: they think they are entitled to everything when they haven't done anything to earn these things.

Do you believe that applies to the issue being quoted here? Do you believe you have the right to get married to a woman if you both wish to do so and are consenting? If so, what have you done to earn that right that you think you are entitled to?

I know you want to make a general "lazy people just want things given to them" point, but regarding the issue at hand, what do straight people do to earn the right to get married that gays don't do? Or is asking you that too much like nailing you down on specifics?
Moderator | The angels have the phone box
xDaunt
Profile Joined March 2010
United States, 17988 Posts
September 11 2012 17:34 GMT
#9743
On September 12 2012 02:30 radiatoren wrote:
No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population you get a more homogeneous population, and therefore a smaller number of distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to make it representative of the population. These aren't just straight surveys taken of random people. There are a lot of adjustments involved.
paralleluniverse
Profile Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:40:35
September 11 2012 17:36 GMT
#9744
On September 12 2012 02:30 radiatoren wrote:
No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population you get a more homogeneous population, and therefore a smaller number of distinct population groups with a real effect on the result of the election.

No, that's not what you said.

What you said is:
However, ~1000 people is too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states at about 76 million.

In other words: the poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

1000 is more than enough for a population of 9 million, and continuing to use a sample of 1000 for a population of 300 million makes virtually no difference to the accuracy of the poll compared to the case of 9 million.

Accuracy of a poll where 1000 are sampled out of 9 million = accuracy of a poll where 1000 are sampled out of 300 million.

If 1000 is, as you say, "decent" for NC, it is equally "decent" for all of the US.

And what have "homogeneity" (whatever you mean by this) and bias got to do with anything? The proportion estimator X/n is unbiased, so to show bias you would need to show that their sampling scheme (probably a telephone book) is biased in an appreciable way that affects the proportion being estimated.
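To illustrate that last point, here is a small simulated sketch (Python, made-up numbers, not modelled on any real poll): under simple random sampling the estimate is unbiased whatever the population size, while sampling from a frame that under-covers one side shifts it by the coverage error regardless of sample size:

```python
# Illustrative simulation: unbiasedness under simple random sampling (SRS)
# versus bias from a flawed sampling frame. All numbers are made up.
import random

random.seed(0)
N = 1_000_000
population = [1] * (N // 2) + [0] * (N // 2)      # true support p = 0.50

def estimate(frame, n=1000):
    return sum(random.sample(frame, n)) / n        # proportion estimator X/n

srs = [estimate(population) for _ in range(200)]
print(sum(srs) / len(srs))        # ~0.50: unbiased under SRS

# A flawed frame (think: a phone book that reaches supporters only half as often).
biased_frame = [v for v in population if v == 0 or random.random() < 0.5]
skewed = [estimate(biased_frame) for _ in range(200)]
print(sum(skewed) / len(skewed))  # ~0.33: the error comes from the frame, not from n
```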
radiatoren
Profile Blog Joined March 2010
Denmark, 1907 Posts
September 11 2012 17:42 GMT
#9745
On September 12 2012 02:34 xDaunt wrote:
My understanding of the sample selection process is that the pollsters tinker with the sample to make it representative of the population. These aren't just straight surveys taken of random people. There are a lot of adjustments involved.

Adjustments or assumptions. I believe they are taking a selected random sample of people to combat the problem. But the adjustment made is different for each poll provider, making it more or less useless to compare data from two different providers of polls.
Repeat before me
paralleluniverse
Profile Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:50:15
September 11 2012 17:44 GMT
#9746
On September 12 2012 02:34 xDaunt wrote:
My understanding of the sample selection process is that the pollsters tinker with the sample to make it representative of the population. These aren't just straight surveys taken of random people. There are a lot of adjustments involved.

If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), where the weights given to the responses are adjusted to make the sample representative of some known characteristic of the population, then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.

Also, as far as I'm aware, polls don't do this. BLS surveys do.
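For readers who haven't met the technique, the sketch below (Python, with made-up respondent counts and target shares, just to show the mechanics) re-weights a sample so its party-ID mix matches assumed population shares and then recomputes the supported proportion:

```python
# Post-stratification weighting, minimal illustration. The sample counts,
# within-group support rates and population shares are all hypothetical.
sample = {                 # party ID -> (respondents, share supporting candidate A)
    "dem": (330, 0.90),
    "rep": (230, 0.08),
    "ind": (440, 0.48),
}
population_share = {"dem": 0.31, "rep": 0.30, "ind": 0.39}   # assumed known totals

n = sum(count for count, _ in sample.values())
raw = sum(count * p for count, p in sample.values()) / n      # unweighted estimate

# Weight each group's mean by its known population share instead of its sample share.
post_stratified = sum(population_share[g] * p for g, (_, p) in sample.items())

print(f"raw estimate:             {raw:.3f}")               # 0.527
print(f"post-stratified estimate: {post_stratified:.3f}")   # 0.490
```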
xDaunt
Profile Joined March 2010
United States, 17988 Posts
September 11 2012 17:48 GMT
#9747
On September 12 2012 02:44 paralleluniverse wrote:
If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.

I don't know what it's called. It's been ten years since I last opened a stats-for-econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is these pollsters are doing is likely not producing accurate results.
radiatoren
Profile Blog Joined March 2010
Denmark, 1907 Posts
September 11 2012 17:51 GMT
#9748
On September 12 2012 02:36 paralleluniverse wrote:
And what have "homogeneity" (whatever you mean by this) and bias got to do with anything? The proportion estimator X/n is unbiased, so to show bias you would need to show that their sampling scheme (probably a telephone book) is biased in an appreciable way that affects the proportion being estimated.

We are not talking about statistical bias. You are correct on the math. I am trying to say that the data gets "adjusted" by the poll provider. Skin colour, sex, age, job, religion and place of birth all influence how people vote.
Repeat before me
paralleluniverse
Profile Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:53:51
September 11 2012 17:53 GMT
#9749
On September 12 2012 02:48 xDaunt wrote:
I don't know what it's called. It's been ten years since I last opened a stats-for-econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is these pollsters are doing is likely not producing accurate results.

I'm pretty sure you're talking about post stratification. And I'm pretty sure that pollsters don't do this, because it's complicated.

The BLS does it: http://bls.gov/osmr/pdf/st930500.pdf

But even if they did, it's not invalid. It generally makes estimates more accurate, not less.
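A quick simulation of that claim (Python, with invented group shares and support rates, so purely illustrative): each simulated poll deliberately over-samples one group, and re-weighting the group means back to the known shares shrinks the average error:

```python
# Illustrative simulation: post-stratification pulls a compositionally skewed
# sample back toward the truth. All shares and support rates are invented.
import random

random.seed(1)
share = {"A": 0.5, "B": 0.5}                 # known population shares
support = {"A": 0.8, "B": 0.3}               # true support within each group
truth = sum(share[g] * support[g] for g in share)        # 0.55

def one_poll(n=1000, sampled_share_A=0.65):              # over-samples group A
    groups = ["A" if random.random() < sampled_share_A else "B" for _ in range(n)]
    votes = [1 if random.random() < support[g] else 0 for g in groups]
    raw = sum(votes) / n
    post = sum(                              # weight group means by known shares
        share[g]
        * sum(v for v, gg in zip(votes, groups) if gg == g)
        / max(1, sum(1 for gg in groups if gg == g))
        for g in share
    )
    return raw, post

polls = [one_poll() for _ in range(500)]
raw_err = sum(abs(r - truth) for r, _ in polls) / len(polls)
post_err = sum(abs(p - truth) for _, p in polls) / len(polls)
print(f"truth {truth:.2f}; mean abs error: raw {raw_err:.4f}, post-stratified {post_err:.4f}")
```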
paralleluniverse
Profile Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:56:56
September 11 2012 17:55 GMT
#9750
On September 12 2012 02:51 radiatoren wrote:
We are not talking about statistical bias. You are correct on the math. I am trying to say that the data gets "adjusted" by the poll provider. Skin colour, sex, age, job, religion and place of birth all influence how people vote.

I'm yet to be convinced that pollsters do this.

But as I've said, this is post stratification and it's perfectly valid. Here's what the BLS says about it.

Post-stratification estimation is a technique used in sample surveys to improve efficiency of estimators. Survey weights are adjusted to force the estimated numbers of units in each of a set of estimation cells to be equal to known population totals. The resulting weights are then used in forming estimates of means or totals of variables collected in the survey. For example, in a household survey the estimation cells may be based on age/race/sex categories of individuals and the known totals may come from the most recent population census. Although the variance of a post-stratified estimator can be computed over all possible sample configurations, inferences made conditionally on the achieved sample configuration are desirable. Theory and a simulation study using data from the U.S. Current Population Survey are presented to study both the conditional bias and variance of the post-stratified estimator of a total. The linearization, balanced repeated replication, and jackknife variance estimators are also examined to determine whether they appropriately estimate the conditional variance.

Source:
http://bls.gov/osmr/pdf/st930500.pdf
NonCorporeal
Profile Joined August 2012
United States, 106 Posts
September 11 2012 18:28 GMT
#9751
On September 12 2012 02:33 KwarK wrote:
Do you believe that applies to the issue being quoted here? Do you believe you have the right to get married to a woman if you both wish to do so and are consenting? If so, what have you done to earn that right that you think you are entitled to?

I know you want to make a general "lazy people just want things given to them" point, but regarding the issue at hand, what do straight people do to earn the right to get married that gays don't do? Or is asking you that too much like nailing you down on specifics?


No, the government shouldn't be able to stop two consenting adults from getting married. That wasn't what Savio was referring to, though. If I'm not mistaken, Savio was referring to the idea that people have a "right" to not be offended; hence why he said "how can you possibly have a right to social acceptance," which is indeed ridiculous and goes against the entire idea of freedom of speech. I was then expanding upon ziggurat's statement by bringing the entitlement crowd into the picture.
NonCorporeal
Profile Joined August 2012
United States, 106 Posts
September 11 2012 18:32 GMT
#9752
On September 12 2012 02:34 xDaunt wrote:
Show nested quote +
On September 12 2012 02:30 radiatoren wrote:
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it.

I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said about removing biases introduced by the people you choose to poll.

However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions.

I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests

In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll.

If you don't believe my algebra here's a numerical example.
In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged.

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population you get a more homogeneous population, and therefore fewer distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

I imagine if that were the case, they wouldn't ask 10% more Democrats than Republicans; that doesn't add up, especially since America is a country with twice as many conservatives as liberals.

User was temp banned for this post.
sc2superfan101
Profile Blog Joined February 2012
3583 Posts
September 11 2012 18:32 GMT
#9753
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)
My fake plants died because I did not pretend to water them.
NonCorporeal
Profile Joined August 2012
United States106 Posts
September 11 2012 18:35 GMT
#9754
On September 12 2012 03:32 sc2superfan101 wrote:
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)

Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?
Signet
Profile Joined March 2007
United States1718 Posts
September 11 2012 18:42 GMT
#9755
On September 12 2012 00:41 xDaunt wrote:
I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple of weeks saying that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

Pollsters don't have access to a person's actual party registration as far as I know. Anecdotally, a much larger number of Republicans I know self-identify as Independents than Democrats do.

If they include lean-R and lean-D in with the self-identified partisans, and it comes to about 45% each, then that's fairly accurate.
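To make the party-composition argument concrete, here is a minimal sketch of how reweighting the same responses to a different party-ID mix shifts the topline. Every number in it is invented for illustration (it is not the actual WashPo/ABC crosstab), and the reweighting shown is the simplest possible version of what pollsters do.

# Hypothetical illustration: reweight candidate support from the sample's
# party-ID mix to an assumed electorate mix. All numbers are made up.
sample_mix  = {"D": 0.33, "R": 0.23, "I": 0.37, "other": 0.07}
target_mix  = {"D": 0.33, "R": 0.30, "I": 0.33, "other": 0.04}
obama_share = {"D": 0.90, "R": 0.06, "I": 0.45, "other": 0.40}  # support within each group

def topline(mix, support):
    # Weighted average of within-group support, weighted by each group's share.
    return sum(mix[g] * support[g] for g in mix)

print(round(topline(sample_mix, obama_share), 3))  # ~0.505 under the sample's mix
print(round(topline(target_mix, obama_share), 3))  # ~0.480 after reweighting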
sc2superfan101
Profile Blog Joined February 2012
3583 Posts
September 11 2012 18:51 GMT
#9756
On September 12 2012 03:35 NonCorporeal wrote:
Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?

Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians who try to enact those laws.

also, for whatever reason, Catholics are more likely to support gay marriage and abortion than other Christians... God, but i hope that changes soon...
My fake plants died because I did not pretend to water them.
aksfjh
Profile Joined November 2010
United States4853 Posts
September 11 2012 18:55 GMT
#9757
On September 12 2012 03:32 NonCorporeal wrote:
I imagine if that were the case, they wouldn't ask 10% more Democrats than Republicans; that doesn't add up, especially since America is a country with twice as many conservatives as liberals.

I'm always puzzled by the huge number of conservative news organizations, and by people willing to use those opinion-laden "stories" as evidence of some nonexistent or overblown trend.
farvacola
Profile Blog Joined January 2011
United States18826 Posts
Last Edited: 2012-09-11 18:56:16
September 11 2012 18:55 GMT
#9758
On September 12 2012 03:51 sc2superfan101 wrote:
Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians who try to enact those laws.

also, for whatever reason, Catholics are more likely to support gay marriage and abortion than other Christians... God, but i hope that changes soon...

No, this is totally wrong. From http://en.wikipedia.org/wiki/Opposition_to_legal_abortion
Before the Roe v. Wade decision, the right-to-life movement in the U.S. consisted of lawyers, politicians, and doctors, almost all of whom were Catholic. The only coordinated opposition to abortion during the early 1970s came from the United States Conference of Catholic Bishops and the Family Life Bureau, also a Catholic organization. Mobilization of a wide-scale pro-life movement among Catholics began quickly after the Roe v. Wade decision with the creation of the National Right to Life Committee (NRLC). The NRLC also organized non-Catholics, eventually becoming the largest pro-life organization in the United States. Connie Paige has been quoted as having said that, "[t]he Roman Catholic Church created the right-to-life movement. Without the church, the movement would not exist as such today."[15]


Much of the pro-life movement in the United States and around the world finds support in the Roman Catholic Church, Christian right, the Lutheran Church-Missouri Synod and the Wisconsin Evangelical Lutheran Synod, the Church of England, the Anglican Church in North America, the Eastern Orthodox Church, and The Church of Jesus Christ of Latter-day Saints (LDS).[31][32][33][34] However, the pro-life teachings of these denominations vary considerably. The Eastern Orthodox Church and Roman Catholic Church consider abortion to be immoral in all cases, but permit acts[citation needed] which indirectly result in the death of the fetus in the case where the mother's life is threatened. In Pope John Paul II's Letter to Families he simply stated the Roman Catholic Church's view on abortion and euthanasia: "Laws which legitimize the direct killing of innocent human beings through abortion or euthanasia are in complete opposition to the inviolable right to life proper to every individual; they thus deny the equality of everyone before the law."
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
Derez
Profile Blog Joined January 2011
Netherlands6068 Posts
Last Edited: 2012-09-11 18:58:57
September 11 2012 18:56 GMT
#9759
On September 12 2012 02:53 paralleluniverse wrote:
On September 12 2012 02:48 xDaunt wrote:
On September 12 2012 02:44 paralleluniverse wrote:
On September 12 2012 02:34 xDaunt wrote:
My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.

I don't know what it's called. It's been ten years since I last opened a stats-for-econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is these pollsters are doing is likely not producing accurate results.

I'm pretty sure you're talking about post stratification. And I'm pretty sure that pollsters don't do this, because it's complicated.

The BLS does it: http://bls.gov/osmr/pdf/st930500.pdf

But even if they did, it's not invalid. It generally makes estimates more accurate, not less.

Most major polling firms in the US use it, correcting for ethnicity, age group, and gender, mainly to account for voters who are notoriously hard to reach. For example, Gallup does the following:

Samples are weighted by gender, age, race, Hispanic ethnicity, education, region, adults in the household, and phone status (cell phone only/landline only/both, cell phone mostly, and having an unlisted landline number). Demographic weighting targets are based on the March 2011 Current Population Survey figures for the aged 18 and older non-institutionalized population living in U.S. telephone households. All reported margins of sampling error include the computed design effects for weighting and sample design.


That said, I agree with your overall position on polling, and weighting samples is not problematic if you account for it correctly. Obviously some polls are outliers, but the general trend across polls can be very telling. Additionally, Nate Silver over at 538 has what seems like a very solid prediction model that incorporates all somewhat reliable polls, and it did a good job last time around too.
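For a sense of what the weighting described above looks like mechanically, here is a minimal sketch of cell weighting on a single variable (age group). Real pollsters rake across many variables at once, per the Gallup methodology quoted above; every respondent, group share, and number below is invented purely for illustration.

# Hypothetical illustration of cell weighting on one demographic variable.
# Each respondent gets weight = (population share of their group) / (sample share).
from collections import Counter

respondents = [
    {"age": "18-29", "vote": "Obama"},
    {"age": "30-64", "vote": "Romney"},
    {"age": "30-64", "vote": "Obama"},
    {"age": "65+",   "vote": "Romney"},
]
target_share = {"18-29": 0.22, "30-64": 0.60, "65+": 0.18}  # assumed population shares

counts = Counter(r["age"] for r in respondents)
sample_share = {group: c / len(respondents) for group, c in counts.items()}
weights = [target_share[r["age"]] / sample_share[r["age"]] for r in respondents]

# Weighted topline per candidate: sum of their voters' weights over the total weight.
for cand in ("Obama", "Romney"):
    w = sum(wt for r, wt in zip(respondents, weights) if r["vote"] == cand)
    print(cand, round(w / sum(weights), 3))   # ~ Obama 0.52, Romney 0.48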
Jaaaaasper
Profile Blog Joined April 2012
United States10225 Posts
September 11 2012 18:57 GMT
#9760
On September 12 2012 03:51 sc2superfan101 wrote:
Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians who try to enact those laws.

also, for whatever reason, Catholics are more likely to support gay marriage and abortion than other Christians... God, but i hope that changes soon...

There are two types of Catholics: a fairly liberal, fairly tolerant left wing, and a more conservative wing. Biden and Ryan are both Catholics, just from opposite wings of the church.
Hey do you want to hear a joke? Chinese production value. | I thought he had a aegis- Ayesee | When did 7ing mad last have a good game, 2012?