President Obama Re-Elected - Page 488

Hey guys! We'll be closing this thread shortly, but we will make an American politics megathread where we can continue the discussions in here.

The new thread can be found here: http://www.teamliquid.net/forum/viewmessage.php?topic_id=383301
radiatoren
Profile Blog Joined March 2010
Denmark, 1907 Posts
September 11 2012 17:30 GMT
#9741
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a follow-up to the conversation about polling bias and why I wouldn't trust the polls right now (wait until the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question, which shows the composition of the sample: 37% independent, 33% Democrats, 23% Republicans, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents, you can try to glean something from it.

I don't know whether there's a concerted effort to kick out biased polls; I just know that there has been a systemic undersampling of Republicans in polls. I highlighted the WashPo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to Democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple of weeks saying that the number of registered Republican voters has outgrown registered Democrats. Plus, it has always been the case that registered Republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said about removing biases introduced by the people you choose to poll.

However, ~1000 people is too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states of about 76 million.

I take it that they use a lot of assumptions to spice up the statistical value of the test, but exactly these assumptions are what kills the credibility, as KwarK suggests.

In other words: The poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of X/n, where X is Hypergeometric. This variance is (p(1-p)/n)*((N-n)/(N-1)); the latter factor is called the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt((p(1-p)/n)*((N-n)/(N-1))). From this it's obvious that if N is large (like 311.5M) and the sampling fraction f = n/N is tiny, the finite population correction (N-n)/(N-1) is virtually 1, so the standard error is essentially sqrt(p(1-p)/n): the size of the sample compared to the population makes virtually no difference.

If you don't believe my algebra, here's a numerical example.
In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5. If we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is 1.58106%. For the US, with a population of N = 311.6 million, the standard error is 1.58113%, virtually unchanged.
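
A minimal Python sketch of this calculation (illustrative only; the helper name poll_standard_error is just for this example), using the hypergeometric variance above:

from math import sqrt

def poll_standard_error(p, n, N):
    # Exact standard error of a sample proportion drawn without replacement:
    # sqrt((p(1-p)/n) * ((N-n)/(N-1))), i.e. the simple-random-sampling formula
    # times the finite population correction, which is ~1 when n << N.
    fpc = (N - n) / (N - 1)
    return sqrt(p * (1 - p) / n * fpc)

p, n = 0.5, 1000
print(poll_standard_error(p, n, 9_700_000))    # North Carolina: ~0.015811 (~1.581%)
print(poll_standard_error(p, n, 311_600_000))  # United States: ~0.015811 (virtually identical)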

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore fewer distinct population groups with a real effect on the result of the election.
Repeat before me
KwarK
Profile Blog Joined July 2006
United States, 43104 Posts
September 11 2012 17:33 GMT
#9742
On September 12 2012 02:01 NonCorporeal wrote:
I just wanted to say that I am sorry if anyone was offended by my comments about Europe. It was never my intention to insult our European members; I just felt that a few members, whose names shall not be mentioned, were "harassing" me, and as typically happens in heated debates like this, I did what most people tend to do: retaliate. Hopefully we can put yesterday's disagreements aside and all work on being more civil in our debates and discussions in the future.

Anyway, let's get back on topic, shall we?

On September 11 2012 08:25 Savio wrote:
On September 11 2012 08:24 ziggurat wrote:
On September 11 2012 06:53 KwarK wrote:
Your right to a tool that allows you to more easily kill people is far more important than your right for social acceptance of your love?

How can you possibly have a "right to social acceptance"? Wouldn't that basically mean that you have a right to have others agree with your views?

Conservatives generally don't agree with these "rights" that require others to do something for you. So a "right" to get health care, or a "right" to an education are not consistent with typical conservative values.



Calling those things "rights" really cheapens the concept of a right until it finally just means "something good that I want". I would prefer that we keep the 2 separate.

Indeed, I'd say that is the number one problem with many people today: they think they are entitled to everything when they haven't done anything to earn it.

Do you believe that applies to the issue being quoted here? Do you believe you have the right to get married to a woman if you both wish to do so and are consenting? If so, what have you done to earn that right that you think you are entitled to?

I know you want to make a general "lazy people just want things given to them" point, but regarding the issue at hand, what do straight people do to earn the right to get married that gay people don't do? Or is asking you that too much like nailing you down on specifics?
Moderator | The angels have the phone box
xDaunt
Profile Joined March 2010
United States, 17988 Posts
September 11 2012 17:34 GMT
#9743
On September 12 2012 02:30 radiatoren wrote:
No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore fewer distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.
paralleluniverse
Profile Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:40:35
September 11 2012 17:36 GMT
#9744
On September 12 2012 02:30 radiatoren wrote:
No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore fewer distinct population groups with a real effect on the result of the election.

No that's not what you said.

What you said is:
However, ~1000 people is too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states of about 76 million.

In other words: The poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

1000 is more than enough for a population of 9 million. And continuing to use a sample of 1000 for a population of 300 million makes virtually 0 difference to the accuracy of the poll compared to the case of 9 million.

Accuracy of poll where 1000 sampled out of 9 million = Accuracy of poll where 1000 sampled out of 300 million.

If 1000 is as you say "decent" for NC, it is equally "decent" for all of the US.

And what has "homogeneity" (whatever you mean by this) and bias got to do with anything? The sample proportion X/n is an unbiased estimator, so to show bias you would need to show that their sampling scheme (probably a telephone book) is biased in an appreciable way with respect to the proportion being estimated.
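
To see that claim concretely, here is a small illustrative simulation (pure Python standard library; the population sizes and p = 0.5 are just the numbers from the discussion above): drawing 1000 voters without replacement from 9 million behaves essentially the same as drawing from 300 million.

import random

random.seed(0)
n, p, trials = 1000, 0.5, 2000

for N in (9_000_000, 300_000_000):
    supporters = int(p * N)                    # voters 0 .. supporters-1 back the candidate
    estimates = []
    for _ in range(trials):
        sample = random.sample(range(N), n)    # simple random sample without replacement
        estimates.append(sum(i < supporters for i in sample) / n)
    mean = sum(estimates) / trials
    sd = (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5
    print(N, round(mean, 4), round(sd, 4))     # mean ~0.50 (unbiased), sd ~0.016 for both N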
radiatoren
Profile Blog Joined March 2010
Denmark, 1907 Posts
September 11 2012 17:42 GMT
#9745
On September 12 2012 02:34 xDaunt wrote:
My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

Adjustments or assumptions. I believe they are taking selected random people to combat the problem. But the adjustment made is different for each poll provider, making it more or less useless to compare data from two different providers of polls.
Repeat before me
paralleluniverse
Profile Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:50:15
September 11 2012 17:44 GMT
#9746
On September 12 2012 02:34 xDaunt wrote:
My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), where the weights given to the responses are adjusted to make the sample representative of some known characteristic of the population, then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.

Also, as far as I'm aware, polls don't do this. BLS surveys do.
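
For a concrete picture of what such weighting does, here is a toy Python sketch of post-stratification (the cells, counts, and population shares below are hypothetical, purely for illustration): each party-ID cell is reweighted so its weighted share matches an assumed known population share, and support is then re-estimated with those weights.

# Hypothetical poll: (party ID, supports candidate A, number of respondents)
sample = [
    ("D", True, 300), ("D", False, 30),
    ("R", True, 40),  ("R", False, 190),
    ("I", True, 150), ("I", False, 220),
]
# Assumed "known" population shares for the weighting cells (illustrative numbers only).
population_share = {"D": 0.33, "R": 0.30, "I": 0.37}

n_total = sum(count for _, _, count in sample)
cell_size = {pid: sum(c for p, _, c in sample if p == pid) for pid in population_share}

# Post-stratification weight for a cell = population share / sample share of that cell.
weight = {pid: population_share[pid] / (cell_size[pid] / n_total) for pid in population_share}

raw = sum(c for _, support, c in sample if support) / n_total
adjusted = (sum(weight[p] * c for p, support, c in sample if support)
            / sum(weight[p] * c for p, _, c in sample))
print(round(raw, 3), round(adjusted, 3))  # unweighted vs. post-stratified support estimate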
xDaunt
Profile Joined March 2010
United States, 17988 Posts
September 11 2012 17:48 GMT
#9747
On September 12 2012 02:44 paralleluniverse wrote:
If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.

I don't know what it's called. It's been ten years since I last opened a stats-for-econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is that these pollsters are doing is likely not producing accurate results.
radiatoren
Profile Blog Joined March 2010
Denmark, 1907 Posts
September 11 2012 17:51 GMT
#9748
On September 12 2012 02:36 paralleluniverse wrote:
No that's not what you said.

What you said is:
However, ~1000 people is too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states of about 76 million.

In other words: The poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

1000 is more than enough for a population of 9 million. And continuing to use a sample of 1000 for a population of 300 million makes virtually 0 difference to the accuracy of the poll compared to the case of 9 million.

Accuracy of poll where 1000 sampled out of 9 million = Accuracy of poll where 1000 sampled out of 300 million.

If 1000 is as you say "decent" for NC, it is equally "decent" for all of the US.

And what has "homogeneity" (whatever you mean by this) and bias got to do with anything? The sample proportion X/n is an unbiased estimator, so to show bias you would need to show that their sampling scheme (probably a telephone book) is biased in an appreciable way with respect to the proportion being estimated.

We are not talking about statistical bias. You are correct on the math. I am trying to say that the data gets "adjusted" by the poll provider. Skin colour/sex/age/job/religion/place of birth influence how people vote.
Repeat before me
paralleluniverse
Profile Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:53:51
September 11 2012 17:53 GMT
#9749
On September 12 2012 02:48 xDaunt wrote:
I don't know what it's called. It's been ten years since I last opened a stats-for-econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is that these pollsters are doing is likely not producing accurate results.

I'm pretty sure you're talking about post stratification. And I'm pretty sure that pollsters don't do this, because it's complicated.

The BLS does it: http://bls.gov/osmr/pdf/st930500.pdf

But even if they did, it's not invalid. It generally makes estimates more accurate, not less.
paralleluniverse
Profile Joined July 2010
4065 Posts
Last Edited: 2012-09-11 17:56:56
September 11 2012 17:55 GMT
#9750
On September 12 2012 02:51 radiatoren wrote:
We are not talking about statistical bias. You are correct on the math. I am trying to say that the data gets "adjusted" by the poll provider. Skin colour/sex/age/job/religion/place of birth influence how people vote.

I'm yet to be convinced that pollsters do this.

But as I've said, this is post stratification and it's perfectly valid. Here's what the BLS says about it.

Post-stratification estimation is a technique used in sample surveys to improve efficiency of estimators. Survey weights are adjusted to force the estimated numbers of units in each of a set of estimation cells to be equal to known population totals. The resulting weights are then used in forming estimates of means or totals of variables collected in the survey. For example, in a household survey the estimation cells may be based on age/race/sex categories of individuals and the known totals may come from the most recent population census. Although the variance of a post-stratified estimator can be computed over all possible sample configurations, inferences made conditionally on the achieved sample configuration are desirable. Theory and a simulation study using data from the U.S. Current Population Survey are presented to study both the conditional bias and variance of the post-stratified estimator of a total. The linearization, balanced repeated replication, and jackknife variance estimators are also examined to determine whether they appropriately estimate the conditional variance.

Source:
http://bls.gov/osmr/pdf/st930500.pdf
NonCorporeal
Profile Joined August 2012
United States, 106 Posts
September 11 2012 18:28 GMT
#9751
On September 12 2012 02:33 KwarK wrote:
Do you believe that applies to the issue being quoted here? Do you believe you have the right to get married to a woman if you both wish to do so and are consenting? If so, what have you done to earn that right that you think you are entitled to?

I know you want to make a general "lazy people just want things given to them" point, but regarding the issue at hand, what do straight people do to earn the right to get married that gay people don't do? Or is asking you that too much like nailing you down on specifics?


No, the government shouldn't be able to stop two consenting adults from getting married. That wasn't what Savio was referring to, though. If I'm not mistaken, Savio was referring to the idea that people have a "right" to not be offended; hence he said "how can you possibly have a right to social acceptance," which is indeed ridiculous and goes against the entire idea of freedom of speech. I was then expanding upon ziggurat's statement by bringing the entitlement crowd into the picture.
NonCorporeal
Profile Joined August 2012
United States, 106 Posts
September 11 2012 18:32 GMT
#9752
On September 12 2012 02:34 xDaunt wrote:
Show nested quote +
On September 12 2012 02:30 radiatoren wrote:
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents you can try and glean something from it.

I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What a good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple weeks that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said about removing biases introduced by the people you choose to poll.

However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants or even only counting swing states of about 76 millions.

I take it, that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what is killing the credibility like KwarK suggests

In other words: The poll is invalid from the get go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of a X/n, where X is Hypergeometric. This variance is then (p(1-p)/n)*((N-n)/(N-1)), the latter factor is call the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt(p(1-p)*(f-1)/(N-1)), where f = n/N is the sampling fraction. From this, it's obvious that if N is large (like 311.5M), the factor (f-1)/(N-1) is small (virtually 0), so that the size of the sample compared to the population makes virtually no difference to the standard error of the poll.

If you don't believe my algebra here's a numerical example.
In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5, we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is then 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged.

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore fewer distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

I imagine if that were the case, they wouldn't ask 10% more Democrats than Republicans; that doesn't add up, especially since America is a country with twice as many conservatives as liberals.

User was temp banned for this post.
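As a quick sanity check on the numbers quoted above, here is a minimal sketch in Python of the standard-error calculation with the finite population correction (the function name and rounding in the comments are just for illustration):

from math import sqrt

def poll_standard_error(p: float, n: int, N: int) -> float:
    # Standard error of a sample proportion: n respondents drawn without
    # replacement from a population of N with true proportion p.
    fpc = (N - n) / (N - 1)  # finite population correction, ~1 when n << N
    return sqrt(p * (1 - p) / n * fpc)

p, n = 0.5, 1000
print(poll_standard_error(p, n, 9_700_000))    # ~0.0158106 (North Carolina)
print(poll_standard_error(p, n, 311_600_000))  # ~0.0158114 (whole US), virtually identical

The point of the example is that the correction factor barely moves when N goes from 9.7 million to 311.6 million, so the sample size n, not the population size N, is what drives the margin of error.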
sc2superfan101
Profile Blog Joined February 2012
3583 Posts
September 11 2012 18:32 GMT
#9753
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)
My fake plants died because I did not pretend to water them.
NonCorporeal
Profile Joined August 2012
United States106 Posts
September 11 2012 18:35 GMT
#9754
On September 12 2012 03:32 sc2superfan101 wrote:
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)

Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?
Signet
Profile Joined March 2007
United States1718 Posts
September 11 2012 18:42 GMT
#9755
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents, you can try to glean something from it.

I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple of weeks saying that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

Pollsters don't have access to a person's actual party registration, as far as I know. Anecdotally, far more of the Republicans I know self-identify as Independents than the Democrats I know do.

If they include lean-R and lean-D in with the self-identified partisans, and it totals to about 45% of each, then that's fairly accurate.
sc2superfan101
Profile Blog Joined February 2012
3583 Posts
September 11 2012 18:51 GMT
#9756
On September 12 2012 03:35 NonCorporeal wrote:
On September 12 2012 03:32 sc2superfan101 wrote:
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)

Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?

Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians that try to enact those laws.

also, for whatever reason, Catholics are more likely to support gay-marriage and abortion than other Christians... God, but i hope that changes soon...
My fake plants died because I did not pretend to water them.
aksfjh
Profile Joined November 2010
United States4853 Posts
September 11 2012 18:55 GMT
#9757
On September 12 2012 03:32 NonCorporeal wrote:
On September 12 2012 02:34 xDaunt wrote:
On September 12 2012 02:30 radiatoren wrote:
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents, you can try to glean something from it.

I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple of weeks saying that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said about removing biases introduced by the people you choose to poll.

However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states with about 76 million.

I take it that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what kills the credibility, as KwarK suggests.

In other words: The poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of X/n, where X is Hypergeometric. This variance is (p(1-p)/n)*((N-n)/(N-1)); the latter factor is called the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt((p(1-p)/n)*((N-n)/(N-1))), which is approximately sqrt(p(1-p)/n)*sqrt(1-f), where f = n/N is the sampling fraction. From this, it's obvious that when n = 1000 and N is large (like 311.5M), f is virtually 0 and the correction factor is virtually 1, so the size of the population makes virtually no difference to the standard error of the poll; what matters is the sample size n.

If you don't believe my algebra, here's a numerical example.
In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5. If we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged.

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore fewer distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

I imagine if that were the case, they wouldn't ask 10% more Democrats than Republicans; that doesn't add up, especially since America is a country with twice as many conservatives as liberals.

I'm always puzzled by the huge number of conservative news organizations and people willing to use those opinion-laden "stories" as evidence of some nonexistent or overblown trend.
farvacola
Profile Blog Joined January 2011
United States18833 Posts
Last Edited: 2012-09-11 18:56:16
September 11 2012 18:55 GMT
#9758
On September 12 2012 03:51 sc2superfan101 wrote:
On September 12 2012 03:35 NonCorporeal wrote:
On September 12 2012 03:32 sc2superfan101 wrote:
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)

Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?

Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians that try to enact those laws.

also, for whatever reason, Catholics are more likely to support gay-marriage and abortion than other Christians... God, but i hope that changes soon...

No, this is totally wrong. From http://en.wikipedia.org/wiki/Opposition_to_legal_abortion
Before the Roe v. Wade decision, the right-to-life movement in the U.S. consisted of lawyers, politicians, and doctors, almost all of whom were Catholic. The only coordinated opposition to abortion during the early 1970s came from the United States Conference of Catholic Bishops and the Family Life Bureau, also a Catholic organization. Mobilization of a wide-scale pro-life movement among Catholics began quickly after the Roe v. Wade decision with the creation of the National Right to Life Committee (NRLC). The NRLC also organized non-Catholics, eventually becoming the largest pro-life organization in the United States. Connie Paige has been quoted as having said that, "[t]he Roman Catholic Church created the right-to-life movement. Without the church, the movement would not exist as such today."[15]


Much of the pro-life movement in the United States and around the world finds support in the Roman Catholic Church, Christian right, the Lutheran Church-Missouri Synod and the Wisconsin Evangelical Lutheran Synod, the Church of England, the Anglican Church in North America, the Eastern Orthodox Church, and The Church of Jesus Christ of Latter-day Saints (LDS).[31][32][33][34] However, the pro-life teachings of these denominations vary considerably. The Eastern Orthodox Church and Roman Catholic Church consider abortion to be immoral in all cases, but permit acts[citation needed] which indirectly result in the death of the fetus in the case where the mother's life is threatened. In Pope John Paul II's Letter to Families he simply stated the Roman Catholic Church's view on abortion and euthanasia: "Laws which legitimize the direct killing of innocent human beings through abortion or euthanasia are in complete opposition to the inviolable right to life proper to every individual; they thus deny the equality of everyone before the law."
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
Derez
Profile Blog Joined January 2011
Netherlands6068 Posts
Last Edited: 2012-09-11 18:58:57
September 11 2012 18:56 GMT
#9759
On September 12 2012 02:53 paralleluniverse wrote:
On September 12 2012 02:48 xDaunt wrote:
On September 12 2012 02:44 paralleluniverse wrote:
On September 12 2012 02:34 xDaunt wrote:
On September 12 2012 02:30 radiatoren wrote:
On September 12 2012 02:20 paralleluniverse wrote:
On September 12 2012 01:53 radiatoren wrote:
On September 12 2012 00:41 xDaunt wrote:
On September 12 2012 00:35 KwarK wrote:
On September 12 2012 00:16 xDaunt wrote:
Just as a followup to the conversation about polling bias and why I wouldn't trust the polls right now (wait til the two weeks before the election), look at this WashPo/ABC poll. It shows a 49-48 split in favor of Obama. However, go look at the very last question that shows the composition of the sample: 37% independent, 33% democrats, 23% republican, 4% other, 3% don't know.

I've always assumed that the purpose of polls was to try and propagate the idea that pollsters have a useful profession and should continue getting employment. Pay them and they'll give you a poll that shows anything. Doesn't in any way surprise me.
Still, assuming that you understand their methodology and, in this case, their selection of respondents, you can try to glean something from it.

I don't know whether there's a concerted effort to kick out biased polls, I just know that there has been a systemic undersampling of republicans in polls. I highlighted the Washpo poll as a particularly egregious example. What good is a poll that gives a 10% sampling bias to democrats? I'm trying to find it, but I'm pretty sure that I just saw an article in the past couple of weeks saying that the number of registered republican voters has outgrown registered democrats. Plus, it has always been the case that registered republicans are more likely to vote, which is why polls of "likely voters" are more accurate than those of "registered voters."

There is something to be said about removing biases introduced by the people you choose to poll.

However, ~1000 people are too small a sample to carry any significance in itself for a country with 315 million inhabitants, or even counting only the swing states with about 76 million.

I take it that they use a lot of assumptions to spice up the statistical values of the test, but exactly these assumptions are what kills the credibility, as KwarK suggests.

In other words: The poll is invalid from the get-go due to too few participants. Had it been for a single state, like North Carolina, 1000 would be a decent poll, but that is not the case here.

The size of the population makes no difference.

I use the notation that Wikipedia uses: https://en.wikipedia.org/wiki/Hypergeometric_distribution

The exact standard error is found by considering the variance of X/n, where X is Hypergeometric. This variance is (p(1-p)/n)*((N-n)/(N-1)); the latter factor is called the finite population correction and is often dropped for simplicity because it makes little difference.

The standard error of a poll is then sqrt((p(1-p)/n)*((N-n)/(N-1))), which is approximately sqrt(p(1-p)/n)*sqrt(1-f), where f = n/N is the sampling fraction. From this, it's obvious that when n = 1000 and N is large (like 311.5M), f is virtually 0 and the correction factor is virtually 1, so the size of the population makes virtually no difference to the standard error of the poll; what matters is the sample size n.

If you don't believe my algebra, here's a numerical example.
In a poll about who you're voting for, the proportion of people supporting a candidate is close to 50%, so take p = 0.5. If we sample n = 1000 people from a population of N = 9.7 million (the population of North Carolina), the standard error is 1.58106%. For the US, the population is N = 311.6 million, and the standard error is 1.58113%, virtually unchanged.

No, I am not talking about the statistical effect in itself. I am saying that the biases from the people you choose will get drowned out. With a smaller population, you get a more homogeneous population and therefore fewer distinct population groups with a real effect on the result of the election.

My understanding of the sample selection process is that the pollsters tinker with the sample to create a sample that is representative of the population. These aren't just straight surveys that are taken of random people. There are a lot of adjustments involved.

If you're talking about post stratification (http://www.stat.columbia.edu/~gelman/research/published/weightingfinal.pdf), then that is a well-known, well-established, and uncontroversial statistical technique that significantly reduces the standard error for a small increase in bias.

I don't know what it's called. It's been ten years since I have opened a stats for econometrics book. However, I am aware that it is acceptable to manipulate the sample to better reflect the general population. I'm just commenting that whatever it is that these pollsters are doing is likely not producing accurate results.

I'm pretty sure you're talking about post stratification. And I'm pretty sure that pollsters don't do this, because it's complicated.

The BLS does it: http://bls.gov/osmr/pdf/st930500.pdf

But even if they did, it's not invalid. It generally makes estimates more accurate, not less.

Most major polling firms in the US use it, correcting for ethnicity, age and gender, mainly to account for voters who are notoriously hard to reach. For example, Gallup does the following:

Samples are weighted by gender, age, race, Hispanic ethnicity, education, region, adults in the household, and phone status (cell phone only/landline only/both, cell phone mostly, and having an unlisted landline number). Demographic weighting targets are based on the March 2011 Current Population Survey figures for the aged 18 and older non-institutionalized population living in U.S. telephone households. All reported margins of sampling error include the computed design effects for weighting and sample design.


That said, I agree with your position on polling overall, and weighting samples is not problematic if you account for it correctly. Obviously some polls are outliers, but overall the general trend in polls can be very telling. Additionally, Nate Silver over at 538 has what seems like a very solid prediction model that incorporates all somewhat reliable polls, and it did a good job last time around too.
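As a rough, hypothetical illustration of the kind of cell weighting Gallup describes (invented age cells, targets and answers, not any firm's actual procedure), the adjustment amounts to something like this Python sketch:

# Post-stratification weighting on one variable (age group), with made-up data.
sample = [  # (age_group, supports_candidate_A)
    ("18-29", True), ("18-29", True),
    ("30-64", True), ("30-64", False), ("30-64", False), ("30-64", False),
    ("65+", False), ("65+", False), ("65+", True), ("65+", False),
]

# Assumed population targets for each cell (e.g. from census figures).
population_share = {"18-29": 0.22, "30-64": 0.58, "65+": 0.20}

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n for g in population_share}
weight = {g: population_share[g] / sample_share[g] for g in population_share}  # up-weight rare cells

unweighted = sum(1 for _, a in sample if a) / n
weighted = sum(weight[g] for g, a in sample if a) / sum(weight[g] for g, _ in sample)
print(f"unweighted support: {unweighted:.3f}, weighted support: {weighted:.3f}")

Real polls weight on many variables at once (often by raking rather than a single table like this), but the effect is the same: respondents from under-represented groups count for more, and those from over-represented groups count for less.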
Jaaaaasper
Profile Blog Joined April 2012
United States10225 Posts
September 11 2012 18:57 GMT
#9760
On September 12 2012 03:51 sc2superfan101 wrote:
On September 12 2012 03:35 NonCorporeal wrote:
On September 12 2012 03:32 sc2superfan101 wrote:
gotta say, i think Catholics might break right this time and help put Romney over the top. would be nice to see my fellow Catholics finally start following the gdamn teachings of the Church (abortion, gay marriage, religious freedom, etc.)

Haven't Catholics traditionally been opposed to such things (abortion & gay marriage)? How would this election be any different?

Catholics are traditionally opposed to those things, but then they traditionally go out and support the politicians that try to enact those laws.

also, for whatever reason, Catholics are more likely to support gay-marriage and abortion than other Christians... God, but i hope that changes soon...

There are two types of Catholics: a fairly liberal, fairly tolerant left wing, and a more conservative wing. Biden and Ryan are both Catholics, just from opposite wings of the church.
Hey do you want to hear a joke? Chinese production value. | I thought he had a aegis- Ayesee | When did 7ing mad last have a good game, 2012?