[Psychology] Overconfidence - Page 9

Daigomi (South Africa, joined May 2006, 4316 posts)
August 04 2009 08:20 GMT
#161
I'm not necessarily going to be right, but I am 90% confident that I am right with that range.
On A) you know what the odds are beforehand. The die has 6 sides, so betting on one side is a 1-in-6 chance. That is an objective assumption. You calculated that, so you're safe. You can picture exactly in your head what 16.6% is. But on B), when you say "90% confident that I am right", where does that number come from? You cannot calculate what "90% confidence" is. You don't have the variables for that. So you guessed a range that gives you a warm and fuzzy feeling inside and sounds 90%-ish.
I get what you are saying here, but it doesn't really make sense. 90% confidence means I am willing to accept a 9:1 bet on it. The whole point of the exercise is to show that people who are willing to accept 90% bets should really only be accepting 70% or 50% bets. Now, you can say that individuals change on a day-to-day basis, so one day they will be perfectly right while the next day they will get everything wrong, and that might be true. But the fact is that this test is measured over thousands of people, and randomization will remove individual differences. So in your highly exaggerated example, the guy with the perfect understanding won't get exactly 8376 answers right (that's what standard deviation is about), but if you had thousands of people like him, their mean should be fairly close to 8376. Furthermore, the point of this test is not to show that some people have a 70% confidence interval while others have a 40% confidence interval. It is to show that people who should get 8376 questions right tend to get 3000 questions right, or that people who should get 9 questions right tend to get 3-7 questions right.
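(As an aside, both claims can be made concrete. The bet framing: if you are right with probability p, laying 9:1 breaks even exactly at p = 0.9, since 0.9 x 1 - 0.1 x 9 = 0. And here is a minimal simulation sketch of the averaging point, under an assumed toy model where each "90% confident" range actually contains the true answer with some lower hit rate. Individual scores scatter, but the mean over thousands of people settles near 10 times the true hit rate rather than near 9. The model and numbers are purely illustrative, not the authors' actual experiment.)

```python
# Toy sketch (assumed model): every participant gives 10 ranges they call
# "90% confident", but the true answer only lands inside each range with
# probability `true_hit_rate`.
import random

def participant_score(n_questions=10, true_hit_rate=0.5):
    """How many of this participant's ranges contain the true answer."""
    return sum(random.random() < true_hit_rate for _ in range(n_questions))

def group_mean(n_people=10_000, true_hit_rate=0.5):
    """Average score across a large group of such participants."""
    scores = [participant_score(true_hit_rate=true_hit_rate) for _ in range(n_people)]
    return sum(scores) / len(scores)

# Individual scores vary a fair amount, but the mean over thousands of people
# comes out near n_questions * true_hit_rate (about 5.0 here), not the 9.0
# that genuine 90% confidence would imply.
print(group_mean())
```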

Of course this is a big exaggeration, but it shows you the two points where I disagree with the test's method. There are two axioms built into the start of the test that I disagree with:
1) The test subject can calculate off the top of his head what 90% of an unknown value is.
2) The test subject can provide a reliable confidence range off the top of his head to an unknown answer.

I think you can discard the unknown part of both those questions, as it is irrelevant. Firstly, none of those values are presented to you in a void. You have some knowledge about all of them, so you have a starting point. How old was MLK when he died? Most people die between the ages of 0 and 110, so you know where to start. From there you can estimate how old he was when he was still alive... at least 25. And from there you can decide on what number you would be willing to take for the upper limit so that you would accept a 9:1 bet on the range. Secondly, and more importantly, not having knowledge about it shouldn't matter, as the point of the exercise is to choose a limit at which you are confident that you will be right, so even if you have no knowledge of the answer, you should still be able to choose a limit at which you are confident that you are right. For example, I have no idea what the shortest distance from Earth to the next galaxy is; in fact, I don't even have a starting point. Yet I can still say that I am 90% confident that the next galaxy is between 10 light years and 10,000,000,000 light years away. I didn't just type out a big number either; based on no information whatsoever, I feel that the next galaxy should not be more than ten billion light years away. Even if I get asked 10 questions like that, of which I have no knowledge, I should be able to provide intervals with which I feel confident.

The problem with questions like that, obviously, is that people might vastly underestimate or overestimate the phenomenon if they have no basis of knowledge, which is why we are given questions that we know a little bit about. We know how old most people get, and we have seen photos of MLK, so we have a basis for our estimates. With the elephant question, we know how long human pregnancies last, so we use that as our basis. It is unlikely that anyone will say 300 years, or 2 days, for that question, because they have a basic estimate. So people aren't likely to get the ballpark completely wrong. Thus I think you can leave the unknown quantity out of your axioms, as firstly, it should not be relevant, and secondly, the questions are designed so that people are not without any knowledge.

That leaves us with two axioms:
1) The test subject can calculate what 90% is.
2) The test subject can provide a reliable confidence range.

I think we can both agree that everyone knows what 90% is theoretically (9 times out of 10). If they don't know what it is practically, then that's exactly what the test is trying to show. This is not a general test of overconfidence, it's a test of decision-making overconfidence, and if people theoretically know what 90% confidence is but can't apply it practically, then it shows that people assign 90% confidence to decisions in real life that don't deserve it - exactly what this test is trying to prove.

The second axiom has two elements to it: that participants know what a confidence range is, and that they can provide a reliable confidence range. I think the confidence range is explained reasonably well (for those who can and do read) in the test itself, and in practical applications of this test it would likely be explained again. So, can people provide a reliable answer? You seem to imply that they can't, that people pick figures out of the blue. Here's an example: the computer you are using now is probably not brand new, so how much would I need to pay you for it? $2,000? How about tomorrow, when nothing has changed except the memories in your head and your breakfast? $1,500 now? Then the next day you have a bad day, so $4,000, right? People don't work like that. Yes, values might change: on a good day you might ask me $1,900 for it, and on a bad day you might ask me $2,100 for it. But that doesn't mean that you do not make a logical, systematic decision.

Furthermore, the test is averaged over lots of people, as said earlier, so individual differences shouldn't have an impact. The only way in which individual differences can be a threat to validity is if they change systematically across the participants. For example, if this test were done at a school on the day after prom, the elation of the night before might make people more optimistic than normal, leading to more overconfidence. However, if the test is done in a normal situation, then the ratio of positive vs. negative people should be the same as usual.

Yes, reliability won't be perfect; yes, you're not unequivocally proving that people tend to be overconfident in their decisions; and some people will get different results based on situational factors. That's part of all research in the social sciences. That's why we don't work with causal factors; we work with correlations.

In this case, the theory passed the test. If the theory is right, then a very low number of people should get 9/10 answers correct, and according to the OP, only 1% did. But that doesn't mean the test is correct. I'm pretty sure that if you had asked the test subjects to roll a die of a random size 10 times instead of asking those 10 questions, the results would be very similar. Does that mean rolling a die effectively measures how confident one can be? I think not.

You have to actually substantiate what you think could be confounding the variables. The test says "choose a range with which you are 90% confident" and then finds that most people get the answer right only 50% of the time. It specifically asks them to give their confidence level, and then it proves that their confidence is unfounded. I do not see how this is comparable to the results of rolling a die.

Actually, I'm pretty sure that if you repeated the same test with the same person, but with completely different questions, on completely different days and at completely different times, the results from the first and the second test would vary more often than not. That would objectively disprove the test, I think.

All that would prove is that the test has high variance or low reliability ("The reliability of a measurement procedure is the stability or consistency of a measurement. If the same individuals are measured under the same conditions, a reliable measurement procedure will produce identical (or nearly identical) measurements."). This would be relevant if confidence were relatively fixed, like IQ. You can't have an IQ test that says a person has a 150 IQ on day 1 and an 80 IQ on day 2. However, do people's confidence levels change depending on the day? If yes, then reliability isn't important to this test, and results are expected to vary. As mentioned earlier, the only risk then would be a systematic variance in confidence levels.
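(For what it's worth, a minimal sketch of how that kind of test-retest reliability is usually quantified: the correlation between the scores the same individuals get on two administrations. The score lists below are made up purely for illustration.)

```python
# Sketch only: test-retest reliability as the Pearson correlation between
# two administrations of the quiz to the same individuals.
from statistics import correlation  # available in Python 3.10+

day1_scores = [5, 7, 4, 6, 8, 3, 5, 6]  # hypothetical scores out of 10
day2_scores = [6, 7, 3, 5, 8, 4, 5, 7]  # same people, a different day

# A value near 1.0 means the measure is stable across days; a low value is
# exactly the day-to-day variation being discussed above.
print(correlation(day1_scores, day2_scores))
```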

Now, about the authors' credibility. I'm gonna say something that is not completely relevant to what we're talking about, but it's so funny that I'm gonna post it anyway:

On August 03 2009 18:10 Daigomi wrote: And just so that you know, the test was designed by Prof. Russo and Prof. Schoemaker. Russo is a prof at Cornell, and if I remember correctly, he did his BA in maths, his master's in statistics, and his PhD in cognitive psychology. Schoemaker did a BS in physics, then a master's in management, an MBA in finance, and a PhD in decision making.
I had this professor some years ago who had a PhD in statistics. He was pretty well known around here because of his veeeery unconventional style. He often bragged about all his awards from mathematics contests and olympiads (not sure what those are called in English) and how he could solve any complex trigonometry problem using only Thales and Pythagoras. Anyway, we had heard many times that he used to have serious money problems because of gambling, but for someone with a PhD in fucking statistics that sounded more like gossip. Until one day, during class, he was trying to prove that the odds of a specific sequence happening were really low. He pulled out a die he had in his pocket, asked a girl in the front row to roll it x times, and said that if the numbers matched the sequence he would pass everyone on the final exams. Well, the girl rolled the die, got the numbers right, and then he was all desperate, begging us not to tell anyone because he could get fired, and going on about how he needed the money because he had already lost so much to gambling lol

And that's how I passed statistics. Not trying to imply anything about the authors of the test, I don't know them. Just saying you should always be skeptical about anyone.

I don't really get the point of the example you give. Are you implying that he wasn't good at stats because he couldn't gamble? I've got stunning handwriting; it doesn't mean I can write novels. Or are you implying that intelligent people also make mistakes? Because from your example, it doesn't seem like he made a mistake, he just had terrible luck. If the sequence was really rare (let's say four six rolls in a row), then what he did with your class would have worked in 1295 other classes.
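(The arithmetic behind that figure, for reference: a specific sequence of four die rolls comes up with probability (1/6)^4 = 1/1296, so the stunt backfires in roughly 1 class out of 1296 and goes as planned in the other 1295.)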

I've got endless respect for professors, as I think does anyone studying post-grad. That doesn't mean they are never wrong, not at all, but it does mean that compared to a layperson, and on their topic of specialisation, they are basically never wrong. Getting a PhD in psychology is 8-10 years of studying, half of it focused on your specialisation. The requirements to become a professor change from uni to uni, but one of the general conditions is that you need to publish a set number of articles (like 6) in scientific journals every year. What that means is that the people who designed this test studied for a combined total of roughly 20 years, with half of it focused on this topic, and that they designed an average of six experiments per year, experiments that were accepted through peer review by equally knowledgeable people. What this means to me is that they probably know how to set up valid experiments in their field of specialisation, and that your arguments are more likely to come from a misunderstanding of the experiment than from them completely screwing up the experiment.

I don't mean that to sound harsh, and I'm not saying that because you didn't study in this direction your opinions should be ignored; that's why I addressed everything you said. What I am saying is that you should consider how confident you are that you are right here, then consider the odds of you actually being right, and see if the two are the same :p
moon` (United States, joined July 2009, 372 posts)
Last Edited: 2009-08-04 08:37:35
August 04 2009 08:37 GMT
#162
Man... I'm the stereotypical ignorant American. But I'm Asian. D:

0 :[