|
So, I saw quite a few threads mentioning stuff like IQ tests for politicians, or people calling others stupid with an IQ of 30, and in general an overrating of the IQ norm that doesn't take into account that it's a norm derived from transforming the raw value of a specific test, with no interpretational value whatsoever if you don't look at what that test is measuring.
I'll skip the history of the IQ norm except for telling you that the name isn't even appropriate anymore in a literal sense. Optional for the interested: The original IQ by Stern was indeed a quotient of intelligence age and nominal age (IA/NA); the intelligence age depended on the number of problems solved for one's age group and below. If a kid solved as many problems as the average kid of his age, he was average as well. If a 7-year-old kid solved the problems only the average 9-year-old was able to solve, then his intelligence age was 9 and his nominal age 7:
IQ = (9/7) x 100 ≈ 129
Anyway, this method was only valid for people in development. With increasing age, the IQ diminished.
That's why Wechsler introduced the IQ as we know it today, which is nothing but a statistical expression of a value on a standardized scale.
You might know the Gaussian distribution: mean average (MA) = 0; standard deviation (SD) = 1.
Then you have IQ: MA = 100; SD = 15.
Then you have Centil: MA = 5; SD = 2.
The numbers are different, the meaning is the same, the position in the Gaussian distribution is the same. What differs is the precision the norm allows. The z-norm can assume values with two decimal places. The IQ norm can only be an integer and in practice never goes below 55 or above 145. The Centil norm can only assume integer values between 1 and 9. They are listed here in decreasing precision; which one you use depends on the test:
All you have to do is test a sufficiently large sample of people, sorted by age groups and optionally education level etc. You calculate the mean of the scores and the standard deviation, then you choose the appropriate norm depending on the type (personality vs. performance test) and accuracy of the test (this depends on the quality criteria; a very exact test allows for norms with more gradations). Choosing the wrong norm can cause the test to reflect a precision in the outcome which isn't there, or the opposite: being overly coarse when there are more possible and valid gradations of the score.
Without going further into the test creation process, which I still have to learn in detail, I'll just say that fully grasping intelligence as a thing is impossible. You can get close to a real value, but in the end, you can score high in an IQ test and still fail in your career, although that's then less likely to happen. A work sample is still the best method of predicting success in a job, followed by a group discussion.
So, whenever you refer to an IQ, rather refer to the test you got that value from. I might as well calculate an IQ from your income, which is also a good way to tell how successful you are, should income be your main target in your job. Personally, I like the display of intelligence in this way (source: http://benking.de/2000/augmented-understanding.html ). It doesn't display the connections between the areas; as it is now, it looks like it assumes that all of the areas are independent from each other, which they surely aren't. There's still a lot to find out. For sure, being smart at one thing doesn't exclude being smart at others, as is the case for savants with their 'island talents'.
Regarding the transformation of norms into each other:
You can describe any value in a distribution with the formula
X=MA + z * SD
the standardized element in this is the 'z' cause it's always the position in a Gaussian distribution, the most formal illustration of the values a population can assume (speaking of population cause we're speaking of social sciences).
So, what about MA and SD?
Simple: every norm has a defined MA and SD. Once you choose your norm, you insert the MA and SD of that norm into the formula and you obtain the position of the tested person in the distribution.
An example: in practice, once you've tested a person, you obtain a raw value for the complete test and the values of the respective subtests. In the test manual you will find tables for different groups, but the most common differentiation is between age groups cause it's the most influential factor for intelligence (this would be a good point for discussing fluid vs. crystallized intelligence à la Cattell, but another blog pls). Anyway, you look up the norm for your raw value in such a table.
Ok, let's say I scored a raw value of 116 points in an intelligence test. I'm 23 years old and I look up the value in the respective table for the age range 21-30. It says that my score corresponds to a centil value of 7.
Since the centil scale has MA = 5 and SD = 2, one would be hard pressed to say whether that's average or above average (5 ± 2 is the average range). Before going on, let's go back to the formula from before. Since this was an intelligence test, people want to hear an IQ norm, so we're gonna transform the centil norm into that.
X being the value I just obtained from testing: X = MA + z * SD
7 = 5 + z * 2 → z = 1 (remember the Gaussian distribution? It has SD = 1)
Now back to IQ: X = 100 + z * 15 → X = 115
The IQ I obtained from testing is 115.
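This two-step transformation (norm value → z → other norm) can be sketched in a few lines of Python; the function names and the `NORMS` table are my own, not from any test manual:

```python
# Each norm is just a pair (MA, SD); the z position is what they all share.
NORMS = {"z": (0, 1), "IQ": (100, 15), "C": (5, 2)}

def to_z(value, norm):
    """Position of a norm value in the standard normal distribution."""
    ma, sd = NORMS[norm]
    return (value - ma) / sd

def convert(value, src, dst):
    """Transform a score from one norm to another via X = MA + z * SD."""
    ma, sd = NORMS[dst]
    return ma + to_z(value, src) * sd

print(convert(7, "C", "IQ"))  # 115.0 -- centil 7 -> z = 1 -> IQ 115
```

The same helper converts in any direction, e.g. `convert(115, "IQ", "z")` gives back the z position of 1.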
Now is this average or above average? Same question for the C-norm. The C-norm is above average. The IQ norm is average. Weird, isn't it? How's that?
If the C-norm of 7 weren't above average (= significant), the next value to be significant would be 8, but that would equal 8 = 5 + z * 2 → z = 1.5 → IQ = 100 + 1.5 * 15 = 122.5, which is way too high when converted into IQ (and z as well). Thus you have to treat 3 and 7 of the C-norm as significant already.
tl;dr conclusion:
IQ is not the same as intelligence. IQ is a measuring tool among several others. Saying you have an IQ of x is meaningless if you don't look at the underlying test you obtained it with. Intelligence is measured with many methods and is still a controversial subject. Expect a blog of its own about it. I don't have the books at hand right now.
|
Isn't the IQ to test the competency of someone and their overall ability to fundamentally cooperate with society and its citizens?
|
No one says IQ is anywhere near the equivalent of income; you are fighting windmills here. As a matter of fact, there are statistical studies that effectively point to high levels of IQ and education NOT being related to the highest income bracket, which basically suggests that the most intelligent people don't see money as their ultimate goal.
|
On May 25 2012 01:44 Torte de Lini wrote: Isn't the IQ to test the competency of someone and their overall ability to fundamentally cooperate with society and its citizens? Nah, that's more like EQ. IQ is just your ability to solve a certain kind of logical problems.
|
Thanks for clarifying :B I confused the two.
|
As one famous psychologist said (I forget who off the top of my head), IQ tests are an excellent way to measure something; what that something is exactly we don't know.
|
On May 25 2012 01:52 ecstatica wrote: No one says IQ is anywhere near the equivalent of income, you are fighting windmills here. As a matter of fact, there are statistical studies that effectively point to high levels of IQ and education NOT being related to the highest income bracket, which basically suggests that the most intelligent people don't see money as their ultimate goal.
Ehhhh, the whole point of the post is to illustrate that IQ is a norm, not that IQ = intelligence. I just mentioned that example to say that you can transform any value into IQ as long as you have the MA and the SD. I didn't want to say that income is directly related to intelligence, but you could use income as a base norm to calculate an IQ (which isn't necessarily a measurement of intelligence). It's a norm created to describe intelligence, but you can describe intelligence measured in a test with other norms too. Gimme a sec and I'll add the calculation of norms to the OP.
|
I personally don't believe someone's intelligence can be measured, or at least not precisely.
I was officially declared a logical-mathematical genius; if you want to know why they call it that specifically: http://en.wikipedia.org/wiki/Theory_of_multiple_intelligences (which brings forth yet another theory about intelligence, and there are a lot of those).
There are so many theories, so many "ways" to test someone's intelligence but I feel like the number they've put on my forehead regarding my "IQ" is like a curse. I don't stand out in everything I do, my capacity to reason, calculate, recognise patterns and handle logical thinking is.. apparently, very exceptional. Though, the things I've mentioned are the things a "logical-mathematical genius" is a genius in. But I have other things where I stand out just as much, do we just add that to the list and still declare me a logical-mathematical genius or do I suddenly get a new "title"? It's not that easy, really.
But because they've declared me a genius, people expect me to be.. perfect, in everything and sometimes it feels like I'm cursed, because when people find out I do NOT stand out in EVERYTHING and I'm not perfect, they're like well hey I thought you were declared a genius.. I usually keep this to myself because people usually expect you to be perfect when you're declared a genius.
So yeah, the tl;dr version of what I said comes down to this: I don't believe intelligence can be measured precisely, and the reason I've put IQ in quotes is that IQ isn't the same as intelligence.
|
I finished and expected it to end in a conclusion but it just kind of cut short...
|
Yeah, I've been delving a little deeper into intelligence tests recently for my studies. And I never realized how much the outcome of your IQ depends on the basic model of intelligence underlying the test. Basically IQ is rather meaningless.
|
Pardon my asking, but the notation seems very strange to me. Would you say that you come from/work in a technical field, a social science or in mathematics?
On May 25 2012 01:43 Cattivik wrote: "You might know the Gaussian distribution. mean average=0; Standard deviation=1"
No, you can have an arbitrary expected value and an arbitrary variance (=SD^2) for Gaussian distributions. You are talking about the unit/standard normal distribution, which is a special case. I think that the expected value for IQ is 100, right?
On May 25 2012 01:43 Cattivik wrote:You can describe any value in a ditribution with the formula
X=MA + z * SD
What is z? "Any distribution" sounds very strange, especially for multivariate and/or discrete cases.
Is your point that 115 IQ is not equal to 122.5 IQ if you transform (with quantization errors?) between two ways of measuring IQ? Or is it that you are within one (arbitrarily chosen as a measure) SD?
I honestly don't know.
|
Without going further into the test creation process, which i still have to learn in detail, I'll just say that fully grasping intelligence as a thing is impossible. You can get close to a real value, but in the end, you can score high in an IQ test and still fail in your career, although then it's more unlikely to happen.
Huh? Scoring high on an IQ test doesn't mean you are more likely to succeed... it's a correlate of intelligence. Social factors like whether one was born to wealthy parents have a larger impact on whether someone will succeed. I suggest you read Outliers if you want to learn more about what correlates to wealth.
|
@StatorFlux I'm studying psychology. Yes, I'm talking about the standard normal distribution. Forgive my terminology; it's spontaneously translated from German. And yes, the expected/average value for IQ is 100.
z is the value in a standard normal distribution. 1.65 and 1.96, for example, are the significance cut-offs for one-sided and two-sided significance checks at the 5 % level. These are the z-values you have to reach when analyzing different outcomes in order to keep the chance of the difference resulting from randomness under 5 % (for the two-sided check, 2.5 % per tail).
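Those cut-offs can be sanity-checked with the standard normal CDF, here built from `math.erf` in the Python standard library (a sketch of the tail probabilities, not a full significance test):

```python
import math

def phi(z):
    """CDF of the standard normal distribution."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# One-sided: the upper tail beyond z = 1.65 holds about 5 % of cases.
print(1 - phi(1.65))        # close to 0.05
# Two-sided: both tails beyond |z| = 1.96 together hold about 5 %.
print(2 * (1 - phi(1.96)))  # close to 0.05, i.e. 2.5 % per tail
```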
Not sure what you mean with the distributions sounding strange.
My point is that for roughly structured norms (like Centil) a score of MA ± SD already has to count as significant, while finely structured norms (like IQ) require the score to go beyond MA ± SD.
Another example: you test two persons.
Person 1 gets C = 3 (z = -1); person 2 gets C = 2 (z = -1.5).
We calculate the IQ from the values:
IQ1 = 85; IQ2 = 77.5
Ok, by definition of the IQ norm, guy 1 is average, guy 2 is below average.
If we didn't interpret C = 3 as below average, the next value counting as below average would be C = 2. But since C = 2 equals an IQ of 77.5, and an IQ of 84 is already sufficient to count as below average, we would skip 6.5 significant IQ points by not counting C = 3 as significant. Thus we have to count C = 3 as significant although IQ = 85 is not.
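The gap argument can be reproduced directly with the same X = MA + z * SD formula (the helper name is mine):

```python
def c_to_iq(c):
    """Convert a Centil score (MA = 5, SD = 2) to IQ (MA = 100, SD = 15)."""
    z = (c - 5) / 2
    return 100 + z * 15

print(c_to_iq(3))  # 85.0 -- exactly one SD below the IQ mean
print(c_to_iq(2))  # 77.5 -- the next Centil step, already 7.5 IQ points lower
```

Whatever cut-off you pick, the coarse C scale jumps over IQ values that the fine scale would still distinguish, which is the point of the example above.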
On May 25 2012 04:14 Livelovedie wrote: "Without going further into the test creation process, which i still have to learn in detail, I'll just say that fully grasping intelligence as a thing is impossible. You can get close to a real value, but in the end, you can score high in an IQ test and still fail in your career, although then it's more unlikely to happen. huh? Scoring high on an IQ test doesn't mean you are more likely to succeed... its a correlation of intelligence. Social factors like whether one was born to wealthy parents has a larger impact on if someone will succeed. I suggest you read Outliers if you want to learn more about what correlates to wealth."
Your first sentence is kinda controversial. People from families with high socio-economic status usually score higher on IQ tests, so you could see intelligence as the moderator variable here. Wealth isn't the only measurement of success, although it's probably the most used one. Studying medicine and working for Doctors Without Borders is also a valid definition of success.
On May 25 2012 04:31 hypercube wrote: I would describe your post as precise but not accurate.
An IQ score of 115 on a test (SD=15) is certainly above average in a statistically significant way. The SD isn't the error of measurement, it's a statistic of the underlying distribution.
You can guess the measurement error by looking at one person's scores on a set of highly correlated tests.
The problem with highly correlated tests is that they might as well be subsumed under a single test, since the high correlation implies that they all measure the same thing. Since they all measure the same thing, you can be sure that the error you obtain is for the most part a random error and not a systematic one. There might be a systematic error arising over time cause the person keeps doing tests which all measure the same thing: if it's a trait that can be trained, you will get a progressive increase in IQ cause of an exercise effect.
If I'm not mistaken, the main reason for the cut-off at 115 is that in the case of personality tests (IQ is considered a personality trait) the average area contains 68.2 % of the cases. Since 115 equals a percentile rank of 84 (you scored better than or the same as 84 % of the sample), it's still considered average cause you might have scored the same.
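The percentile rank of 84 for IQ 115, and the 68 % average band, can be checked with the standard normal CDF; a stdlib-only sketch (the function name is my own):

```python
import math

def percentile_rank(iq, ma=100, sd=15):
    """Percentage of the norm population scoring at or below this IQ."""
    z = (iq - ma) / sd
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(percentile_rank(115)))  # 84
print(round(percentile_rank(85)))   # 16 -- so the band 85-115 covers ~68 %
```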
But since psychological tests are probably not that precise, I don't think that arguing over a 114.999 vs. 115 problem is productive.
Also, where did I mention that SD = statistical error?
|
I would describe your post as precise but not accurate.
An IQ score of 115 on a test (SD=15) is certainly above average in a statistically significant way. The SD isn't the error of measurement, it's a statistic of the underlying distribution.
You can guess the measurement error by looking at one person's scores on a set of highly correlated tests.
|
On May 25 2012 01:43 Cattivik wrote: IQ is not the same as intelligence. IQ is a measuring tool among several others. Saying you have an IQ of x is meaningless if you don't look at the underlying test you obtained it with. Intelligence is measured with many methods and is still a controversial subject.
Lol, this is exactly what I tell people almost every time they mention IQ, because they put so much misplaced value on it. The fact is, you can train yourself to do well at IQ tests and improve by 10, 20, 30 points easily, which doesn't make you any smarter. It just makes you better at solving the kind of puzzles that measure IQ.
|
On May 25 2012 01:55 Tobberoth wrote: "On May 25 2012 01:44 Torte de Lini wrote: Isn't the IQ to test the competency of someone and their overall ability to fundamentally cooperate with society and its citizens? Nah, that's more like EQ. IQ is just your ability to solve a certain kind of logical problems." Are there any "real" EQ tests? I have seen and done a few online for the lulz but online tests have pretty much zero credibility. I can assume that if I did such a test in the way that I did my supervised IQ test the results would be pretty much the opposite.
|
On May 25 2012 04:28 Cattivik wrote: On May 25 2012 04:31 hypercube wrote: I would describe your post as precise but not accurate.
An IQ score of 115 on a test (SD=15) is certainly above average in a statistically significant way. The SD isn't the error of measurement, it's a statistic of the underlying distribution.
You can guess the measurement error by looking at one person's scores on a set of highly correlated tests. The problem with highly correlated tests is that they might as well be subsumed under a single test since the high correlation implies that they all measure the same thing. Since they all measure the same thing, you can be sure that the error you obtain is for the biggest part a random error and not a systematic one. There might be a systematic error arising over time cause the person is doing tests which all measure the same thing. If it's a trait that can be trained you will get a progressive increase in IQ cause of an exercise effect.
Yes, but that's irrelevant to the point I'm making.
If I'm not mistaken the main reason for 115 being significant is that in case of personality tests (IQ is considered a personality trait) the average area contains 68(.2) % of the cases. Since 115 equivals to a percentile rank of 84 (you scored better or same as 84 % of the sample) it's still considered average cause you might have scored the same.
But since psychological tests are probably not that precise, i don't think that arguing on a 114.999 vs 115 problem is productive.
Also, where did i mention that SD = statistical error?
You didn't, but you did say that scoring 1 SD above the mean on an IQ test is significant and scoring within 1 SD isn't. That literally doesn't make sense.
If the average height of adult males is a normal distribution with a mean 178cm and standard deviation of 12cm does it make sense to say that someone who's 187cm high isn't significantly taller than average?
And then you say stuff like this:
Since 115 equivals to a percentile rank of 84 (you scored better or same as 84 % of the sample) it's still considered average cause you might have scored the same.
Someone scoring one SD above the mean on a reliable test would almost never score 100 on another reliable test. The wikipedia article says the standard error of measurement is 3 points for most widely used tests. The idea that someone would score 15 points above their expected score is just wrong.
I don't know why they tend to use average for 85-115 and above average for 115+. Probably just a matter of word usage. But it has nothing to do with the fact whether someone scored above the mean in a statistically significant way.
|
You didn't, but you did say that scoring 1 SD above the mean on an IQ test is significant and scoring within 1 SD isn't. That literally doesn't make sense.
If the average height of adult males is a normal distribution with a mean 178cm and standard deviation of 12cm does it make sense to say that someone who's 187cm high isn't significantly taller than average?
Significant in this context means far enough above or below average to make a meaningful difference. Then there is also the significance used to determine the chance of a difference being the product of randomness; that type of significance is calculated in experiments, and the method depends on the experimental design.
Yes, scoring MA ± 1 SD is significant for roughly structured norms and not for finely structured ones, as already stated. For those, the score has to go beyond MA ± 1 SD by some amount. No, he isn't significantly taller than average, cause the average area is MA ± 1 SD. The convention is that the value has to surpass the SD in order to be interpreted as significantly above/below average.
It's basically saying: your score has to be higher or lower than the MA ± the average deviation found in the sample in order to be considered superior or inferior to that sample. Since summing up all the deviations leads to a sum of zero, you first square each deviation, then add them up and divide by the sample size: you get the variance. You extract the root from it, you get the SD.
The variance is basically a measure of the strength of deviation from the MA. Useful fact aside: by squaring correlations you can determine the percentage of variance they 'explain'.
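The chain just described (raw deviations cancel out, so square them, average, take the root) in a short sketch with made-up scores:

```python
scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical sample, mean = 5

ma = sum(scores) / len(scores)
deviations = [x - ma for x in scores]
print(sum(deviations))  # 0.0 -- raw deviations always sum to zero

# Square, average, take the root: variance and standard deviation.
variance = sum(d ** 2 for d in deviations) / len(scores)
sd = variance ** 0.5
print(variance, sd)  # 4.0 2.0
```

(This divides by n, i.e. the population variance; sample estimates divide by n - 1 instead.)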
And then you say stuff like this: "Since 115 equivals to a percentile rank of 84 (you scored better or same as 84 % of the sample) it's still considered average cause you might have scored the same." Someone scoring one SD above the mean on a reliable test would almost never score 100 on another reliable test. The wikipedia article says the standard error of measurement is 3 points for most widely used tests. The idea that someone would score 15 points above their expected score is just wrong. I don't know why they tend to use average for 85-115 and above average for 115+. Probably just a matter of word usage. But it has nothing to do with whether someone scored above the mean in a statistically significant way.
They use that area as average cause it's within one SD.
I don't know why you mention a test comparison. Also, using reliability alone to predict another test score is insufficient. Reliability is a quality criterion indicating the measuring precision of a test. It doesn't even say anything about what's being measured, while that would be a more important criterion for predicting another test score.
I will learn about the quality criteria in detail in test construction lessons, so I would like to keep them out of the discussion for now. I can think of a few methods for calculating part of the reliability right now, but what role does it even play? We're discussing cut-off values for the interpretation of scores.
|
On May 25 2012 06:18 Cattivik wrote: They use that area as average cause it's within one SD.
I don't know why you mention a test comparison. Also, using reliability alone to predict another test score is insufficient. Reliability is a quality criterion indicating the measuring precision of a test. It doesn't even say anything about what's being measured, while that would be a more important criterion for predicting another test score.
I will learn about the quality criteria in detail in test construction lessons, so I would like to keep them out of the discussion for now. I can think of a few methods for calculating part of the reliability right now, but what role does it even play? We're discussing cut-off values for the interpretation of scores.
I brought it up because you said something that's factually untrue. You said:
Since 115 equivals to a percentile rank of 84 (you scored better or same as 84 % of the sample) it's still considered average cause you might have scored the same.
If you meant that a person might score both 115 and 100 on equivalent tests, that's wrong. If you believe that happens somewhat often (and worse, that it's somehow connected to the fact that SD=15 for most tests), there's a serious hole in your understanding. If you don't trust me, ask someone who has a background in statistics.
This is completely unrelated to the discussion of what IQ tests measure or what is the effect of IQ on "life success", which are interesting questions in their own right.
|