Intelligence Quotient (IQ)
What is an Intelligence Quotient (IQ)?
The Intelligence Quotient, commonly known as IQ, is the most recognized metric in psychology for assessing human intelligence. It is not a measure of knowledge or education, but rather a gauge of cognitive potential — specifically, the ability to reason, solve problems, understand complex ideas, and learn from experience.
IQ is a statistical construct derived from standardized tests. It places an individual’s performance on a comparative scale against the general population, typically following a normal distribution (bell curve) with a mean score of 100 and a standard deviation of 15. This means an IQ of 115 is one standard deviation above average, placing someone in roughly the top 16% of the population.
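Because IQ scores are placed on a normal distribution, the percentile behind any score follows directly from the normal cumulative distribution function. The sketch below is illustrative only (published tests supply their own norm tables); it recovers the percentile from nothing beyond the mean of 100 and SD of 15 described above.

```python
from math import erf, sqrt

MEAN, SD = 100.0, 15.0

def iq_percentile(iq: float) -> float:
    """Percentage of the population expected to score at or below `iq`,
    assuming a normal distribution with mean 100 and SD 15."""
    z = (iq - MEAN) / SD
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

for score in (100, 115, 130):
    pct = iq_percentile(score)
    print(f"IQ {score}: percentile {pct:.1f}, top {100 - pct:.1f}% of the population")
# IQ 115 -> percentile ~84.1, i.e. roughly the top 16%
```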
The History: From Binet to Wechsler
The origins of IQ testing trace back to early 20th century France — and to a problem that had nothing to do with measuring genius.
- Alfred Binet (1905): Along with Theodore Simon, Binet developed the first practical intelligence test to identify French schoolchildren who needed extra educational support. Crucially, Binet himself was skeptical that his scale measured a fixed, innate quality — he saw it as a practical educational tool, not a biological measurement.
- William Stern (1912): The German psychologist who coined the term “Intelligence Quotient.” He proposed the original ratio formula:
IQ = (Mental Age / Chronological Age) × 100. A 10-year-old performing like a 12-year-old had an IQ of 120 (see the sketch after this list).
- Lewis Terman (1916): An American psychologist at Stanford who adapted Binet’s work into the Stanford-Binet Intelligence Scales and enthusiastically promoted IQ as a measure of fixed hereditary intelligence — a claim that shaped (and distorted) IQ’s cultural legacy for decades.
- David Wechsler (1939): Developed the Wechsler-Bellevue Intelligence Scale for adults, recognizing that the ratio IQ formula broke down for adults whose “mental age” stopped increasing while chronological age kept rising. Wechsler introduced the deviation IQ, the system still used today.
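To make the ratio formula and its failure mode concrete, here is a tiny worked sketch; the numbers are illustrative rather than drawn from any normed test, and the assumption that test performance plateaus in adolescence is exactly what breaks the quotient for adults.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's original ratio formula: IQ = (mental age / chronological age) x 100."""
    return 100.0 * mental_age / chronological_age

print(ratio_iq(12, 10))   # the child example above: 120.0
print(ratio_iq(16, 16))   # an adult-level performer at age 16: 100.0
print(ratio_iq(16, 40))   # the same performance at age 40: 40.0, clearly nonsensical,
                          # which is why Wechsler replaced the ratio with deviation IQ
```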
Modern Scoring: Deviation IQ
Early ratio IQ calculations worked reasonably well for children but failed systematically for adults. Modern tests — including the Wechsler Adult Intelligence Scale (WAIS-IV) and Stanford-Binet 5 — use deviation IQ.
In this system, a score is calculated from how far an individual’s performance deviates, in standard-deviation units, from the average performance of their age cohort. That standardized deviation (a z-score) is then rescaled to a distribution with mean = 100 and SD = 15:
| IQ Range | Classification | Population % |
|---|---|---|
| 130+ | Very Superior / Gifted | ~2.2% |
| 120–129 | Superior | ~6.7% |
| 110–119 | High Average | ~16.1% |
| 90–109 | Average | ~50% |
| 80–89 | Low Average | ~16.1% |
| 70–79 | Borderline | ~6.7% |
| Below 70 | Intellectual Disability | ~2.2% |
The beauty of deviation scoring is that it is age-normed — a 70-year-old scoring 100 is performing as well as the average 70-year-old, not the average 25-year-old.
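As a minimal sketch of deviation scoring (the age-band norms below are invented for illustration; real WAIS or Stanford-Binet norm tables are far more granular), a raw score is first converted to a z-score within the test-taker's own age band and then rescaled to mean 100, SD 15:

```python
# Hypothetical raw-score norms per age band: (cohort mean, cohort standard deviation).
# These numbers are invented for illustration only.
AGE_NORMS = {
    "25-29": (52.0, 8.0),
    "70-74": (40.0, 9.0),
}

def deviation_iq(raw_score: float, age_band: str) -> float:
    """Rescale standing within one's own age cohort to the IQ metric (mean 100, SD 15)."""
    cohort_mean, cohort_sd = AGE_NORMS[age_band]
    z = (raw_score - cohort_mean) / cohort_sd
    return 100.0 + 15.0 * z

# The same raw score yields different IQs in different age bands, because each
# person is compared only with age peers (the age-norming described above):
print(round(deviation_iq(40.0, "25-29")))  # below the 25-29 cohort average -> 78
print(round(deviation_iq(40.0, "70-74")))  # exactly the 70-74 cohort average -> 100
```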
What Does IQ Actually Measure?
Modern IQ tests do not measure a single capacity. They assess a battery of distinct cognitive domains that together estimate general intelligence (g) through their shared variance:
- Verbal Comprehension: Vocabulary, general knowledge, abstract verbal reasoning — primarily loading on crystallized intelligence (Gc)
- Perceptual Reasoning / Visual Spatial: Visual pattern recognition, spatial rotation, non-verbal problem solving — loading on fluid intelligence (Gf) and visual processing (Gv)
- Working Memory: Holding and mentally manipulating information (e.g., mental arithmetic, digit-span backward) — strongly linked to Gf
- Processing Speed: How quickly and accurately the brain executes simple cognitive tasks (e.g., symbol-matching, coding) — measuring neural efficiency
The composite score from these domains produces the Full Scale IQ (FSIQ), which correlates about r = 0.70–0.80 with the latent g factor. No IQ test measures g perfectly; the FSIQ is an imperfect but highly valid proxy for it.
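As a rough illustration of how a composite can be built from correlated index scores, the snippet below sums index z-scores and rescales the sum by its standard deviation. The average inter-index correlation of 0.6 is an assumed value, and actual Wechsler scoring uses normed sums of scaled subtest scores rather than this shortcut.

```python
from math import sqrt

def composite_iq(index_scores: list[float], avg_intercorrelation: float = 0.6) -> float:
    """Combine index scores (each on the mean-100, SD-15 metric) into a composite.

    The sum of k standardized indexes has variance k + k*(k-1)*r, where r is the
    average correlation between indexes; dividing by that sum's SD puts the
    composite back on the standard scale. r = 0.6 is an assumed, illustrative value.
    """
    k = len(index_scores)
    z_sum = sum((score - 100.0) / 15.0 for score in index_scores)
    sum_sd = sqrt(k + k * (k - 1) * avg_intercorrelation)
    return 100.0 + 15.0 * z_sum / sum_sd

# Four index scores of 115 produce a composite slightly above 115, because being
# one SD above the mean on every index at once is rarer than on any single index.
print(round(composite_iq([115, 115, 115, 115])))  # ~118
```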
The Validity and Impact of IQ
Decades of research have established IQ as one of the most reliable and valid predictors in the social sciences:
- Academic Achievement: IQ correlates r ≈ 0.50–0.60 with school grades and r ≈ 0.55 with years of education completed.
- Job Performance: For complex professions especially, IQ is a robust predictor of training success and on-the-job performance — Frank Schmidt and John Hunter’s 1998 meta-analysis found g to be the single best predictor of job performance across 515 occupations.
- Health and Longevity: Higher IQ is statistically associated with better health behaviors, lower mortality from cardiovascular disease, accidents, and suicide — the “cognitive epidemiology” findings of Ian Deary and colleagues.
- Income: IQ correlates about r = 0.40 with income, though this relationship is mediated by education and occupational complexity.
However, IQ is not destiny. While it constrains the range of likely outcomes, factors like conscientiousness, grit, emotional regulation, and opportunity play major roles in actualizing potential. Charles Murray’s analyses of the NLSY dataset show that within any IQ band, outcome variation remains enormous.
Criticism and Controversy
IQ is a legitimate scientific construct that also carries significant cultural and political baggage:
Legitimate scientific critiques:
- IQ tests sample only a subset of cognitive abilities. Creativity, practical intelligence, social cognition, and wisdom are not captured.
- Construct validity varies across populations. Test familiarity, stereotype threat (Claude Steele’s research), and cultural context can suppress scores independently of underlying ability.
- The gap between IQ and real-world performance widens at the extremes — very high IQ does not guarantee proportionally exceptional real-world achievement.
Political and historical misuse:
- Early 20th-century eugenicists — including Terman, who helped develop the Army Alpha and Beta tests — used IQ data to justify immigration restrictions and forced sterilization policies. This history is inseparable from IQ’s public perception.
- The 1994 book The Bell Curve by Herrnstein and Murray reignited debates about group differences in IQ scores. The American Psychological Association’s 1996 task force report “Intelligence: Knowns and Unknowns” acknowledged that group differences in average scores exist, but concluded that their causes remain scientifically unresolved and that the evidence provides no direct support for genetic explanations of those differences.
Psychometric responses: Despite these criticisms, psychometricians maintain that IQ remains among the most reliable and valid tools available for measuring individual cognitive differences. The APA report concluded that IQ tests are valid predictors of academic and job performance, and that their predictive validity is comparable across demographic groups.
IQ in Context: What It Is and Isn’t
IQ is best understood as a relative rank within an age cohort on a specific battery of cognitive tasks that correlate moderately with real-world outcomes. It is:
- A reliable predictor, not a deterministic ceiling
- A snapshot of current cognitive functioning, not a fixed biological constant
- A practical tool for educational and clinical assessment, not a comprehensive measure of human worth
The most productive use of IQ data is clinical and educational — identifying children who need enrichment or support, diagnosing learning disabilities, tracking cognitive change in aging populations, and designing evidence-based interventions. The least productive use is ranking individuals as human beings.
Conclusion: A Tool, Not a Verdict
The Intelligence Quotient, for all its controversy, remains one of the most carefully studied constructs in psychology. Over a century of research has clarified both its genuine predictive power and its real limitations. Understanding IQ means understanding both what it captures — a real and consequential dimension of cognitive variation — and what it misses: the full complexity of human potential.