
Standard Deviation

What is Standard Deviation?

Standard Deviation (SD) is the mathematical yardstick used to measure how spread out the values in a data set are. In the context of intelligence testing, it is crucial for understanding what an IQ score actually means: together with the mean, it tells us how “rare” or “common” a specific score is relative to the general population.
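
As a concrete illustration, here is a minimal Python sketch of the population standard deviation calculation; the scores and the function name are hypothetical, chosen only to show the arithmetic:

    import math

    def population_sd(values):
        """Square root of the average squared distance from the mean."""
        mean = sum(values) / len(values)
        variance = sum((x - mean) ** 2 for x in values) / len(values)
        return math.sqrt(variance)

    scores = [85, 100, 100, 115, 130, 70]   # hypothetical IQ scores
    print(round(population_sd(scores), 1))  # prints 19.4 for this sample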

The Magic Number: 15

On almost all modern IQ tests (like the WAIS-IV or the Stanford-Binet), the mean (average) score is set to 100 and the Standard Deviation to 15. This convention allows us to categorize intelligence levels with mathematical precision (a short check follows the list below):

  • 1 SD (IQ 85–115): Includes roughly 68% of the population. This is the range of “average” intelligence.
  • 2 SD (IQ 70–130): Includes roughly 95% of the population. Scoring above 130 (2 SDs above the mean) usually qualifies one for Mensa.
  • 3 SD (IQ 55–145): Includes 99.7% of the population. Scoring above 145 (3 SDs above the mean) is considered “highly gifted” or genius-level.
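
These percentages come straight from the shape of the normal curve. A short sketch, assuming a perfectly normal distribution with mean 100 and SD 15, recovers them using only the Python standard library:

    import math

    # Fraction of a standard normal distribution within k SDs of the mean:
    # P(|Z| <= k) = erf(k / sqrt(2)).
    for k in (1, 2, 3):
        share = math.erf(k / math.sqrt(2))
        low, high = 100 - 15 * k, 100 + 15 * k   # corresponding IQ band
        print(f"±{k} SD (IQ {low}-{high}): {share:.1%} of the population")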

Why It Matters

Without the standard deviation, an IQ score is just a number without context. Knowing you scored “130” is impressive only because we know that 130 is exactly two standard deviations above the mean, placing you in roughly the top 2% of the population.
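
The bridge between a raw score and a percentile is the z-score: how many standard deviations a score sits above the mean. A minimal sketch, again assuming mean 100 and SD 15:

    import math

    iq, mean, sd = 130, 100, 15
    z = (iq - mean) / sd                                  # (130 - 100) / 15 = 2.0
    percentile = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # P(Z <= z), standard normal
    print(f"IQ {iq}: z = {z:.1f}, above {percentile:.1%} of the population")

Strictly speaking, +2 SD corresponds to about the 97.7th percentile, which is why a cutoff of 130 is commonly described as the “top 2%.”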

Calculating Rarity

The further a score lies from the mean, measured in standard deviations, the rarer it becomes; the rarity grows far faster than the distance itself (a short calculation follows the list):

  • +1 SD (IQ 115): 1 in 6 people
  • +2 SD (IQ 130): 1 in 44 people
  • +3 SD (IQ 145): 1 in 740 people
  • +4 SD (IQ 160): 1 in 31,500 people
  • +5 SD (IQ 175): 1 in 3.5 million people
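
A minimal sketch, assuming a perfectly normal distribution with mean 100 and SD 15, reproduces these figures (up to rounding) from the upper-tail probability P(Z > k) = 0.5 * erfc(k / sqrt(2)):

    import math

    for k in range(1, 6):
        tail = 0.5 * math.erfc(k / math.sqrt(2))   # fraction scoring above +k SD
        print(f"+{k} SD (IQ {100 + 15 * k}): about 1 in {1 / tail:,.0f} people")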

This statistical reality explains why “true geniuses” (often defined as +4 or +5 SD) are so incredibly rare in human history.

Related Terms

  • Bell Curve
  • Mean
  • Variance
  • Z-Score