Although it has its limitations, Binet’s IQ test is well-known around the world as a way to assess and compare intelligence. It also set the stage for the development of several of the IQ tests that are still in use today.
History of Intelligence Assessments
In the late 1800s, Sir Francis Galton—the founder of differential psychology—published some of the first works about human intelligence. Galton proposed that intelligence was hereditary and that it could be tested by looking at how people performed on sensorimotor tasks. Sensorimotor tasks are exercises in which the brain receives a message and then produces a response. An example would be driving a car and recognizing that the vehicle in front of you is slowing (the receipt of a message), causing you to hit your brakes to slow down as well (a produced response). Galton also used statistics to analyze the data he collected, even when the numbers failed to confirm his beliefs. For example, although he originally thought that head shape and size were correlated with intelligence, the data did not support this notion. Other psychologists of that time had their own ideas, such as James McKeen Cattell, who proposed that simple mental tests could be used to measure intelligence. Yet it wouldn’t be until a few years later that the first IQ test was born.
Alfred Binet and the First IQ Test
Alfred Binet was a French psychologist who played an important role in the development of experimental psychology. Although he originally pursued a career in law, Binet became increasingly interested in Galton’s attempts to measure mental processes—so much so that he abandoned his law career and set out to do the same. At the time, the French government had laws requiring that all children attend school, so it was important to find a way to identify the children who would need extra help. In 1904, as part of this effort, the government asked Binet to help decide which students were most likely to experience difficulty in school. Binet and his colleague, Theodore Simon, began developing questions that focused on areas not explicitly taught in the classroom, such as attention, memory, and problem-solving skills. They then worked to determine which questions best predicted academic success. Binet and Simon ultimately came up with a test of 30 questions, such as asking about the difference between “boredom” and “weariness,” or asking the test taker to follow a moving object with just one eye. This became known as the Binet-Simon Scale and was the first recognized IQ test.
Limitations of the Binet-Simon IQ Test
The Binet-Simon Intelligence Scale (also sometimes called the Simon-Binet Scale) became the basis for the intelligence tests still in use today. However, the scale had notable limitations. Binet himself did not believe that his psychometric instruments could be used to measure a single, permanent, inborn level of intelligence. Instead, he suggested that intelligence is far too broad a concept to quantify with one number: it is influenced by many factors, changes over time, and can only be compared among children with similar backgrounds. The Binet-Simon test did not account for this complexity and so provided an incomplete measure of intelligence. Some psychologists set out to make the modifications needed to supply a more complete picture, which led to the creation of newer, more comprehensive IQ tests.
Stanford-Binet Intelligence Scale
Stanford University psychologist Lewis Terman took Binet’s original test and standardized it using a sample of American participants. Initially known as the Revised Stanford-Binet Scale, it is now more commonly called the Stanford-Binet Intelligence Scale. First published in 1916, the Stanford-Binet test adapted the original by translating French terms and ideas into English. It also added new items and used two measurement scales rather than one to provide a more accurate score. The Stanford-Binet test reported a single number, known as the intelligence quotient (IQ), to represent an individual’s performance. This figure was originally calculated by dividing the test taker’s mental age by their chronological age and multiplying by 100. For example, a child with a mental age of 12 and a chronological age of 10 would have an IQ of 120: (12÷10) x 100 = 120. The test remains a popular assessment tool today, despite going through a number of revisions over the years since its inception.
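To make the ratio arithmetic concrete, here is a minimal Python sketch of the original ratio-IQ calculation; the function name is illustrative and not part of any historical test protocol.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Original ratio IQ: (mental age / chronological age) x 100."""
    if chronological_age <= 0:
        raise ValueError("chronological age must be positive")
    return (mental_age / chronological_age) * 100

# The example from the text: mental age 12, chronological age 10.
print(ratio_iq(12, 10))  # 120.0
```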
Army Alpha and Beta Tests
At the outset of World War I, U.S. Army officials faced the task of screening and classifying an enormous number of recruits. In 1917, as chair of the Committee on the Psychological Examination of Recruits, psychologist Robert Yerkes developed two IQ tests known as the Army Alpha and Beta tests. The Army Alpha was designed as a written test, while the Army Beta was made up of pictures for recruits who were unable to read or did not speak English. The tests were administered to over 2 million soldiers, with the goal of helping the Army determine which men were suited for specific positions and leadership roles. After the war, the tests remained in use in a wide variety of situations outside the military. For example, IQ tests were used to screen new immigrants as they entered the United States. As a result of these tests, harmful and inaccurate generalizations were made about entire populations, leading Congress to enact discriminatory immigration restrictions for groups deemed to have a “genetically inferior” IQ.
Wechsler Intelligence Scales
Much like Binet, American psychologist David Wechsler believed that intelligence involved different mental abilities. Dissatisfied with the limitations of the Stanford-Binet, he published his new intelligence test, known as the Wechsler Adult Intelligence Scale (WAIS), in 1955. Wechsler also developed two tests specifically for use with children: the Wechsler Intelligence Scale for Children (WISC) and the Wechsler Preschool and Primary Scale of Intelligence (WPPSI). The adult version of the test has been revised since its original publication and is now known as the WAIS-IV.
WAIS-IV
Rather than scoring based on chronological and mental age, the WAIS is scored by comparing the test taker’s score to the scores of others in the same age group. The average score is fixed at 100, and about two-thirds of scores fall within the normal range of 85 to 115 (one standard deviation on either side of the mean). This scoring method has become the standard in intelligence testing and is also used in the modern revision of the Stanford-Binet test. The WAIS-IV contains 10 subtests, along with five supplemental tests, and provides scores in four major areas of intelligence:
Verbal comprehension
Perceptual reasoning
Working memory
Processing speed
The WAIS-IV also provides two broad scores that can be used as summaries of overall intelligence. The Full-Scale IQ score combines performance on all four index scores, while the General Ability Index is based on six subtest scores.
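As a rough illustration of the deviation scoring described above, here is a minimal Python sketch. It assumes the conventional mean of 100 and standard deviation of 15; real tests convert raw scores through published norm tables rather than this simple linear formula, and the numbers below are made up.

```python
def deviation_iq(raw_score: float, group_mean: float, group_sd: float) -> float:
    """Place a raw score on a scale with mean 100 and SD 15, relative to
    the test taker's own age group (simplified; real tests use norm
    tables rather than a linear conversion)."""
    z = (raw_score - group_mean) / group_sd  # standard score within the age group
    return 100 + 15 * z

# A raw score one standard deviation above the age-group average:
print(deviation_iq(55, 50, 5))  # 115.0
```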
Debates Surrounding IQ Testing
Opinions vary on the validity of IQ testing, sometimes shifting with the expert’s political viewpoints and gender. Concerns exist as to whether these tests accurately measure intelligence or whether results are affected by outside influences such as the test taker’s motivation, quality of schooling, health status, coaching, and more. There are also questions about whether IQ tests are reliable; a test is reliable when it produces consistent results across repeated administrations. One pilot study involving the Wechsler Abbreviated Scale of Intelligence - Second Edition (WASI-II) found good reliability in some testing conditions and poor reliability in others. Some of the controversy surrounding IQ tests centers on the notion that they are inherently biased against certain ethnic groups, namely Black and Hispanic Americans. This apparent bias can then result in discrimination and disadvantages for these groups.
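As a sketch of how test-retest reliability can be quantified, a common approach is the correlation between scores from two administrations of the same test. The scores below are made-up illustrative values, not data from the WASI-II study.

```python
import statistics

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between paired scores from two test sessions."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical IQ scores for five people tested twice, weeks apart:
session_1 = [98, 110, 87, 123, 105]
session_2 = [101, 108, 90, 119, 103]
print(round(pearson_r(session_1, session_2), 2))  # a high r suggests good test-retest reliability
```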
IQ Test Uses
At the same time, others believe that IQ tests offer some value, particularly in certain situations. A few of the ways intelligence tests are used today include:
Criminal defense applications: IQ tests are sometimes used in the criminal justice system to help determine whether a defendant is able to contribute to their own defense at trial; test results have also been used in attempts to secure Social Security Disability benefits.
Learning disability identification: Subtest scores on the WAIS-IV can be useful in identifying learning disabilities. For instance, a low score in some areas combined with a high score in others may indicate that the person has a specific learning-related difficulty.
Therapeutic impact assessment: IQ tests are sometimes used to help measure whether a certain therapy is working or whether a medical treatment affects cognitive function. For example, a 2016 research study used IQ testing to learn whether one therapy for childhood brain tumors produced a better neurocognitive outcome than another.
AI development: Some of the same theories and principles behind IQ testing on humans are being used to help advance artificial intelligence (AI) in computer systems. AI is used online to personalize search engine results and product recommendations, and it may even aid in the prediction of mental illness.