Nonlinearities in Intelligence
Most of
our everyday measurements are linear measurements. A linear
measurement is one in which a constant interval means the same
thing at any point on the scale. For instance, adding one inch
to a six-foot board produces the same change in length that
adding one inch to a five-foot board does. We are so familiar
with linear measurements that we often assume that the
properties of linear measurements apply to any characteristic
that is described by numbers. That is not so, and the erroneous
assumption can be particularly confusing when we deal with
intelligence.
In psychometric theories intelligence is
measured by computing a person's standard score on an IQ
test. The standard score is the deviation of a person's absolute
score on a test from the mean test score of a reference
population, divided by the standard deviation (a measure of
the variability of scores in the reference population):
z_i = (x_i − μ) / σ

where x_i is the ith person's score in absolute units
(usually the number of correct answers on a test) and μ
and σ are, respectively, the population mean and standard
deviation. If this equation were applied strictly, a person of
exactly average intelligence would have a score of zero, and
people with below-average intelligence would have negative
scores. Since the ideas of zero and negative intelligence do not
seem reasonable, it is conventional to report IQ scores by
rescaling standard scores, using the equation
IQ = 15z + 100
This gives the person of average
intelligence a score of 100. This equation is simply a scaling
convention; the real definition is contained in the first
equation, which makes the standard deviation the unit of
scoring. Herrnstein and Murray refer to the standard deviation
as "like an inch," but it is not. The standard
deviation is determined not by the absolute values of the scores
in a population, but rather by the extent to which one score is
likely to be different from another. In addition, the zero point
of the IQ scale (IQ = 100) is determined by the population mean,
not by a definition of "average intelligence" in terms
of intellectual performance. Therefore the IQ score of an
individual is a relative score, compared to the mean and
variability in the reference population, rather than an absolute
measure of mental competence. If we measured height the way
we measure IQ, a six-foot, six-inch man would have a standard
score of somewhat greater than 2 in the North American male
population. The same person would have a standard score of about
0 if the reference population were professional basketball
players.
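A minimal sketch in Python makes this relativity concrete. The
means and standard deviations below are assumed round numbers
chosen to reproduce the height example, not measured values:

    # Standard score: z = (x - mu) / sigma, then IQ = 15z + 100.
    def standard_score(x, mu, sigma):
        return (x - mu) / sigma

    def iq_score(z):
        return 15 * z + 100

    height = 78.0  # a six-foot, six-inch man, in inches

    # Assumed reference populations (illustrative figures only):
    z_men = standard_score(height, mu=70.0, sigma=3.5)  # North American men
    z_nba = standard_score(height, mu=78.0, sigma=3.5)  # pro basketball players

    print(round(z_men, 1))  # about 2.3: somewhat greater than 2
    print(round(z_nba, 1))  # 0.0: average among basketball players

The same 78 inches yields two very different standard scores,
because the score is defined only relative to a reference
population.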
The distinction between the relative and
absolute definitions of intelligence becomes important when we
consider the relation between IQ, defined by standard scores,
and various dependent measures, such as school achievement and
workplace performance. Suppose a psychometrician records the job
performance and intelligence-test scores of a group of workers.
The relationship would be expressed by this equation, where
B is the regression coefficient, or the rate at which
job performance changes as IQ changes:
job performance
= average job performance + B * (IQ − average IQ)
B
is calculated to make predictions as accurate as possible.
The actual degree of accuracy is measured by the correlation
coefficient, r, which varies from 0 (no accuracy at all) to 1
(perfect prediction). Determining the regression and correlation
coefficients from a given set of data is straightforward. The
problem comes when an extrapolation is made to new situations,
where some data points lie outside the range of IQ units
observed in the original study. An example might be
extrapolating the grade-IQ relationship observed in high-school
students to grade-IQ relations among college students. Such
extrapolations implicitly assume that IQ scores are linear
measures of the intellectual traits that they are supposed to
measure. This is not true. Suppose that a person in his 20s
suffered a brain injury or infection that reduced his IQ score
by 20 points. (Such things are possible.) If he were a medical
or law school student with an original IQ of 140, he would
probably still complete his coursework, though perhaps with not
quite so high a class rank as before. If the person were a
blue-collar worker with an original IQ of 80 he would, at IQ 60,
have a substantial risk of homelessness, poverty and a number of
other serious social problems.
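The hazard of extrapolating beyond the observed range can be
made concrete with a small simulation. In the sketch below the
"true" IQ-performance relation is assumed, purely for
illustration, to flatten at higher IQs; a regression fitted only
to the below-average range then overpredicts performance at
IQ 140:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical population: performance rises with IQ but with
    # diminishing returns (a log-shaped curve, chosen for illustration).
    iq = rng.normal(100, 15, 2000)
    performance = 50 + 30 * np.log(iq / 100) + rng.normal(0, 2, 2000)

    # Fit the linear regression on the below-average range only...
    low = iq < 100
    B, intercept = np.polyfit(iq[low], performance[low], 1)

    # ...then extrapolate outside the range of the fitted data.
    predicted_140 = intercept + B * 140
    actual_140 = 50 + 30 * np.log(140 / 100)
    print(predicted_140, actual_140)  # the linear fit overshoots

Within the fitted range the line does well; outside it, the
linearity assumption, not the data, is doing the work.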
The issue of
nonlinearity applies to the very definition of intelligence, and
in particular to the question of whether there is one type of
intelligence or several. Suppose that general intelligence is
equally important at all levels of mental competence. In this
case the results of a factor-analytic study of test scores,
based on data from people with high levels of intelligence,
should be similar to the results of a study based on data from
people of lower absolute levels of intelligence. Historically
there have been suggestions that this is not so. The
general-intelligence model was first developed by Charles
Spearman (1904, 1927), based on analysis of test results from
English schoolchildren. In 1938 L. L. Thurstone challenged
Spearman's conclusion because he found very little evidence for
general intelligence in a sample of University of Chicago
undergraduates. It was observed at the time that the discrepancy
might have arisen because Spearman and Thurstone had taken data
from people of widely different intellectual levels, which would
be evidence that intelligence changes qualitatively as the level
of mental competence changes. However, the results were not
definitive because Spearman and Thurstone had used different
tests.
An important study by Douglas Detterman and Mark
Daniel (1989) showed that the relations between subtests do
change as the level of scores changes. Among other things,
Detterman and Daniel examined correlations between subtests of
the WAIS (Wechsler Adult Intelligence Scale) and found higher correlations between subtest scores
for people with below-average IQ than for people with
above-average IQ. David Waller, Derek Chung and I found the
same thing when we analyzed the Armed Services Vocational Aptitude Battery (ASVAB) scores that Herrnstein and
Murray used in The Bell Curve to determine the relation
between IQ and various indicators of social adjustment. It
appears that "general intelligence" may not be an accurate
description, but "general lack of intelligence" is!
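The kind of comparison involved can be sketched as follows. This
is a simplified, hypothetical version of such an analysis (among
other things, a real study must correct for the restriction of
range created by splitting the sample):

    import numpy as np

    def mean_intercorrelation(subtests):
        # Average off-diagonal correlation among subtest columns.
        r = np.corrcoef(subtests, rowvar=False)
        n = r.shape[0]
        return (r.sum() - n) / (n * (n - 1))

    def compare_by_level(scores):
        # scores: one row per examinee, one column per subtest
        # (in the actual studies, WAIS or ASVAB subtest scores).
        total = scores.sum(axis=1)
        median = np.median(total)
        below = scores[total < median]
        above = scores[total >= median]
        return mean_intercorrelation(below), mean_intercorrelation(above)

Detterman and Daniel's finding corresponds to the first value
being consistently larger than the second.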
The
conclusion that the relation between different indices of mental
competence depends on the general level of competence is not
consistent with psychometric approaches, but it is consistent
with the cognitive-psychology approach. Recall that the
cognitive-psychology approach assumes that mental competence is
produced by a cascade of progressively more refined abilities,
moving from information processing to problem-solving techniques
to knowledge possession. It follows that deficits at the
information-processing level will have general effects, whereas
the potentials established at higher levels will be specific. In fact,
Detterman and Daniel did find that the relation between
information-processing measures and intelligence-test
performance is higher at low levels of intelligence. Similar
observations have been made by scientists who have studied very
high-level performance, in fields ranging from physics to
literature. A certain amount of intelligence seems to be needed
to gain entry to an intellectually demanding field, but beyond
that point success is determined by the effort put into the job,
social support, and just sheer experience. (See Ericsson, Krampe
and Tesch-Römer (1993) on expertise, Simonton (1984) on
creativity, and Gardner (1993) for some interesting biographical
data.)
In economic terms it appears that the IQ score
measures something with decreasing marginal value. It is
important to have enough of it, but having lots and lots does
not buy you that much. My regrets to Mensa, but that is the way
things are. Nonlinearity becomes important when we ask a key
question raised by Herrnstein and Murray: What is the relation
between intelligence and workplace performance?