
Spearman (1904a) developed factor analytic techniques to test his hypothesis that a single dimension accounted for correlations among all tests of mental ability. Spearman called this dimension “general intelligence.” To avoid contaminating the scientific construct of general intelligence with any ideas associated with the notion of intelligence in common parlance, Spearman signified the scientific construct derived from correlations among ability tests with the letter g, which stood for general intelligence. Spearman argued that g represented a new scientific construct, the meaning of which would be established only with substantial empirical research.

Spearman was perhaps the first to notice what is called the positive manifold, which refers to the finding of uniformly positive correlations among tests of ability. This positive manifold is a hallmark of the ability domain and is a distinctive attribute of the domain in comparison with others. Spearman reasoned that, if all tests of ability are positively intercorrelated, a single entity might influence all tests and thus be in common among the tests. Tests that correlate highly with other tests would be more heavily saturated with this common entity, whereas tests that tended to correlate at lower levels with other tests would be less saturated with the common entity. Spearman (1904a) presented techniques for estimating the saturation of each test, based on its correlations with other tests, and he continued to refine and extend these techniques for the remainder of his career.
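
In modern factor-analytic notation (a standard restatement rather than Spearman's original symbols), this estimation logic can be written compactly. If each correlation equals the product of the two tests' g-saturations, then a test's saturation can be recovered from any triad of tests, and every "tetrad difference" should vanish, which became Spearman's statistical criterion for the adequacy of a single factor:

\[
r_{jk} \approx a_j a_k
\quad\Longrightarrow\quad
a_j^2 \approx \frac{r_{jk}\, r_{jl}}{r_{kl}},
\qquad
r_{jk}\, r_{lm} - r_{jl}\, r_{km} \approx 0 ,
\]

where $a_j$ denotes the saturation (loading) of test $j$ on the common dimension and $r_{jk}$ is the correlation between tests $j$ and $k$.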

Spearman's theory is frequently called the two-factor theory, reflecting the hypothesis that two factors account mathematically for the variance of each measured variable. One of these factors is g, the factor of general intelligence; and the second factor is s_j, a factor that is specific to manifest variable j. Thus, the two-factor theory postulates two classes of factors. One class has a single member, g, the factor of general intelligence, which is the single influence that is common to all tests of ability. The second class of factors has as many members as there are tests of ability, one specific factor for each different test of ability.
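
Written out as a measurement model in contemporary notation (again a restatement, not Spearman's own formulation), the two-factor theory for a standardized test score $x_j$ is:

\[
x_j = a_j\, g + s_j ,
\qquad
\operatorname{Var}(x_j) = a_j^2 + \operatorname{Var}(s_j) = 1 ,
\qquad
\operatorname{Corr}(x_j, x_k) = a_j a_k \;\; (j \neq k),
\]

with $g$ standardized to unit variance and $g$ and the specific factors $s_j$ mutually uncorrelated, so that the single common factor accounts for every correlation between tests.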

As a theoretical metaphor, Spearman (1927) borrowed from the Industrial Revolution. Arguing that g, or general intelligence, could be likened to or identified with mental energy, Spearman also hypothesized that individual differences in mental energy were largely genetic in origin. This mental energy can be directed toward any kind of intellectual task or problem, and the greater the amount of mental energy devoted to a task, the better the performance on the task. Individuals with a high level of g have a high level of mental energy to devote to intellectual pursuits, whereas persons with low levels of g have much lower levels of mental energy at their disposal when confronting intellectual problems or puzzles. Consequently, individual differences in g reflect individual differences in mental energy, and individual differences in mental energy lead to individual differences in performance on all ability tests and therefore account for the correlations among all tests of mental ability.

The specific factor for variable j, s_j, is composed theoretically of two components—a reliable component that is specific to variable j, and a stochastic or random component that represents random error of measurement. (This specific factor is sometimes referred to as residual variance.) In most research situations, these two components cannot be separated, so emphasis is laid on the combined specific factors. Spearman equated the specific factor s_j for a given test j with an engine. General intelligence, or g, provides the mental energy to power the engine that is used to solve a particular type of problem. Thus, one engine would be used to solve the problems on a verbal comprehension test, another engine would be used for numerical problems, and so forth. For certain types of problems, the general factor g is of primary importance, leading to a high g-loading for such a test and a relatively low contribution to explained variance by the engine, or specific factor, for the test. But, for other tests, g is of less importance, and the engine for the test accounts for the majority of the variance. The specific factor s_j for a test is an opportunity for the environment or experience to play a part in performance on mental ability tests.
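
In the variance terms later adopted by factor analysts (a conventional decomposition rather than Spearman's own wording), the specific factor's contribution to a standardized test splits into a reliable specific part and random error, and the test's total variance splits into communality and uniqueness:

\[
\operatorname{Var}(s_j) = \sigma^2_{\text{specific},\, j} + \sigma^2_{\text{error},\, j},
\qquad
1 = \underbrace{a_j^{2}}_{\text{communality}} + \underbrace{\operatorname{Var}(s_j)}_{\text{uniqueness}} .
\]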

When conducting research to test his hypothesized ability structure, Spearman often conducted analyses so that the results would conform to his theory. For example, Spearman (1914) dropped a test from his analyses because its inclusion resulted in a failure to satisfy his statistical criterion for adequacy of a single factor. Once the test was dropped from the analysis, the remaining tests satisfied the mathematical criterion, supporting the adequacy of a single factor for the set of tests. This approach—discarding tests that led to failure to confirm his theory—was a common one for Spearman, who discarded recalcitrant tests in several analyses reported in his major empirical work on mental abilities (Spearman, 1927). As a result, Spearman's two-factor theory has equivocal support, because any indication of lack of fit was effectively swept under the rug. But the two-factor theory is important for several reasons, including its status as the first theory of the structure of mental abilities, the clarity with which the theory and its predictions were stated, and the close interplay between psychological theory and the mathematical and statistical tools developed to test it.

Thurstone's Primary Mental Abilities

During the 1930s, L.L. Thurstone and his colleagues pursued a program of research designed to identify the basic set of dimensions that span the ability, or intelligence, domain. Rather than beginning with a strong a priori theory about the structure of mental abilities as Spearman had done, Thurstone and his collaborators took a very different approach. Specifically, they collected a large battery of tests comprising all conceivable types of intellectual tasks, administered the battery to a large sample of subjects, and then analyzed the correlations among the tests in this battery to determine the number and nature of the dimensions required to account for the correlations. If the same dimensions continued to emerge from their analyses across several samples of subjects and different but largely overlapping batteries of tests, then the dimensions would serve as a framework for representing the ability domain.
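
The logic of this exploratory strategy can be illustrated with a small simulation. The sketch below is not Thurstone's centroid method; it uses scikit-learn's FactorAnalysis on invented scores for a hypothetical battery, simply to show how a battery of tests can be reduced to a smaller number of interpretable dimensions.

```python
# A minimal sketch of an exploratory factor analysis workflow; the battery,
# sample size, and loading pattern are invented for illustration only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 500 examinees on 21 hypothetical tests driven by 7 latent abilities.
n_people, n_tests, n_abilities = 500, 21, 7
abilities = rng.normal(size=(n_people, n_abilities))
loadings = np.zeros((n_tests, n_abilities))
for j in range(n_tests):
    loadings[j, j % n_abilities] = 0.8      # each test marks one ability
scores = abilities @ loadings.T + 0.6 * rng.normal(size=(n_people, n_tests))

# Extract seven factors and inspect which tests load on which factor.
fa = FactorAnalysis(n_components=n_abilities, rotation="varimax")
fa.fit(scores)
print(np.round(fa.components_.T, 2))        # rows: tests, columns: factors
```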

In several early studies, Thurstone and his colleagues (1938a, 1938b) found seven interpretable factors that were replicated across several analyses; these seven factors were termed primary mental abilities. The seven primary mental abilities that consistently appeared across samples were identified as: (1) verbal comprehension (V), or the ability to extract meaning from text; (2) word fluency (W), subsuming the ability to access elements of the lexicon based on structural characteristics (e.g., first letters, last letters), rather than meaning; (3) spatial ability (S), or the ability to rotate figural stimuli in a two-dimensional space; (4) memory (M), involving the short-term retention of material typically presented in paired-associate format; (5) numerical facility (N), reflecting the fast and accurate response to problems involving simple arithmetic; (6) perceptual speed (P), or the speedy identification of stimuli based on their stimulus features; and (7) reasoning (R), which represented inductive reasoning in some studies, deductive reasoning in other studies, and general reasoning in still others.

As for an interpretation of the nature of mental abilities, Thurstone (1938a, 1938b) was not specific. He repeatedly referred to ability dimensions as representing “functional unities,” by which he meant that the tests loading on a given factor had some functional similarity that was hypothesized to be the same across tests. Thurstone did believe that the future would bring a mapping of mental abilities onto brain areas, such that each ability factor would be tied to particular brain areas that supported its functioning. But brain mapping was in its initial stages and Thurstone could only voice this as a hope for the future. He did think that cognitive psychology held hope for understanding the underpinnings of mental abilities, stating that psychologists should move into the laboratory to devise studies that would illuminate why a given set of tests loaded on a given factor (Thurstone, 1947). Once again, the field of psychology was not ready for this recommendation, and cognitive investigations into the processes underlying mental test performance began in earnest about 30 years after Thurstone's encouragement to pursue this avenue of research.

In the initial studies by Thurstone and his collaborators (e.g., 1938a, 1938b), the primary mental ability factors were rotated orthogonally, so they were statistically uncorrelated with one another. But after the development of the mathematical theory for oblique rotations (Tucker, 1940), Thurstone and Thurstone (1941) quickly applied oblique rotations to the primary mental abilities and found substantial correlations among the seven dimensions. The correlations among the primary mental abilities were well described by a single second-order factor, which Thurstone and Thurstone argued provided a way to reconcile Spearman's theory with their own. That is, at the level of the primary mental abilities, seven dimensions were required to represent the relations among a large set of tests. But correlations among the primary mental abilities could be explained by a single second-order factor. Thus, one could argue that Spearman pursued work on the ability domain at the second-order level, whereas Thurstone and his colleagues worked to specify well the dimensions that constituted the first-order level of factoring. Although this would provide a way of integrating the Spearman and Thurstone models, not all researchers agreed with this position. Indeed, Spearman (1939) argued that the primary mental abilities were rather trivial and narrow, and that the second-order general factor, or g, should be considered the principal or primary factor, rather than being relegated to second-order importance.
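
A minimal numerical sketch of this second-order step is given below, using an invented factor correlation matrix rather than Thurstone and Thurstone's published values; a single dominant dimension is extracted from the correlations among the obliquely rotated primaries (here by a principal-axis-style eigendecomposition rather than the original centroid calculations).

```python
# Hypothetical correlations among seven oblique primary factors; the uniform
# value of 0.35 is an assumption for illustration, not an empirical estimate.
import numpy as np

k = 7
phi = np.full((k, k), 0.35)
np.fill_diagonal(phi, 1.0)

# Extract one second-order factor from the factor correlation matrix.
eigvals, eigvecs = np.linalg.eigh(phi)              # eigenvalues in ascending order
g_loadings = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
print(np.round(g_loadings, 2))                      # loading of each primary on g
```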

British Hierarchical Theorists

As early as 1909, Burt performed analyses that demonstrated the need to consider more than a single factor for explaining the correlations among a set of manifest indicators of ability. In this early publication, Burt (1909) provided little indication of a meaningful multiple factor structure, but 40 years later, he presented a theoretical summary of research that outlined a three-level structure of mental abilities (Burt, 1949). At the first level, Burt postulated the presence of basic sensory and perceptual dimensions, including dimensions such as sound discrimination thresholds. The second level contained dimensions that were more cognitive and intellective in nature; here, typical ability dimensions such as verbal comprehension and spatial ability resided. The third level had a single dimension, the general factor of Spearman.

Vernon (1950, 1961) provided the most comprehensive and integrative review of the hierarchical theory; Vernon's focus was at its highest levels. The topmost level had a single dimension, the general intelligence factor, g, of Spearman. Below g were two subgeneral abilities: v:ed (or verbal:educational), and k:m (or spatial:mechanical). Below the v:ed subgeneral dimension fall factors such as verbal comprehension, verbal fluency, numerical facility, and reasoning, whereas under the k:m subgeneral dimension are factors such as spatial rotation, mechanical and technical information, and various psychomotor abilities. Vernon presented the hierarchical structure of abilities as a way of summarizing the previous three decades of research and considered the several versions of the hierarchy to be tentative and subject to revision in the future. However, both Vernon and Burt believed strongly in the nature of the general factor g as representing a single entity that was common to all tests of ability.

The third member of the British hierarchical group was Godfrey Thomson (1951), who supported the general hierarchy of abilities even as he espoused a rather different theoretical basis for it. Thomson's hierarchical factor pattern was similar to Vernon's, with a general factor aligned with Spearman's g at the apex of the hierarchy, followed by rather broad subgeneral factors, and finally a series of much more narrow factors at the bottom of the hierarchy.

However, Thomson believed that the ability hierarchy rested on a radically different set of processes. Indeed, he repudiated the notion of a single entity common to all tests of ability. Instead, he proposed that the human mind comprises a virtually infinite set of bonds, or potential bonds, that are independent of one another. When a person works on a particular type of test, a given set of bonds is required to arrive at a correct answer. When a different type of test is administered, a different but overlapping set of bonds is activated. The more highly overlapping the sets of bonds required by two tests, the higher the correlation between the tests. Conversely, if the sets of bonds sampled by two tests show little overlap, then the tests will correlate positively but at a low level. The upshot of Thomson's sampling theory was this: no single entity (i.e., bond) need be common to all tests of mental ability, so the hierarchical structure of human mental abilities simply reflects the degree of overlap among the bonds sampled by tests of mental ability.
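
A toy simulation makes the point concrete. In the sketch below (all parameter values are assumptions chosen for illustration), each test samples a random subset of many mutually independent "bonds," and a person's test score is simply the sum of the sampled bonds; the tests end up uniformly positively correlated even though, with high probability, no single bond is shared by every test.

```python
# Toy version of Thomson's sampling (bonds) account of the positive manifold.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_bonds, n_tests, bonds_per_test = 1000, 300, 6, 60

bonds = rng.normal(size=(n_people, n_bonds))       # independent "bonds"
tests = np.empty((n_people, n_tests))
sampled = []
for t in range(n_tests):
    idx = rng.choice(n_bonds, size=bonds_per_test, replace=False)
    sampled.append(set(idx))
    tests[:, t] = bonds[:, idx].sum(axis=1)        # score = sum of sampled bonds

print(np.round(np.corrcoef(tests, rowvar=False), 2))        # all positive
print("bonds common to every test:", len(set.intersection(*sampled)))
```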

The Thomson explanation for the hierarchy of mental abilities may provoke a number of reactions. One may become highly suspicious of factor analytic approaches, because a single set of empirical results, with a dominant general factor, is consistent with diametrically opposed generating mechanisms: a single entity common to all tests (e.g., Spearman) versus no single entity common to all tests of ability (e.g., Thomson). Another reaction is to recognize the need to marshal evidence beyond the pattern of tests loading on factors. The loadings of tests on factors may suggest the presence of an entity common to all tests of ability, but additional evidence of different types may be relevant to the choice between a single common entity and the absence of one. Such evidence may then tip the balance in favor of one or the other of the two competing positions.

Guilford's Structure of Intellect

Based on considerable research during World War II on army recruits and a thorough review of cognitive psychological research, J.P. Guilford (1967) developed a model he termed the structure of intellect, or SOI. He and his colleagues spent over two decades attempting to confirm the basic hypotheses of SOI theory, work summarized by Guilford (1967) and Guilford and Hoepfner (1971). The Guilford theory was well recognized as a competing model of ability structure until Horn and Knapp (1973) published a reanalysis of many of the data sets used by Guilford and his colleagues to corroborate SOI theory. They found that Guilford's own data gave much stronger, in fact almost perfect, support for Thurstone's hypotheses than for hypotheses generated by SOI theory. An interesting pair of commentaries on the Horn and Knapp (1973) study by Guilford (1974) and Horn and Knapp (1974) left the main findings by Horn and Knapp (1973) unchallenged. SOI theory is no longer recognized as a useful conceptualization of the structure of human abilities.

Cattell-Horn Theory of Fluid and Crystallized Intelligence

Capitalizing on a much earlier observation (Cattell, 1941), Raymond B. Cattell (1963) proposed a new theory of ability structure, subsequently referred to as the theory of fluid and crystallized intelligence. According to the initial theory sketched by Cattell (1963), two very broad and important dimensions of intelligence—fluid intelligence, or Gf, and crystallized intelligence, or Gc—could be distinguished, rather than the single dimension of g hypothesized by Spearman. Cattell conceived of fluid intelligence in ways that were reminiscent of Spearman's theorizing about g. In particular, Gf was thought to be a reservoir of reasoning ability that could be directed toward many different kinds of content, hence its identification as a fluid form of intelligence. Furthermore, Gf was thought to be largely genetically determined.

As fluid intelligence was expended on a given kind of content or intellectual problem, the individual would develop knowledge stores related to the particular content or type of problem as well as mental algorithms for solving such problems. The knowledge and mental algorithms developed through the application of Gf to given tasks are therefore crystallizations of the influence of Gf. Thus, verbal comprehension, or the ability to extract meaning from text, is a crystallized ability assessed using tests of vocabulary, paragraph comprehension, and the understanding of proverbs, among others. All of these tests require one to extract the meaning from text using stored meanings of words in the lexicon. Numerical facility is a crystallized ability that subsumes knowledge of simple numerical facts (e.g., addition facts, subtraction facts) as well as algorithms for solving numerical problems that cannot be solved easily mentally (e.g., long division, multiple-place multiplication). The higher a person's level of Gf, the greater the amount of fluid intelligence invested in particular tasks, and therefore the higher that person's general level of performance on crystallized ability tasks. Because Gf influences performance on all crystallized ability tasks, these tasks should correlate with one another and therefore define a general crystallized intelligence factor, or Gc.

Because Gf was a fluid ability to reason with new material, Cattell (1963, 1971) argued that Gf was best measured either with novel stimuli or problems, or with highly overlearned stimuli with which a person is instructed to perform some novel operation, such as doing simple math with letters of the alphabet. Theoretically, Gf was largely genetic in origin, and any learning that affected tests for Gf would be haphazard learning that occurred in the context of daily life. In contrast, Gc was best measured using tests of standard cultural knowledge (vocabulary, information, similarities) or tests of material like numerical facility that was highly practiced in standardized cultural settings such as school. One hypothesis regarding the pattern of tests loading on factors that distinguishes Gf-Gc hypotheses from those of the British hierarchical theorists has to do with tests of mechanical knowledge. Cattell argued that, because mechanical knowledge is systematically taught in schools, tests of mechanical knowledge should load on the Gc factor, which is closest to Vernon's v:ed factor, rather than on the k:m factor, as Vernon had hypothesized. In addition, boys should have an advantage on mechanical knowledge over girls relative to other indicators of Gc, due to the more consistent teaching of mechanical knowledge to boys than to girls. These hypotheses were confirmed, lending support to structural hypotheses of Gf-Gc theory over those associated with the hierarchical model of Vernon.

Cattell (1971) made a further contribution to the understanding of mental abilities by distinguishing between the order and stratum of a factor. The order of a factor is a superficial, methodological aspect of the analysis in which a factor is identified, whereas the stratum a factor occupies is a deeper, theoretical concern regarding the nature and breadth of the factor. Factors that are obtained from analyzing the correlations among observed variables are termed first-order factors. If the first-order factors are rotated obliquely, factoring the matrix of correlations among first-order factors leads to the identification of second-order factors. Multiple orders of factoring may be continued as long as at least three oblique factors are identified at a given level. In contrast, the stratum a factor occupies depends on its breadth and the generality of its influence.

To make the distinction between order and stratum clearer, consider the following two research scenarios. In the first scenario, suppose that a researcher included in a battery of tests three tests of word fluency, three tests of associational fluency, and three tests of ideational fluency. Factoring these nine tests would lead to the identification of three first-order factors, one each for word fluency, associational fluency, and ideational fluency. If the correlations among these three factors were factored, a single general fluency factor (or Glr, for general long-term retrieval from memory) could be derived as a second-order factor, and the three first-order fluency factors would load on this second-order factor. In this research scenario, the first-order factors are also first-stratum factors, representing the narrowest dimensions that would be fruitful to research. In addition, the second-order general fluency factor is a second-stratum factor, with broader influence on each of several types of more narrow fluency.

In the second research scenario, suppose that, given constraints on testing time, the researcher could administer only a single test of word fluency, a single test of associational fluency, and a single test of ideational fluency. In this scenario, the researcher could not identify first-stratum factors for word fluency, associational fluency, and ideational fluency, because only a single manifest variable for each dimension was available, and one must have at least two, and preferably three, tests for a given factor to identify it as a factor. Factor analyzing the three fluency tests would lead to a first-order factor on which the word fluency, associational fluency, and ideational fluency tests loaded. Now this factor is a first-order factor, because it was derived from the correlations among measured variables. But, because the variables loading on it represented different types of fluency, the first-order factor reflects general fluency, or Glr, a second-stratum dimension.

The distinction between the order and the stratum of factors enables one to place results in a hierarchical structure based on the stratum of the factors found in different studies. The current version of Gf-Gc theory has been outlined in several papers by John L. Horn (1985, 1988, 1998). The ability structure for Gf-Gc theory posits at least 55 primary or first-stratum factors. When correlations among the first-stratum factors are analyzed, nine second-stratum factors are found. These nine second-stratum factors are: (1) Gc (crystallized intelligence), which has verbal comprehension, semantic relations, numerical facility, mechanical knowledge, syllogistic reasoning, verbal closure, and general information factors as indicators; (2) Gf (fluid intelligence), which subsumes first-order factors such as induction, general reasoning, figural relations, concept formation, and symbolic classification; (3) Gv (general visualization), with loadings from first-stratum factors for visualization, speed of closure, flexibility of closure, spatial orientation, figural fluency, and figural adaptive flexibility; (4) Ga (general auditory processing), with loadings from first-stratum factors, such as listening, verbal comprehension, temporal tracking, sound pattern discrimination, and auditory memory span; (5) Gsm (general short-term memory, also identified at times as SAR, for short-term acquisition and retrieval), which subsumes first-stratum factors of associative memory, span memory, meaningful memory, and memory for order; (6) Glr (general long-term memory, also sometimes identified as TSR, for tertiary storage and retrieval), which represents a variety of fluency dimensions, such as delayed retrieval, associational fluency, expressional fluency, ideational fluency, word fluency, and originality; (7) Gs (general speediness or processing speed), covering first-stratum dimensions of perceptual speed, numerical facility, and writing and printing speed; (8) Gt (decision speed, also identified at times as CDS, for correct decision speed), reflecting choice reaction time, decision speed, and simple reaction time; and (9) Gq (general quantitative knowledge), representing dimensions such as applied problems, quantitative concepts, numerical facility, and general reasoning.

The preceding results related to the loading of first-stratum factors on the nine second-stratum factors may be termed structural results. But in the continued development of Gf-Gc theory, Horn (1998) has always monitored several additional kinds of information. One of these additional types of information is derived from developmental studies and consists both of kinematic trends (developmental growth and decline of abilities over the life span) and of the dynamic effects of ability dimensions on one another. The differential kinematic, life-span trends for the various second-stratum abilities have been replicated many times.

These trends show that both Gf and Gs begin to decline early in adulthood, around the age of 30, whereas Gc continues to increase in mean level until perhaps age 70 before declines begin. This is perhaps the strongest evidence against attempting to define a higher-stratum general factor analogous to Spearman's g, an argument that Horn has made repeatedly. The moderate correlations among the nine second-stratum ability dimensions have been an impetus to many researchers to factor analyze these correlations and obtain a higher-order general factor. But Horn argued that, with very different life-span trends for the second-stratum dimensions, any general factor would be constructed out of the mixing of cognitive apples and oranges. This would lead to a hopelessly confounded and uninterpretable general factor showing essentially no change in level during the adult years, a pattern that none of the second-stratum factors actually displays. The dynamic effects mentioned above involve the hypothesized lead-lag relations among abilities. The most often cited of these is the hypothesis that Gf leads to later increases in Gc due to the investment of Gf on intellectual problems. Studies of these dynamic hypotheses have not been strongly supportive of hypothesized relations, but the current development of better models to test these hypotheses may lead to more definitive results.

Horn (1985, 1998) evaluated still other kinds of research evidence, which are discussed here only briefly. Although still somewhat premature for drawing final conclusions, neurocognitive studies appear to support the hypothesis that different ability factors are subserved by different brain areas. As these findings become more firmly established, they will provide additional support for the hypotheses of Gf-Gc theory. Another type of evidence is derived from studies of heritability. Gf-Gc theory makes certain predictions regarding heritability, or the degree of genetic variance in ability factors. One such prediction is that Gf should have higher heritability than Gc. Here, the evidence is not obviously supportive of Gf-Gc theory, as most estimates of heritability show about equal heritabilities for Gf and Gc. A final kind of evidence comes from studies of achievement, in which achievement in particular curricular areas is related to second-stratum dimensions of ability. Horn (1998) noted the difficulties with such studies but concluded that achievement studies tend to support differential relations between achievements in distinct curricular areas and associated second-stratum factors of ability.

In summary, Gf-Gc theory is a complex and far-reaching enterprise. The theory makes predictions in the structural domain concerning the loading of first-stratum abilities on the broad second-stratum factors, but also makes clear predictions in several other domains. Although empirical results to date are not fully supportive of all predictions of the theory, a sufficient number of predictions have been confirmed that Gf-Gc theory is the most comprehensive and widely supported ability theory currently available. The frequent replication of the differential life-span trends for different abilities has resulted in Gf-Gc theory being the primary theoretical framework now used in studies of adulthood and aging. Moreover, the well-replicated structural results are leading the developers of intelligence tests to incorporate measures of Gf and Gc, in addition to an overall IQ in the scoring of their instruments.

Carroll's Three-Stratum Theory

In 1993, John B. Carroll published a monumental tome that reported the reanalyses of over 450 sets of data. The aim of this project was to reanalyze all previous ability studies using a constant and well-justified set of factor analytic techniques, trusting that this would lead to a more consistent set of results across studies. The factor analytic results reported by Carroll are similar to the Horn-Cattell structural results in most respects, so little detailed description is needed here. We merely recount the broad strokes of the Carroll approach.

The upshot of the reanalysis of 477 studies was the identification of approximately 65 narrow, first-stratum factors. When correlations of these first-stratum factors were analyzed, eight second-stratum factors were located. When the correlations among second-stratum ability factors were analyzed, Carroll identified a single third-stratum factor, which he interpreted as corresponding to Spearman's g. One interesting advance by Carroll was to identify both level and speed components of abilities, where appropriate. The level component involves power tests in which time limits have little effect on individual differences on the tests. In contrast, the speed (or rate) component contains tests on which time limits or the rate of presentation of information, and therefore the speediness of performance, is important to measuring individual differences on the tests. One second-stratum ability had only level indicators, and two second-stratum dimensions had only speed indicators. The remaining five second-stratum factors had both level and speed (or rate) indicators.

The eight second-stratum factors identified by Carroll (1993) are: (1) Gf (fluid intelligence), with level first-stratum factors of general reasoning, induction, and quantitative reasoning and a speed factor of speed of reasoning; (2) Gc (crystallized intelligence), with level indicators of language development, verbal comprehension, spelling, and communication and speed indicators of oral fluency and writing ability; (3) Y (general memory and learning), with a level first-stratum factor of memory span and rate (related to speed) indicators of associative memory, free recall memory, meaningful memory, and visual memory; (4) V (broad visualization), with a level factor of visualization and speed indicators of spatial relations/orientation, speed of closure, flexibility of closure, and perceptual speed; (5) U (broad auditory perception), with level indicators of hearing and speech thresholds, speech sound discrimination, and musical discrimination and no clear speed or rate indicators; (6) R (broad retrieval), with level indicators of originality and creativity and speed indicators of ideational fluency, associational fluency, expressional fluency, word fluency, and figural fluency; (7) S (broad cognitive speediness), with no level indicators but speed indicators of rate of test taking and numerical facility; and (8) T (processing speed and/or decision speed), once again with no level indicators but speed indicators such as simple reaction time, choice reaction time, semantic processing speed, and mental comparison speed.

Despite the clear similarities between the Horn-Cattell and Carroll models, some important differences are apparent. The key difference concerns the presence and nature of a general intelligence factor. Carroll (1997) argued that his work provided perhaps the strongest and most comprehensive support yet for the general intelligence factor, a position Cattell would probably have seconded. Carroll also identified the general factor as corresponding to Spearman's g, the mental ability common to all tests of ability, also a position that Cattell might have favored. However, for more than 25 years, Horn has been responsible for the current synthesis of the Horn-Cattell model. He has long disclaimed the utility of a general factor, despite the positive correlations among the second-stratum abilities. In Horn's view, based on other information, such as the trends of growth and decline over the life span for second-stratum abilities, any overall score approximating general intelligence would represent a changing mixture of abilities, a general level of a person's profile of second-stratum abilities, or "intelligence in general" or "on average," rather than a single element common to all tests that retains its unitary nature across development. This striking difference of scientific opinion is reminiscent of the conflicting views on the nature of the general factor held by Vernon and Thomson, discussed above. The monumental work by Carroll (1993) was concerned almost exclusively with structural information about how variables load on factors. Carroll dismissed other forms of data, particularly differential life-span aging trends, by claiming that the data and their implications for theory were not yet sufficiently well established. In contrast, Horn has always studied structural information, but he has also monitored and integrated information from numerous other sources, such as kinematic or life-span trends, dynamic relations between abilities over time, and neurocognitive studies. Taking all of these kinds of information into consideration, Horn has argued that the existence of a single, unchanging entity common to all tests of ability cannot be supported.

The Horn-Cattell and Carroll models exhibit additional, but less important, differences. One of these is the absence of Gq, or general quantitative ability, as a second-stratum dimension in the Carroll model. Carroll considered the Gq dimension of the Horn-Cattell theory to be too narrow and lacking a sufficient research base to be accorded a position as a second-stratum factor. Also, some differences in the first-stratum factors subsumed by second-stratum dimensions can be found.

Aside from the preceding differences, eight of the second-stratum ability dimensions from the Horn-Cattell and Carroll models fall in a rather clear one-to-one relation with one another. Some second-stratum dimensions have differing names and identifying symbols across the two systems. Still, the eight second-stratum dimensions represent the current state of the science with regard to the broad abilities that span the intelligence domain. An overall score, whether corresponding to Spearman's g or to a changing composite reflecting “intelligence in general,” may be a useful summary index of a person's general level of functioning, regardless of whether one believes the score corresponds to a particular identifiable entity.

Other Theories

The preceding theories were developed in connection with the use of factor analysis, which was used to derive the dimensions underlying batteries of tests and thereby confirm or disconfirm the hypotheses put forward by the groups of researchers. In addition to these theories based on factor analysis, several additional theories of the structure of mental abilities have been developed. Most of these other theories have been based on a priori theory or summaries of previous research, but have relied much less or not at all on sophisticated measurement techniques such as factor analysis. As a result, the utility of these theories for applied work on the assessment of intelligence is much more limited, although the future may see greater application of the ideas.

The first of these other theories is embodied in the PASS model of Das, Naglieri, and Kirby (1994). PASS stands for planning, attention, simultaneous processing, and successive processing, which are processes or mental functions associated with particular brain areas by Luria (1966a, 1966b). Planning refers to processes governing cognitive control and self-regulation, enabling a person to develop or plan courses of intelligent action to be followed. Attention subsumes the processes by which a continual focus on cognitive problems is maintained. Simultaneous processing involves processing of stimuli in which the stimulus as a whole must be comprehended or in which elements must be integrated into a meaningful whole. Successive processing concerns processes in which the sequence of the processing of elements is crucial, such as language. Factor analytic studies of the PASS model have been less than fully successful, failing to establish planning and attention as empirically distinct entities. Despite this, the Cognitive Assessment System (Naglieri & Das, 1997) provides a standardized battery to assess the components of the PASS model.

A second theoretical approach encompasses information processing theories derived from cognitive psychology. For example, Campione and Brown (1978) offered an initial model that was further developed by Borkowski (1985). Information processing models of cognitive ability often distinguish the architectural and executive systems, roughly equivalent to the hardware and software components, respectively, of a computer. The architectural system is assumed to be genetically, or at least biologically, based and consists of basic operating parameters of cognitive processes, encompassing individual differences in (1) amount of information that can be processed, which is assessed using memory span, (2) durability of information storage, or the retention of memory traces, and (3) efficiency of processing, or the speed of encoding and decoding information. The executive system encompasses components that are environmentally based and guide processes comprising problem solving. The executive system subsumes components such as (1) one's knowledge base, or declarative knowledge of facts; (2) control processes, which include strategies or heuristics to aid memory or problem solution; and (3) metacognition, which involves, among other things, knowing how problems should be solved and then monitoring progress toward problem solution and evaluating outcomes to ensure successful solution of the problem. Researchers using the information processing approach have paid little attention to converting theoretical insights into usable measures of intelligence.

Sternberg (1985, 1986, 1996) has offered several theories of human intelligence, theories that have been radically reformulated over time. The most recent incarnation is Sternberg's notion of successful intelligence. The three components of successful intelligence are (1) analytic abilities, which aid in defining problems, setting up solution strategies, and monitoring solutions and presumably include many of the dimensions outlined in the Horn-Cattell and Carroll models; (2) creative abilities, which involve generating new problem solving options and attempting to convince others of their worth; and (3) practical abilities, which subsume skills in ensuring that one can implement solutions and see that they are carried out. As with information processing approaches, at present no standardized batteries are available to assess constructs within Sternberg's triarchic theories.

The final theory discussed in this section is the theory of multiple intelligences, described by Gardner (1983). According to this theory, at least eight different types of intelligence can be identified: (1) linguistic intelligence, subsuming language and communication skills; (2) musical intelligence, involving individual differences in rhythm and pitch and skills in composing music; (3) logical-mathematical intelligence, including logical reasoning and number abilities; (4) spatial intelligence, or the ability to understand spatial relations; (5) bodily-kinesthetic intelligence, assessed by skills in dancing, acting, and athletics; (6) intrapersonal intelligence, or knowledge of one's self, feelings, and motives; (7) interpersonal intelligence, or skills in discerning the feelings, beliefs, and intentions of others; and (8) naturalist intelligence, involving seeing and understanding patterns in nature. Gardner has done little research to validate his theory on the types of intelligence. To the extent that evidence supports the notion of different intelligences, the evidence is consistent with the Horn-Cattell and Carroll theories. For example, Gardner's linguistic intelligence is most similar to Gc in the Horn-Cattell model. As a result, little empirical evidence is available that uniquely supports Gardner's theory. Moreover, no standardized measures of the constructs in this theory are available.

Summary

During the 20th century, theories of the structure of mental abilities have evolved from the two-factor theory of Spearman, which hypothesized only a single factor common to all tests of ability, to the more differentiated structure of the Horn-Cattell and Carroll models. In these models, the two most widely studied of the second-stratum factors are Gc and Gf. Gc, or crystallized intelligence, reflects stored cultural knowledge and corresponds closely with the verbal factor often reported in factor analyses of the Wechsler batteries. Gf, or fluid intelligence, is a dimension representing reasoning or thinking skills; the performance factor identified in factor analyses of the Wechsler batteries appears to be an amalgamation of Gf and Gv (or visualization skills).

Some movement has already taken place in structuring intelligence tests to acknowledge the utility of the Horn-Cattell and Carroll models. For example, the Stanford-Binet IV yields a composite IQ, but it was based on a theoretical model that included subareas for crystallized abilities (verbal reasoning and quantitative reasoning), fluid-analytic abilities (abstract/visual reasoning), and short-term memory. Furthermore, one battery—the Woodcock-Johnson—was explicitly designed to assess all second-stratum dimensions in the Horn-Cattell model. During the next decade, even greater alignment between intelligence tests (and the IQ scores derived from them) and the Horn-Cattell and Carroll models is likely. As a result, the future will almost certainly see greater reliance on part scores, such as IQ scores for Gc and Gf, in addition to the traditional composite IQ. That is, the traditional composite IQ may not be dropped, but greater emphasis will be placed on part scores than has been the case in the past. As this movement to part scores develops, it will most likely occur first for Gc and Gf, the most central of the second-stratum factors, and then extend to other second-stratum dimensions as they are determined to be useful for differential prediction.
