Near the dawn of the modern study of the mind, the great psychological pioneer Charles Spearman noticed that people who are good at one kind of mental activity tend to be good at most other mental activities. Thus the notion of g (for "general intelligence") was born: the idea that there is some underlying factor that determines -- all else equal -- how good someone is at any particular intelligent task. This of course fits folk psychology quite well: g is just another word for "smarts".
The whole idea has always been controversial, and many people have argued that there is more than one kind of smarts out there (verbal vs. numeric, logical vs. creative, etc.). Enter a recent paper by Hampshire and colleagues (Hampshire, Highfield, Parkin & Owen, 2012), which tries to bring both neuroimaging and large-scale Web-based testing to bear on the question.
In the neuroimaging component, they asked sixteen participants to carry out twelve difficult cognitive tasks while their brains were scanned, and applied principal components analysis (PCA) to the results. PCA is a sophisticated statistical method for grouping things.
A side note on PCA
If you already know what PCA is, skip to the next section. Basically, PCA is a very sophisticated way of sorting things. Imagine you are sorting dogs. The simplest thing you could do is have a list of dog breeds, go through each dog, and sort it according to its breed.
What if you didn't already have a dog breed manual? Well, German shepherds are more similar to one another than any given German shepherd is to a poodle. So by looking through the range of dogs you see, you could probably find a reasonable way of sorting them, "rediscovering" the various dog breeds in the process. (In more difficult cases, there are algorithms you could use to help out.)
That works great if you have purebreds. What if you have mutts? This is where PCA comes in. PCA assumes that there are some number of breeds and that each dog you see is a mixture of those breeds. So a given dog may be 25% German Shepherd, 25% border collie, and 50% poodle. PCA tries to "learn" how many breeds there are, the characteristics of those breeds, and the mixture of breeds that makes up each dog -- all at the same time. It's a very powerful technique (though not without its flaws).
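For the programmatically inclined, here is a minimal sketch of that idea in Python. Everything here is invented for illustration -- I generate fake "dogs" that really are mixtures of two underlying "breeds", then use numpy's SVD to run PCA and recover mixture coordinates for each dog:

```python
import numpy as np

# Toy data (invented): each "dog" is a mixture of 2 underlying "breeds",
# measured on 5 traits, plus a little noise.
rng = np.random.default_rng(0)
breeds = rng.normal(size=(2, 5))           # 2 breed profiles over 5 traits
mix = rng.dirichlet([1.0, 1.0], size=8)    # how much of each breed per dog
dogs = mix @ breeds + 0.01 * rng.normal(size=(8, 5))

# PCA via the singular value decomposition of the mean-centered data
X = dogs - dogs.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

components = Vt[:2]        # recovered "breed-like" directions in trait space
scores = U[:, :2] * s[:2]  # each dog's coordinates along those directions
print(scores.shape)        # (8, 2): one pair of mixture coordinates per dog
```

(Strictly speaking, PCA recovers directions of maximal variance rather than the breeds themselves, but with data like these the first two components span the same space as the two breed profiles.)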
Neuroimaging intelligence
The analysis focused only on the "multiple demands" network, previously identified as being related to IQ and shown in red in part A of the figure below. PCA discovered two underlying components that accounted for about 90% of the variance in the brain scans across the twelve tasks. One was particularly important for working memory tasks, so the authors called it MDwm (see part B of the figure below); it involved mostly the IFO, SFS and ventral ACC/preSMA (see part A for locations). The other was mostly involved in various reasoning tasks and involved more the IFS, IPC and dorsal ACC/preSMA.
Notice that all tasks involved both factors, and some tasks (like the paired associates memory task) involved a roughly equal portion of each.
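To make the "percent of variance explained" idea concrete, here is a toy simulation in Python (all numbers invented, not the paper's data): I generate fake scan data from exactly two underlying components and confirm that the first two principal components soak up most of the variance.

```python
import numpy as np

# Invented data: 16 "scans" x 12 tasks, generated from 2 underlying
# components plus noise, loosely mimicking the structure of the study.
rng = np.random.default_rng(1)
loadings = rng.normal(size=(2, 12))         # 2 components across 12 tasks
subject_scores = rng.normal(size=(16, 2))   # each subject's component mix
data = subject_scores @ loadings + 0.3 * rng.normal(size=(16, 12))

# Variance explained by each principal component, from singular values
X = data - data.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
var_explained = s**2 / np.sum(s**2)
print(var_explained[:2].sum())  # first two PCs capture most of the variance
```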
Sixteen subjects isn't very many
The authors put versions of those same twelve tasks on the Internet. They were able to get data from 44,600 people, which makes it one of the larger Internet studies I've seen. The authors then applied PCA to those data. This time they got three components, two of which were quite similar to the two components found in the neuroimaging study (they correlated at around r=.7, which is a very strong correlation in psychology). The third component seemed to be particularly involved in tasks requiring language. Most likely that did not show up in the neuroimaging study because the neuroimaging study focused on the "multiple demands" network, whereas language primarily involves other parts of the brain.
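For anyone who wants to see what that kind of comparison looks like mechanically, here is the correlation computation in Python. The loading values below are made up for illustration; they are not the paper's numbers.

```python
import numpy as np

# Invented loadings for "the same" component across 12 tasks, as
# estimated from the imaging study and from the web study.
imaging = np.array([.8, .7, .6, .5, .6, .4, .3, .2, .3, .1, .2, .1])
web     = np.array([.7, .8, .5, .6, .4, .5, .2, .3, .1, .3, .1, .2])

# Pearson correlation between the two sets of loadings
r = np.corrcoef(imaging, web)[0, 1]
print(round(r, 2))
```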
The factors dissociated in other ways as well. Whereas working memory and reasoning abilities start to decline around the time people reach the legal drinking age in the US (coincidence?), verbal skills remain largely undiminished until around age 50. People who suffer from anxiety had lower-than-average working memory abilities but average reasoning and verbal abilities. Several other demographic factors similarly had differing effects on working memory, reasoning, and verbal abilities.
Conclusions
The data in this paper are very pretty, and it is a particularly nice demonstration of converging behavioral and neuropsychological methods. I am curious what its impact will be. The authors are clearly arguing against the view that there is some unitary notion of IQ/g. It occurred to me as I wrote this that while I've read many papers lately discussing the different components of IQ, I haven't read anything recent that endorses the idea of a unitary g. I wonder if there is anyone who does, and, if so, how they account for this kind of data. If I come across anything, I will post it here.
------
Hampshire, A., Highfield, R., Parkin, B., & Owen, A. (2012). Fractionating human intelligence. Neuron, 76(6), 1225-1237. DOI: 10.1016/j.neuron.2012.06.022
1 comment:
This paper is based on a methodological misunderstanding. They did find general factors in both their brain scan and online IQ data. But then they rotated the factor solutions so that the g variance was dispersed to orthogonal factors. They regard this as evidence for their multiple intelligence theory, even though their results follow necessarily from the statistical manipulations they use.
0.7 is not a high correlation in this context. On the contrary, it means that the factors cannot be the same. A congruence coefficient of >0.95 would suggest that the factors are the same, but even that is not a stringent test of factor similarity. Their underlying data are of poor quality (brain scan on 16 people, internet convenience sample) so even if their theoretical assumptions were correct, they'd be unlikely to get very high correlations.
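For concreteness, the congruence coefficient is just an uncentered cosine similarity between two factor loading vectors. A quick Python sketch, with made-up loadings (not values from the paper):

```python
import numpy as np

def congruence(x, y):
    """Tucker's congruence coefficient: sum(x*y) / sqrt(sum(x^2)*sum(y^2)).
    Unlike Pearson's r, the loadings are not mean-centered first."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Invented loading vectors for two factors, for illustration only
a = np.array([0.8, 0.7, 0.6, 0.2, 0.1])
b = np.array([0.7, 0.8, 0.5, 0.3, 0.2])
print(congruence(a, b))  # high, but below the ~0.95 bar for factor identity
```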
This paper will not have an impact on psychometrics, because the authors do not at all understand the issues involved. For example, the reason why the g factor is central to investigations of cognitive ability is that it accounts for so much of the variance in all cognitive tasks, and even more of the test's validity. For example, see this recent study of the WJ-III, a test battery that was designed to assess a wide variety of abilities (there are 42 subtests). The authors show that the g factor dwarfs the variance attributed to other factors, and that therefore the interpretation of WJ-III test results beyond the global score (=g) is a dubious practice. Whether or not one subscribes to a realist interpretation of g is ultimately irrelevant from the perspective of practical intelligence testing, because g is the only level at which you can reliably interpret test results. The significance of g goes far beyond the fact that it is ubiquitous in factor analyses of cognitive tests.