Field of Science

Showing posts with label reading. Show all posts

So maybe reading *should* be harder

Some weeks back I chided Jonah Lehrer for his assertion that he'd
love [e-readers] to include a feature that allows us to undo their ease, to make the act of reading just a little bit more difficult. Perhaps we need to alter the fonts, or reduce the contrast, or invert the monochrome color scheme. Our eyes will need to struggle, and we’ll certainly read slower, but that’s the point: Only then will we process the text a little less unconsciously, with less reliance on the ventral pathway. We won’t just scan the words – we will contemplate their meaning.
This sounded like a bunch of neuro-babble to me, partly because the research he cited seemed to be about something else entirely.

Obviously, the ventral pathway is the problem.

Spoke too soon


To the rescue come Diemand-Yauman, Oppenheimer & Vaughan, who just published a new paper in my favorite journal, Cognition. The abstract says it all:
Previous research has shown that disfluency -- the subjective experience of difficulty associated with cognitive operations -- leads to deeper processing. Two studies explore the extent to which this deeper processing engendered by disfluency interventions can lead to improved memory performance. Study 1 found that information in hard-to-read fonts was better remembered than easier to read information in a controlled laboratory setting. Study 2 extended this finding to high school classrooms. The results suggest that superficial changes to learning materials could yield significant improvements in educational outcomes.
The first experiment involved remembering 21 pieces of information over a 15-minute interval, which, while promising, has its limitations. Here are the authors:
There are a number of reasons why this result might not generalize to actual classroom environments. First, while the effects persisted for 15 min, the time between learning and testing is typically much longer in school settings. Moreover, there are a large number of other substantive differences between the lab and actual classrooms, including the nature of materials, the learning strategies adopted, and the presence of distractions in the environment... Another concern is that because disfluent reading is, by definition, perceived as more difficult, less motivated students may become frustrated. While paid laboratory participants are willing to persist in the face of challenging fonts for 90 s, the increase in perceived difficulty may provide motivational barriers for actual students.
Or it could just make the students bored.

In a second, truly heroic study, the researchers talked a bunch of teachers at a public high school into sending them all their classroom worksheets and PowerPoint slides. The researchers recreated two versions of these materials: one in an easy-to-read font and one in a difficult-to-read font. Each of the teachers taught at least two sections of the same course, so they were able to use one set of materials with one group of students and the other set with another group. The classes included English, Physics, Chemistry and History.

Once again, the researchers found better learning with the hard-to-read fonts.

Notes and Caveats


The researchers seem open to a number of possibilities as to why hard-to-read fonts would lead to better learning:
It is worth noting that it is not the difficulty, per se, that leads to improvements in learning but rather the fact that the intervention engages processes that support learning.
Moreover, unlike Lehrer, they don't recommend making everything harder to read, learn or do:
Not all difficulties are desirable, and presumably interventions that engage more elaborative processes without also increasing difficulty would be even more effective at improving educational outcomes.
There is one obvious concern one might have about their Experiment 2: the teachers were blind to the hypothesis, but not to the condition. The authors attempt to wave this away by asserting that the teachers would likely make the wrong hypothesis (that learning should be worse when the font is hard to read), and thus any "experimenter" bias would be in the wrong direction. However, we have no way of knowing whether the teachers attempted to compensate for the hard-to-read materials by explaining things better. In fact, the authors had no way of testing whether the teachers behaved similarly in both conditions.

That's not to say I think it was a bad study or that it shouldn't have been published. I think it's a fantastic study. I don't know how they roped those teachers into the project, but this is the kind of go-get-it science people should be practicing. The study isn't perfect or conclusive, but no studies are. The goal is simply to have results that are clear enough that they generate more research and new hypotheses.


-------
Connor Diemand-Yauman, Daniel M. Oppenheimer, and Erikka B. Vaughan (2011). Fortune favors the bold (and the italicized): Effects of disfluency on educational outcomes. Cognition, 118, 111-115. doi:10.1016/j.cognition.2010.09.012

Wait -- Jonah Lehrer Wants Reading to be Harder?

Recently Jonah Lehrer, now at Wired, wrote an ode to books, titled The Future of Reading. Many people are sad to see the slow replacement of physical books by e-readers -- though probably not the people who have lugged 50 pounds of books in a backpack across Siberia, but that's a different story. The take-home message appears 2/3 of the way down:
So here’s my wish for e-readers. I’d love them to include a feature that allows us to undo their ease, to make the act of reading just a little bit more difficult. Perhaps we need to alter the fonts, or reduce the contrast, or invert the monochrome color scheme. Our eyes will need to struggle, and we’ll certainly read slower, but that’s the point: Only then will we process the text a little less unconsciously, with less reliance on the ventral pathway. We won’t just scan the words – we will contemplate their meaning.
As someone whose to-read list grows several times faster than my ability to get through it, I've never wished to read more slowly. But Lehrer is a science writer, and (he thinks) there's more to this argument than just aesthetics. As far as I can tell, though, it's based on a profound misunderstanding of the science. Since he manages to get through the entire post without ever citing a specific experiment, it's hard to tell for sure, but here's what I've managed to piece together.

Reading Research

Here's Lehrer:
Let me explain. Stanislas Dehaene, a neuroscientist at the College de France in Paris, has helped illuminate the neural anatomy of reading. It turns out that the literate brain contains two distinct pathways for making sense of words, which are activated in different contexts. One pathway is known as the ventral route, and it’s direct and efficient, accounting for the vast majority of our reading. The process goes like this: We see a group of letters, convert those letters into a word, and then directly grasp the word’s semantic meaning. According to Dehaene, this ventral pathway is turned on by “routinized, familiar passages” of prose, and relies on a bit of cortex known as visual word form area (VWFA).

So far, so good. Dehaene is a brilliant researcher who has had an enormous effect on several areas of cognition (I'm more familiar with his work on number). I'm a bit out-of-date on reading research (and remember Lehrer doesn't actually cite anything to back up his argument), but this looks like an updated version of the old distinction between whole-word reading and real-time composition. That is, it goes without saying that you must "sound out" novel words that you've never encountered before, such as gafrumpenznout. However, it seems that as you become more familiar with a particular word (maybe Gafrumpenznout is your last name), you can recognize the word quickly without sounding it out.

Here's the abstract from a relevant 2008 Dehaene group paper:
Fast, parallel word recognition, in expert readers, relies on sectors of the left ventral occipito-temporal pathway collectively known as the visual word form area. This expertise is thought to arise from perceptual learning mechanisms that extract informative features from the input strings. The perceptual expertise hypothesis leads to two predictions: (1) parallel word recognition, based on the ventral visual system, should be limited to words displayed in a familiar format (foveal horizontal words with normally spaced letters); (2) words displayed in formats outside this field of expertise should be read serially, under supervision of dorsal parietal attention systems. We presented adult readers with words that were progressively degraded in three different ways (word rotation, letter spacing, and displacement to the visual periphery).
When the words were degraded in these various ways, participants had a harder time reading and recruited different parts of the brain. A (slightly) more general public-friendly version of this story appears in this earlier paper. This appears to be the paper that Lehrer is referring to, since he says that Dehaene, in experiments, activates the dorsal pathways "in a variety of ways, such as rotating the letters or filling the prose with errant punctuation."

And the Vision Science Behind It

This work makes a lot of sense, given what we know about vision. Visual objects -- such as letters -- "crowd" each other. In other words, when there are several that are close together, it's hard to see any of them. This effect is worse in peripheral vision. Therefore, to see all the letters in a long-ish word, you may need to fixate on multiple parts of the word.

However, orthography is heavily redundant. One good demonstration of this is rmvng ll th vwls frm sntnc. You can still read with some of the letters missing (and of course some languages, like Hebrew, usually don't print vowels). Moreover, sentence context can help you guess what a particular word is. So if you're reading a familiar word in a familiar context, you may not need to see all the letters well in order to identify it. The less certain you are of what the word is, the more carefully you'll have to look at it.
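The vowel-removal demonstration above is easy to play with yourself. Here's a minimal sketch in Python (the function name and sample sentence are my own, purely for illustration):

```python
def remove_vowels(text: str) -> str:
    """Drop the letters a, e, i, o, u (either case), keeping everything else."""
    return "".join(ch for ch in text if ch.lower() not in "aeiou")

# English text stays surprisingly readable with the vowels gone:
print(remove_vowels("removing all the vowels"))  # prints: rmvng ll th vwls
```

Try it on any sentence: the longer and more familiar the words, the easier the stripped version is to read, which is exactly the redundancy point.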

The Error

So far, this research appears to be about visual identification of familiar objects. Lehrer makes a big leap, though:

When you are reading a straightforward sentence, or a paragraph full of tropes and cliches, you’re almost certainly relying on this ventral neural highway. As a result, the act of reading seems effortless and easy. We don’t have to think about the words on the page ...  Dehaene’s research demonstrates that even fluent adults are still forced to occasionally make sense of texts. We’re suddenly conscious of the words on the page; the automatic act has lost its automaticity.
This suggests that the act of reading observes a gradient of awareness. Familiar sentences printed in Helvetica and rendered on lucid e-ink screens are read quickly and effortlessly. Meanwhile, unusual sentences with complex clauses and smudged ink tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra work – the slight cognitive frisson of having to decipher the words – wakes us up.
It's based on this that he argues that e-readers should make it harder to read, because then we'd pay more attention to what we're reading. The problem is that he seems to have confused the effort expended in recognizing the visual form of a word -- the focus of Dehaene's work -- with effort expended in interpreting the meaning of the sentence. Moreover, he seems to think that the harder it is to understand something, the more we'll understand it -- which seems backwards to me. Now it is true that the more deeply we process something the better we remember it, but it's not clear that making something hard to see necessarily means we process it more deeply. In any case, we'd want some evidence that this is so, which Lehrer doesn't cite.

Which brings me back to citation. Dehaene did just publish a book on reading, which I haven't read because it's (a) long, and (b) not available on the Internet. Maybe Dehaene makes the claim that Lehrer is attributing to him in that book. Maybe there's even evidence to back that claim up. As far as I can tell, that work wasn't done by Dehaene (as Lehrer implies) since I can't find it on Dehaene's website. Though maybe it's there under a non-obvious title (Dehaene publishes a lot!). This would be solved if Lehrer would cite his sources.

Caveat


I like Lehrer's writing, and I've enjoyed the few interactions I've had with him. I think occasional (frequent?) confusion is a necessary hazard of being a science writer. I have only a very small number of topics I feel I understand well enough to write about them competently. Lehrer, by profession, must write about a very wide range of topics, and it's not humanly possible to understand many of them very well.


________________
Dehaene, S., Cohen, L., Sigman, M., & Vinckier, F. (2005). The neural code for written words: a proposal. Trends in Cognitive Sciences, 9(7), 335-341. doi:10.1016/j.tics.2005.05.004

Cohen, L., Dehaene, S., Vinckier, F., Jobert, A., & Montavont, A. (2008). Reading normal and degraded words: contribution of the dorsal and ventral visual pathways. NeuroImage, 40(1), 353-366. PMID: 18182174

Photos: margolove, kms !

Video games, rotted brains, and book reviews

Jonah Lehrer has an extended discussion of his review of The Shallows, a new book claiming that the Internet is bad for our brains. Lehrer is skeptical, pointing out that worries about new technology are as old as time (Socrates thought books would make people stupid, too). I am skeptical as well, but I'm also skeptical of (parts of) Lehrer's arguments. The crux of the argument is as follows:
I think it's far too soon to be drawing firm conclusions about the negative effects of the web. Furthermore, as I note in the review, the majority of experiments that have looked directly at the effects of the internet, video games and online social networking have actually found significant cognitive benefits.
That, so far as it goes, is reasonable. My objection is to some of the evidence given:
A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a "book-like text." Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn't making us stupid -- it's exercising the very mental muscles that make us smarter.
This cuts several ways. Extra activation of a region in an fMRI experiment is interpreted in different ways by different researchers. It could be evidence of extra specialization ... or evidence that the brain network in question is damaged and so needs to work extra hard. Lehrer is at least partially aware of this problem:
Now these studies are all imperfect and provisional. (For one thing, it's not easy to play with Google while lying still in a brain scanner.)
This is the line I have a particular issue with. If the question is whether extra Internet use makes people stupid, why on Earth would anyone need to use a $600/hr MRI machine to answer that question? We have loads of cheap psychometric tests of cognition. All methodologies have their place, and a behavioral question is most easily answered with behavioral methods. MRI is far more limited.

Lehrer's discussion of the 2009 study above underscores this point: the interpretation of the brain images rests on our understanding of which behaviors the dorsolateral prefrontal cortex has been associated with in other studies. The logic is: A correlates with B, and B correlates with C, thus A correlates with C. This is, as any logician will tell you, an invalid inference. When you add that MRI can cost ten thousand dollars for a single experiment, it's a very expensive middleman!

Which isn't to say that MRI is useless or such studies are a waste of time. MRI is particularly helpful in understanding how the brain gives rise to various types of behavior, and it's sometimes helpful for analyzing behavior that we can't directly see. Neither applies here. If the Internet makes us dumb in a way only detectable with super-modern equipment, I think we can breathe easy and ignore the problem. What we care about is whether people in fact are more easily distracted, have worse memory, etc. That doesn't require any special technology -- even Socrates could run that experiment.



****
Lehrer does discuss a number of good behavioral experiments. Despite my peevishness over the "Google in the scanner" line, the review is more than worth reading.

How does the brain read?

Reading is an important skill, so it's not surprising it gets a lot of attention from researchers. Reading is an ancient skill -- at least in some parts of the world -- but not so old that we don't know when it was invented (as opposed to, for instance, basic arithmetic). And, unlike language, it appeared recently enough in most of the world that it's unlikely that evolution has had time to select for reading skill ... which would explain the high prevalence of dyslexia.

Some decades ago, there was a considerable amount of debate over whether reading was phonologically based -- that is, "sounding out" is crucial (CAT -> /k/ + /æ/ + /t/ -> /kæt/) -- or visual-recognition based -- that is, you simply recognize each word as a whole form (CAT -> /kæt/). People who favored the former theory emphasized phonics-based reading instruction, while the latter theory resulted in "whole language" training.

At least from where I sit, this debate has been largely resolved in favor of phonics. This isn't to say that skilled readers don't recognize some high-frequency words as wholes, but it does mean that sounding out words is crucial, at least in learning to read. One important piece of evidence is that "phonological awareness" -- the ability to figure out that CAT has 3 sounds while COLON has 5, or that DOG and BOG rhyme -- is just about the best predictor of reading success. That is, preschoolers who are at the bottom of the pack in terms of phonological awareness tend to be at the bottom of the pack later on, when learning to read.

At least, that is the story for writing systems like English that are alphabetic. There has been some question as to the role of phonology in learning to read character-based systems like Chinese. Two years ago, a group including Li Hai Tan of Hong Kong University presented evidence that in fact phonological awareness may not be particularly important in learning to read Chinese.

I have been trying to test one aspect of their theory for some time. Not having collaborators in China or Taiwan, I have to recruit my Chinese-speakers here in Cambridge, which is harder than you might think. The first experiment I ran took nearly six months, most of which was spent trying to recruit participants, and it was ultimately inconclusive. Last spring I piloted a Web-based version of the experiment, thinking that I might have more luck finding Chinese participants through the Internet. However, that experiment failed. I think it was too complicated and participants didn't understand what to do.

I have spent the last few months thinking the problem through, and now I have a new Web-based study. I am trying it in English first, and if it works well enough, I will write a Chinese version of the experiment. If you are interested, please try it out here.