Field of Science

Wait -- Jonah Lehrer Wants Reading to be Harder?

Recently Jonah Lehrer, now at Wired, wrote an ode to books, titled The Future of Reading. Many people are sad to see the slow replacement of physical books by e-readers -- though probably not the people who have lugged 50 pounds of books in a backpack across Siberia, but that's a different story. The take-home message appears two-thirds of the way down:
So here’s my wish for e-readers. I’d love them to include a feature that allows us to undo their ease, to make the act of reading just a little bit more difficult. Perhaps we need to alter the fonts, or reduce the contrast, or invert the monochrome color scheme. Our eyes will need to struggle, and we’ll certainly read slower, but that’s the point: Only then will we process the text a little less unconsciously, with less reliance on the ventral pathway. We won’t just scan the words – we will contemplate their meaning.
As someone whose to-read list grows several times faster than I can actually read, I've never wished to read more slowly. But Lehrer is a science writer, and (he thinks) there's more to this argument than mere aesthetics. As far as I can tell, though, it's based on a profound misunderstanding of the science. Since he manages to get through the entire post without citing a single specific experiment, it's hard to tell for sure, but here's what I've managed to piece together.

Reading Research

Here's Lehrer:
Let me explain. Stanislas Dehaene, a neuroscientist at the College de France in Paris, has helped illuminate the neural anatomy of reading. It turns out that the literate brain contains two distinct pathways for making sense of words, which are activated in different contexts. One pathway is known as the ventral route, and it’s direct and efficient, accounting for the vast majority of our reading. The process goes like this: We see a group of letters, convert those letters into a word, and then directly grasp the word’s semantic meaning. According to Dehaene, this ventral pathway is turned on by “routinized, familiar passages” of prose, and relies on a bit of cortex known as visual word form area (VWFA).

So far, so good. Dehaene is a brilliant researcher who has had an enormous effect on several areas of cognition (I'm more familiar with his work on number). I'm a bit out-of-date on reading research (and remember Lehrer doesn't actually cite anything to back up his argument), but this looks like an updated version of the old distinction between whole-word reading and real-time composition. That is, it goes without saying that you must "sound out" novel words that you've never encountered before, such as gafrumpenznout. However, it seems that as you become more familiar with a particular word (maybe Gafrumpenznout is your last name), you can recognize the word quickly without sounding it out.

Here's the abstract from a relevant 2008 Dehaene group paper:
Fast, parallel word recognition, in expert readers, relies on sectors of the left ventral occipito-temporal pathway collectively known as the visual word form area. This expertise is thought to arise from perceptual learning mechanisms that extract informative features from the input strings. The perceptual expertise hypothesis leads to two predictions: (1) parallel word recognition, based on the ventral visual system, should be limited to words displayed in a familiar format (foveal horizontal words with normally spaced letters); (2) words displayed in formats outside this field of expertise should be read serially, under supervision of dorsal parietal attention systems. We presented adult readers with words that were progressively degraded in three different ways (word rotation, letter spacing, and displacement to the visual periphery).
When the words were degraded in these various ways, participants had a harder time reading and recruited different parts of the brain. A (slightly) more general-public-friendly version of this story appears in this earlier paper. This appears to be the work Lehrer is referring to, since he says that Dehaene, in experiments, activates the dorsal pathways "in a variety of ways, such as rotating the letters or filling the prose with errant punctuation."

And the Vision Science Behind It

This work makes a lot of sense, given what we know about vision. Visual objects -- such as letters -- "crowd" each other. In other words, when there are several that are close together, it's hard to see any of them. This effect is worse in peripheral vision. Therefore, to see all the letters in a long-ish word, you may need to fixate on multiple parts of the word.

However, orthography is heavily redundant. One good demonstration of this is rmvng ll th vwls frm sntnc. You can still read with some of the letters missing (and of course some languages, like Hebrew, never print vowels). Moreover, sentence context can help you guess what a particular word is. So if you're reading a familiar word in a familiar context, you may not need to see all the letters well in order to identify it. The less certain you are of what the word is, the more carefully you'll have to look at it.
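The vowel-stripping demonstration is easy to reproduce. Here's a quick sketch (my own toy code, not from any of the papers discussed) that turns a sentence into its "rmvng ll th vwls" form:

```python
import re

def remove_vowels(text):
    # Strip every vowel, then collapse the gaps left by
    # words (like "a") that vanish entirely.
    stripped = re.sub(r"[aeiou]", "", text, flags=re.IGNORECASE)
    return " ".join(stripped.split())

print(remove_vowels("removing all the vowels from a sentence"))
# -> rmvng ll th vwls frm sntnc
```

That most of the output is still readable is exactly the redundancy point: the consonant skeleton plus sentence context carries enough information to identify familiar words.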

The Error

So far, this research appears to be about visual identification of familiar objects. Lehrer makes a big leap, though:

When you are reading a straightforward sentence, or a paragraph full of tropes and cliches, you’re almost certainly relying on this ventral neural highway. As a result, the act of reading seems effortless and easy. We don’t have to think about the words on the page ...  Dehaene’s research demonstrates that even fluent adults are still forced to occasionally make sense of texts. We’re suddenly conscious of the words on the page; the automatic act has lost its automaticity.
This suggests that the act of reading observes a gradient of awareness. Familiar sentences printed in Helvetica and rendered on lucid e-ink screens are read quickly and effortlessly. Meanwhile, unusual sentences with complex clauses and smudged ink tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra work – the slight cognitive frisson of having to decipher the words – wakes us up.
It's based on this that he argues that e-readers should make it harder to read, because then we'd pay more attention to what we're reading. The problem is that he seems to have confused the effort expended in recognizing the visual form of a word -- the focus of Dehaene's work -- with effort expended in interpreting the meaning of the sentence. Moreover, he seems to think that the harder it is to understand something, the more we'll understand it -- which seems backwards to me. Now it is true that the more deeply we process something the better we remember it, but it's not clear that making something hard to see necessarily means we process it more deeply. In any case, we'd want some evidence that this is so, which Lehrer doesn't cite.

Which brings me back to citation. Dehaene did just publish a book on reading, which I haven't read because it's (a) long, and (b) not available on the Internet. Maybe Dehaene makes the claim that Lehrer is attributing to him in that book. Maybe there's even evidence to back that claim up. As far as I can tell, that work wasn't done by Dehaene (as Lehrer implies) since I can't find it on Dehaene's website. Though maybe it's there under a non-obvious title (Dehaene publishes a lot!). This would be solved if Lehrer would cite his sources.

Caveat

I like Lehrer's writing, and I've enjoyed the few interactions I've had with him. I think occasional (frequent?) confusion is a necessary hazard of being a science writer. I have only a very small number of topics I feel I understand well enough to write about them competently. Lehrer, by profession, must write about a very wide range of topics, and it's not humanly possible to understand many of them very well.


________________
Dehaene, S., Cohen, L., Sigman, M., & Vinckier, F. (2005). The neural code for written words: a proposal. Trends in Cognitive Sciences, 9(7), 335-341. DOI: 10.1016/j.tics.2005.05.004

Cohen, L., Dehaene, S., Vinckier, F., Jobert, A., & Montavont, A. (2008). Reading normal and degraded words: contribution of the dorsal and ventral visual pathways. NeuroImage, 40(1), 353-66. PMID: 18182174

Photos: margolove, kms!

5 comments:

Edward said...

Back in the day I took to heart a critique of my writing--"tortured"--and I've been fighting it ever since. Perhaps I should instead embrace my weaknesses as a writer, as they ultimately force my readers to regard my ideas as something other than cliché.

Ed Yong said...

Also in fairness, Lehrer said on Twitter that he wasn't entirely convinced by his own argument and welcomed counter-opinions.

Melodye said...

If Jonah wants reading to be harder, why doesn't he write less predictable sentences? (Just kiddin'...)

But actually, what interested me about this was your comment that he doesn't cite work specifically. Do you think that should be a general requirement of science writers?

I'm half of the mind that it should be. Obviously it's annoying in-text, but he could include a reference list for interested readers. That would allow readers to actually engage with the specific research he's talking about, which opens up more of a productive conversation (in my mind).

GamesWithWords said...

@Ed Yong: In fairness, I shouldn't have to follow Lehrer on Twitter to know he doesn't believe his own argument! But I'm glad to hear that he was skeptical.

@Melodye: I think it's nice to cite at least some of what you're writing about, especially if it's not common knowledge. If you're linking, then it doesn't really interrupt the text. In this particular case, it was frustrating because I was pretty sure he was mis-citing those studies, but without knowing which studies he was citing, I couldn't really check. So for all I know he's going to whip out some paper in which Dehaene actually makes the same argument and has data to back it up.

I admit I don't always link as comprehensively as I should, either, but I usually try to cite at least *something*.

Livia said...

Hmm, yeah. I live in this part of the research-sphere -- I'm writing my dissertation on the VWFA. I haven't read Dehaene's book, but I do believe that the direct and indirect pathways have less to do with sentence-level complexity than with word-level complexity. In other words, the number of cliches, the complexity of the clauses, etc., doesn't determine the direct or indirect pathway -- that's a whole level above what we're usually thinking about with the two pathways. It's more about how frequent the word itself is. We recognize "the" through the direct pathway because it's so common, but not "twitterific."