Field of Science

Calling all 12-year-olds

I've been analyzing data from the Memory Test. The response to that experiment has been fantastic, so I'm able to look at performance based on age, from about 14 years old to about 84 years old. Interestingly, by 14 years old, people are performing at adult levels. I have a few kids in the 10-13 range, but not quite enough. It would be nice to know at what age people hit adult competency.

So...if you or someone you know is in that age range, I'd like a few more participants in the near future. If I get enough participants, I should actually be able to put up a description of the results relatively quickly in this case.

The Value of Experiments

I have been reading Heim & Kratzer's Semantics in Generative Grammar, which is an excellent introduction to formal semantics. On the whole, I really liked the book, until I got to an example sentence in the 8th chapter:

(1) Every man placed a screen in front of him.

The authors claimed that this sentence was synonymous with

(2) Every man placed a screen in front of himself.

I thought this was absurd, because to me the first sentence must mean that there is some man (let's call him 'Jim'), and all the other men put a screen in front of Jim. It just can't have the meaning of (2). I have a great deal of respect for the authors, but my immediate reaction was that this must be one of those cases in which linguists unconsciously adapt their judgments to their theory (it was important for the theory Heim & Kratzer were developing that (1) mean the same as (2)).

Just to be sure, I walked into the office down the hall and took a poll of the seven people in it, none of whom study pronouns or are particularly familiar with the literature. Two of them agreed with me, but five agreed with Heim & Kratzer. So this may be a dialectal difference.

Now I feel bad about having doubted H&K, but in any case it is a good lesson about studying language: don't trust your own intuitions. Get a second opinion.

Ant Navigation SNL-style

If you appreciated Saturday Night Live's Mother Lover, then this ode to ant navigation, produced by a student in Dave Barner's Developmental Psychology course at UCSD, should be right up your alley.

OK, the videos have nothing to do with each other, but both are worth watching.

Lean Times come to the World's Richest University

Academia is traditionally a good place to wait out recessions. Not so much this year. Harvard has posted a list of cost-cutting measures. Notice in particular that the number of PhD students being admitted has been reduced (no word about master's or professional school students...but then master's and professional school students pay tuition).

Copyright and Science

I imagine the academic publishing industry is either hurting from or worried about digital theft, just like all other publishers. But some of the pressure is coming from other quarters.

As I've discussed on this blog before, academic publishing is a strange industry. Researchers need to publicize their research. Publishers need research to publish. So researchers give their work for free to publishers on the understanding that the publishers will publicize it. The publishers print and distribute the work and retain all the money.

Fundamentally, publishers need researchers, since there is no other source of research. Researchers, on the other hand, don't need publishers; they need distribution. And with the advent of the Internet, it's no longer so clear that expensive printed journals are the best method.

I'm thinking about this as I listen to a talk by Kenneth Crews called "Protecting your scholarship: copyrights, publication agreements, and open access." He is currently suggesting that we negotiate our publication agreements with journals. For instance, he argued that academic authors should not transfer their copyrights to publishers, but rather license the copyright to them. This way, the authors retain ownership of the work, which would eliminate strange transactions in which authors have to get permission from the publisher to quote from their own work in a future book.

This would seem to suggest that we have some bargaining power. And, as open-access options become more prevalent, it seems that we should. Has anyone reading actually negotiated a publication agreement?

The problem with studying pragmatics

(live-blogging Xprag)

In his introduction, Kai von Fintel tells an anecdote that I think sums up why it is sometimes difficult to explain what it is we do. Some time ago, Emmon Bach wrote a book for Cambridge University Press on if/then conditionals. The copy-editor sent it back, replacing every use of "if and only if" with a simple "if," saying the "and only if" was redundant.

As it turns out, although people often interpret "if" as meaning "if and only if," that's simply not what the word means, despite our intuitions (most people interpret "if you mow the lawn, I'll give you $5" as meaning "if and only if you mow the lawn...").
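
For the skeptical, the difference is easy to see in a truth table. Here's a minimal Python sketch of the lawn example (the variable names are just my own labels):

    # Bare "if" (material conditional) vs. "if and only if" (biconditional).
    # They come apart in exactly one case: you never mow the lawn, but I
    # pay you $5 anyway. The bare "if" statement is still true there;
    # the "if and only if" statement is false.
    for mow in (True, False):
        for pay in (True, False):
            if_then = (not mow) or pay   # "if you mow, I pay"
            iff = (mow == pay)           # "I pay if and only if you mow"
            print(f"mow={mow!s:<6} pay={pay!s:<6} if={if_then!s:<6} iff={iff!s:<6}")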

Part of the mystery, then, is explaining why our intuitions are off. In the meantime, though, explaining what I do sometimes comes across as trying to prove the sky is blue.

Summa smack-down

(live-blogging Xprag)

Dueling papers this morning on the word "summa" as compared with "some of." Judith Degen presented data suggesting "summa the cookies" is more likely to be strengthened to "some but not all" than is "some of the cookies." Huang followed with data demonstrating that there is no such difference. Everybody seems to agree this has something to do with the way the two studies were designed, but not on which way is better.

I am more convinced by Huang's study, but as she is (1) a lab-mate, (2) a friend, and (3) sitting next to me as I write this, I'm probably not a neutral judge.

Speaker uncertainty

Arjen Zondervan just presented a fascinating paper with the acknowledged long title "Effects of contextual manipulation on hearers' assumptions about speaker expertise, exhaustivity & real-time processing of the scalar implicature of or." He presented two thought-provoking experiments on exhaustivity & speaker expertise, but of primary interest to me was Experiment 3.

An important debate in the field has centered around whether scalar implicatures depend on context. A couple years ago, Richard Breheny & colleagues published a nice reading-time experiment whose results were consistent with scalar implicatures being computed in some contexts but not others. Roughly, they set up contexts along the following lines:

(1) Some of the consultants/ met the manager./ The rest/ did not manage/ to attend.
(2) The manager met/ some of the consultants./ The rest/ did not manage/ to attend.

Participants read the sentences one segment at a time (the '/' marks the boundaries between segments), pressing a key when they were ready for the next segment. For reasons that may or may not be clear, it was thought that there would be an implicature in the first sentence but not in the second, making "the rest" fairly unnatural in the second sentence; and indeed, subjects read "the rest" more slowly in (2) than in (1).
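
For readers unfamiliar with the method, here is a minimal sketch of the self-paced reading logic in Python. This is purely illustrative; real experiments use dedicated presentation software with millisecond-accurate timing.

    import time

    # One trial of segment-by-segment self-paced reading. The dependent
    # measure is how long the participant lingers on each segment,
    # especially the critical segment "The rest".
    segments = ["The manager met", "some of the consultants.",
                "The rest", "did not manage", "to attend."]

    reading_times = {}
    for segment in segments:
        start = time.time()
        input(segment + "   [press Enter for the next segment]")
        reading_times[segment] = time.time() - start

    print(reading_times)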

This was a nice demonstration and was, I think, the first study of scalar implicature to use an implicit measure rather than just asking participants what they think a sentence means, which has certain advantages. But there were a number of potential confounds in the stimuli in this and the two other experiments they ran. Zondervan fixed some of these confounds, re-ran the study, and got the same results.

I was interested because, in collaboration with Jesse Snedeker, I have also re-run the Breheny study and gotten that basic result. However, Zondervan and Breheny both also got longer reading times for the scalar term (e.g., 'some') in the condition where there is a scalar implicature. Both take this as evidence that calculating an implicature is an effortful process. In a series of similar experiments using my own stimuli, I just don't get that part of the result. I am fairly convinced this is due to differences in our stimuli, but we're still trying to figure out why and what that might mean.

That said, the effect that all three of us get is, I think, the more important part of the data, and it's nice to see another replication.

Default computation in language

(Blogging Xprag)

This morning begins with a series of talks on scalar implicature. This refers to the fact that "John ate some of the cookies" is usually interpreted as meaning "some but not all of the cookies." Trying to get this post written during a 5-minute Q&A prevents me from proving that "some" does not simply mean "some but not all," but in fact it is very clear that "some" means "some and possibly all." The question, then, is why and how people interpret such sentences as meaning something other than what they literally mean.

The most interesting moment for me so far has been a question by Julie Sedivy during the first Q & A. A popular theory of scalar implicature argues that the computation of "some = some-but-not-all" is a default computation. A number of experiments showing that such computation is slow have been taken by some as evidence against a default model. Sedivy pointed out that saying a computation is done by default doesn't require that the computation be fast, so evidence about speed of computation can't be taken as evidence for or against a default-computation theory.

Liveblogging Experimental Pragmatics

This week I am in Lyon, France, for the 3rd Experimental Pragmatics meeting. I had plans to live-blog CUNY and SRCD, neither of which quite happened, but I'm giving it a go for Day 2 of Xprag, and we'll see how it goes.

Pragmatics, roughly defined, is the study of language use. In practice, this tends to mean anything that isn't semantics, syntax, or phonology, though the division between semantics and pragmatics tends to shift as we learn more about the system. Since pragmatics has perhaps been studied more extensively in philosophy & linguistics, the name of the conference emphasizes that it focuses on experiments rather than just theory.

More to follow

Are Cyborgs Near?

Raymond Kurzweil, inventor and futurist, predicts that by the 2030s, it will be possible to upload your mind, experience virtual reality through brain implants, have experiences beamed into your mind, and communicate telepathically. Just to name a few predictions.

Kurzweil, as he himself recently noted on On The Media, has a track record of successful predictions over the past three decades. Past performance being the best predictor of future performance, this leads people to at least pay attention to his arguments. Nonetheless, as the mutual funds folk say, past performance is a predictor, not a guarantee.

Exponential Progress

I suspect that Kurzweil is right about many things, but I'm not sure about the telepathy. When I have heard him speak, his primary argument for his predictions is that telepathy only seems like a distant achievement because we think technology moves at a linear rate, when in fact knowledge and capability increase exponentially. This has clearly been the case in terms of computing speed.

Fair enough. The problem is that we aren't sure exactly how hard the problems we are facing are. There is a famous anecdote about an early pioneer in Artificial Intelligence assigning "vision" as a summer project. This was many decades ago, and as anyone in the field knows, machine vision is improving rapidly but still not that great.

A more contemporary example: A colleague I work with closely built a computational model of a relatively simple process in human language and tried to simulate some data. However, it took too long to run. When he looked at it more carefully, he realized that his program required more cycles to complete than there are atoms in the known universe. That is, merely waiting for faster computers was not going to help; he needed to re-think his program.
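
To see how quickly exponential costs swamp any conceivable hardware, consider a toy calculation. The 10^80 figure is the standard rough estimate for atoms in the observable universe; the brute-force model is hypothetical, not my colleague's actual program.

    import math

    ATOMS_IN_UNIVERSE = 10**80  # standard rough estimate

    # A model that must exhaustively evaluate every combination of n
    # binary choices needs 2**n steps. How large must n be before the
    # step count exceeds the number of atoms in the universe?
    n = math.ceil(80 * math.log2(10))
    print(n)                         # 266
    print(2**n > ATOMS_IN_UNIVERSE)  # True

    # Doubling computer speed buys exactly one more binary choice per
    # hardware generation, so re-thinking the algorithm is the only fix.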

The Distance

In short, even if we grant Kurzweil that computers improve exponentially, somebody still needs to program them. Our ability to program may also be improving exponentially, but I'm unconvinced that we know how far we have to go.

Suppose I wanted to walk to some destination 1,000 miles away. I walk 1 mile the first year. If I keep going at the same rate, it'll take 1,000 years. But if my speed doubles each year, it will take only about 10 years. Which is a lot faster!

But we don't know -- or at least I don't know -- how far we have to walk. We may well be walking to the other side of the universe (>900,000,000,000,000,000,000,000 miles). In which case, even if my speed doubles every year, it'll still take almost 80 years. Which granted is pretty quick, but not as fast as the 10 years.

Of course, notice that by the 79th year I'll be traveling at such a velocity that I'd be able to cross nearly the entire universe in a year (or, 156 billion times the speed of light), which so far as we know is impossible. The growth of our technology may similarly eventually hit hard limits.
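
For the curious, here's the arithmetic behind those numbers as a quick Python sketch (the universe width and the miles-per-year figure for the speed of light are my own rough constants):

    UNIVERSE_MILES = 9 * 10**23  # rough width used above
    LIGHTSPEED_MPY = 5.88e12     # ~186,282 miles/second, in miles/year

    def years_to_reach(miles):
        """Years needed if I walk 1 mile in year 1 and double my speed annually."""
        years, traveled, speed = 0, 0, 1
        while traveled < miles:
            traveled += speed
            speed *= 2
            years += 1
        return years

    print(years_to_reach(1000))             # 10 years for 1,000 miles
    print(years_to_reach(UNIVERSE_MILES))   # 80 years to cross the universe
    # Crossing the whole universe in a single year would mean moving at:
    print(UNIVERSE_MILES / LIGHTSPEED_MPY)  # ~1.5e11, i.e., ~150 billion times c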

That said...

I wouldn't terribly mind being proved wrong. Telepathy sounds neat.

More things you don't have time to read

PLoS One has published over 5,000 papers. Is that a sign of success or failure?

I've worried before on this blog about the exploding number of science publications. Publications represent completed research, which is progress, and that is good. But the purpose of writing a paper is not for it to appear in print; the purpose is for people to read it. The more papers are published, it stands to reason, the fewer people read each one. Thus, there is some argument for publishing fewer, higher-quality papers. I have heard that the average publication gets fewer than 1 citation, meaning many papers are never cited and thus presumably were not found to be relevant to anybody's research program.

It is in this context that I read the following excited announcement from PLoS ONE, a relatively new open-access journal:
nearly 30,000 of your peers have published over 5,000 papers with us since our launch just over two years ago.
That's a lot of papers. Granted, I admit to being part of the problem. Though I do now have a citation.

Origin of Language Pinpointed

Scientists have long debated the evolution of language. Did it emerge along with the appearance of modern Homo sapiens, 130,000-200,000 years ago? Or did it happen as late as 50,000 years ago, explaining the cultural ferment at that time? What are we to make of the fact that Neanderthals may have had the ability to produce sounds similar to those of modern humans?

In a stunning announcement this morning, freelance writer Joshuah Bearman announced that he had pinpointed the exact location, if not the date, of the origin of modern language: Lake Turkana in Kenya.

WTF?

Actually, what Bearman says is
This is where Homo sapiens emerged. It is the same sunset our ancestors saw, walking through this very valley. To the north is Lake Turkana, where the first words were spoken. To the south is Laetoli, where, in 1978, Mary Leakey's team was tossing around elephant turds when they stumbled across two sets of momentous footprints: bipedal, tandem, two early hominids together...
Since this is in an article about a wedding, I suspect that Bearman was not actually intending to floor the scientific establishment with an announcement; he assumed this was common knowledge. I can't begin to imagine where he got this idea, though. I wondered if perhaps this was some sort of urban legend (like all the Eskimo words for snow), but Googling has turned up nothing, though of course some important archaeological finds come from that region.

Oops

Probably he heard it from a tour guide (or thought he had heard something like that from a tour guide). Then neither he nor his editor bothered to think through the logic: how would we know where the first words were spoken, given that there can be no archaeological record? It's unlikely we'll ever even find the first human(s), given the low likelihood of fossilization.

I have some sympathy. Bearman was simply trying to provide a setting for his story. In one of my first published travel articles, I similarly mentioned in passing that Lake Baikal (the topic of my story) was one of the last strongholds of the Whites in the Russian Revolution. I have no idea where I got that idea, since it was completely untrue. (Though, in comparison with the Lake Turkana hypothesis, at least my unfounded claim was possible.)

So I'm sympathetic. I also had to write a correction for a subsequent issue. Bearman?

How much do professors get paid?

The American Association of University Professors recently released a report on the financial situation of professors. One interesting datum apparently gleaned from the report is a ranking of universities by full professor salaries. I have heard it said that Harvard pays below market because it pays in prestige, but that doesn't jibe with its industry-leading $192,600/year (keep in mind this is the average for full professors, a rank rarely achieved before one's 40s at best).

One interesting fact, shown in figure 2 of the report itself, is that while, yes, PhDs do earn less than those with professional degrees (law, business, medicine, etc.), the difference is, on average, not nearly so large as one might expect. In 2007, the average PhD made around $95,000, while the average professional school graduate earned about $115,000 (both numbers are down, incidentally, from 1997).

That said, the ceilings are probably different. The average full professor at Harvard -- arguably the pinnacle of the profession for someone with a PhD -- makes, as noted above, just under $200,000/year...or about the same as the typical big-firm lawyer a couple years out of law school (though perhaps not this year).