Field of Science

Academics on the Web

The Prodigal Academic, in discussing "Things I Wish I Knew Before Starting on the [Tenure Track]," writes:
Actually spend time on my group website. This is a great recruiting tool! Students look at the departmental website before they arrive on campus to plan out their potential advisors.
As someone closer to the applying-to-school phase than TPA, I admit that there are schools I probably did not consider as carefully as I should have because their websites were skimpy and I had difficulty finding much information.

In fact, even though our department has a relatively good website, I very nearly didn't come to Harvard because I couldn't find the information I needed. I came from an adult psycholinguistics background, so I had never read any of my eventual advisor's (developmental) papers; we went to different conferences. Harvard's departmental website is organized around five research areas: Cognition, Brain & Behavior; Developmental; Clinical; Social; and Organizational Behavior. Since I was cognitive, I checked the cognitive section for psycholinguists and didn't see her. I only found out about her because I ended up working at Harvard for a year as a research assistant in a different lab.

Again, I actually like our department's website. This is just a story about how the organization of websites can ultimately have an important effect.

Websites are also important for disseminating research. When I come across an interesting paper by a researcher I don't know, I almost always check their website. If they have papers there for download, I read any that look interesting. I've discovered some very important work this way. But if they don't have papers available (or, as sometimes happens, don't even have a website), that's often the end of the journey.

The Job Search

The Prodigal Academic has a fascinating post on how candidates in academic job searches are chosen. I've never been through a job search (on either end), so I have no real comment. The closest I came was filing job applications for two searches in the History & Religion department at Oberlin. What impressed me then, as now, is how many applications there were.

Retrevo responds

Earlier this month I blogged about a study supposedly produced by Retrevo on texting during sex. The main point of the post was that researchers have to be careful about ensuring data quality (e.g., are the participants actually paying attention to the questions?). I also remarked that I had been unable to find the original article.

Jennifer Jacobson at Retrevo very kindly emailed me several days ago to point me to the original study. The survey question under discussion can be found in the "We interrupt this dinner" section.

So at least the study exists. I'm hoping to find out more about the methods they used.

You are what you say

I recently received an email forward about AnalyzeWords.com. According to its promoters:

AnalyzeWords helps reveal your personality by looking at how you use words. It is based on good scientific research connecting word use to who people are.
The way the site works is that you enter someone's Twitter handle and the site analyzes their tweets.

The forward included the following comment from the person from whom, indirectly, I got the email: "So far it says everyone I've looked at (people, journals, etc.) is depressed, except for an account someone set up to chronicle his battle with cancer, which it classified as 'very upbeat.'" I tried a handle or two myself and got similar results.

One possible conclusion is that everyone -- or, at least, everyone who uses Twitter -- is depressed. Another is that the theory behind the website doesn't actually work. I found a possible hint in favor of the latter hypothesis on AnalyzeWords' "The Science Behind AnalyzeWords" page:

Across dozens of studies, junk words [closed-class words like prepositions and pronouns] have proven to be powerful markers of peoples [sic] psychological states. When individuals use the word I, for example, they are briefly paying attention to themselves. People experiencing high levels of physical or mental pain automatically orient towards themselves and begin using I-words at higher rates. I-use, then, can reflect signs of depression, stress or insecurity.
Perhaps. Or perhaps they're using Twitter to talk about themselves and their latest experiences.

Clarity

Geoffrey Pullum at Language Log, in the course of a discussion of the incredibly flexible verb see, writes:

I wonder how and why human languages seem to be so completely content with the wild and multifarious polysemy and ambiguity that afflicts them. The people who think clarity involves lack of ambiguity, so we have to strive to eliminate all multiple meanings and should never let a word develop a new sense... they simply don't get it about how language works, do they?

Your age

Who participates in Web-based experiments? I recently analyzed preliminary results from about 4,500 participants in Keeping Things In Mind, an experiment I'm running in collaboration with a colleague and friend at TestMyBrain.org.

One of the things we're interested in is the age of people who participate. Here is the breakdown:

[Histogram: number of participants by age]

Not surprisingly, the bulk are college-aged (particularly freshmen). There are still a sizable number of participants in their 30s, 40s, and early 50s, but by the 60s participation drops off considerably.

And then there are the few jokers who claim to be 3 or 100.

This is pretty similar to the breakdown I usually see at GamesWithWords.org, except that I usually have fewer tweens and more people in their 60s. But the mode is usually 18.

What this means for the experiment is that people in their 50s on up are woefully underrepresented. We're continuing to run the experiment in the hopes that more will participate.
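
Incidentally, tabulating self-reported ages like this takes only a few lines. Here is a minimal sketch in Python; the file name, the "age" column, and the 7-90 plausibility cutoffs are all invented for illustration:

    # Hypothetical sketch: count participants by decade of age and flag
    # implausible responses (like the 3- and 100-year-old jokers).
    import csv
    from collections import Counter

    with open("participants.csv", newline="") as f:
        ages = [int(row["age"]) for row in csv.DictReader(f)]

    plausible = [a for a in ages if 7 <= a <= 90]  # treat the rest as jokes or typos
    by_decade = Counter((a // 10) * 10 for a in plausible)

    for decade in sorted(by_decade):
        print(f"{decade}s: {by_decade[decade]}")
    print(f"Excluded as implausible: {len(ages) - len(plausible)}")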

The military is making telepathic helmets. Sign me up.

It appears researchers at UC-Irvine, the University of Maryland, and Carnegie Mellon have a DOD grant to investigate the development of "thought helmets":


The devices would harness a person's brain waves and transmit them as radio waves, where they would be translated into words in the headphones of other soldiers.
I need one of these.


Don't blink -- I'm reading your thoughts


The proposed helmets (which they don't expect to be ready for a decade or two) rely on EEG -- that is, measuring brain waves. As it happens, this is a method I use to run experiments. It has its limitations.


First off, it only really works if people are sitting still and not moving their eyes. EEG measures electrical activity. Ideally, it measures the brain's electrical activity, but muscles also produce electrical activity, and those effects are hundreds of times larger than brain effects. People are working on snazzy new algorithms to factor out the thunderclaps of eye blinks (in a raw EEG trace, blinks show up as huge mountains dwarfing the brain signal).
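
The crudest version of such filtering is easy to sketch. The following toy Python snippet is not any lab's real pipeline -- the 100-microvolt threshold and the fake data are assumptions for illustration -- but it shows the basic logic: throw out any trial whose voltage swing is too large to be brain activity.

    # Toy amplitude-based artifact rejection: discard any epoch whose
    # peak-to-peak voltage exceeds a threshold, since blinks (~100+
    # microvolts) dwarf typical brain signals (a few microvolts).
    # Real pipelines use smarter methods (e.g., ICA) to correct rather
    # than discard; the threshold here is illustrative only.
    import numpy as np

    def reject_blink_epochs(epochs: np.ndarray, threshold_uv: float = 100.0) -> np.ndarray:
        """epochs: (n_trials, n_samples) array, in microvolts."""
        peak_to_peak = epochs.max(axis=1) - epochs.min(axis=1)
        return epochs[peak_to_peak < threshold_uv]

    # Demo with fake data: 50 trials of noise, plus one simulated blink.
    rng = np.random.default_rng(0)
    fake = rng.normal(0, 10, size=(50, 500))
    fake[3, 200:260] += 150  # blink-like transient in trial 3
    clean = reject_blink_epochs(fake)
    print(f"Kept {clean.shape[0]} of {fake.shape[0]} trials")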

One-of-a-kind

Another problem is individual variation. While blinks are very easy to see in the raw waveforms, thoughts are hard. For instance, one of the best-known EEG effects in language is the N400 -- a broadly distributed negative deflection around 400 milliseconds after the participant sees a word. Unfortunately, the N400 is so small relative to all the noise in the signal that it's hard to see on a single trial. In a typical experiment, each participant sees 30-40 words (or more), and we average across those trials. Even then, each individual person's N400 looks very different, so we usually have to average across at least a dozen different people to get a good signal.
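
To make "averaging across trials" concrete, here is a sketch with fake data (the shapes, noise level, and N400-like bump are all invented): the signal is invisible on any single trial but emerges in the average, because the noise cancels out while the stimulus-locked response does not.

    # Sketch of ERP extraction by averaging (all numbers invented).
    import numpy as np

    rng = np.random.default_rng(1)
    n_epochs, n_samples = 480, 600  # e.g., 40 trials from each of 12 people
    t = np.arange(n_samples)        # time in ms post-stimulus, 1 sample/ms

    # A small N400-like negative deflection peaking around 400 ms...
    signal = -2.0 * np.exp(-((t - 400) ** 2) / (2 * 50**2))
    # ...buried in trial-to-trial noise an order of magnitude larger.
    epochs = signal + rng.normal(0, 20, size=(n_epochs, n_samples))

    erp = epochs.mean(axis=0)       # average across all epochs
    print(f"At 400 ms: single trial = {epochs[0, 400]:.1f}, average = {erp[400]:.1f}")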

Thoughts

The bulk of EEG research employs a violation paradigm: we measure the brain's activity when something unexpected happens. For instance, a psycholinguist like me might compare the following two sentences:

(1) The dog bit the man.
(2) The man bit the dog.

The second sentence is surprising relative to the first, and you can typically see a reasonably large effect in the brain waves (modulo the caveats above). Part of the reason we use violation paradigms is that they produce large effects; trying to compare two perfectly normal sentences is much, much harder.
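
In analysis terms, the comparison boils down to averaging each condition separately and taking the difference in a time window of interest. Here is a sketch, again with invented fake data and a conventional 300-500 ms window:

    # Sketch of a violation-paradigm comparison (fake data; the time
    # window and noise parameters are assumptions for illustration).
    import numpy as np

    def mean_window(epochs: np.ndarray, start_ms: int, end_ms: int) -> float:
        """Mean of the trial-averaged waveform in [start_ms, end_ms), 1 sample/ms."""
        return float(epochs.mean(axis=0)[start_ms:end_ms].mean())

    rng = np.random.default_rng(2)
    t = np.arange(600)
    bump = -2.0 * np.exp(-((t - 400) ** 2) / (2 * 50**2))  # N400-ish deflection

    control = rng.normal(0, 20, size=(40, 600))           # "The dog bit the man."
    violation = bump + rng.normal(0, 20, size=(40, 600))  # "The man bit the dog."

    effect = mean_window(violation, 300, 500) - mean_window(control, 300, 500)
    print(f"Violation minus control, 300-500 ms: {effect:.2f} microvolts")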

Other typical effects people can find are differences between function words (e.g., prepositions) and content words (e.g., nouns) -- though, again, this is hard to see on a trial-by-trial basis.

Get me one

There are many, many other obstacles in the way of a mind-reading EEG helmet. That isn't to say that I don't think such a helmet will be built eventually, or that the government is wasting its money. Technology constantly improves, and setting an on-its-face absurd goal can be excellent motivation.

But I wouldn't start saving up to buy one of your own just yet. Though I'd love one. A helmet that worked that well would make my research go much, much faster.

Lie detection

A few pioneering lawyers have been attempting to use fMRI-based lie detection tests in court. I don't have any broad numbers, but it seems most neuroimagers I talk to are deeply skeptical of such tests, at least at the current stage of technology (and whether such technology can ever catch pathological liars is yet another question).

At a recent talk at Harvard, Michael Gazzaniga related the following argument from a colleague on the law end of things: whether fMRI-based lie detection is "good enough" is not a scientific question but a legal one. After all, the law already allows all kinds of scientifically suspect "evidence" into the courtroom (eyewitness testimony, fingerprinting, etc.). Present all the data (along with information about how reliable it is) to the jury and let the jury sort it out.

That's one conclusion that could be drawn. Another is that perhaps it's time to step back and come up with a broad policy for how evidence is introduced into the legal system.

Why Is Nobody Studying Klingon?

Doing research for the recent Scientific American Mind article, I found out that Klingon uses the incredibly rare object-verb-subject (OVS) word order. Even though some languages (like Russian) allow relatively free word order, all languages seem to have a preferred order. There are six possible orders. The most common are SVO (English), SOV (Japanese), and VSO (Classical Arabic). The three orders that put the object before the subject are relatively rare, with OVS nearly non-existent. It does appear occasionally in poetry and other marked uses ("The drink drank I"), and it is claimed to be the dominant word order in at least two extremely rare languages: Guarijio and Hixkaryana. Given the degree of debate over how to correctly characterize syntax in well-studied languages like English, I always maintain some skepticism about claims regarding rare, poorly studied languages (and the sad truth is that all languages are poorly studied when compared with English).

In any case, if one wanted to study the acquisition of Guarijio or Hixkaryana, one would need a decent travel budget and some infrastructure. Klingon is spoken closer to home. Yet I couldn't find any papers in Google Scholar looking at the acquisition of Klingon, even from a sociological perspective. This seems under-studied.

Science blogging and the law

The Kennedy School at Harvard had a conference on science journalism. Among the issues discussed were legal pitfalls science bloggers can run into. Check out this blog post at the Citizen Media Law Project.

Texting during sex

"Teens surprisingly OK with texting during sex," notes Slate's news aggregator. This seemed like a good lead for a piece I've wanted to write for a while: just how much we should trust claims that 10% of people agree to claim X. In many cases, we probably should put little faith in those numbers.

As usual, Stephen Colbert explains why. In his infamous roast of George W. Bush, he notes:

Now I know there's some polls out there that say this man has a 32 percent approval rating ... Pay no attention to people who say the glass is half empty ... because 32 percent means it's 2/3 empty. There's still some liquid in that glass, is my point. But I wouldn't drink it. The last third is usually backwash.
This was meant as a dig at those who still supported Bush, but there's a deeper point to be made: there's a certain percentage of people who, in a survey, will say "yes" to anything.

Numbers

For instance, many of my studies involve asking people's opinions about various sentences. In a recent one I ran on Amazon Mechanical Turk, I presented people with sentence fragments and asked them which pronoun they thought would likely be the next word in the sentence:

John went to the store with Sally. She/he...

In that case, it could be either pronoun, so I'm trying to get a sense of what people's biases are. However, I put in some filler trials just to make sure people were paying attention:

Billy went to the store with Alfred. She/he...

In this case, it's really, really unlikely the next word will be "she," since there aren't any female characters in the story. Even so, over 4% of the time participants still clicked on "she." This wasn't an issue of some of the participants simply being bad: I included 10 such sentences, and only one person got more than 1 of them wrong. However, a lot of people did manage to miss 1 ... probably because they were sloppy, made a mistake, were momentarily not thinking ... or because they really thought the next word would be "she."

Those numbers are actually pretty good. In another, slightly harder experiment that I ran on my website, people didn't do so well. It was shorter, so I included only 4 "catch trials" -- questions for which there was only one reasonable answer. Below is a pie chart of the participants, broken down by how many of these they got right:

[Pie chart: participants by number of catch trials answered correctly]

You can see that over half got them all right, but around a quarter missed 1, and a significant sliver got no more than 50% correct. This could suggest many things: my questions weren't as well-framed as I thought, I had a lot of participants who weren't paying attention, some people were deliberately goofing off, etc.
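
Screening participants this way is easy to automate. Here is a hedged sketch in Python; the file and column names ("subject", "is_catch", "correct") are invented, and the allow-one-slip cutoff mirrors the pattern described above:

    # Hypothetical sketch: score each participant on catch trials and
    # flag anyone who misses more than one as likely inattentive.
    import csv
    from collections import defaultdict

    hits = defaultdict(int)    # catch trials answered correctly, per subject
    totals = defaultdict(int)  # catch trials seen, per subject

    with open("responses.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["is_catch"] == "1":
                totals[row["subject"]] += 1
                hits[row["subject"]] += int(row["correct"])

    for subj, n in totals.items():
        if hits[subj] < n - 1:  # allow a single slip
            print(f"Flagging {subj}: {hits[subj]}/{n} catch trials correct")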

Poll numbers

This isn't a problem specific to experiments. As we all learned in 2000, a certain number of people accidentally vote for the wrong candidate through some combination of not paying attention and poor ballot design.

So there is a difference between a survey finding that 10% of teens say texting during sex is fine and 10% of teens actually thinking that texting during sex is fine. A good survey will incorporate methods of sussing out who is pulling the surveyor's leg (or not paying attention, or making a slip of the tongue, etc.).

Real Surveys

I didn't want to unnecessarily pick on this particular study, so I tried to hunt down the original source to see if they had done anything to protect against the "backwash" factor. Slate linked to a story on mashable.com. Mashable claimed that the research was done by the consumer electronics shopping and review site Retrevo, but only linked to Retrevo's main page, not any particular article. I did find a blog on Retrevo that frequently presents data from surveys, but nothing this year matched the survey in question (though this comes close). I found several other references to this study using Google, but all referenced Retrevo.

If anyone knows how to find the original study, I'd love to see it -- but if it doesn't exist, it wouldn't be the first apocryphal study. So it turns out that the backwash effect isn't the only thing to be careful of when reading survey results.

UPDATE

I have since heard from Retrevo. See here.

NSF budget

It seems the current director of the National Science Foundation thinks it's unlikely NSF will get much of a budget increase this coming year (if any), despite Obama's pledge of an 8% increase. Oh well, it was nice while it lasted.

Data wants to be free

It seems that the National Science Foundation will be asking new grant applicants to submit a data management plan, apparently including plans for how to make their data available to others.

I have mixed feelings about this. I certainly approve of high-value data sets being made available. I've benefited a great deal from the wonderful people who put together the Penn Treebank, VerbNet, and similar projects. There are now some useful data sets included in libraries for R as well. I intend to make the summary data from my pronoun studies available when I publish the associated papers.

That said, getting data together in a form that is interpretable and usable by somebody else is hard. However much I document my own data, whenever I have to go back to look at some old data it takes hours if not days to figure out what I'm looking at. And I'm the one who created it. Fully documenting a data set for someone not associated with the project takes real time.

Given that NSF will be paying the salaries of the people who spend that time documenting data sets, it's reasonable to ask whether the requirement is cost-effective. Just how much demand is there for data from other labs? I can think of many papers for which I wish I had the original stimuli. The number for which I want the original data is much smaller (though there are some for which it would be really useful).

How many commercial brands does your kid know?

A recently published study by Anna McAlister and T. Bettina Cornwell at the University of Michigan reports that smarter kids are more affected by branding and know more brands. A number of people are interested in this because the naive prediction might have been that smarter people (including kids) would be less impressionable and less susceptible to marketing, rather than more.

The study caught my eye because it is a nice example of a problem that developmental psychologists run into. One question a developmental psychologist might be interested in is at what age children acquire a particular ability (like susceptibility to branding). This type of research has implications for education, public policy, etc. But the answer you get depends on the ages of the kids you test.

It happens to be the case that the children most easily recruited into developmental psychology studies tend to be relatively advanced. This happens for a number of reasons. Developmental labs tend to be at universities, which tend to be surrounded by relatively affluent, well-educated communities. Even within a community, not all parents are equally likely to bring their kid in to do a study, and those who do often seem to be the sorts of parents who have advanced children. Many studies may disproportionately test the children of professors and graduate students. It's easy to imagine additional reasons.

Unfortunately, there is a problem with the direct link to the study. It should be the top result on this Google Scholar search.