Earlier this month, we presented the results of the pilot phase of VerbCorner -- our citizen science project probing the nature of linguistic structure -- at a scientific conference (the Workshop on Events in Language and Cognition). You can see the poster describing the work here.
For those who don't know or don't remember, in VerbCorner, we're trying to work out the grammar rules that apply to verbs. Why do you say Agnes looked at the wall but not Agnes saw at the wall? Why do you say Bart filled the glass with water but not Bart poured the glass with water? Many -- but not all -- linguists believe that grammatical idiosyncrasies are explained by the meanings of the verbs, but evidence is sketchy. Volunteers have been visiting our website to help analyze the meanings of verbs so we can find out.
High-Quality Analyses by Volunteers
Our initial work -- the pilot for the pilot, if you will -- suggested that we could get high-quality analyses from volunteers. But that was based on a very small sample. As of late Feb, over 10,000 volunteers had contributed over 525,000 analyses. In general, the agreement between different volunteers was pretty high -- which is a good sign. Just as importantly, we had a smaller set of 'test' items, for which we knew what professional linguists would say. When we combine the analyses of different volunteers for the same sentence in order to get a 'final answer', the results match the analyses of professional linguists very well. This shows that we can trust these results.
Where Quantity Becomes Quality
Just as importantly, we were able to analyze a lot of sentences. In the VerbCorner project, we are trying to determine which sentences have which of a very specific set of aspects of meaning. One aspect is whether the sentence involves something changing physical form (example: Agnes broke the vase as opposed to Agnes touched the vase). Another aspect is whether the sentence involves anything applying physical force to anything else (ex: Agnes pushed Bart as opposed to Agnes looked at Bart).
For purposes of bookkeeping, let's call one aspect of meaning for one sentence an 'item.' After combining across different volunteers, the results were clear enough to definitively code 31,429 items. This makes VerbCorner the largest study of its kind by far. (A typical study might only look at a few hundred items.)
This quantity makes a big difference. Given how small studies usually are, they can only look at one tiny corner of the language. The problem is that that corner might not be representative. Imagine studying what Americans are like by only surveying people in Brooklyn. This tends to lead to disagreements between different studies; one linguist studies "Brooklyn" and another studies "Omaha", and they come to very different conclusions! Unfortunately, language is so complex and so vast, one person can only analyze one corner. This is why we are recruiting large numbers of volunteers to help!
One major question we had was how much the rules of verb argument structure (that is, the kinds of grammatical rules described above) depend on meaning. Some linguists think they depend entirely on meaning: If you know the meaning of a verb, you know what its grammar will be like. Others think meaning has very little role to play. Most linguists are probably somewhere in the middle.
The results suggest that the first group is right: These rules depend almost entirely on meaning. Or maybe even entirely; it's so close it is hard to tell.
The reason I say "suggest," however, is that while we have the biggest study of its kind, it still only covers about 1% of English. So we've gone from studying Brooklyn to studying all of NYC. It's an improvement, but not yet enough.
This is why I called this first phase a "pilot". We wanted to see if we could get high-quality, clearly-interpretable results from working with volunteers. Many researchers thought this would be impossible. After all, linguists have to go through a lot of schooling to learn how to analyze sentences. But a key finding of the Citizen Science movement is that there are a lot of smart enthusiasts out there who may not be professionals but can very much contribute to science.
The next phase
We have set a goal of reaching 50,000 completed items by July 1st. That will require upping our game and increasing the rate at which we're analyzing items by almost 4x. But the beauty of Citizen Science is that this does not really require that much work on anyone's part. If 3,000 volunteers each spend about one hour contributing to the project, we'll more than hit that goal. So please help out, and please tell your friends. You can contribute here.
It is exciting times at GamesWithWords.org as we settle into our new digs at Boston College. The brick-and-mortar lab now has a name: the Language Learning Laboratory @ Boston College (L3@BC). As we build out the group, expect to see a lot more activity around the site, including new features, projects, etc. Speaking of, we are hiring a research assistant. See the posting below:
The brand-new Language Learning Laboratory at Boston College is recruiting a full-time research assistant. The research assistant will work closely with the PI (Dr. Joshua Hartshorne) and graduate students in the lab. Primary responsibilities will include coordinating the lab's crowdsourcing and citizen science activities. For example, over 10,000 volunteers have contributed over 500,000 linguistic judgments as part of the laboratory's VerbCorner project. The research assistant will help coordinate these volunteers for this and other similar projects. S/he will also manage undergraduate researchers working on these projects and engage in public outreach activities such as blogging or creating educational materials. S/he will assist in data-analysis and have the opportunity to attend and present at major scientific conferences.
Candidates should have an undergraduate degree in psychology, neuroscience, linguistics, computer science, or a related field (or a good explanation as to why they are qualified anyway). Candidates should also have familiarity with one or more computer programming languages (e.g., Python, R, Matlab, C++) or an exceptional quantitative background (i.e., degree in mathematics). Experience with any of the following would be an added advantage: laboratory research, data analysis, management/supervision, science outreach, journalism, machine learning.
Review of applications will begin immediately. Start date is flexible but not later than 9/1/2016. Members of groups underrepresented in science are particularly encouraged to apply. International candidates are welcomed but must have an MA or equivalent.
To apply, send to email@example.com: a CV and a one-page essay explaining why you are interested in the position and how it fits with your past experiences and future goals. Please also arrange for letters of recommendation from 2-3 references to be sent to firstname.lastname@example.org.
Please be sure that your CV lists your degree(s), major/minor, GPA, any relevant classes (psychology, linguistics, computer science, etc.), programming languages with which you have experience (and the nature of that experience), and any other experiences/qualifications you feel are particularly relevant.
Suppose you want to test the effect of some training on IQ scores. You test 50 subjects: 25 in your experimental condition and 25 in the control condition. That's a fairly typical size for a psychology study. And you get a significant result. You might be tempted to conclude that your manipulation worked, but it might actually be more likely that your results are due to chance or experimenter error.
It depends on how large an effect your manipulation ought to have. If the training typically raises people's IQs by 7.5 points, your study would only have had a 41% chance of detecting it (given normal assumptions about normal distributions). A more modest 5-point effect could be detected only about 20% of the time. You'd need a 14-point effect to have a 90% chance of detecting it.
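Those power figures are easy to check. The post's simulations were written in R (the code appears at the end), but here is a sketch in Python using the noncentral t distribution; the 25-per-group design and the SD of 15 come from the example above, and `power_two_sample` is just an illustrative helper name.

```python
# Power of a two-sample t-test, via the noncentral t distribution.
# Assumptions: IQ has SD 15, alpha = .05 two-tailed, n = 25 per condition.
from scipy import stats

def power_two_sample(effect_points, n_per_group, sd=15.0, alpha=0.05):
    """Chance of a significant result for a true mean difference of `effect_points`."""
    d = effect_points / sd                      # Cohen's d
    df = 2 * n_per_group - 2
    ncp = d * (n_per_group / 2) ** 0.5          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(|t| > t_crit) when the true effect is nonzero
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

print(power_two_sample(7.5, 25))   # ~0.41
print(power_two_sample(5.0, 25))   # ~0.20
print(power_two_sample(14.0, 25))  # ~0.90
```

The three printed values reproduce the 41%, 20%, and 90% figures in the text.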
For reference, a 14 point effect is enough to move someone of average intelligence into the top fifth of humanity. We are in miracle drug territory. More realistically, your manipulation is unlikely to have nudged IQ by more than a point or two, in which case there's essentially no chance your study could have detected it. So if you did detect something, it's probably an error.
Well, how much power do studies have?
Concerns about statistical power (among other things) have led some researchers to declare that more than half of all published findings are false. Other researchers are more sanguine. In short, if you think that the effects we are studying tend to be pretty large (a standard deviation or more), there is little to worry about. If you think they tend to be small, the situation is dire.
Unfortunately, the only way to accurately determine the size of an effect is to measure it multiple times. Obviously, you can't have done that in advance when running a brand-new study. You might be able to guesstimate based on the typical effect size in your field. Unfortunately, many fields primarily publish significant results. This introduces a bias, because effect size and significance are correlated.
Suppose we ran the above experiment and the true effect size is 7.5 IQ points. On average, that is what we would find. But of course sometimes we'll run the experiment and the effect will be larger and sometimes it will be smaller, simply due to random chance. By paying attention only to the significant results, we're selectively ignoring those experiments that happened, through no fault of their own, to underestimate the effect. This skews our results, and on average we would report an effect of 11.8 IQ points -- much higher than the truth.
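That selection effect is easy to simulate. Again the post's own code is in R; here is a minimal Python sketch under the same assumptions (true effect 7.5 points, SD 15, 25 subjects per condition), keeping only the "publishable" significant runs.

```python
# Winner's curse: if only significant results are published, the average
# published effect overstates the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, sd, n, reps = 7.5, 15.0, 25, 20000

published = []
for _ in range(reps):
    control = rng.normal(100, sd, n)
    treated = rng.normal(100 + true_effect, sd, n)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        # only significant experiments get "published"
        published.append(treated.mean() - control.mean())

print(np.mean(published))   # around 11.5-12 points, well above the true 7.5
```

The average published effect lands close to the 11.8 points mentioned above, even though every simulated experiment was honest.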
Effects aren't as big as you think.
The typical effect reported in the psychology literature is about half a standard deviation, which is why I've been using the example of 7.5 IQ points above. However, because of the bias against publishing null results or replications, this is inflated. That means that the expectations of psychologists are inflated. We are used to seeing effects of half a standard deviation or more. As a result, we are going to chronically underestimate the number of subjects we need to run.
Unfortunately, without widespread publication of null results and replications, we cannot say how badly our perception is distorted, because the degree of distortion depends on how large effects really are. I ran a series of simulations involving a two-condition, between-subjects design to see how bad the distortion might be. In these simulations, I assumed that null results are never reported, which is only a slight exaggeration of the psychology literature.
In the graph below, the true effect size (measured in standard deviations) is on the X-axis, and the average reported effect size is on the Y-axis. The different lines reflect different numbers of subjects per condition.
As you can see, if you have 50 or fewer subjects per condition, you'll hardly ever report an effect size smaller than half a standard deviation, even when the true effect size is one tenth of a standard deviation. This is because reliably detecting an effect of one tenth of a standard deviation requires about 2,000 subjects per condition.
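The "about 2,000 subjects per condition" figure falls out of a standard power calculation. A sketch in Python, using the usual normal approximation for two-sample designs (`n_per_group` is an illustrative helper name; "reliably" is taken here to mean 90% power):

```python
# Per-condition sample size for a two-sample t-test, normal approximation:
# n ~ 2 * (z_{alpha/2} + z_{power})^2 / d^2
import math
from scipy import stats

def n_per_group(d, power=0.9, alpha=0.05):
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)

print(n_per_group(0.1))   # ~2100: the "about 2,000 per condition" above
print(n_per_group(0.5))   # even d = 0.5 (7.5 IQ points) needs ~85 per group
```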
Even with 1,000 subjects per condition, there is some distortion for effects smaller than one quarter standard deviation.
Note that these simulations assume that the researcher is correcting for multiple comparisons, isn't p-hacking, etc. The situation gets worse if we relax those assumptions.
In the simulation on the left, I assumed the researcher is engaging in contingent stopping. After every 5 subjects, the researcher checks her data. If the effect is significant, she stops and reports the result. If the effect is in the "wrong direction", she decides there's something wrong with her stimuli, revamps the experiment, and tries again.
While this is a little extreme, most researchers engage in some amount of contingent stopping. As you can see, this badly distorts the results. Even with 1,000 subjects, we end up distorting even large effects.
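A stripped-down version of that contingent-stopping simulation, again sketched in Python rather than the post's R, with a simplified rule (peek every 5 subjects per group, stop as soon as p < .05, no "wrong direction" restarts). Even with no true effect at all, the false positive rate climbs far above the nominal 5%:

```python
# Optional stopping inflates false positives: check after every 5 subjects
# per group and stop at the first significant result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reps, max_n, step = 4000, 50, 5

false_positives = 0
for _ in range(reps):
    a = rng.normal(0, 1, max_n)
    b = rng.normal(0, 1, max_n)       # true effect is zero
    for n in range(step, max_n + 1, step):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_positives += 1
            break

rate = false_positives / reps
print(rate)   # well above the nominal 0.05
```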
Failure to correct for multiple comparisons will have similar effects.
What this means is that if you are basing your expected effect size on prior experience, the effect you are trying to detect is probably not anywhere near as large as you think, and you may need many more subjects to detect it than you were planning.
But I usually get significant results. Doesn't that mean I have enough subjects?
Maybe. You might be testing an abnormally large effect. Such effects do exist. Alternatively, you may be engaging in contingent stopping, failing to adequately correct for multiple comparisons, or simply making a mistake somewhere in your analysis. It's probably worth checking.
To adapt a metaphor from Uri Simonsohn, you can search for exoplanets with binoculars. But you should keep in mind that it is so unlikely that you could see an exoplanet with your binoculars that if you do see one, you are probably mistaken in some way.
I don't use t-tests. How does this apply to me?
My examples above use t-tests because they are simple and widely known. But the same basic issues apply no matter what kind of analysis you do. If you are looking for some kind of effect, and if that effect is small, you'll need a lot of data to detect it. And it's probably smaller than you think.
If you are model-fitting, the noise in your data puts an upper limit on how well you can fit the underlying phenomenon. If your data are fairly noisy and yet your model fits really well, you are probably fitting the noise, not the signal. And your data are probably noisier than you think.
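One way to see that ceiling concretely, as a Python sketch with made-up data: when y = signal + noise, even fitting the exactly-correct model cannot push R-squared past the share of variance the signal carries. A fit that does much better than that ceiling is fitting the noise.

```python
# Noise ceiling: the best achievable R^2 is var(signal) / var(y).
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * x)
y = signal + rng.normal(0, 1.0, x.size)     # noise as strong as the signal

ceiling = signal.var() / y.var()            # best achievable R^2 (expect ~1/3)

# Fit the exactly-correct model: a * sin(2*pi*x) + b
X = np.column_stack([signal, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r2_true_model = 1 - resid.var() / y.var()

print(ceiling, r2_true_model)   # both roughly 1/3, far below 1
```

Even the true model tops out near the ceiling; a more flexible model could report a higher training R-squared only by chasing the noise.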
I can't tell you how many subjects to run. I don't know. Nobody knows. Until there is a good mechanism for reporting null results, nobody will know.
In the meantime, I recommend running more than you think you need. If there are published studies looking at a similar phenomenon, look at their effect size and assume the true effect size is significantly smaller, then do a power analysis. If you run your study and get a positive result, it's probably a good idea to replicate it. If you get a null result, you might not wish to read too much into that.
If you don't have the means to test a lot of subjects, you have a few options. Whether you can detect an effect depends on the signal-to-noise ratio and the amount of data you collect. If you can't collect more data, you can try to decrease the noise (e.g., by using a better, more accurate instrument). You can't necessarily increase the signal, because you can't change the laws of nature. But you can decide which laws of nature to study, and you might be better off studying one that has powerful effects.
See below for the R code to run the simulations above. If anyone knows how to convince blogspot to allow indentation, please let me know.
It's been a very busy time at GamesWithWords. I'm pleased to announce that we'll be moving to Boston College in January. The impending move, combined with a large number of papers to write, has kept me too busy to write much on this blog.
A number of non-native English speakers get "Singaporean" as the top guess for their native language. You can actually see that by playing around in our dialect navigator. Here's a screenshot of a particularly illuminating view:
As you can see, "Singaporean" is connected to a big bundle of non-native dialects. Most of the other native dialects are off in a chain in the bottom right. Here is another view with a slightly weaker filter on connectedness:
Again, you can see that most of the non-native dialects cluster together. Most of the native dialects do not connect directly to that cluster but rather connect to Singaporean. Standard American and AAVE are once again off in their own cluster.
Of course, this view just tells you what is connected to what. It's possible that Swedish is actually more similar to Irish than to Singaporean, even though the chain of connections between Swedish and Irish is longer. If you click on one of the dialects, the panel on the left will show you how closely related that dialect is to all others:
We're working on a browser that will let you see *why* different dialects are more or less related -- that is, what answers in the quiz are typical of which dialects. I'm hoping it will be ready soon. In the meantime, enjoy the dialect browser.
A number of forums have picked up the WhichEnglish quiz, and have produced some really intelligent and insightful conversation. I recommend in particular this conversation on metafilter. There is also an extensive conversation at hacker news and a somewhat older discussion at reddit. And there is a lot of discussion in Finnish and Hungarian, but I have no idea what they are saying...
Around 4am EST on May 28, we started getting *a lot* of traffic to the website. This very quickly overloaded the server, resulting in the website running very slowly. We did some optimization. Things sped up, and our reward was more traffic. So we switched to a more powerful server. And so on.
Things are finally under control. At least for the moment, anyway. You can see that we've managed to get the average page load time down to a reasonable length of time for the last day or so, without any large spikes:
Of course, overwhelming amounts of traffic is a good problem to have, and I won't complain if things overheat again.
In this project, we are looking at three interrelated issues:
1. How does the age at which you start learning a language affect how well you learn that language?
2. How is learning a foreign language affected by the language you already know?
3. How are the grammars of different English dialects related?
And of course, we train an algorithm to predict participants' native language and dialect of English based on their answers. I return to that at the end.
Age of Acquisition
Although WhichEnglish has a few scientific targets, age-of-acquisition effects were the original inspiration. Everybody knows that the older you are when you start learning a foreign language, the harder it is to learn. One possibility is that there is a critical period: Up to some age, you can learn a language like a native. After that age, you will never learn it perfectly. The other possibility is that there is no specific period for language learning; rather, language-learning simply gets a little harder every day.
The evidence is unclear. Ideally, you would compare people who started learning some language (say, English) from birth with people who started as 1 year-olds and people who started as 2 year-olds, etc. Or maybe you would want something even finer-grained. The problem is that you need a decent number of people at each age (50 would be a good number), and it quickly becomes infeasible.
One study that came close to this ideal used census data. The authors -- led by Kenji Hakuta -- realized that the US census asks foreign-born residents to rate their own English ability. The authors compared this measure of English ability with the year of immigration (an approximation for the age at which the person started learning English). Their results showed a steady decline, rather than a critical period.
We are trying to build on this work in a few ways. For one, it would be nice to confirm (or disconfirm) the previous results with a more sensitive measure of English ability. So rather than just ask people how good their English is, we have them take a test. Also, we are getting more precise information about when the participant started learning English and in what contexts.
Also, there is good reason to suspect that age-of-acquisition affects different aspects of language differently. Studies have shown that even people who began learning a language as toddlers have detectable -- if very subtle -- accents. However, people who start learning foreign languages as adults usually report that learning vocabulary isn't so hard. Grammar seems to be somewhere in between. The Hakuta study didn't distinguish these different aspects of language.
WhichEnglish focuses on grammar. We also have a vocabulary quiz to look at vocabulary. A pronunciation test is in the works.
First language effects
When we started thinking about studying age-of-acquisition effects, we quickly realized a problem. We needed questions that would be difficult for someone who learned English as a second language. But which aspects of grammar are difficult seems to depend on your first language. I personally have difficulty with aspect in Russian because the English aspect system is much less complex. However, dealing with tense in Russian is relatively straightforward, since the Russian tense system is much less complex than English's.
Since we didn't know for sure what the language backgrounds of our participants would be, we wanted a range of questions that covered the different kinds of problems people with different backgrounds might have.
As we combed the literature, we realized that it was pretty fragmented. One study might say that grammar rule x is difficult for Japanese-speakers and grammar rule y is difficult for German-speakers, but there would be no information on how Japanese-speakers fare with grammar rule y and how German-speakers manage with grammar rule x. This actually makes sense: Most studies look at speakers of one or at most a handful of language backgrounds. This is partly a matter of research interest (the researchers are usually interested in some particular language) and partly a matter of feasibility (in a lab setting, you can only test so many participants). We realized that our study, by virtue of being on the Internet and recruiting people from a wide array of backgrounds, would provide an opportunity to get more systematic data across a large number of languages.
This is pretty exploratory. We don't have strong hypotheses. But as data comes in, we will be analyzing to see what we get, and we will report it here.
The Grammars of English
In designing our age-of-acquisition study, we realized a second problem. Correct English grammar varies across different dialects. In Newfoundland, you can say "Throw me down the stairs the hammer," but most places, you can't. (I have heard that this is said in parts of Rhode Island, too, but only anecdotally.) We don't want to count a late-learner of English who says "Throw me down the stairs the hammer" as not knowing English if in fact she lives in Newfoundland!
So what we really wanted were questions for which the correct answer is the same in all English dialects. But we didn't know what those were. Again, the literature was only partly helpful here. For obvious reasons, researchers tend to be interested in understanding peculiar constructions specific to certain dialects, rather than recording what is the same everywhere (boring).
We picked out a lot of grammar rules that we at least had no reason to believe varied across dialect. But we also realized that there was an opportunity here to study differences across dialects. So we included a subset of items that we thought probably would be different across dialects so that we can explore relationships across dialects.
When you take the quiz, at the end we give you our best guess as to what your native language is and what dialect of English you speak. How is that related to the three issues I just discussed?
It's deeply related. The best way of proving that you understand how people's understanding of grammar is affected by the age at which they started learning, their first language (if any), and the dialect of English they speak, is to show that you can actually distinguish people based on their grammar. In fact, training an algorithm to make just that distinction is a common way of analyzing and exploring data.
There are also obvious practical applications for an algorithm that can guess someone's language background based on their grammar (for education, localization of websites, and so on).
But an important reason we included the algorithm's predictions in the quiz itself was to present the results of the study to participants in the study as the study goes on. Certainly, you can read this and other blog posts I've written about the project as well. But it probably took you as long to read this post as to do the quiz. The algorithm and its predictions boil down the essence of the study in a compelling way. Based on the (numerous) emails I have gotten, it has inspired a lot of people to think more about language. Which is great. The best Web-based studies are a two-way street, where the participants get something out of the experience, too.
We chose the particular algorithm we use because it runs quickly and could be trained on very little data. You can read more about it by clicking on "how it works" in our data visualization. We are testing out more sophisticated algorithms as well, which are likely to do much better. Algorithms for detecting underlying patterns are actually a specialty of my laboratory, and this will be a fantastic dataset to work with. These algorithms mostly run too slowly to use as part of the quiz (nobody wants to wait 10 minutes for their results), but the plan is to describe those results in future posts and/or in future data visualizations.
If you have any questions about this work, please ask in the comments below or shoot me an email at email@example.com.
GamesWithWords.org will be experiencing periodic outages as we upgrade* the server. The incredible response we've had for WhichEnglish has completely overwhelmed the server. After bringing it back from the dead multiple times, the techs at Datarealm convinced me to upgrade to the next tier of server.
This is possibly overkill, in that we don't normally get the kind of traffic we got today. Over 12% of *all* visitors to the website since Jan. 1, 2008, came in the last 24 hours! Still, traffic has been steadily rising over the last year, and large spikes are getting much more frequent.
Worst case scenario, this should result in a faster, more stable experience for people going forward.
*Upgrading while there is heavy traffic to your website is not ideal. But then neither is having the site crash constantly.
After my optimistic comments about "overkill", I've spent most of the last 5 days performing various upgrades to the server. Traffic to the site peaked at about 100,000 visits/day (it was a little lower Sunday, but then weekend traffic is usually down).
I have updated the dialect chart based on the results for the first few days. Since the new version shows up automatically in the frame in the previous post, I haven't added it in here. And you can get a better look at it on the website.
The biggest difference is that I also added several "dialects" for non-native speakers of English. That is, I added five new dialects, one each for people whose first language was Spanish, German, Portuguese, Dutch, or Finnish. I'll be adding more of these dialects in the future, but those just happen to be the groups for which we have a decent number of respondents.
As you can see, the algorithm finds that American & Canadian speakers are more like one another than they are like anyone else. Similarly, English, Irish, Scottish, and Australian speakers are more like one another than like anyone else. And the non-native English speakers also form a group. I'll leave you to explore the more fine-grained groupings on your own.
If you are wondering why New Zealanders are off by themselves, that's mostly because we don't have very many of them, and the algorithm has difficulty classifying dialects for which there isn't much data. Same for Welsh English, South African English, and Black Vernacular English. So if you know people who speak any of those dialects...
But vocabulary and pronunciation aren't the only things that vary across different dialects of English. We are in the midst of a soft launch of a new project which will, among other things, help map out the differences in English grammar around the world.
I put together a visualization of early results below (you may want to load it in its own page -- depending on your browser, the embedded version below may not work). You can use this graphic to explore the similarities among eight English dialects (American, Canadian, English English, Irish, New Zealandish, Northern Irish, Scottish, and South African).
As more results come in (about other dialects like Ebonics and Welsh, about specific parts of America or Canada, etc.), I'll be updating this graphic. So please take the survey and then check back in soon.
I just finished a radio interview about birth order.
Apparently not very much research goes into booking guests for radio & TV shows. Lately, I've been getting at least one interview request a month to talk about birth order. And every time, they are disappointed when I tell them that there's little evidence that birth order affects personality at all. They *wouldn't* be surprised if they had read *anything* that I had written or said on the topic. (Well, except for that FOX interview, which was edited to make it look like I said the exact opposite of what I actually said.)
It's been making me think I should do more birth order research, just so I have something to say at these interviews.