
Magic Singlish

A number of non-native English speakers get "Singaporean" as the top guess for their native language. You can actually see that by playing around in our dialect navigator. Here's a screenshot of a particularly illuminating view:


As you can see, "Singaporean" is connected to a big bundle of non-native dialects. Most of the other native dialects are off in a chain in the bottom right. Here is another view with a slightly weaker filter on connectedness:

Again, most of the non-native dialects cluster together, and most of the native dialects do not connect directly to that cluster but rather connect through Singaporean. Standard American and AAVE are once again off in their own cluster.

Of course, this view just tells you what is connected to what. It's possible that Swedish is actually more similar to Irish than to Singaporean, even though the chain of connections between Swedish and Irish is longer. If you click on one of the dialects, the panel on the left will show you how closely related that dialect is to all the others:

We're working on a browser that will let you see *why* different dialects are more or less related -- that is, what answers in the quiz are typical of which dialects. I'm hoping it will be ready soon. In the meantime, enjoy the dialect browser.

Updated results on the relationship between English dialects

I've updated the interactive visualization of the relationships between the Englishes of the world to include a couple dozen additional native languages. Check it out.

Forums find GamesWithWords

A number of forums have picked up the WhichEnglish quiz and have produced some really intelligent and insightful conversation. I recommend in particular this conversation on MetaFilter. There is also an extensive conversation at Hacker News and a somewhat older discussion at Reddit. And there is a lot of discussion in Finnish and Hungarian, but I have no idea what they are saying...

Handling viral traffic

Around 4am EST on May 28, we started getting *a lot* of traffic to the website. This very quickly overloaded the server, resulting in the website running very slowly. We did some optimization. Things sped up, and our reward was more traffic. So we switched to a more powerful server. And so on.

Things are finally under control, at least for the moment. You can see that we've managed to get the average page load time down to a reasonable level for the last day or so, without any large spikes:



Of course, an overwhelming amount of traffic is a good problem to have, and I won't complain if things overheat again.

Which English: The Science, Part 1

I've gotten a number of questions about the science behind our WhichEnglish quiz. Actually, I had intended to post a more detailed discussion days ago, but I got distracted by other matters.

In this project, we are looking at three interrelated issues:

1. How does the age at which you start learning a language affect how well you learn that language?
2. How is learning a foreign language affected by the language you already know?
3. How are the grammars of different English dialects related?

And of course, we train an algorithm to predict participants' native language and dialect of English based on their answers. I'll return to that at the end.

Age of Acquisition

Although WhichEnglish has a few scientific targets, age-of-acquisition effects were the original inspiration. Everybody knows that the older you are when you start learning a foreign language, the harder it is to learn. One possibility is that there is a critical period: Up to some age, you can learn a language like a native. After that age, you will never learn it perfectly. The other possibility is that there is no specific period for language learning; rather, language-learning simply gets a little harder every day.

The evidence is unclear. Ideally, you would compare people who started learning some language (say, English) from birth with people who started as 1-year-olds, people who started as 2-year-olds, and so on. Or maybe you would want something even finer-grained. The problem is that you need a decent number of people at each age (50 would be a good number), and it quickly becomes infeasible.

One study that came close to this ideal used census data. The authors -- led by Kenji Hakuta -- realized that the US census asks foreign-born residents to rate their own English ability. The authors compared this measure of English ability with the year of immigration (an approximation for the age at which the person started learning English). Their results showed a steady decline, rather than a critical period.
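
To make the contrast between the two hypotheses concrete, here is a minimal sketch of how one might compare them on data like Hakuta's. This is purely illustrative: the numbers are made up, and the use of Python and scipy's curve_fit is my choice for the sketch, not the study's actual analysis.

```python
# Hypothetical sketch: compare a steady-decline model against a
# critical-period model on (age of acquisition, proficiency) data.
# The data below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

ages = np.array([0, 2, 4, 6, 8, 10, 12, 15, 20, 25, 30, 40], dtype=float)
proficiency = np.array([98, 97, 96, 94, 92, 90, 87, 83, 78, 74, 70, 63], dtype=float)

def steady_decline(age, intercept, slope):
    # Proficiency drops a little with every year of delay.
    return intercept + slope * age

def critical_period(age, intercept, slope, cutoff):
    # Flat (native-like) up to the cutoff age, then declining.
    return intercept + slope * np.maximum(age - cutoff, 0.0)

p_linear, _ = curve_fit(steady_decline, ages, proficiency)
p_critical, _ = curve_fit(critical_period, ages, proficiency, p0=[98, -1.0, 10.0])

def rss(model, params):
    # Residual sum of squares: smaller means a better fit.
    return float(np.sum((proficiency - model(ages, *params)) ** 2))

print("steady decline RSS:  ", rss(steady_decline, p_linear))
print("critical period RSS: ", rss(critical_period, p_critical))
```

If the extra cutoff parameter buys you essentially nothing, the data favor a steady decline; a clearly better fit with a well-defined cutoff would favor a critical period.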

We are trying to build on this work in a few ways. For one, it would be nice to confirm (or disconfirm) the previous results with a more sensitive measure of English ability. So rather than just ask people how good their English is, we have them take a test. Also, we are getting more precise information about when the participant started learning English and in what contexts.

There is also good reason to suspect that age of acquisition affects different aspects of language differently. Studies have shown that even people who began learning a language as toddlers have detectable -- if very subtle -- accents. However, people who start learning foreign languages as adults usually report that learning vocabulary isn't so hard. Grammar seems to be somewhere in between. The Hakuta study didn't distinguish these different aspects of language.

WhichEnglish focuses on grammar. We also have a vocabulary quiz to look at vocabulary. A pronunciation test is in the works.

First language effects

When we started thinking about studying age-of-acquisition effects, we quickly realized a problem. We needed questions that would be difficult for someone who learned English as a second language. But which aspects of grammar are difficult seems to depend on your first language. I personally have difficulty with aspect in Russian because the English aspect system is much less complex. However, dealing with tense in Russian is relatively straightforward, since the Russian tense system is much less complex than English's.

Since we didn't know for sure what the language backgrounds of our participants would be, we wanted a range of questions that covered the different kinds of problems people with different backgrounds might have.

As we combed the literature, we realized that it was pretty fragmented. One study might say that grammar rule x is difficult for Japanese-speakers and grammar rule y is difficult for German-speakers, but there would be no information on how Japanese-speakers fare with grammar rule y and how German-speakers manage with grammar rule x. This actually makes sense: Most studies look at speakers of one or at most a handful of language backgrounds. This is partly a matter of research interest (the researchers are usually interested in some particular language) and partly a matter of feasibility (in a lab setting, you can only test so many participants). We realized that our study, by virtue of being on the Internet and recruiting people from a wide array of backgrounds, would provide an opportunity to get more systematic data across a large number of languages.

This is pretty exploratory. We don't have strong hypotheses. But as data come in, we will be analyzing them to see what we get, and we will report it here.

The Grammars of English

In designing our age-of-acquisition study, we realized a second problem. Correct English grammar varies across dialects. In Newfoundland, you can say "Throw me down the stairs the hammer," but in most places, you can't. (I have heard that this is said in parts of Rhode Island, too, but only anecdotally.) We don't want to count a late learner of English who says "Throw me down the stairs the hammer" as not knowing English if in fact she lives in Newfoundland!

So what we really wanted were questions for which the correct answer is the same in all English dialects. But we didn't know what those were. Again, the literature was only partly helpful here. For obvious reasons, researchers tend to be interested in understanding peculiar constructions specific to certain dialects, rather than recording what is the same everywhere (boring).

We picked out a lot of grammar rules that we at least had no reason to believe varied across dialects. But we also realized that there was an opportunity here to study differences across dialects, so we included a subset of items that we thought probably would differ from dialect to dialect, letting us explore the relationships among them.

The algorithm

When you take the quiz, at the end we give you our best guess as to what your native language is and what dialect of English you speak. How is that related to the three issues I just discussed?

It's deeply related. The best way of proving that you understand how people's knowledge of grammar is affected by the age at which they started learning, their first language (if any), and the dialect of English they speak is to show that you can actually distinguish people based on their grammar. In fact, training an algorithm to make just that distinction is a common way of analyzing and exploring data.

There are also obvious practical applications for an algorithm that can guess someone's language background based on their grammar (for education, localization of websites, and so on).

But an important reason we included the algorithm's predictions in the quiz itself was to present the results to participants as the study goes on. Certainly, you can read this and the other blog posts I've written about the project as well. But it probably took you as long to read this post as to do the quiz. The algorithm and its predictions boil down the essence of the study in a compelling way. Based on the (numerous) emails I have gotten, it has inspired a lot of people to think more about language. Which is great. The best Web-based studies are a two-way street, where the participants get something out of the experience, too.

We chose the particular algorithm we use because it runs quickly and can be trained on very little data. You can read more about it by clicking on "how it works" in our data visualization. We are testing more sophisticated algorithms as well, which are likely to do much better. Algorithms for detecting underlying patterns are actually a specialty of my laboratory, and this will be a fantastic dataset to work with. These algorithms mostly run too slowly to use as part of the quiz (nobody wants to wait 10 minutes for their results), but the plan is to describe those results in future posts and/or in future data visualizations.
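
For the curious, here is a rough sketch of the general idea behind this kind of classifier. To be clear, this is not the algorithm the quiz actually uses; it just illustrates one standard approach (Naive Bayes over categorical quiz answers), with made-up data and scikit-learn chosen purely for illustration.

```python
# Illustrative sketch only: predict native language from quiz answers.
# The real quiz uses its own fast, small-data algorithm (described on the site).
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

# Each row is one participant's answers to four grammar items (made-up data).
answers = [
    ["a", "b", "b", "a"],
    ["a", "b", "a", "a"],
    ["b", "a", "a", "b"],
    ["b", "a", "b", "b"],
]
native_language = ["English", "English", "Finnish", "Finnish"]

# Encode the multiple-choice answers as category indices.
encoder = OrdinalEncoder()
X = encoder.fit_transform(answers)

model = CategoricalNB()
model.fit(X, native_language)

# Classify a new participant's answers.
new_answers = encoder.transform([["a", "b", "b", "b"]])
print(model.predict(new_answers))        # best single guess
print(model.predict_proba(new_answers))  # probabilities over languages
```

Something in this family shares the properties mentioned above -- it trains on very little data and predicts almost instantly -- which is what you need when the guess has to appear at the end of the quiz.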

In conclusion

If you have any questions about this work, please ask in the comments below or shoot me an email at gameswithwords@gmail.com.

Good problems to have

GamesWithWords.org will be experiencing periodic outages as we upgrade* the server. The incredible response we've had for WhichEnglish has completely overwhelmed the server. After bringing it back from the dead multiple times, the techs at Datarealm convinced me to upgrade to the next tier of server.

This is possibly overkill, in that we don't normally get the kind of traffic we got today. Over 12% of *all* visitors to the website since Jan. 1, 2008, came in the last 24 hours! Still, traffic has been steadily rising over the last year, and large spikes are getting much more frequent.



Worst case scenario, this should result in a faster, more stable experience for people going forward.

*Upgrading while there is heavy traffic to your website is not ideal. But then neither is having the site crash constantly.

**Update**

After my optimistic comments about "overkill", I've spent most of the last 5 days performing various upgrades to the server. Traffic to the site peaked at about 100,000 visits/day (it was a little lower Sunday, but then weekend traffic is usually down).

There was a lot I could do to shrink page-load time (compressing images, minifying JavaScript files, etc.). But the biggest issues were with sending data to and from the database. Here, I did some work to optimize and cut down the number of calls to the database, but the real heroes are the folks at Datarealm, who -- based on the amount of time they've put into helping me with the site over the last week -- have definitely lost money on having me as a client. If you are looking for someone to host your website, I warmly recommend them.
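
To give a flavor of the database side of this, the biggest win usually comes from replacing a query-per-item loop with a single batched query. The sketch below uses Python and sqlite3 with hypothetical table and column names purely for illustration; it is not the site's actual code.

```python
# Sketch: cut database round trips by issuing one batched query instead of N.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect("example.db")
conn.execute("CREATE TABLE IF NOT EXISTS answers (quiz_item INTEGER, choice TEXT)")
conn.executemany("INSERT INTO answers VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "a")])

item_ids = [1, 2, 3]

# Slow pattern: one round trip per item.
slow = [conn.execute("SELECT choice FROM answers WHERE quiz_item = ?",
                     (item_id,)).fetchone()
        for item_id in item_ids]

# Faster pattern: a single query with an IN clause.
placeholders = ",".join("?" for _ in item_ids)
fast = conn.execute(
    f"SELECT quiz_item, choice FROM answers WHERE quiz_item IN ({placeholders})",
    item_ids).fetchall()

print(slow)
print(fast)
conn.close()
```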

Findings: Which English -- updated dialect chart

I have updated the dialect chart based on the results from the first few days. Since the new version shows up automatically in the frame in the previous post, I haven't added it here. And you can get a better look at it on the website.

The biggest difference is that I also added several "dialects" for non-native speakers of English. That is, I added five new dialects, one each for people whose first language was Spanish, German, Portuguese, Dutch, or Finnish. I'll be adding more of these dialects in the future, but those just happen to be the groups for which we have a decent number of respondents.

As you can see, the algorithm finds that American & Canadian speakers are more like one another than they are like anyone else. Similarly, English, Irish, Scottish, and Australian speakers are more like one another than like anyone else. And the non-native English speakers also form a group. I'll leave you to explore the more fine-grained groupings on your own.

If you are wondering why New Zealanders are off by themselves, that's mostly because we don't have very many of them, and the algorithm has difficulty classifying dialects for which there isn't much data. Same for Welsh English, South African English, and Black Vernacular English. So if you know people who speak any of those dialects...

The English Grammars of the World

It's widely observed that not everybody speaks English the same way. Depending on where you grew up, you might say y'all, you guys, or just you. You might pronounce grocery as if it were "groshery" or "grossery." There have been some excellent, fine-grained studies of how these aspects of English vary across the United States and elsewhere, such as this one.

But vocabulary and pronunciation aren't the only things that vary across different dialects of English. We are in the midst of a soft launch of a new project which will, among other things, help map out the differences in English grammar around the world.

I put together a visualization of early results below (you may want to load it in its own page -- depending on your browser, the embedded version below may not work). You can use this graphic to explore the similarities among English dialects, including American, Canadian, English English, Irish, New Zealandish, Northern Irish, Scottish, and South African.

As more results come in (about other dialects like Ebonics and Welsh, about specific parts of America or Canada, etc.), I'll be updating this graphic. So please take the survey and then check back in soon.



Load the graphic directly here.

Doing your homework

I just finished a radio interview about birth order.

Apparently not very much research goes into booking guests for radio & TV shows. Lately, I've been getting at least one interview request a month to talk about birth order. And every time, they are disappointed that I can't tell them how birth order affects personality -- that, in fact, there's little evidence to suggest it does. They *wouldn't* be surprised if they had read *anything* that I had written or said on the topic. (Well, except for that FOX interview, which was edited to make it look like I said the exact opposite of what I actually said.)

It's been making me think I should do more birth order research, just so I have something to say at these interviews.

Calling all citizen scientists

SciStarter would like to know more about your experiences with Citizen Science. They are running a survey (here) in preparation for a workshop at the Citizen CyberScience Summit in London next month.

More Citizens, More Science


For the last couple of years, most articles about Citizen Science -- in which amateurs contribute to scientific projects -- have been hagiography. These articles were nearly exclusively rah-rah, all about the exciting new development.

It seems that we've matured a bit as a field, because lately I've run across a couple articles that, while still being positive overall, have laid out some real criticism. For instance, in an article in Harvard Magazine, Katherine Xue concludes with the worry that citizen science may be less about involving the public and more about cheap labor (full disclosure: I was interviewed for and appear in this article). Many citizen science projects, she notes, are little more than games or, worse, rote labor, with little true engagement for the volunteer in the scientific mission.

Similarly, in a much-tweeted article at The Guardian, Michelle Kilfoyle and Hayley Birch write, "Who really benefits the most from [citizen science]: the amateurs or the professionals? … Most well-known initiatives are the big crowdsourcing projects: big on the number of participants but not necessarily the level of participation."

Introducing the VerbCorner Forum

These articles resonated with me. Ever since we launched VerbCorner, our citizen science project looking at the structure of language, meaning, and thought, we've wanted to find additional ways to get our volunteers involved in the science and to help them get more out of participating. VerbCorner is very much a crowdsourcing project -- most of what volunteers do on the site is contribute labor. We've always had this blog, where people could learn more about the project, but that's not especially interactive.

To that end, we've added a forum where anyone and everyone involved in the project can discuss the project, offer suggestions, debate the science, and talk about anything related (syntax, semantics, etc.). We have high hopes for this forum. Over the years, I have gotten a lot of emails from participants in the various projects at GamesWithWords.org, emails with questions about the projects, ideas for new experiments, and -- all too often -- reports of bugs or typos. These emails have been extremely useful, and in a few cases have even led to entirely new research directions. But email is a blunt instrument, and I expect that for everyone who has emailed, at least ten others had similar comments but never got around to tracking down our email address.

I hope to see you on the forum!

A Great Year for GamesWithWords.org

Unique visitors at GamesWithWords.org were up 76% in 2013 over the previous year. That's after several years of fairly steady traffic.

Meanwhile, two journal papers and a conference paper involving data collected at GamesWithWords.org were accepted (and two more are currently under review). Many thanks to everyone who participated and otherwise helped out!

Results (Round 1): Crowdsourcing the Structure of Meaning & Thought

Language is a device for moving a thought from one person's head into another's. This means that to have any real understanding of language, we also need to understand thought. This is what makes work on language exciting. It is also what makes it hard.

With the help of over 1,500 Citizen Scientists working through our VerbCorner project, we have been making rapid progress.

Grammar, Meaning, & Thought

You can say Albert hit the vase and Albert hit at the vase. You can say Albert broke the vase but you can't say Albert broke at the vase. You can say Albert sent a book to the boarder [a person staying at a guest house] or Albert sent a book to the border [the line between two countries], but while you can say Albert sent the boarder a book, you can't say Albert sent the border a book. And while you say Albert frightened Beatrice -- where Beatrice, the person experiencing the emotion, is the object of the verb -- you must say Beatrice feared Albert -- where Beatrice, the person experiencing the emotion, is now the subject.

How do you know which verb gets used which way? One possibility is that it is random, and this is just one of those things you must learn about your language, just like you have to learn that a certain furry, barking animal is called a "dog" and not a "perro," "xiaogou," or "sobaka." This might explain why it's hard to learn language -- so hard that non-human animals and machines can't do it. In fact, it results in a learning problem so difficult that many researchers believe it would be impossible, even for humans (see especially work on Baker's Paradox).

Many researchers have suspected that there are patterns in terms of which verbs can get used in which ways, explaining the structure of language and how language learning is possible, as well as shedding light on the structure of thought itself. For instance, the difference (it is argued) between Albert hit the vase and Albert hit at the vase is that the latter sentence means that Albert hit the vase ineffectively. You can't say Albert broke at the vase because you can't ineffectively break something: It is either broken or not. The reason you can't say Albert sent the border a book is that this construction means that the border owns the book, which a border can't do -- borders aren't people and can't own anything -- but a boarder can. The difference between Albert frightened Beatrice and Beatrice feared Albert is that the former describes an event that happened in a particular time and place (compare Albert frightened Beatrice yesterday in the kitchen with Beatrice feared Albert yesterday in the kitchen).


When researchers look at the aspects of meaning that matter for grammar across different languages, many of the same aspects pop up over and over again. Does the verb describe something changing (break vs. hit)? Does it describe something only people can do (own, know, believe vs. exist, break, roll)? Does it describe an event or a state (frighten vs. fear)? This pattern is too suspicious to be accidental. Researchers like Steven Pinker have argued that language cares about these aspects of meaning because these are basic distinctions our brain makes when we think and reason about the world (see Stuff of Thought). Thus, the structure of language gives us insight into the structure of thought.

The Question

The theory is very compelling and is exciting if true, but there are good reasons to be skeptical. The biggest one is that there simply isn't that much evidence one way or another. Although a few grammatical constructions have been studied in detail (in recent years, this work has been spearheaded by Ben Ambridge of the University of Liverpool), the vast majority have not been systematically studied, even in English. Although evidence so far suggests that which verbs go in which grammatical constructions is driven primarily or entirely by meaning, skeptics have argued that this is because researchers have so far focused on exactly those parts of language that are systematic, and that if we looked at the whole picture, we would see that things are not so neat and tidy.

The problem is that no single researcher -- nor even an entire laboratory -- can possibly investigate the whole picture. Checking every verb in every grammatical construction (e.g., noun verb noun vs. noun verb at noun, etc.) for every aspect of meaning would take one person the rest of her life.

Crowdsourcing the Answer

Last May, VerbCorner was launched to solve this problem. For the first round of the project, we posted questions about 641 verbs and six different aspects of meaning. By October 18th, 1,513 volunteers had provided 117,584 judgments, which works out to 3-4 people per sentence per aspect of meaning. That was enough data to start analyzing.
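
A quick sanity check on that figure, using the roughly 5,000 sentences per task mentioned in the analysis section below:

```python
# Rough check of the judgments-per-sentence figure. The ~5,000 sentence
# count per task is taken from the analysis section later in this post.
print(117_584 / (5_000 * 6))   # about 3.9 judgments per sentence per task
```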

As predicted, there is a great deal of systematicity in the relationship between meaning and grammar (for details on the analysis, see the next section). These results suggest that the relationship between grammar and meaning may indeed be very systematic, helping to explain how language is learnable at all. It also gives us some confidence in the broad project of using language as a window into how the brain thinks and reasons about the world. This is important, because the mind is not easy to study, and if we can leverage what we know about language, we will have learned a great deal. As we test more verbs and more aspects of meaning -- I recently added an additional aspect of meaning and several hundred new verbs -- that window will become clearer and clearer.

Unless, of course, it turns out that not all of language is so systematic. While our data so far represent a significant proportion of all the research to date, they cover only a tiny fraction of English. That is what makes research on language so hard: there is so much of it, and it is incredibly complex. But with the support of our volunteer Citizen Scientists, I am confident that we will be able to finish the project and launch a new phase of the study of language.

That brings up one additional aspect of the results: they show that this project is possible. Citizen Science is rare in the study of the mind, and many of my colleagues doubted that amateurs could provide reliable results. In fact, by the standard measures of reliability, the information our volunteers contributed is very reliable.

Of course, checking for a systematic relationship between grammar and meaning is only the first step. We'd also like to understand which verbs and grammatical constructions have which aspects of meaning and why, and to leverage this knowledge into a deeper understanding of the nature of thought. Right now, we still don't have enough data to draw exciting new conclusions (for exciting old conclusions, see Pinker's Stuff of Thought). I expect I'll have more to say about that after we complete the next phase of data collection.

Details of the Analysis

Here is how we did the analyses. If meaning determines which grammatical constructions a given verb can appear in, then you would expect that all the verbs that appear in the same set of frames should be the same in terms of the core aspects of meaning discussed above. So if one of those verbs describes, for instance, physical contact, then all of them should.

Helpfully, the VerbNet project -- which was built on earlier work by Beth Levin -- has already classified over 6,000 English verbs according to which grammatical constructions they can appear in. The 641 verbs posted in the first round of the VerbCorner project consisted of all the verbs from 11 of these classes.

So is it the case that, within a given class, all the verbs describe physical contact or none of them do? One additional complication is that, as I described above, the grammatical construction itself can change the meaning. So what I did was count, for each grammatical construction, what percentage of verbs from the same class had the same value for a given aspect of meaning, and then average over those constructions.

The "Explode on Contact" task in VerbCorner asked people to determine whether a given sentence (e.g., Albert hugged Beatrice) described contact between different people or things. Were the results consistent for a given verb class and a given grammatical construction? Several volunteers checked each sentence. If there was disagreement among the volunteers, I used whatever answer the majority had chosen.
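
Here is a minimal sketch of that calculation. The data structures, class labels, and answer categories below are my own illustrative stand-ins, not the project's actual analysis code.

```python
# Sketch of the consistency measure: majority-vote each sentence, then ask,
# within each verb class and grammatical construction, what fraction of
# verbs share the most common answer, and average that over constructions.
from collections import Counter, defaultdict

# (verb class, construction, verb, one volunteer's answer) -- made-up judgments.
judgments = [
    ("9.1", "NP V NP", "hug",   "contact"),
    ("9.1", "NP V NP", "hug",   "contact"),
    ("9.1", "NP V NP", "hug",   "no contact"),
    ("9.1", "NP V NP", "pat",   "contact"),
    ("9.1", "NP V NP", "pat",   "contact"),
    ("9.1", "NP V NP", "shove", "contact"),
    ("9.1", "NP V NP", "shove", "contact"),
]

# Step 1: majority vote per (class, construction, verb).
votes = defaultdict(list)
for verb_class, construction, verb, answer in judgments:
    votes[(verb_class, construction, verb)].append(answer)
majority = {key: Counter(ans).most_common(1)[0][0] for key, ans in votes.items()}

# Step 2: within each (class, construction), the fraction of verbs agreeing
# with the modal answer.
by_class_construction = defaultdict(list)
for (verb_class, construction, verb), answer in majority.items():
    by_class_construction[(verb_class, construction)].append(answer)

class_scores = defaultdict(list)
for (verb_class, construction), answers in by_class_construction.items():
    modal_count = Counter(answers).most_common(1)[0][1]
    class_scores[verb_class].append(modal_count / len(answers))

# Step 3: average over constructions within each class.
consistency = {vc: sum(scores) / len(scores) for vc, scores in class_scores.items()}
print(consistency)   # e.g. {'9.1': 1.0}
```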

This graph shows the degree of consistency by verb class (the classes are numbered according to their VerbNet number), with 100% being maximum consistency. You can see that all eleven classes are very close to 100%. Obviously, exactly 100% would be more impressive, but that's extremely rare to see when working with human judgments, simply because people make mistakes. We addressed this in part by having several people check each sentence, but there are so many sentences (around 5,000) that simply by bad luck several people will sometimes all make a mistake on the same sentence. So this graph looks as close to 100% as one could reasonably expect. As we get more data, it should get clearer.

Results were similar for other tasks. Another one looked at whether the sentence described someone applying force (pushing, shoving, etc.) to something or someone else:

Maybe everything just looks very consistent? We actually had a check for that. One of the tasks measures whether the sentence describes something that is good, bad, or neither. There is no evidence that this aspect of meaning matters for grammar (again, the hypothesis is not that every aspect of meaning matters -- only certain ones that are particularly important for structuring thought are expected to matter). And, indeed, we see much less consistency:

Notice that there is still some consistency, however. This seems to be mostly because most sentences describe something that is neither good nor bad, so there is a fair amount of essentially accidental consistency within each verb class. Nonetheless, this is far less consistency than what we saw for the other five aspects of meaning studied.

Citizen Science in Harvard Magazine

A nice, extended article on recent citizen science projects, covering a wide range of them -- including GamesWithWords.org. Check it out.

Science Mag studies science. Forgets to include control group.

Today's issue of Science carries the most meta sting operation I have ever seen. John Bohannon reports a study of open access journals, showing lax peer review standards. He sent 304 fake articles with obvious flaws to 304 open access journals; more than half of the articles were accepted.

The article is written as a stinging rebuke of open access journals. Here's the interesting thing: There's no comparison to traditional journals. For all we know, open access journals actually have *stricter* peer review standards than traditional journals. We all suspect not, but suspicion isn't supposed to count as evidence in science. Or in Science.

So this is where it gets meta: Science -- which is not open access -- published an obviously flawed article about open access journals publishing obviously flawed articles.

It would have been even better if Bohannon's article had run in the "science" section of Science, rather than in the news section, where it actually ran. But hopefully we can agree that Science can't absolve itself of checking its articles for factual accuracy and logical coherence just by labeling them "news."