Field of Science

Showing posts with label On self-knowledge. Show all posts

A Great Year for GamesWithWords.org

Unique visitors at GamesWithWords.org were up 76% in 2013 over the previous year. That's after several years of fairly steady traffic.

Meanwhile, two journal papers and a conference paper involving data collected at GamesWithWords.org were accepted (and two more are currently under review). Many thanks to everyone who participated and otherwise helped out!

Survey results: Where do you get your science news?

The last poll asked people where they get their science news. Most folks reported that they get science news from blogs, which isn't surprising, since they were reading this blog. Interestingly, less than 10% reported getting science news from newspapers. This fits my own experience; once I discovered science blogs, I stopped reading science news in newspapers altogether.

I would report the exact numbers for the poll, but Blogger ate them. I can tell that it still has all the data (it remembers that I voted), but is reporting 0s for every category. I'll be switching to Google Forms for the next survey.

New editor at Cognition (eventually)

There are no doubt many psychologists who don't count Cognition as their favorite journal. I just don't happen to know very many of them. Whenever the topic of favorite journals comes up, the answer is Cognition. One would think that would argue in favor of continuity; whatever they're doing is working.

That's apparently not how Cognition's for-profit publisher (Elsevier) sees it: they've decided to find a new editor, seemingly without consulting anyone in the field about it. I hope they know what they are doing.

Findings: Which of my posts do you like best?

It will surprise nobody that I like data. By extension, it should surprise nobody that what I like about blogging is getting instant feedback on whether people found a post interesting and relevant or not. This is in contrast to writing a journal article, where you wait at least a year or two before anyone starts citing you (if they ever do).

How I feel about data.

Sometimes the results are surprising. I expected my posts on the suspicious data underlying recent graduate school rankings to make a splash, but the two posts together got a grand total of 2 comments and 16 tweets (some of which are automatically generated by FieldofScience). I didn't expect posts on my recent findings regarding pronoun processing to generate that much interest, but they got 6 comments and 26 tweets, putting them among the most popular, at least as far as Twitter is concerned.

To get a sense of which topics you, dear readers, find the most interesting, I compiled the statistics from all my posts from the fall semester and tabulated those data according to the posts' tags. Tags are imperfect, as they reflect only how I decided to categorize the post, but they're a good starting point.

Here are the results, sorted by average number of retweets:


Label                   #Posts  Avg. Tweets  Avg. Reddit  Avg. Comments
findings                   2        13            0             3
publication                3        13            5             5
peer review                4        12           13            10
universal grammar          5        10            2             8
pronouns                   3        10            0             2
GamesWithWords.org         2         9            0             1
scientific methods         7         8            7             7
neuroscience               1         8            0             5
overheard                  1         7            0             1
language development       2         7            0             7
Web-based research         6         7            0             1
science and society        3         6            1             6
language                   6         6            1             3
education                  2         6            0             1
journalism                 2         6           18             9
politics                   7         6            0             2
science blogging           2         6            1             2
language acquisition       1         5            0             0
recession                  2         5            1             3
the future                 1         5            0             0
vision                     1         5            0             1
graduate school            4         5            0             3
science in the media       3         5           12             7
method maven               2         5           18            10
media                      3         4            0             1
psychology career path     1         4            0             2
lab notebook               3         3            0             1
none                       4         3            0             0

Since we all know correlation = causation, if I want to make a really popular post, I should label it "findings, publication, peer review". If I want to ensure it is ignored, I shouldn't give it a label at all.
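
For the record, the tallying itself is simple. Here is a minimal Python sketch of how averages-by-tag like those in the table above can be computed; the post records and their numbers are hypothetical, made up purely for illustration:

```python
from collections import defaultdict

# Hypothetical post records: each post carries its tags plus per-post stats.
posts = [
    {"tags": ["findings", "pronouns"], "tweets": 26, "comments": 6},
    {"tags": ["graduate school"], "tweets": 16, "comments": 2},
    {"tags": ["findings"], "tweets": 0, "comments": 0},
]

def average_by_tag(posts, stat):
    """Average a per-post statistic over every tag it appears under."""
    totals, counts = defaultdict(int), defaultdict(int)
    for post in posts:
        for tag in post["tags"]:
            totals[tag] += post[stat]
            counts[tag] += 1
    return {tag: totals[tag] / counts[tag] for tag in totals}

avg_tweets = average_by_tag(posts, "tweets")
# Sort tags by average tweets, most popular first.
ranking = sorted(avg_tweets, key=avg_tweets.get, reverse=True)
```

A post with several tags gets counted under each of them, which is also why the averages in the table above don't sum to anything meaningful across rows.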

At this point, I'd like to turn it over to the crowd. Are these the posts you want to see? If not, what do you want to read more about? Or if you think about your favorite blogs, what topics do you enjoy seeing on those blogs?

Does Global Warming Exist, and Other Questions We Want Answered

This week, I asked 101 people on Amazon Mechanical Turk both whether global temperatures have been increasing due to human activity AND what percentage of other people on Amazon Mechanical Turk would say yes to the first question. 78% said yes to the first question. Here are the answers to the second, broken down by whether the respondent did or did not believe in man-made global warming:

Question: How many other people on Amazon Mechanical Turk believe global temperatures have been increasing due to human activity?

              Average    1st-3rd Quartile
Believers       72%          60%-84%
Denialists      58%          50%-74%
Correct         78%            ---

Notice that those who believe global warming is caused by human activity are much better at estimating how many others will agree than are those who do not. Interestingly, the denialists' estimate is much closer to the rate among all Americans than to the rate among Turkers (who are mostly but not exclusively American, and are certainly a non-random sample).

So what?

Why should we care? More importantly, why did I do this experiment? A major problem in science/life/everything is that people disagree about the answers to questions, and we have to decide who to believe. A common-sense strategy is to go with whatever the majority of experts says. There are two problems, though: first, it's not always easy to identify an expert, and second, the majority of experts can be wrong.

For instance, you might ask a group of Americans what the capital of Illinois or New York is. Although in theory Americans should be experts in such matters (it's usually part of the high school curriculum), in fact the majority answer in both cases is likely to be incorrect (Chicago and New York City, rather than Springfield and Albany). This was true even in a recent study of MIT and Princeton undergraduates, who in theory are smart and well-educated.

Which of these guys should you believe?

So how should we decide which experts to listen to, if we can't just go with "majority rules"? A long chain of research suggests an option: ask each of the experts to predict what the other experts would say. It turns out that the people who are best at estimating what other people's answers will be are also most likely to be correct. (I'd love to cite papers here, but the introduction here is coming from a talk I attended earlier in the week, and I don't have the citations in my notes.) In essence, this is an old trick: ask people two questions, one of which you know the answer to and one of which you don't. Then trust the answers on the second question that come from the people who got the first question right.

This method has been tested on a number of questions and works well. It was actually tested on the state capital problem described above, and it does much better than a simple "majority rules" approach. The speaker at the talk I went to argued that this is because people who are better able to estimate the average answer simply know more and are thus more reliable. Another way of looking at it, though (which the speaker also mentioned), is that someone who thinks Chicago is the capital of Illinois likely isn't considering any other possibilities, and so, when asked what other people will say, guesses "Chicago." The person who knows that Springfield is in fact the capital probably realizes nonetheless that many people will be misled by Chicago being the best-known city in Illinois, and so correctly predicts that lots of people will say Chicago but that some will also say Springfield.
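
For a two-option question, one simple version of this selection rule fits in a few lines of Python. This is a minimal sketch of the general idea, not necessarily the exact algorithm from the talk, and the responses below are made-up numbers for the Illinois example:

```python
# Toy data for "What is the capital of Illinois?": each respondent gives
# an answer plus a prediction of what fraction of respondents will say
# "Springfield". All numbers are hypothetical.
responses = [
    ("Chicago", 0.10), ("Chicago", 0.20), ("Chicago", 0.05),
    ("Springfield", 0.25), ("Springfield", 0.30),
]

def surprisingly_popular(responses, answer, alternative):
    """Choose `answer` if it turns out to be more common than the crowd
    predicted; otherwise fall back to the alternative answer."""
    n = len(responses)
    actual = sum(1 for a, _ in responses if a == answer) / n
    predicted = sum(p for _, p in responses) / n
    return answer if actual > predicted else alternative

winner = surprisingly_popular(responses, "Springfield", "Chicago")
```

Note that the majority here (3 of 5) says Chicago, yet the rule picks Springfield: 40% of respondents said Springfield, while the crowd predicted only 18% would, so Springfield is "surprisingly popular."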

Harder Questions

I wondered, then, how well it would work for a question where everybody knows there are two possible answers. So I surveyed Turkers about global warming. Believers were much better at estimating how many believers there are on Turk than denialists were.

Obviously, there are a few ways of interpreting this. Perhaps denialists underestimate both the proportion of climate scientists who believe in global warming (~100%) and the percentage of ordinary people who do, and so they think the evidence is weaker than it is. Alternatively, denialists, not believing in global warming themselves, have trouble accepting that other people do, and lower their estimates accordingly. The latter proposal, though, would suggest that believers should over-estimate the percentage of people who believe in global warming, which is not in fact the case.

Will this method work in general? In some cases, it won't. If you asked expert physicists in 1530 about quantum mechanics, presumably none of them would believe it, and all would correctly predict that none of the others would believe it. In other cases, it's irrelevant (near 100% of climatologists believe in man-made global warming, and I expect they all know that they all believe in it). More importantly, the method may work well for some types of questions and not others. I heard in this talk that researchers have started using the method to predict product sales and outcomes of sports matches, and it actually does quite well. I haven't seen any of the data yet, though.


------
For more posts on science and politics, click here and here.

Cloudier

I wasn't able to get Edward's suggestion to work, but I did sit down and paste all my posts back to 4/27/2010 into Wordle. In this more representative sample, it seems "people" actually beats out "word" and "language," though the combination of "word" and "words" would probably win if Wordle could handle inflectional morphology.

Games and Words

I diligently tag posts on this blog, not because I actually think anybody clicks in the cloud to find specific types of posts, but because it's interesting to see, over time, what I usually write about.

There's another way of doing this. Wordle.net lets you input a blog's feed, and it will extract the most common words from the most recent posts.


I'm gratified to see that the most common word in this blog is "data," followed by "studies," "participants" and "blog". The high frequency of "blog" and the URL for this site are a byproduct of my ATOM feed, which lists the URL of the blog after every post.

Unfortunately, the restriction of Wordle.net to the most recent posts means that some words are over-weighted. For instance, my recent post about shilling for products mentioned the word "product" enough times to make that word prominent in this word cloud.
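
For the curious, a rough local approximation of what Wordle does (count word frequencies, minus common stopwords) takes only a few lines of Python. The sample text and stopword list here are made up for illustration; in practice you would concatenate the text of all the feed's entries:

```python
import re
from collections import Counter

# Hypothetical post text standing in for the blog feed's contents.
text = "Data from participants. More data, more studies, more data."

# A tiny made-up stopword list; a real one would be much longer.
STOPWORDS = {"from", "more", "the", "a", "and"}

words = [w for w in re.findall(r"[a-z]+", text.lower())
         if w not in STOPWORDS]
top = Counter(words).most_common(3)
```

Counting your own archive this way also sidesteps Wordle's recent-posts restriction, so one wordy post can't dominate the cloud.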