
Pilot data

I am back from a long semi-silence. I have been trying to finish up a number of projects, which has left me less time to write. Speaking of…

One focus of my work is figuring out how children learn the meanings of verbs. This is complicated by the fact that we don't actually have solid, uncontroversial definitions of what verbs mean. If we don't know what verbs mean, how can we tell when a child has successfully learned them?

I am working on a large-scale project to get better definitions of verbs. We are developing many different tasks, each of which gets at one specific aspect of meaning that is thought to be important for at least some verbs. The traditional method would be to have skilled linguists go through verbs one at a time and consult their own intuitions, and in fact a lot of very good work has been done this way (e.g., Jackendoff's Semantic Structures, among many others). However, there are certain advantages to having this work done by a larger number of people who are naïve to linguistic theory, not the least of which is that there are a very large number of verbs, and one person can't get through them all at any reasonable speed. The one disadvantage of working with naïve participants is that they do not understand linguistic terminology, so you have to find some other way of explaining the task.

I have been developing some such tasks, and I could really use pilot data to see how well they are working. If you have a little time to spare, I would greatly appreciate the help. There are three in particular that I am currently working on:
- Ducks
- Person or Thing of the Year
- Simon Says Freeze

There is a comments box at the end where you can leave feedback and mention anything you noticed or found confusing. I do need data on all three, so please don't everyone just do the first one.

Fair warning: These tasks take a bit longer than the ones on my website. I expect they will take 20-30 minutes each, but that is a wild guess. If somebody does one and wants to leave a comment about how long it took, that would be helpful for me and also for others who might want to do it.

Many thanks.

3 comments:

Gordon P. Hemsley said...

Maybe it's just me, but I found it very difficult and tiring to determine what DIDN'T change. It's so much easier to say what DID.

It's compounded by the fact that the list is extremely long. I know you said it would take ~30 minutes, but time goes by a lot more slowly when it takes so much energy to answer a single question. Also, it was confusing that every question had a choice for "nothing changed" but not for "everything changed". If nothing changed, wouldn't we just check all the boxes anyway?

There was also at least one sentence that didn't seem grammatical to me, even discounting the fake words.

That being said, I do like how you presented the story in a way that would be accessible to non-linguists, although you did throw in a note about nouns and adjectives at the end that I thought might not have been a good idea.

So, full disclosure:
I am a linguist. I attempted the Ducks test. I didn't finish it.

HTH.

GamesWithWords said...

@Gordon:
Many thanks for trying the task, and thanks also for noticing the problem with "everything changed". That was left over from an earlier version (we've been through a few iterations); it has now been fixed and should make more sense.
As for whether it would be easier to identify which object was affected rather than which one wasn't ... it's an empirical question. Let's just say that we tried it the other way previously, and that turned out to be hard, too. My suspicion is that the new way is actually easier, but there's only one way to find out.
What part of the nouns and adjectives did you think was not a good idea?

GamesWithWords said...

BTW, in case anyone tried "Person or Thing of the Year": there was a similar issue with the available responses. I am fixing it now and should be done in ~15 minutes. "Simon Says Freeze" has already been partially piloted and so should work just fine.