Field of Science

Forgetting what you haven't yet learned

More than one student has complained that the space in their head is limited, and new information is simply pushing the old information out. In the terms of memory research, this is retroactive interference: learning new information causes you to forget old information.

The way this is typically studied in the laboratory is to have the participant learn something -- often a paired associate (think "Concentration") -- then learn something else, and then finally be tested on the original memory item(s). This way, one can vary that middle task in order to study how different activities cause different amounts/types of retroactive interference.

There is another type of interference: proactive interference. This is the effect that learning one piece of information has on future learning. That is, the books a student has already read make it harder to learn new information.

Just like retroactive interference, proactive interference is seen in both short-term and long-term memory. 

Memory Systems: How Does Memory Work?

The existence of interference tells us a lot about how memory works, because there is nothing necessary about it.

Consider a computer. We don't expect each new file we add to our computer to cause the computer to lose other files, short of copying over those original files. Similarly, the file I added today should not affect a file I add tomorrow, short of causing me to run out of disk space.

So why is human memory affected this way?

Overlapping Memories

There are a couple of reasons this could be. One is that memory is probably overlapping. A computer -- at least, in its basic forms -- saves each file in a unique place in memory. The human brain, however, probably reuses the same units for different memories. Memories are overlapping.

How exactly this works is still very much a matter of research and debate, but it makes a certain amount of sense. Suppose you have several different memories about your mother. It would make sense for your mental representation of your mother to show up in each of those memories. For one thing, that should make it easier to relate those memories to one another.
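As a toy illustration of how overlap produces interference, here is a minimal Hebbian associative memory. The patterns and the model below are my own invented sketch, not a specific proposal from the memory literature; the point is simply that when two memories share storage units, retrieving one picks up crosstalk from the other.

```python
# Toy Hebbian associative memory: storing a second, overlapping memory
# contaminates recall of the first. All patterns here are illustrative.

def outer(out, inp):
    """Weight matrix storing one input->output association (Hebbian rule)."""
    return [[o * i for i in inp] for o in out]

def add(W1, W2):
    """Superimpose two weight matrices in the same storage."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(W1, W2)]

def recall(W, inp):
    """Retrieve an output pattern by multiplying the weights by a cue."""
    return [sum(w * i for w, i in zip(row, inp)) for row in W]

# Two memories whose input patterns overlap (they share unit 1).
in_a, out_a = [1, 1, 0], [1, 0]
in_b, out_b = [0, 1, 1], [0, 1]

W = outer(out_a, in_a)
print(recall(W, in_a))            # clean recall of A: [2, 0]

W = add(W, outer(out_b, in_b))    # store the overlapping memory B
print(recall(W, in_a))            # recall of A is now contaminated: [2, 1]
```

If the two input patterns were non-overlapping (orthogonal), the crosstalk term would vanish and recall of A would stay clean -- which is exactly the computer-file situation described above.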

Searching for Memories

Another way interference might appear in memory is in how it affects memory retrieval. The more files you put on your computer, the harder it is to find the files you want. This is especially true if you keep them all in one directory and use keyword searches.

Human memory retrieval probably does not work like a keyword search, but nonetheless it is reasonable to assume that the more memories you have, the more similar memories you have. Thus, finding the right memory is harder, because you have to distinguish it from similar memories.

How exactly this plays out depends on your model of memory. I will talk about one I particularly like in a future post.

Upcoming Posts

Although my main research is in semantics and pragmatics -- aspects of language -- I have also worked on working memory. I have a paper coming out shortly, based partly on an experiment I ran at my Web-based lab. Over the next week or two, I plan to write about some of the fundamental questions about memory addressed in that paper, as well as write about the paper itself and lay out its results.

It appears I am in the right field

You Should Get a PhD in Science (like chemistry, math, or engineering)

You're both smart and innovative when it comes to ideas. Maybe you'll find a cure for cancer -- or develop the latest underground drug.

Motivations for Science

Where do cognitive scientists get subjects for their studies?

There is a certain amount of variation, but the workhorse of cognitive science is the Psych 100 student. At many universities, introductory psychology students are required to participate in studies (though I believe there is often an option for people who strongly object).

This is billed as an educational experience, and more or less effort is put into making it educational (I've been very impressed with both Harvard and MIT on this point), but it is also part of the machinery that makes the science possible.

The other option typically is to pay participants. Currently, the going rate at Harvard is $10/hour. This is supposed to be compensation for time, travel, etc., but certainly lots of undergraduates who are not currently enrolled in a psych class use it to generate pocket cash.

Conflicting Goals

One potential drawback of this system is that the motivations of the participants and of the researchers are not always aligned. The researcher typically wants to get good data; the participant may just want their $10 or course credit.

The truth is that the vast majority of participants give the experiment a good-faith effort, but there are always some (I'd say about 5-10%, in my experience) who just answer randomly and quickly in order to get out as soon as possible.

There are ways to help realign the participants' and researchers' interests. One is to program the experiment such that if you get all the answers right, you finish sooner than if you guess randomly. That makes guessing a less tempting strategy. (An easy way of doing this in a computerized experiment is to have the computer respond with a long error message every time a question is answered incorrectly, with the effect that participants who make many errors take longer to finish.)
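The incentive this creates is simple back-of-envelope arithmetic. In the sketch below, all the timing parameters (4 seconds per answer, a 10-second error message, 100 trials) are made-up values for illustration:

```python
# Back-of-envelope sketch of the incentive created by a post-error delay.
# The timing parameters are invented for illustration.

def expected_minutes(p_correct, n_trials=100, answer_s=4, penalty_s=10):
    """Expected session length when wrong answers trigger a long error message."""
    per_trial = answer_s + (1 - p_correct) * penalty_s
    return n_trials * per_trial / 60

print(round(expected_minutes(1.0), 1))    # careful participant: 6.7 minutes
print(round(expected_minutes(0.25), 1))   # random guesser (4 choices): 19.2 minutes
```

With those (hypothetical) numbers, random guessing nearly triples the session length, which is exactly the realignment of interests described above.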

What Motivations do Parents Have?

Prior to working in a developmental lab, I wondered what motivations parents have for bringing in their children for developmental studies. It takes them more time, since unlike our "adult" subjects, they typically do not live on campus, and they get compensated less (we give our participants a cheap toy plus $5 for gas -- and $5 means a lot less to a parent than a college student).

I had heard it rumored that many are hoping the "affiliation" with Harvard will help their kids in the future or that they are very interested in having a scientist study their kid and discover what a genius the kid is. This frankly made me a bit uncomfortable.

Now I've actually interacted with a lot of parents and kids, and if those are their motivations, I don't see it. The main motivation seems to be that the kids really enjoy coming to the lab. We have a big bin full of toys, and we usually play with them for a while when they first come in. And then, the experiments are tailored such that kids really find them entertaining. Finally, many seem to really enjoy collecting the stuffed animals we give them as prizes at the end.

Parents are always looking for ways to keep their children entertained. It never occurred to me that taking the kids to a developmental lab would be one of those ways, but it appears that it is.

(Some parents are also definitely motivated by participating in science. At the end of the experiment, I always describe the experiment to them. Some are clearly not overly interested, but others may stay an extra 10-15 minutes to talk about the study.)

Does your child have philosophical potential?

Having just written about the difficulty of defining the word know, I was amused to come across the following passage from Sperber & Wilson's Relevance:


Suppose, for example, that a child has not yet realised that X knows that P implies P, and so uses know interchangeably with believe. We would say that he had not yet mastered the concept. On the other hand, if he has grasped this logical point but is unable to think of a single instance of something he is prepared to call knowledge, we would regard this as a failure of memory or experience (or a mark of philosophical potential) rather than of understanding.

Cheekiness aside, it actually takes children a while to fully understand the difference between know and believe. According to Bartsch & Wellman's classic corpus study, Children Talk about the Mind, children begin understanding those two verbs in their third year of life, but they don't appear to have truly mastered the concepts until around the age of 4. This is in contrast with verbs of desire like want, which kids know by the time they are 2.

Why are kids slow to understand know and believe? Some of the difficulty appears to be in understanding false beliefs -- that is, the fact that the contents of a person's mind may conflict with the actual state of the world (John believes Algeria is in South America, but it's not). Until a child has mastered that concept, there isn't any substantive difference between know and believe.

Common knowledge

Language is based on common knowledge. This is true in a trivial sense: If I say

Cats are mammals.

Your ability to interpret that sentence relies on our common knowledge that the word cat refers to a furry domestic animal that meows. Likewise, I only believe that the sentence will be successful in communicating with you based on my belief that you know what a cat is. 


Common knowledge and inference

Language requires common knowledge in a much more subtle way as well. Suppose I say:

I am going to Paris tomorrow.

Your ability to interpret this sentence correctly depends on your being able to correctly assign meaning to tomorrow. Consider the fact that the sentence means different things spoken on different days. For us to successfully communicate about tomorrow, we must have interpreted it the same way and know that we have interpreted it the same way.

Notice that the word I and even the word Paris have the same problem.

It actually gets worse, since some communication requires the even more stringent concept of mutual knowledge. Suppose I ask my wife if she has fed the cats today. Technically, she could respond "yes" as long as she has fed at least two cats today. But of course, I am asking whether she fed our cats. I assume she will understand that's what I mean.

Now suppose she just answers "yes." For me to interpret this as meaning she fed our cats, I have to assume she knows that I was referring to our cats. Of course, for her to be confident that I will correctly interpret her response, she has to assume I assume that she assumes that I originally asked about her feeding our cats.

And so on.

Certain knowledge?

In their highly influential book Relevance, Sperber and Wilson argue that common knowledge cannot exist (actually, they talk about "mutual knowledge," which is something slightly different, but the differences aren't important here):

"Mutual knowledge must be certain, or else it does not exist; and since it can never be certain it can never exist." (p. 20)

Why do they think mutual knowledge can never be certain? Because, in a strict philosophical sense, it never can be. I can never be certain my wife knows I'm talking about our cats. And she can't be certain that I am referring to our cats. Probabilities get multiplied. So if confidence at each step is always 90%, my confidence that she knows that I know that she knows that I know that she knows I'm referring to our cats is only 53%.
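The arithmetic behind that 53% figure is just repeated multiplication -- six nested levels of 90% confidence:

```python
# If each level of "X knows that Y knows that..." carries only 90%
# confidence, nested certainty decays geometrically with depth.
confidence = 0.9
for depth in range(1, 7):
    print(depth, round(confidence ** depth, 2))
# depth 6 gives 0.53 -- the 53% figure above
```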

Sperber and Wilson use a much-expanded version of this argument to claim that mutual knowledge just doesn't exist and can't play a role in language, beyond perhaps giving the basic meaning of basic words like cat.

Are we certain philosophers?

A potential problem with their argument is that they assume people are only certain when certainty is justified. This is clearly not the case.



In recent talks, Steven Pinker has presented evidence that, at least in some circumstances, people really do act as if they believe in mutual knowledge. Pinker is interested in indirect speech, so his study involved innuendo. Suppose John says to Mary, "Would you like to come up to my apartment for a nightcap?"

How certain are you that John is proposing sex? Most people are fairly certain. 

How certain are you that Mary knows that John is proposing sex? Most people are a little less certain. 

How certain are you that John knows Mary knows John is proposing sex? Certainty drops again.

etc.

Now, change the scenario. What if John is particularly crass and says to Mary, "How would you like to go back to my apartment and have sex?"

How certain are you that John is proposing sex? That Mary knows John is proposing sex? That John knows that Mary knows that John is proposing sex? Most people remain certain no matter how far out the question is extended.

Notice that, at least in theory, Sperber & Wilson's argument should have applied. Nobody should be completely certain. Mary could have misheard. John might have a really odd idiolect. But people don't seem to be fazed.

Does mutual knowledge exist?

Well, at least sometimes. But I'm not completely sure how this affects Sperber & Wilson's argument. They weren't talking just about indirect speech, but about a much broader range of phenomena. They were arguing against theories that invoke mutual knowledge right and left, so it still remains to be seen whether mutual knowledge is such a pervasive phenomenon.

What is knowledge?

There are a number of good reasons to want a definition for knowledge. For instance, you might be a lexicographer. Or you might be a philosopher, wondering what knowledge is.

Either way, you're out of luck, because knowledge turns out to be a tricky beast.

Know vs. Believe

The easiest way to start is to compare know with believe. What is the difference between:

I believe it's Friday.

and

I know it's Friday.

The latter is more certain, but that's not all. It's possible to believe it's Friday on a Thursday. It's not possible to know that it's Friday on a Thursday. So we might be tempted to define

knowledge = true belief

That's not going to be enough, though. Suppose John just woke up from a coma. He knows he was in a coma, and he hasn't seen a calendar. Still, his intuition tells him it's Friday. Can he say he knows that it's Friday?

Well, he can say it. But even if it turns out that today really is Friday, we still would be uncomfortable saying John knows that it's Friday, unless we believe in ESP or some similar phenomenon.

Similarly, I might say that I know Barack Obama will be the next president of the United States. Even if it turns out that I am right and Obama does become the next president, it's a little weird to say that I knew it. It seems better to say I strongly believed it.

So we might try the following definition:

knowledge = justified true belief

The idea being that it only counts as knowledge if I have sufficient evidence.

Unfortunately, that won't work, either, though it took some fancy philosophizing to prove it. Consider the following example.

Suppose I am watching the Red Sox play the Yankees. Unbeknownst to me, there has actually been an electrical outage at Fenway, so the cameras aren't working. NESN quickly substitutes a tape of a previous game in which the Red Sox played the Yankees, but I don't realize it.

In this rebroadcast, the Red Sox beat the Yankees. At the same time as I am watching the taped game, the Red Sox are actually beating the Yankees. So if I then say, "Today, the Red Sox beat the Yankees," my statement is true (the Red Sox really did beat the Yankees) and justified (I have every reason to believe what I am saying), but still it seems very strange to say that I know that the Red Sox beat the Yankees.

Where does this leave us?

You might try to save justified true belief by fiddling with justified, but most philosophical accounts I've seen just stop there and claim there is no definition. I am inclined to agree, and this is just one more reason to suspect that words just don't have definitions.

As I've pointed out before, Greg Murphy has a pretty good explanation of why it makes sense that words don't have definitions. The original post is here, but in short, words are used to distinguish objects, but it is always possible to come up with a new object (or idea) that is midway between two words -- that is, fits both and neither, just as the baseball game example above seems to fit both knowledge and belief and neither.

I find this pretty convincing, but if he is right, it raises the following question: why do we think words have crisp definitions? Even more, why do we so badly want words to have crisp definitions? It seems generations of philosophers could have saved themselves a lot of time.

Are we more moral than cavemen?

Over the weekend, Eric Posner of Slate asked whether humans have become any more moral in the last few thousand years. His target was opinion leaders (most recently, David Brooks) who decry a moral decline in our society.

Posner asks whether opinion leaders have been making such statements since the beginning of time (I believe they have, but I couldn't track down the right citation), and whether we should conclude from that fact that we as humans are far less moral than our caveman ancestors.

Posner suspects that we are at least as moral as our ancestors. I'm not sure if there is good data on morality (definition would be the first problem in that study), but there is good evidence in terms of violence.

In some pre-state societies, about 60% of men die violent deaths. Since the Middle Ages, the murder rate in Europe has fallen a hundredfold. A smaller proportion of the human population died due to violence in the 20th century than in any previous century (yes, that includes both world wars and Stalin).

--------------
Further Reading

The first two of the factoids above come from The Blank Slate by Steven Pinker. Chapter 17 is particularly relevant. 

New ideas

When I was named, my father worried that the name my parents were giving me was too unusual. He didn't know anybody named "Josh."

Steven Pinker discusses this phenomenon in terms of child naming in his most recent book, The Stuff of Thought. He had the exact same experience as me: his parents thought "Steven" was a fresh new name.

This doesn't happen just with names. A couple weeks ago, I was talking with a couple of friends at the excellent Semiproductivity in Grammar workshop at Tufts about a problem we seem to share. One of the friends, who works on computational modeling, had discovered another research group with a similar model. Both friends were concerned that the Bayesian-style computational modeling world has been getting a little too crowded. Myself, I recently entered the world of pragmatics research with the idea that it was relatively unexplored, only to discover that many, many other researchers have had the same idea.

Of course, this goes back a long way. Darwin and Wallace arrived at the theory of natural selection independently. Something similar happened with Newton, Leibniz and calculus.

So why does it happen? Certainly, some people have come up with entertaining anthropomorphic theories in which fate has a direction, or the universe itself pushes us in one direction or another, or humans are all connected at some psychic level. A less extravagant possibility is that humans are fundamentally similar and react similarly to the same environments.

In terms of computational modeling, early in the decade it was clear that connectionism had run out of steam. Yet there were many researchers who liked the computational approach. Bayesian formalisms were not well known, but they were well known enough for those interested to come across them, and they offered a good alternative to connectionism. Given that many people were looking, it's not surprising that a number of them found the same thing.

This brings up a general point. Trends often occur when many people are all dissatisfied with the options available and there is a prominent available alternative that isn't too radically different, but which is just different enough. In the 1980s, boys names starting with the letter J were very popular in America. A parent might be looking for a new "J" name and stumble across Joshua, a once-popular but now under-used name. It seems fresh and new but still fits the general milieu of liking J-names. Unfortunately, there are only so many J names to go around, so many parents ended up finding the same ones.

There's a very good chapter on this phenomenon in Pinker's book, which I strongly recommend.

I'm on Google's blacklist

I have been very excited about the application of Google to research. When, as a psycholinguist, I am interested in whether a particular phrase is ever seen in normal English, it is very simple to just Google that phrase and find out. Google is a little less useful for looking at the frequency of individual words, since it automatically includes related words (e.g., search for dog and you also get dogs), but some useful information can be found. Both of these types of analyses regularly appear on Language Log.

Unfortunately, my love is not reciprocated. In the course of conducting basic research, I had my ability to use Google temporarily revoked.

What happened? Well, I recently got interested in the use of the word "because." I wanted to see how "because" is used in normal contexts. Unfortunately, the types of utterances I am interested in aren't that common, so there weren't enough examples available in the typical databases a researcher would use, like CHILDES. So I decided to use Google.

Basically, what I needed to do was search for phrases involving the word "because," then go to the pages in which the phrases appeared in order to copy the full context in which the phrase appears. Then I can analyze those contexts. I need a few thousand of those. I did think about having my summer research assistant do it, but it just seemed cruel. So a friend helped me (thanks Tim!) write a short script in Perl to automate the process.
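The original script was in Perl, which I don't have in front of me. As a rough reconstruction of just the context-extraction step (the function name and the five-word window are my own choices, not the script's), one might write something like:

```python
# Sketch of the context-extraction step: given the text of a retrieved
# page, pull out a window of words around each occurrence of "because".
# This is a reconstruction for illustration, not the original Perl code.
import re

def contexts(text, keyword="because", window=5):
    """Return up-to-(2*window+1)-word windows centered on the keyword."""
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        # Strip trailing punctuation so "because," still counts as a hit.
        if re.fullmatch(keyword, w.lower().strip(".,!?;:")):
            hits.append(words[max(0, i - window):i + window + 1])
    return hits

sample = "I stayed home because it was raining all day."
print(contexts(sample))   # one hit: the window of words around "because"
```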

I had finally gotten it up and running when it suddenly crashed on me. It took a little while to figure out that Google had identified my searches as potentially virus-induced and temporarily blocked my IP address. It appears that one can get special permission from Google in order to conduct these types of searches ... if you can actually contact anyone at Google, which is not easy.

So if you know somebody at Google...

We know toddlers can't count, but are they good at statistics?

Toddlers can't count. And, to be honest, statistics is the one field of math that has never really clicked for me. But there is mounting evidence that children and adults are very sensitive to the statistical nature of the world.

This has been shown in vision studies, some of which I've worked on, but which I won't talk about more here (but check out Brian Scholl's lab, among others). 

Statistical learning has also been found for language-like material. Certainly, statistics could help with learning language. For instance, a given word (e.g., the) is more likely to be followed by some words (e.g., book or table) than by others (e.g., contemplate or earn). Moreover, rare sound combinations typically mark boundaries between words (no word contains the sound combination thst, but it can occur -- rarely -- between words, as in with steam). Babies could, at least in theory, use that information to break the sound stream that they hear into words (contrary to popular imagination, people rarely speak in single words, even to infants, so determining that winsome is one word but withsome is two is non-trivial).
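The segmentation idea can be sketched with transitional probabilities over a toy syllable stream. The mini-language below is invented for illustration (in the spirit of the artificial languages used in these studies): syllable-to-syllable probabilities are high within a word and lower across word boundaries, which is the statistical cue to where the boundaries are.

```python
# Transitional probabilities over a toy syllable stream. The "words" and
# their order are invented; the point is the within-word vs. boundary contrast.
from collections import Counter

sequence = "bida kupa bida doti kupa doti bida kupa doti".split()
stream = []
for w in sequence:
    stream.extend([w[:2], w[2:]])        # split each two-syllable word

pairs = Counter(zip(stream, stream[1:]))  # counts of adjacent syllable pairs
firsts = Counter(stream[:-1])             # how often each syllable leads a pair

def tp(s1, s2):
    """P(next syllable = s2 | current syllable = s1)."""
    return pairs[(s1, s2)] / firsts[s1]

print(tp("bi", "da"))              # within-word: 1.0
print(round(tp("da", "ku"), 2))    # across a word boundary: 0.67
```

A learner tracking these probabilities could posit word boundaries wherever the transitional probability dips.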

So, statistical information that could be valuable for language learning is available in what babies hear. A number of studies have shown that babies are in fact sensitive to such information (check out Gomez & Gerken, 2000, for a review). Whether or not statistical learning is actually used in real live language learning as opposed to the laboratory experiments just mentioned is an open question, but it certainly could be.

The issue that is bothering me lately is that I'm not aware of any evidence that infants are better at statistical learning than adults. Given that children tend to be much better language-learners than are adults, this raises an important question: what aspect of language learning are adults bad at? (Some people think that adults aren't impaired at learning languages; I don't think that's true and may eventually write a post about it.) The evidence lately seems to be that adults are bad at syntax, so this makes me wonder just how likely it is that statistical learning is used to help learn syntax, as some people have claimed.



Gomez, R. L., & Gerken, L. (2000). Infant artificial language learning and language acquisition. Trends in Cognitive Sciences, 4(5), 178-186.

Harvard Laboratory of Developmental Studies summer internship program

The summer internship program at the Harvard Laboratory of Developmental Studies began this Monday.

Lots of labs take interns. In fact, if you seem motivated, smart and competent, I suspect pretty much any lab would take you on as a volunteer for the summer. What makes the LDS internship program different is that it's an actual program.

The labs (primarily the Snedeker Lab and the Carey Lab -- Spelke Lab does not participate directly, though there is so much sharing between the three labs they often do so indirectly) take about a dozen undergraduates each summer. Each participant is assigned to a specific research project run by a specific graduate student. The projects are chosen such that there is a good chance they will be completed successfully before the internship program ends, making sure the interns have something to talk about at future job or school interviews. The faculty advisers are very concerned that the interns don't just do busy work but actually learn something, so interns participate in a weekly reading group as well as a weekly lab meeting.

In addition, there are activities such as the twice-monthly barbecue (organized this year, in part, by yours truly). Oh, and many of the summer students get financial support, which is a definite plus.

Anyway, it appears to be a good program. This is my first summer participating (my intern will be studying pronoun use), so we'll see how it goes.

Fearing the Terminator

I heard one of the guys behind Games With A Purpose speaking on Future Tense a few days ago (sadly, their names are not listed on the website, and I don't recall who it was). He pointed out that many people are concerned about his project of "making computers smarter." A good example is a recent commenter on this blog.

This researcher's response to such concerns is, essentially, "Computers have already taken over the world, and they are stupid. Wouldn't it be better if they were smarter?"

That's one argument, though I'm not sure how well it speaks to those who worry about the Terminator. Another argument would be that, sure, smart computers are scary. But the world is already pretty scary, and smart computers would make it a little less so. The question is whether the advantages outweigh the risks.

In the past, they have. Whatever side effects technology has had -- and I'm including global warming here -- so far it has made life unimaginably better (for one thing, without modern technology, not only could you not read this post, but many if not most people reading this post would have died before reaching their present age).

This is the essence of Dan Savage's retort to a writer who worried about the harmful effects of chlorine in water: (approximately) "I'd rather have a little bit of chlorine than a whole lot of cholera."

Autism and Vaccines

Do vaccines cause autism? It is a truism that nothing can ever be disproven (in fact, one of the most solid philosophical proofs is that neither science -- nor any other extant method of human discovery -- can prove any empirical claim, either).

That said, the evidence that vaccines cause autism is about as good as the evidence that potty-training causes autism. Symptoms of autism begin to appear some time after the 2-year-old vaccinations, which is also about when potty-training typically happens. Otherwise, a number of studies have failed to find any link. Nonetheless, believers in the vaccines-cause-autism theory have convinced some reasonably mainstream writers, and even all three major presidential candidates, that the evidence is, at worst, only "inconclusive."

My purpose here is not to debunk the vaccine myth. Others have done it better than I can. My purpose is to point out that, even if the myth were true, not vaccinating your children would be a poor solution.

It has been such a long time since we've had to deal with polio and smallpox that people have forgotten just how scary they were. In 1952, at the height of the polio epidemics, around 14 out of every 100,000 Americans had paralytic polio. 300-500 million people died of smallpox in the 20th century. Add in hepatitis A, hepatitis B, mumps, measles, rubella, diphtheria, pertussis, tetanus, HiB, chicken pox, rotavirus, meningococcal disease, pneumonia and the flu, and it is no wonder experts estimate that "fully vaccinating all U.S. children born in a given year from birth to adolescence saves an estimated 33,000 lives and prevents an estimated 14 million infections."

Thus, while current estimates are that 0.6% of American children develop autism, 0.8% would have died without vaccines -- and that's not counting blindness, paralysis, etc. It seems like a good trade, even if you assume that every single case of autism is due to vaccines.
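That comparison is simple arithmetic. In the sketch below, the 33,000 figure comes from the estimate quoted above, while the roughly 4 million figure for a yearly US birth cohort is my own assumption:

```python
# Back-of-envelope version of the risk comparison. The birth-cohort size
# is an assumed round number; the other figures come from the text.
birth_cohort = 4_000_000        # approximate US births per year (assumed)
lives_saved  = 33_000           # deaths prevented by full vaccination

death_rate_without_vaccines = lives_saved / birth_cohort
print(round(100 * death_rate_without_vaccines, 1))   # ~0.8% of the cohort

autism_rate = 0.006             # 0.6% of American children
print(death_rate_without_vaccines > autism_rate)     # deaths would exceed autism cases
```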

Psychologist runs for Congress

Earlier in March, a physicist won a congressional seat. This, one has to hope, will be good for science, and particularly for the embarrassing funding shortfalls at FermiLab.

With any luck, the state of Washington will turn this anecdote into a trend by sending Mark Mays to the capital to represent its 5th district. Mark Mays studied with Guy Manaster at the University of Texas, and is the type of psychologist who treats patients. He has been active in Spokane for many years, particularly in the field of mental health -- especially veterans' mental health. With an increasing number of war-weary veterans returning to the US, it would be great to have a professional who has actually treated veterans be part of the decision-making process.

Oh, and he's also a great guy (full disclosure: Mays is a family friend). If you are in the 5th district, I suggest checking out his newly minted web site.