Child's Play has posted the latest in a series of provocative posts on language learning. There's much to recommend the post, and it's one of the better defenses of statistical approaches to language learning around on the Net. It would benefit from some corrections, though, and into the gap I humbly step...
The post sets up a classic dichotomy:
Does language “emerge” full-blown in children, guided by a hierarchy of inbuilt grammatical rules for sentence formation and comprehension? Or is language better described as a learned system of conventions — one that is grounded in statistical regularities that give the appearance of a rule-like architecture, but which belie a far more nuanced and intricate structure?

It's probably obvious from the wording which one they favor. It's also less obviously a false dichotomy. There probably was a very strong version of Nativism that at one point looked like their description of Option #1, but very little Nativist theory I've read from the last few decades looks anything like that. Syntactic Bootstrapping and Semantic Bootstrapping are both much more nuanced (and interesting) theories.
Some Cheek!
Here's where the post gets cheeky:
For over half a century now, many scientists have believed that the second of these possibilities is a non starter. Why? No one’s quite sure — but it might be because Chomsky told them it was impossible.

Wow? You mean nobody really thought it through? That seems to be what Child's Play thinks, but it's a misrepresentation of history. There are a lot of very good reasons to favor Nativist positions (that is, ones with a great deal of built-in structure). As Child's Play discuss -- to their credit -- any language admits an infinite number of grammatical sentences, so no finite list of sentences will do (they treat this as a straw-man argument, but I think historically that was once a serious theory). There are a number of other deep learning problems that face Empiricist theories (Pinker has an excellent paper on the subject from around 1980). There are deep regularities across languages -- such as linking rules -- that either are crazy coincidences or reflect innate structure.
The big one, from my standpoint, is that any reasonable theory of language is going to have to have, in the adult state, a great deal of structure. That is, one wants to know why "John threw the ball AT Sally" means something different from "John threw the ball TO Sally." Or why "John gave Mary the book" and "John gave the book to Mary" mean subtly different things (if you don't see that, try substituting "the border" for "Mary"). A great deal of meaning is tied up in structure, and representing structure as statistical co-occurrences doesn't obviously do the job.
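To make that last point concrete, here is a toy sketch of my own in Python (an illustration under my own assumptions, not anything from the original post or from Child's Play): if we represent each sentence by counts of its co-occurring word pairs, the two dative constructions come out nearly identical.

from collections import Counter
from itertools import combinations

def cooccurrence_pairs(sentence):
    """Count unordered word pairs -- a crude stand-in for co-occurrence statistics."""
    words = sentence.lower().split()
    return Counter(frozenset(pair) for pair in combinations(words, 2))

a = cooccurrence_pairs("John gave Mary the book")
b = cooccurrence_pairs("John gave the book to Mary")

print(len(set(a) & set(b)), "word pairs in common")   # 10
print(set(b) - set(a))                                # only the pairs involving "to" differ

Word order aside, the two sentences share almost all of their co-occurrence statistics; whatever distinguishes their meanings has to come from somewhere other than these counts.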
Unlike Child's Play, I'm not going to discount any possibility of the opposing theories to get the job done (though I'm pretty sure they can't). I'm simply pointing out that Nativism didn't emerge from a sustained period of collective mental alienation.
Logically Inconsistent
Here we get to the real impetus for this response, which is this extremely odd section towards the end:
We only get to this absurdist conclusion because Miller & Chomsky’s argument mistakes philosophical logic for science (which is, of course, exactly what intelligent design does). So what’s the difference between philosophical logic and science? Here’s the answer, in Einstein’s words, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”

In context, this means something like "Just because our theories have been shown to be logically impossible doesn't mean they are impossible." I've seen similar arguments before, and all I can say each time is:
Huh?
That is, they clearly understand logic quite differently from me. If something is logically impossible, it is impossible. 2 + 2 = 100 is logically impossible, and no amount of experimenting is going to prove otherwise. The only way a logical proof can be wrong is if (a) your assumptions were wrong, or (b) your reasoning was faulty. For instance, the above math problem is actually correct if the answer is written in base 2.
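For what it's worth, here is a quick check of my own in Python of that base-2 aside (reading the operands in base 10 and writing only the answer in binary):

# 2 + 2 = 4, and 4 written in base 2 is "100", so the "impossible" equation
# comes out true once the assumption about notation changes.
assert 2 + 2 == int("100", 2)   # int("100", 2) == 4
print(format(2 + 2, "b"))       # prints "100"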
In general, one usually runs across this type of argument when there is a logical argument against a researcher's pet theory, and said researcher can't find a flaw with the argument. They simply say, "I'm taking a logic holiday." I'd understand saying, "I'm not sure what the flaw in this argument is, though I'm pretty sure there is one." It wouldn't be convincing (or worth publishing), but I can see that. Simply saying, "I've decided not to believe in logic because I don't like what it's telling me" is quite another thing.
5 comments:
I appreciate the Ping! A couple of quick clarifications:
You have a paragraph about the subtle differences in meaning between "John threw the ball AT Sally" versus "John threw the ball TO Sally," after which you conclude that "a great deal of meaning is tied up in structure." If you check out "Computing Machinery and Understanding," we talk at great length about the problem with these kinds of accounts. Hummel responded to our paper "The Effects of Feature-Label-Order in Symbolic Learning" with exactly this kind of critique -- "CMU" is the response. I'm probably going to do a post about it in the future, but the reply itself is really quite short -- only about six pages -- so you should check it out. (Sorry I'm always demanding you read things -- you can feel free to wait for the post... ;)
And then -- to the whole bit about logical possibility / impossibility -- my point is (and this will emerge over the course of the series): Chomsky never actually showed that learning language from the input *was* logically impossible. His logic doesn't hold. You can't take a half-baked probabilistic model, show it can't learn language, and from that conclude that humans can't learn language from the input. That's not science -- it's fraud. (Oh, there I go again, being cheeky ;) I know that Chomsky had lots of other reasons for making this claim which I haven't addressed yet -- one of my aims is to show that none of them stands up to scrutiny.
Again, it may be that much of grammar is hardwired. But I am at pains to emphasize that this is an empirical question -- not a logical one.
Glad you're thinking about these things too -- even if we disagree!
I'd restate your point slightly: depending on the actual structure of language (which is an empirical question), it may or may not be (logically) learnable without innate constraints.
Which, by the way, *everyone* believes. *Everyone* thinks there are innate constraints (if you want a statistical theory, your statistics have to track something); the debate is only over what those innate constraints are.
As for your other point, I'll probably wait for the post. It remains a fact that different structures give you different meanings, and it's not obvious how you could reduce that to simple word order, much less co-occurrence frequencies.
The conclusions about learnability in the Pinker article you reference rely on a critical assumption: that children can only recover from error given explicit feedback.
Here's an old rat experiment by Bob Rescorla. You condition one group of rats on shocks and tones:
t - s . . . . t - s . . . . t - s . . . . t - s . . . .
and other groups on the same sequence, while progressively increasing the background rate of tones:
t - s . .t .t . t - s . .t .t . t - s . .t .t . t - s . .t .t .
Rescorla showed how the amount of conditioning varies with the background rate of the tones (the lower the background rate, the higher the conditioning).
From a learning point of view, there are some things to notice about this:
First, given that there's no change in the rate of tone-shock pairings (only the background rate of tones varies), it follows that the difference in learning between the groups must be due to what the rats learn on the "no shock" trials;

Second, it follows that neither explicit feedback (no one shouted "hey - you weren't shocked" at the rats) nor explicit events such as rewards or punishments (nothing happens on these trials) is a necessary condition for Pavlovian learning.
The Rescorla-Wagner model shows how this kind of behavior is easily explained by assuming that rats monitor the error in the expectations they form from prior experience. The model was published in 1972, and was immediately highly influential everywhere except in linguistics.
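To see the mechanics, here is a minimal Python sketch of that error-driven account (my own illustration; the trial schedules and learning-rate values below are assumptions chosen for clarity, not parameters from Rescorla's experiment):

# A sketch of the Rescorla-Wagner update: dV = alpha * beta * (lambda - V),
# where lambda is 1 on shock trials and 0 on tone-alone trials.
def rescorla_wagner(trials, alpha=0.3, beta=1.0):
    """trials: sequence of (tone_present, shock_present) pairs."""
    v = 0.0  # associative strength of the tone
    for tone, shock in trials:
        if not tone:
            continue  # the tone's strength only changes on trials where the tone occurs
        lam = 1.0 if shock else 0.0
        v += alpha * beta * (lam - v)  # prediction-error driven update
    return v

# Group 1: every tone is paired with a shock.
paired = [(True, True)] * 10

# Group 2: the same pairings, plus background tones with no shock.
mixed = ([(True, True)] + [(True, False)] * 3) * 10

print(rescorla_wagner(paired))  # near-asymptotic conditioning (about 0.97)
print(rescorla_wagner(mixed))   # much weaker (about 0.14)

The point of the sketch is just that the tone-alone trials drive the tone's associative strength back down through prediction error alone -- no explicit correction, reward, or punishment is needed.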
The logic of Pinker's (1979) paper is:
1. Children make errors.
2. These errors aren't explicitly corrected.
3. Children could never learn to recover from their errors unless they were explicitly corrected.
4. Since they do recover, this cannot be due to learning.
Pinker says this proves children cannot simply learn language. There is nothing wrong with this logic at all.
The problem is that in order to buy the conclusion you have to buy into premise 3, which means you have to assume:
1. Theoretically: whatever Pinker says about learning is true and definitive for all time (which I guess is in keeping with Miller & Chomsky).
2. Empirically: children are dumber than rats.
So one answer to the question in the title of your post is this: if you base your logic on beliefs like these, it is not just possible that logically impossible stuff will happen daily -- it probably will.
>> Wow? You mean nobody really thought it through?
I understood the post you pinged on "Child's Play" to be about how people who study language have bought into a lot of silly logical arguments, and how they haven't thought much about the premises the logic relies on. The claim was that this has bad consequences for scholarship, and is a bad way to do science.
With respect, your argument, your examples, and the Pinker paper you referenced all underline these points.
You also misunderstand Einstein. 2 + 2 = 100 is logically impossible in math because in math we get to define the premises. Einstein's observation is about science. Science is empirical: Nature gets to define the premises and we have to guess what her definitions are. "Linking rules" aren't premises from which logical impossibilities can be determined. They are theoretical claims. It is entirely conceivable that any model of language that characterizes any given set of linguistic regularities as "linking rules" could be wrong. The whole thing might work in a way that no one has yet thought of.
Returning once more to your question: because of the way science works, saying something is "logically impossible" is just a way of making a poorly articulated theoretical claim. It is always possible for a poorly articulated theoretical claim to turn out to be wrong. Most poorly articulated theoretical claims will turn out to be wrong.
You could make a case for the claim that suppressing an argument that you can't win isn't quite as bad as faking data, but it would miss the point. Your scientific credibility and intellectual integrity just left the building.
What's the point of games with words if you can't play without cheating?
@Dan: Now that I've discovered the spam box for comments, your old comment here has been resurrected. I think that my new post on negative evidence and also the comments on Melodye's latest post at Child's Play directly address your statements here.