Hummel takes up the cause of "traditional" models on which thought and language are deeply symbolic and involve algebraic rules. Ramscar defends more "recent" alternative models built on associative learning -- essentially, an update on the story that was traditional before the symbolic models.
Limitations of Relational Systems
The key to Hummel's argument, I think, is his focus on explicitly relational systems:
John can love Mary, or be taller than Mary, or be the father of Mary, or all of the above. The vocabulary of relations in a symbol system is open-ended ... and relations can take other relations as arguments (e.g., Mary knows John loves her). More importantly, not only can John love Mary, but Sally can love Mary, too, and in both cases it is the very same "love" relation ... The Mary that John loves can be the very same Mary that is loved by Sally. This capacity for dynamic recombination is at the heart of a symbolic representation and is not enjoyed by nonsymbolic representations.

That is, language has many predicates (e.g., verbs) that seem to allow arbitrary arguments. So talking about the meaning of love is really talking about the meaning of X loves Y: X has a particular type of emotional attachment to Y. You're allowed to fill in "X" and "Y" more or less how you want, which is what makes them symbols.
Hummel argues that language is even more symbolic than that: not only do we need symbols to refer to arguments (John, Mary, Sally), we need symbols to refer to predicates as well. We can talk about love, which is itself a relation between two arguments. Similarly, we can talk about friendship, which is an abstract relation. This is a little slippery if you're new to the study of logic, but doing this requires a second-order logic, which has markedly different formal properties (notably, much greater expressive power).
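To make the abstract point concrete, here's a minimal sketch (my own illustration, not code from either paper) of what "dynamic recombination" buys you: facts as `(predicate, argument, argument)` tuples, where the same predicate symbol recombines freely with new arguments, and a relation instance can itself serve as an argument -- the second-order flavor Hummel describes.

```python
# Illustrative sketch: symbolic facts as (predicate, arg1, arg2) tuples.
# All names here (loves, knows, John, Mary, Sally) are just examples.

loves = "loves"
knows = "knows"

fact1 = (loves, "John", "Mary")   # John loves Mary
fact2 = (loves, "Sally", "Mary")  # same "loves", same "Mary", new lover
fact3 = (knows, "Mary", fact1)    # Mary knows (John loves Mary):
                                  # a relation taken as an argument

# The predicate is the very same symbol across facts:
assert fact1[0] == fact2[0]
# ...and the embedded fact is the very same relation instance:
assert fact3[2] == ("loves", "John", "Mary")
```

Nothing about `loves` restricts what can fill its argument slots; that open-endedness is exactly what Hummel means by a symbol.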
Where Hummel wants to go with this is that associationist theories, like Ramscar's, can't represent second-order logical systems (and probably aren't even up to the task of the types of first-order systems we might want). Intuitively, this is because associationist theories represent similarities between objects (or at least how often both occur together), and it's not clear how they would represent dissimilarities, much less represent the concept of dissimilarity:
John can be taller than Mary, a beer bottle taller than a beer can, and an apartment building is taller than a house. But in what sense, other than being taller than something, is John like a beer bottle or an apartment building? Making matters worse, Mary is taller than the beer bottle and the house is taller than John. Precisely because of their promiscuity, relational concepts defy learning in terms of simple associative co-occurrences.

It's not clear in these quotes, but there's a lot of math to back this stuff up: second-order logic systems are extremely powerful and can do lots of useful stuff. Less powerful computational systems simply can't do as much.
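One way to see the problem (my own toy illustration, not Ramscar's actual model): if a sentence is represented only as an unordered bag of co-occurring tokens, the argument roles vanish, so "John is taller than Mary" and "Mary is taller than John" become indistinguishable, whereas a symbolic encoding keeps the bindings apart.

```python
# Toy illustration of the binding problem for pure co-occurrence.

def cooccurrence_bag(sentence):
    """Unordered multiset of tokens -- co-occurrence only, no role binding."""
    return frozenset(sentence.lower().split())

a = cooccurrence_bag("John is taller than Mary")
b = cooccurrence_bag("Mary is taller than John")
assert a == b  # the bag representation cannot tell them apart

# A symbolic encoding preserves which argument fills which role:
sym_a = ("taller_than", "John", "Mary")
sym_b = ("taller_than", "Mary", "John")
assert sym_a != sym_b
```

Real associationist models are richer than a bag of words, of course; the sketch just shows why raw co-occurrence alone underdetermines relational structure.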
Ramscar's response is not to deny the mathematical truths Hummel proposes. Yes, associationist models can't capture everything symbolic systems can do, but, he argues, language is not a symbolic system:
We think that mapping natural language expressions onto the promiscuous relations Hummel describes is harder than he does. Far harder: We think you cannot do it.

Ramscar identifies a couple of old problems: one is polysemy, the fact that words have multiple meanings (John can both love Mary and love a good argument, but probably not in the same way). Fair enough -- nobody has a fully working explanation of polysemy.
This is fair in the sense that nobody has fully worked out a symbolic system that explains all of language and thought. It's unfair in that Ramscar doesn't have all the details of his theory worked out, either, and his model is every bit as turtles-all-the-way-down. Specifically, concepts are defined in terms of co-occurrences of features (a dog is a unique pattern of co-occurring tails, canine teeth, etc.). Either those features are themselves symbols, or they are always patterns of co-occurring features (tail = co-occurrence of fur, flexibility, cylindrical shape, etc.), which are themselves patterns of other feature co-occurrences, and so on. (It's also unfair in that he's criticizing a very old symbolic theory; there are newer, possibly better ones around, too.)
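The regress can be made explicit with a small sketch (my framing, using the post's own dog/tail examples): if every concept is just a pattern of co-occurring features and every feature is likewise a pattern, then expanding a concept never bottoms out unless some features are left undefined -- i.e., treated as primitives, which is to say, as symbols.

```python
# Sketch of the feature-definition regress. The feature names are the
# post's examples; the dictionary and function are mine.

definitions = {
    "dog": ["tail", "canine-teeth", "fur"],
    "tail": ["fur", "flexible", "cylindrical"],
    # "fur", "flexible", "cylindrical", "canine-teeth" have no
    # definitions here: they are de facto primitives.
}

def ground(concept, seen=None):
    """Expand a concept until only undefined (primitive) features remain."""
    seen = set() if seen is None else seen
    if concept in seen or concept not in definitions:
        return {concept}  # primitive, or already being expanded
    seen.add(concept)
    out = set()
    for feature in definitions[concept]:
        out |= ground(feature, seen)
    return out

assert ground("dog") == {"canine-teeth", "fur", "flexible", "cylindrical"}
```

Wherever the expansion stops, the leftover features are doing the work symbols do -- which is the "turtles" point above.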
Implicit in his argument is the following: anything that symbolic systems can do and associationist systems can't is something humans can't do either. He doesn't address this directly, but presumably it means that we don't represent abstract concepts such as taller than or friendship, or, if we do, it's via a method very different from formal logic (what that would be is left unspecified).
It's A Matter of Style
Here's what I think is going on: symbolic computational systems are extremely powerful and can do lots of fancy things (like second-order logics). If human brains instantiate symbolic systems, that would explain very nicely lots of the fancy things we can do. However, we don't really have any sense of how neurons could instantiate symbols, or even if it's possible. So if you believe in symbolic computation, you're basically betting that neurons can do more than it seems.
Associationist systems face the opposite problem: we know a lot about associative learning in neurons, so this seems like an architecture that could be instantiated in the brain. The problem is that associative learning is an extremely underpowered learning system. So if you like associationist systems, you're betting that humans can't actually do many of the things (some of us) think humans can do.
Dye claimed that the argument in favor of Universal Grammar was a form of Intelligent Design: we don't know how that could be learned or could have evolved, so it must be innate/created. I'll return the favor by labeling Ramscar's argument Intelligent Nihilism: we don't know how the brain could give rise to a particular type of behavior, so humans must not be capable of it.
The point I want to make is we don't have the data to choose between these options. You do have to work within a framework if you want to do research, though, and so you pick the framework that strikes you as most plausible. Personally, I like symbolic systems.
John E. Hummel (2010). Symbolic versus associative learning. Cognitive Science, 34, 958-965
Michael Ramscar (2010). Computing machinery and understanding. Cognitive Science, 34, 966-971
photos: Anirudh Koul (jumping), wwarby (turtles), kaptain kobold (Darwin)