Part of Noam Chomsky's famous revolution in linguistics (and cognitive science more broadly) was to focus on linguistic competence rather than performance. People stutter, use the wrong word, forget what they planned to say, change ideas mid-sentence, and occasionally make grammatical errors. Chomsky focused not on what people do say, but on what they would say without any such slip-ups.*
This certainly simplified the study of language, but one has to wonder what this spherical cow leaves out. Economists similarly made great strides by assuming all people are perfectly rational, think on the margin, and have full access to all necessary information free of cost. However, any theory based on these clearly false premises is limited in its explanatory power.
Speech errors carry information. This was brought home to me by a recent email I received which began, "Er, yes." If filler words carried no information, why transcribe them? (Lancelot once asked a similar question.) Yet people clearly do. A quick Google search found over seven million hits for "uhhh" and over twenty-one million hits for "ummm." These include quotes like "Ummm... Go Twins?" and "Uhhh... What did she just say?"
These two quotes are suggestive, but I don't know whether all transcription of filler words and other speech errors can be explained as a single phenomenon. I did hear of one study showing that if a speaker pauses and appears to have difficulty finding a particular word, listeners assume the word is low-frequency. However, listeners drop this assumption if they believe the speaker has a neurological impairment that affects speech.
I expect that many phenomena dismissed as "performance" rather than "competence" are in fact important in communication. Whether communication should even be part of a theory of language is itself debated (Chomsky seems to think language has nothing to do with communication).
*This part of linguistics is still very influential in psychology. I'm not sufficiently current in linguistics to say whether most linguists still do research this way.
Angry cats
What is it like to be an angry cat? According to a new study, not much different than being an angry human.
I'm curious, though, how much evidence there is that other animals don't share many/most of their emotions with humans. It was definitely common to think once that animals didn't have emotions, but as far as I can tell, it's widely accepted now that they do.
For previous posts about cat behavior, click here or here.
The Heat Death of Science
Several years ago, I was fairly up-to-date on dyslexia research. A couple of colleagues and I were writing a comprehensive review of the literature. Several drafts of the paper were written, but for various reasons that project got put aside and was never finished.
I'm currently preparing to overhaul that paper and update it based on recent research. To put this in perspective, 147 papers on dyslexia were published in 2007 alone (according to PsycINFO*).
Like the physical universe, the universe of knowledge has been expanding at an accelerated rate. It's hard to be current in several fields. By the time you are current in psychology, sociology has moved on. With time, it seems increasingly difficult to stay on top of multiple subfields (e.g., autism and dyslexia).
I wonder how long it will be before it is impossible to stay on top of even a single, narrow topic. This postulated moment would be the equivalent of heat death for science. Or not. Perhaps science will end in a big crunch instead.
Or will we find ways of dealing with massive amounts of information? While our technologies in this arena have improved, I take it as self-evident that they have not improved as fast as information has increased.
Thoughts?
*If anybody for some reason wants to check for themselves, I searched for papers with the word "dyslexia" in the abstract. If you search for "dyslexia" in any field, you get 177.
Publication bias
There is an excellent article on publication bias in Slate today. There is no question that a number of biases affect what gets published and what doesn't. Some are good (not publishing bad studies), some are bad (not publishing studies that disprove a pet theory), and some are ambiguous (not publishing papers that "aren't interesting"). The big questions are which biases have the biggest impact on what makes its way into print, and how to take that into account when evaluating the literature.
Read the Slate article here.
Public relations and science
The latest issue of Seed has an excellent quote from Lawrence Krauss, a theoretical physicist at Case Western Reserve University:
I remember, I was on a visiting committee at MIT, and these students tend to think they're going to be successful because they're good at what they're doing. But in fact, a large barometer of their success will be how well they can communicate what they're doing. Not just to the outside public, which most scientists don't necessarily have to do -- though I think that's important, too -- but within the field, or to your company. It isn't just what you do, it's often how you present it. And, traditionally, we've spent very little time educating our students on how to communicate.

I absolutely agree. This could be dismissed as "spinning," something which scientists shouldn't do. But the truth is that thousands of scientific papers are published every month. Nobody has time to read them all, and most of the papers one does read can't be read carefully. If the writer doesn't do a good job of explaining what the results mean and why they are important, the results are likely to be missed. If you are giving a talk about your research, hopefully the audience can figure out for themselves why your findings are relevant to their work, but you're doing everybody a favor if you help them. Many very good researchers are terrible at communicating their ideas and their findings.
Many people are aware that Mendel's groundbreaking studies of heredity were buried and what he learned had to be re-discovered independently. According to Frank Sulloway in Born to Rebel, this was partly due to Mendel's inability to communicate the importance of his findings.
Just in case anybody thinks Krauss is advocating spin:
So strategies of persuasion, I think, are vitally important within the field. But -- and I should be very clear about this -- while I understand science as a sociological phenomenon, I do believe in objective reality and I do believe that, ultimately, important science wins out in spite of the social constructs and the social or peer pressures to do certain things... That's what makes science special.
Updating the website
For those who haven't noticed, this blog now has a blogroll as well as labels for posts. Both can be browsed on the right-hand side of the page. Note that I don't necessarily label every single post. Some posts are just too off-topic to be easily categorized, and I don't want to end up with 4,000 different labels. If you want to find everything, your best option is going through the archives.
"Every psychology major starts by wanting to be a therapist"
One of my officemates, also a first-year graduate student, recently claimed that all undergraduate psychology majors enter the field because they want to be therapists. This definitely wasn't true of me. I often forget that there is a branch of psychology that does therapy (this is easy to forget at my school, which doesn't offer a counseling program. In fact, I'm not sure any school I applied to offers a counseling program). But, then, I wasn't a psychology major.
Someone recently suggested to me that I write about who I am and how I got here. I personally doubt anybody is all that interested in my life story, but I do have one reason to tell bits and pieces of it. Most people seem to know very little about psychology. In fact, even though my father is a professor of psychology and I had worked in several psychology labs, I knew far too little about the career track at first, and I believe this hurt me the first time I applied to graduate school (yes, I applied twice). A lot of really crucial information simply isn't available.
This post kicks off what will be a series of probably non-consecutive posts about the psychology career track, as illustrated by my own path (keep in mind that I'm only halfway there).
First, since my father was a psychologist, I was determined not to be. That had already been done. Otherwise, though, I had no good idea what I might do, other than a vague idea that maybe I'd be a writer.
One day in a deep Ohio December, as I was at my study carrel in Mudd Library studying for a discrete mathematics exam, suddenly the clouds parted, the light shone down, angels sang, and I knew what I wanted to do:
Artificial intelligence.
When asked what that meant, I typically said that I wanted to make one of these, but what really interested me was making a talking robot. Artificial intelligence wasn't really offered as a course track at Oberlin, and though I majored in computer science for a while, I eventually switched to math, which I found more appealing. In the meantime, I volunteered at the Brain and Language Lab at Georgetown University.
I really enjoyed my work at Georgetown. Then, I went to a conference on natural language processing, which is essentially the field of trying to make a talking robot, and I was very disappointed. It wasn't what I imagined at all. At the time -- and maybe still now -- the most successful technique was to use a lot of templates. This seems to work very well -- and if you believe Tomasello's theory about language, it might even be how humans produce language -- but it wasn't for me. I preferred my current work in cognitive neuroscience.
So, for a while, I was going to be a cognitive neuroscientist. However, it turns out that there are very few cognitive neuroscience labs that study high-level language processing, particularly in the cities that were options for me in terms of graduate school. Relative to memory or vision, for instance, it has been difficult to use traditional cognitive neuroscience techniques to learn about language. The best fits turned out to be primarily labs in psychology departments. So, reluctantly, I decided to mostly apply to those (recall that I didn't want to be a psychologist).
It actually gets worse. I ended up in a developmental psychology lab. My father is a school psychologist, which is quite different on many levels, but it's still about the psychology of children. Thus, my path to psychology may be summarized as one man's unsuccessful fight against Nature.
For those of you working in the cognitive sciences, feel free to leave a comment with your story.
Are birth order effects due to SES?
As early as 1874, Sir Francis Galton noted that first-borns and only children were overrepresented in English men of science, making birth order effects one of the earliest constructs studied in psychology. Over a thousand studies have since been conducted, most of them contradicting the rest. As is typical in psychology, the arguments tend to center around the correct way of measuring birth order.
A common counterargument is that birth order is also an index of SES. Poor families have more children. Just from that, you would expect that there would be few scientists with ten older siblings (Galton actually pointed this out himself, noting that among the wealthy, first-borns tend to have large inheritances, and thus are freed to pursue whatever they like). That said, many birth-order effects have held up even when holding SES constant.
The typical pro-birth-order position is that children are shaped by their environment, and their environment is shaped by the age of their siblings. Here is a quote from Alfred Adler, who developed the first full-blown theory of birth order effects:
It is a common fallacy to imagine that children of the same family are formed in the same environment. Of course there is much which is the same for all children in the same home, but the psychological situation of each child is individual and differs from that of others, because of the order of their succession.

Then, this weekend, I read what was meant to be a counter-argument by Wichman, Rodgers and MacCallum (2006), who are adamantly on the side of no birth order effects:
For example, as parents age, they typically increase in SES level and also may spend more time at work and less time with their children. Thus, later-born children may, on average, mature in a slightly higher SES environment than their earlier-born siblings, but one in which parents spend less time with them and which therefore may negatively affect their intellectual development. If later-born children have lower IQs, are we observing an effect of being a later-born child (a real birth order effect) or an indirect effect of SES?
It's hard to see why they think this explains away birth order effects. It's simply a different explanation of birth order effects.
Anyway, I expect to finish my review of the birth order literature today or tomorrow (no, I'm not reading all 1000+ articles; I'm at 25 and counting, though), so I'll update soon with what I've learned.
This week at the cognition and language lab
I just finished watching several episodes of Scrubs. If you watch enough TV, you get a sense of what it's like to be a doctor or a lobbyist or a policeman or a Mafioso. Some of these shows are more accurate, some are less. But it's very hard to get even an inaccurate sense of what it's like to be a working scientist by watching TV. Even if we go to movies, all that comes to mind is Brent Spiner or Dennis Quaid.
I have no idea what it's like to be a xenobiologist or a paleoglaciologist (though I did once spend a couple of days hitchhiking with a pair of paleoglaciologists on Sakhalin, taking tree cores), but I can open a window on a week in the life of a psychology graduate student.
My first year proposal was due Tuesday. I spent last weekend reading papers in preparation to write about my work on pronoun resolution. The purpose of that project was/is to determine whether a particular odd linguistic phenomenon generalized to a large number of words in English, or if it was specific to just a relatively small number of famous examples. That didn't seem like enough to propose as a year-long project, but beyond that I didn't have any particular hypotheses.
Sunday morning I finally thought of something, but at 8pm, I decided I didn't like what I had written, gave up and wrote about a different project instead. Monday morning, I sent the project proposal to my advisor and spent most of the day extending that essay into my final paper for my developmental proseminar class. Monday night, I began our take-home final for the developmental proseminar.
I worked on the final for most of Tuesday as well, finishing in the evening. Having spent all day on a frustrating exam, I wanted to do something fun...which for me meant analyzing data. I downloaded the results from the Birth Order survey. Over 2,500 people participated, and the data were the stuff of dreams -- much better than I had hoped. So I archived that survey. I don't need any more data, and I'd rather people who visit the site do one of the new experiments.
Wednesday and Thursday were spent writing and testing the code for a new pilot study on a particular type of linguistic inference. There are three versions of that experiment, and I ran all three on myself (one of them more than once) until I was satisfied. I also had two coworkers give it a run-through.
On Friday, I ran 13 subjects on that pilot study. It's the end of the semester, and many of the undergraduate psych students waited to the last minute to participate in the required number of experiments in order to get credit in their classes. This is always a good time to find subjects.
Since my experiments are all computer-based, running subjects is fairly dull. I greet the participant when s/he shows up, explain the procedure, have him/her sign the consent form, and then get the program up and running. When the participant finishes, I give him/her a debriefing form and answer any questions. I spent the time while waiting reading papers about birth order effects, working on the blog, answering email, and working on the one final project of the week I haven't yet mentioned.
The results of the Video Experiment were much more interesting than expected. I don't want to say anything about it, because we may have to run more conditions in the future, but basically, we were doing what was supposed to be a confirmatory study, proving something everybody already knew. Psychologists often get criticized for spending all their time proving the obvious (people who like to eat tend to eat more, for instance), but the Video Experiment was an example of why we run these studies: we found exactly the opposite of what we and I suppose everybody else would have predicted.
My co-author is in charge of writing the paper, but he has been shooting me emails all week asking for additional analyses. I've also read and commented on several drafts. It is looking pretty good -- much better than if I had written it -- but the senior author hasn't read it yet. We'll see what she says. After she's satisfied (if she's satisfied), we'll send it off to a journal, where the reviewers will tear it to shreds and reject it. We'll rewrite it (after perhaps running another experiment or two) and then resubmit it. It's a long process.
And that was one week in the life of a psychology graduate student. There's definitely a TV show in there somewhere!
What is neuroimaging good for?
On page 32 of the November/December issue, Seed Magazine reports that in July of 2007:
Neuroscientists seeking to discern whether culture affects the human brain examined those of a group of Americans and Nicaraguans as they watched different hand gestures specific to their respective cultures.

Hopefully, this is not what said neuroscientists (no reference is given) were actually trying to do, because fMRI is very expensive (it typically costs hundreds of dollars an hour just to rent the machine), and you wouldn't really need to do an experiment to answer this question.
I think it's fairly obvious that people respond differently to language-specific hand gestures (for one thing, they are more likely to respond to them). If people respond differently, then their brains should also respond differently. To suggest otherwise means that you believe that the difference in behavior is due to either (1) an immaterial soul that can engage in activities independently of the body, or (2) these behaviors being controlled by some organ of the body outside the brain.
These are both logically possible hypotheses, but the research over the last few centuries makes them so unlikely to be the case that unless you have a really, really good reason to suspect that the brain is not involved in interpreting hand gestures, then it's not really worth the incredible cost of fMRI to answer this particular research question.
Seed is a decent, informative magazine, so the fact that they let this slip is just more evidence of how pervasive this thinking is.
Does diversity increase productivity?
One of the arguments for diversity-based hiring is that a more diverse workforce is more productive. Is that true?
Scott E. Page, a professor of complex systems, political science and economics at the University of Michigan, argues that it does. He uses mathematical models and case studies to support his claims, which themselves are pretty straightforward.
Here's a quote from a recent interview in the New York Times:
The problems we face in the world are very complicated. Any one of us can get stuck. If we're in an organization where everyone thinks in the same way, everyone will get stuck in the same place.

But if we have people with diverse tools, they'll get stuck in different places. One person can do their best, and then someone else can come in and improve on it.

Of course, this isn't exactly a new idea. But ideas are a dime a dozen. What Page has are data.
On a related topic, Richard Hackman of Harvard University, who also studies the productivity of work teams, is now arguing that panels of experts can be less productive due to their expertise. He specifically argues that blue-ribbon commissions like the 9/11 Commission are often unproductive because, although they are filled with people with a great deal of expertise, such panels are often very inefficient at using that expertise.
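Page's models are more sophisticated than this, but the core intuition can be sketched in a few lines of code. (This is a toy example of my own construction, not one of Page's actual models; the landscape and move sets are invented for illustration.) Two greedy searchers with different "toolkits" get stuck at different local optima, so one can pick up where the other left off:

```python
# Toy illustration: searchers with diverse move sets get stuck in
# different places, so combining them finds better solutions.

def score(x):
    # A bumpy landscape on 0..199: a broad peak near 137 plus
    # local spikes at multiples of 5.
    return -abs(x - 137) + (10 if x % 5 == 0 else 0)

def hill_climb(x, moves):
    """Greedily take the first improving move; stop at a local optimum."""
    improved = True
    while improved:
        improved = False
        for m in moves:
            y = x + m
            if 0 <= y < 200 and score(y) > score(x):
                x, improved = y, True
                break
    return x

fine_stepper = [1, -1]   # one problem-solving toolkit
big_stepper = [5, -5]    # a different toolkit

alone = hill_climb(0, fine_stepper)  # stuck at its starting spike
combined = hill_climb(hill_climb(0, big_stepper), fine_stepper)

print(alone, score(alone))        # 0 -127
print(combined, score(combined))  # 135 8
```

Neither toolkit is "better" in general; the point is simply that they fail in different places, so the team outperforms either member working alone.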
Ambiguity
I came across this excellent quote in a journal article yesterday about ambiguity in language:
However, the flexibility of language allows us to go far beyond this. For example, as revealed by a brief Internet search, speakers can use “girl” for their dog (“This is my little girl Cassie…she's much bigger and has those cute protruding bulldog teeth”), their favorite boat (“This girl can do 24 mph if she has to”), or a recently restored World War II Sherman tank (“The museum felt that the old girl was historically unique”). Such examples reveal that for nouns, it is often not enough to just retrieve their sense, i.e., some definitional meaning, from our mental dictionaries.
-Van Berkum, Koornneef, Otten, Nieuwland (2007) Establishing reference in language comprehension: an electrophysiological perspective. Brain Research, 1146, 158-171.
For more about ambiguity in language, check here, here and here.
Why scientists need to do better PR
A couple days ago I asked whether psychology was a science. Many of the responses I got confirmed what I already knew from reading message boards and talking with other academics. Psychology must have done a terrible job of PR, given that so many well-educated folk and scientists in other fields have absolutely no idea what it's about.
I commonly hear statements like "psychologists don't do experiments" or "psychology experiments aren't well-controlled" or "psychology experiments aren't replicable." Saying psychologists don't use experimental controls is like saying that the existence of electrons is "unproven" or that evolution is a "theory in crisis." Basically, the only way somebody could say something like this is if they are entirely ignorant of the field.
The big difference, though, is that I doubt many biologists believe electrons don't exist, or many physicists believe evolution is an unproven, shaky hypothesis. Yet an embarrassingly large number of physicists and biologists (and other scientists, too -- I'm not picking on physics or biology) hold similarly unfounded views about psychology.
(If you really need an example of a replicable, robust psychological phenomenon, try out the Stroop effect, which is also an example of an experiment with good controls. This is a bit like defending evolution, though. Leafing through any reputable journal should be sufficient.)
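The within-subject logic that makes the Stroop experiment well-controlled is easy to see in a quick simulation. (All the numbers below -- a ~100 ms interference cost, subject baselines, trial noise -- are assumed for illustration, not taken from any real dataset.) Subjects differ enormously in baseline speed, but comparing each subject's incongruent trials against their own congruent trials removes that variability:

```python
import random
import statistics

random.seed(0)

def simulate_subject():
    # Each subject has their own baseline speed (ms)...
    baseline = random.gauss(600, 100)
    # ...plus an assumed ~100 ms cost on incongruent trials
    # (e.g., the word "RED" printed in blue ink).
    congruent = [random.gauss(baseline, 50) for _ in range(40)]
    incongruent = [random.gauss(baseline + 100, 50) for _ in range(40)]
    return statistics.mean(congruent), statistics.mean(incongruent)

costs = []
for _ in range(30):          # 30 simulated subjects
    c, i = simulate_subject()
    costs.append(i - c)      # each subject serves as their own control

print(round(statistics.mean(costs)))  # recovers a cost close to 100 ms
```

The between-subject baseline noise (sd of 100 ms) dwarfs the effect, yet the paired comparison recovers it reliably -- that pairing is the experimental control.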
So why care if people are ignorant of psychology? For the same reasons it's important that they be informed about every branch of science. First, there's a lot of information there that would be useful to people in their daily lives. Second, if people don't understand and value a discipline, they're less likely to fund it. Science in America is largely funded directly or indirectly by the public. If you believe a particular science is important for the health of the country, then it's important that enough voters also value it.
On the question of replicability, it's true that some results in psychology don't replicate, sometimes because the results were a fluke and sometimes because the experimenters made a mistake. I wonder, though, if it's actually less common in other fields. In physics lab in college, my lab partners and I measured the speed of light, getting an answer way different from the accepted figure (no, we weren't even within measuring error of the correct number).
So that's at least an existence proof that it's possible to do an experiment in physics that won't replicate (that is, our experimental results don't replicate).
Is there a moral grammar?
Morality may seem like a topic for philosophers and theologians rather than psychologists. While it is true that during the last few decades moral reasoning hasn't been a hot topic of psychological research, moral reasoning is a behavior -- and an important one -- and that makes it a worthy topic for psychology. (I don't mean that psychologists should study what is moral and what isn't, but rather what humans think is moral and what they think is not, and why.) In the last few years, interest in the field has exploded.
One of the most controversial new approaches, promoted by Marc Hauser of Harvard University, is to study moral reasoning by analogy to linguistics. For instance, what are the phonemes of moral reasoning? What is the grammar that determines whether an action is considered moral or not?
There has been a lot of criticism of this analogy, none of which seems to particularly bother Hauser. What is interesting is that he has put forward the analogy of moral reasoning to linguistic reasoning not so much because he thinks it's literally true (in fact, he thinks it would be bizarre if morality was exactly like language -- they are obviously different systems), but because he thinks the analogy leads to new questions about moral reasoning that nobody was asking. This leads to new experiments, new data, and hopefully better theories. Hauser argues that the linguistic analogy has done just this.
There is something to this argument. Obviously having a correct theory is ideal. However, few if any theories -- psychological or otherwise -- are through-and-through true, and so it's better to have an incorrect theory that at least points research in a profitable new direction than an incorrect theory that leads nowhere.
You can find some of his recent published papers here. For a less technical treatment, though, you might read his new book. You can also participate in his Moral Sense Test here. For more thoughts about the scientific method, read this. For more about the scientific method and psychology in particular, read yesterday's post.
Is psychology a science?
Is psychology a science? I see this question asked a lot on message boards, and I thought it was time to discuss it here. The answer depends entirely on what you mean by "psychology" and what you mean by "science."
First, if by "psychology" you mean seeing clients (like in Good Will Hunting or Silence of the Lambs), then, no, it's probably not a science. But that's a bit like asking whether engineers or doctors are scientists. Scientists create knowledge. Client-visiting psychologists, doctors and engineers use knowledge. Of course, you could legitimately ask whether client-visiting psychologists base their interventions on good science. They often don't. But that can also be said about doctors and, I'd be willing to bet, engineers.
However, there is a different profession that, largely for historical reasons, shares the same name. That is the branch of science which studies human and animal behavior, and it is also called "psychology." It's not as well known, and nobody makes movies about us (though if paleoglaciologists get to save the world, I don't see why experimental psychologists can't!), but it does exist.
A friend of mine (a physicist) once claimed psychologists don't do experiments (he said this un-ironically over IM while I was killing time in a psychology research lab). My response now would be to invite him to participate in one of these experiments. Based on this Facebook group, I know I'm not the only one who has heard this.
There are also those, however, who are aware that psychologists do experiments, but deny that it's a true science. Some of this has to do with the belief that psychologists still use introspection (there are probably some somewhere, but I suspect there are also physicists who use voodoo dolls somewhere as well, along with mathematicians who play the lottery). The more serious objection has to do with the statistics used in psychology.
In the physical sciences, typically a reaction takes place or does not, or a neutrino is detected or it is not. There is some uncertainty given the precision of the tools being used, but on the whole the results are fairly straightforward and the precision is pretty good.
In psychology, however, the phenomena we study are noisy and the tools lack much precision. When studying a neutrino, you don't have to worry about whether it's hungry or sleepy or distracted. You don't have to worry about whether the neutrino you are studying is smarter than average, or maybe too tall for your testing booth, or maybe it's only participating in your experiment to get extra credit in class and isn't the least bit motivated. It does what it does according to fairly simple rules. Humans, on the other hand, are terrible test subjects. Psychology experiments require averaging over many, many observations in order to detect patterns within all that noise.
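Why averaging works can be made concrete with a quick simulation (the numbers are assumed for illustration, not from any real study): a 20 ms effect buried in 150 ms of trial-to-trial noise is invisible in any single pair of observations, but emerges once you average thousands of them.

```python
import random
import statistics

random.seed(1)

def trial(effect):
    # One noisy reaction time (ms): assumed 500 ms baseline, an added
    # "effect", and 150 ms of trial-to-trial noise.
    return random.gauss(500 + effect, 150)

# One observation per condition: the 20 ms effect is swamped by noise.
print(trial(0), trial(20))

# Averaging 5,000 observations per condition recovers it: the standard
# error of each mean shrinks from 150 ms to 150 / sqrt(5000) ~ 2 ms.
a = statistics.mean(trial(0) for _ in range(5000))
b = statistics.mean(trial(20) for _ in range(5000))
print(round(b - a))  # close to the true 20 ms effect
```

The same logic scales down: the noisier the subject matter, the more observations you need, which is exactly why psychology experiments involve so many trials and participants.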
Some people find this noisiness deeply unsettling and dislike the methods social scientists have developed to compensate for it, and thus would prefer to exclude the social sciences from the term "science." This is fair in the sense that you can define words however you want, but it does mean that a great deal of the world -- basically all of human and animal behavior -- is necessarily unexplainable by science.
So what do you think? Are the social sciences sciences? Comments are welcome.