(live-blogging Xprag)
In his introduction, Kai von Fintel tells an anecdote that I think sums up why it is sometimes difficult to explain what it is we do. Some time ago, Emmon Bach wrote a book for Cambridge University Press on if/then conditionals. The copy-editor sent it back, replacing every use of "if and only if" with a simple "if," saying the "and only if" was redundant.
As it turns out, although people often interpret "if" as meaning "if and only if," that's simply not what the word means, despite our intuitions (most people interpret "if you mow the lawn, I'll give you $5" as meaning "if and only if you mow the lawn...").
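For the curious, the contrast is easy to lay out as a truth table. Here's a quick sketch in Python (my own toy encoding of the two readings): they come apart in exactly one case, where you don't mow the lawn but I pay you anyway.

```python
# Material conditional ("if") vs. biconditional ("if and only if").
# The two readings differ only when the antecedent is false but the
# consequent is true: "if" is satisfied, "if and only if" is not.
for mow in (True, False):
    for pay in (True, False):
        plain_if = (not mow) or pay   # "if you mow the lawn, I'll give you $5"
        iff = (mow == pay)            # "if and only if you mow the lawn..."
        print(f"mow={mow!s:5} pay={pay!s:5} if={plain_if!s:5} iff={iff!s:5}")
```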
Part of the mystery, then, is explaining why our intuitions are off. In the meantime, though, explaining what I do sometimes comes across as trying to prove the sky is blue.
Summa smack-down
(live-blogging Xprag)
Dueling papers this morning on the word "summa" as compared with "some of." Judith Degen presented data suggesting "summa the cookies" is more likely to be strengthened to "some but not all" than is "some of the cookies." Huang followed with data demonstrating that there is no such difference. Everybody seems to agree this has something to do with the way the two studies were designed, but not on which way is better.
I am more convinced by Huang's study, but as she is (1) a lab-mate, (2) a friend, and (3) sitting next to me as I write this, I'm probably not a neutral judge.
Speaker uncertainty
Arjen Zondervan just presented a fascinating paper with the admittedly long title "Effects of contextual manipulation on hearers' assumptions about speaker expertise, exhaustivity & real-time processing of the scalar implicature of or." The first two experiments, on exhaustivity & speaker expertise, were thought-provoking, but of primary interest to me was Experiment 3.
An important debate in the field has centered around whether scalar implicatures depend on context. A couple years ago, Richard Breheny & colleagues published a nice reading-time experiment whose results were consistent with scalar implicatures being computed in some contexts but not others. Roughly, they set up contexts along the following lines:
(1) Some of the consultants/ met the manager./ The rest/ did not manage/ to attend.
(2) The manager met/ some of the consultants./ The rest/ did not manage/ to attend.
Participants read the sentences one segment at a time (the '/' marks the boundaries between segments), pressing a key when they were ready for the next segment. For reasons that may or may not be clear, an implicature was expected in the first sentence but not in the second, making "the rest" fairly unnatural in (2); accordingly, subjects read "the rest" more slowly in (2) than in (1).
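For readers who haven't seen the paradigm, here is a minimal sketch of a self-paced reading trial (a toy console version of my own; real experiments use presentation software with precise keyboard timing):

```python
import time

# One item from condition (1), segmented at the '/' marks.
segments = ["Some of the consultants", "met the manager.",
            "The rest", "did not manage", "to attend."]

def self_paced_reading(segments):
    """Show one segment at a time; the time until the reader presses
    Enter is taken as the reading time for that segment."""
    reading_times = []
    for segment in segments:
        start = time.time()
        input(segment)  # waits for a keypress (Enter) before continuing
        reading_times.append(time.time() - start)
    return reading_times

if __name__ == "__main__":
    for segment, rt in zip(segments, self_paced_reading(segments)):
        print(f"{rt:6.3f}s  {segment}")
```

The dependent measure is simply the per-segment time: if "The rest" is unnatural in a given context, readers should linger on it.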
Breheny's was a nice demonstration and was, I think, the first study of scalar implicature to use an implicit measure rather than simply asking participants what they think a sentence means, which has certain advantages. However, there were a number of potential confounds in the stimuli in this and the two other experiments they ran. Zondervan fixed some of these confounds, re-ran the study, and got the same results.
I was interested because, in collaboration with Jesse Snedeker, I have also re-run the Breheny study and gotten that basic result. However, Zondervan and Breheny both also found longer reading times for the scalar term (e.g., 'some') in the condition where there is a scalar implicature. Both take this as evidence that calculating an implicature is an effortful process. In a series of similar experiments using my own stimuli, I just don't get that part of the result. I am fairly convinced this is due to differences in our stimuli, but we're still trying to figure out why and what that might mean.
That said, the effect that all three of us get is, I think, the more important part of the data, and it's nice to see another replication.
Default computation in language
(Blogging Xprag)
This morning begins with a series of talks on scalar implicature. This refers to the fact that "John ate some of the cookies" is usually interpreted as meaning "some but not all of the cookies." Trying to get this post written during a 5-minute Q&A prevents me from proving that "some" does not simply mean "some but not all," but in fact it is very clear that "some" means "some and possibly all." The question, then, is why and how do people interpret such sentences as meaning something other than what they literally mean.
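Though here, at least, is a toy sketch of the distinction (my own illustration, nothing from the talks): on the literal, existential semantics, "John ate some of the cookies" remains true even if he ate them all; only the strengthened reading rules that out.

```python
# Literal "some" (existential) vs. the strengthened "some but not all"
# reading, evaluated in a world where John ate every cookie.
cookies = {"c1", "c2", "c3"}
eaten = {"c1", "c2", "c3"}                 # John ate all of them

literal_some = len(eaten & cookies) > 0    # "some, and possibly all"
some_but_not_all = literal_some and (cookies - eaten != set())

print(literal_some)       # True  -- the literal sentence is still true
print(some_but_not_all)   # False -- the strengthened reading is false
```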
The most interesting moment for me so far has been a question by Julie Sedivy during the first Q&A. A popular theory of scalar implicature argues that the computation of "some = some-but-not-all" is a default computation. Several experiments have shown that such computation is slow, which some have taken as evidence against a default model. Sedivy pointed out that saying a computation is done by default doesn't require that the computation be fast, so evidence about speed of computation can't be taken as evidence for or against a default-computation theory.
Liveblogging Experimental Pragmatics
This week I am in Lyon, France, for the 3rd Experimental Pragmatics meeting. I had plans to live-blog CUNY and SRCD, neither of which quite happened, but I'm giving it a go for Day 2 of Xprag, and we'll see how it goes.
Pragmatics, roughly defined, is the study of language use. In practice, this tends to mean anything that isn't semantics, syntax, or phonology, though the division between semantics and pragmatics tends to shift as we learn more about the system. Since pragmatics has perhaps been studied more extensively in philosophy & linguistics, the name of the conference emphasizes that it focuses on experiments rather than just theory.
More to follow
Are Cyborgs Near?
Raymond Kurzweil, inventor and futurist, predicts that by the 2030s, it will be possible to upload your mind, experience virtual reality through brain implants, have experiences beamed into your mind, and communicate telepathically. Just to name a few predictions.
Kurzweil, as he himself recently noted on On The Media, has a track record of successful predictions over the past three decades. Past performance being the best predictor of future performance, this leads people to at least pay attention to his arguments. Nonetheless, as the mutual funds folk say, past performance is a predictor, not a guarantee.
Exponential Progress
I suspect that Kurzweil is right about many things, but I'm not sure about the telepathy. When I have heard him speak, his primary argument for his predictions is that telepathy only seems like a distant achievement because we think technology moves at a linear rate, when in fact knowledge and capability increase exponentially. This has clearly been the case in terms of computing speed.
Fair enough. The problem is that we aren't sure exactly how hard the problems we are facing are. There is a famous anecdote about an early pioneer in Artificial Intelligence assigning "vision" as a summer project. This was many decades ago, and as anyone in the field knows, machine vision is improving rapidly but still not that great.
A more contemporary example: a colleague I work with closely built a computational model of a relatively simple process in human language and tried to simulate some data. However, it took too long to run. When he looked at it more carefully, he realized that his program required more cycles to complete than there are atoms in the known universe. That is, merely waiting for faster computers was not going to help; he needed to re-think his program.
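I don't know the details of his model, but the back-of-envelope logic is easy to reproduce. Here's a sketch with hypothetical numbers (the 40 words and 100 analyses per word are mine, purely for illustration):

```python
ATOMS_IN_UNIVERSE = 10**80   # standard back-of-envelope estimate

# Hypothetical numbers: a 40-word sentence where each word has 100
# candidate analyses. Exhaustively scoring every combination takes
# 100**40 steps -- one per atom in the known universe. Faster chips
# won't save you; the algorithm itself has to change.
steps = 100**40
print(steps)                          # 10**80
print(steps >= ATOMS_IN_UNIVERSE)     # True
```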
The Distance
In short, even if we grant Kurzweil that computers improve exponentially, somebody still needs to program them. Our ability to program may also be improving exponentially, but I'm unconvinced that we know how far we have to go.
Suppose I wanted to walk to some destination 1,000 miles away. I walk 1 mile the first year. If I keep going at that rate, it'll take 1,000 years. But if my speed doubles each year, it will take only about 10 years. Which is a lot faster!
But we don't know -- or at least I don't know -- how far we have to walk. We may well be walking to the other side of the universe (>900,000,000,000,000,000,000,000 miles). In that case, even if my speed doubles every year, it'll still take about 80 years. Which granted is pretty quick, but not as quick as 10.
Of course, notice that in that 80th year I'd be traveling fast enough to cross most of the known universe in a single year (roughly 100 billion times the speed of light), which so far as we know is impossible. The growth of our technology may similarly eventually hit hard limits.
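The arithmetic is easy to check; here's a quick sketch (the 9 x 10^23-mile figure is the round number used above, not a precise measurement):

```python
LIGHT_SPEED_MILES_PER_YEAR = 5.88e12   # approx. miles light travels in a year
UNIVERSE_MILES = 9e23                  # the round figure used above

def years_needed(distance, speed=1.0, growth=2.0):
    """Years to cover `distance` if you go `speed` miles the first year
    and your speed multiplies by `growth` every year after that."""
    total, years = 0.0, 0
    while total < distance:
        total += speed
        speed *= growth
        years += 1
    return years

print(years_needed(1_000))            # 10
print(years_needed(UNIVERSE_MILES))   # 80
# Speed in that 80th year, as a multiple of the speed of light:
print(2**79 / LIGHT_SPEED_MILES_PER_YEAR)   # ~1.0e11, i.e., ~100 billion c
```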
That said...
I wouldn't terribly mind being proved wrong. Telepathy sounds neat.
More things you don't have time to read
PLoS One has published over 5,000 papers. Is that a sign of success or failure?
I've worried before on this blog about the exploding number of science publications. Publications represent completed research, which is progress, and that is good. But the purpose of writing a paper is not for it to appear in print; the purpose is for people to read it. The more papers are published, it stands to reason, the fewer people read each one. Thus, there is some argument for publishing fewer, higher-quality papers. I have heard that the average publication gets fewer than one citation, meaning many papers are never cited and thus presumably were not found to be relevant to anybody's research program.
It is in this context that I read the following excited announcement from PLoS ONE, a relatively new open-access journal:
nearly 30,000 of your peers have published over 5,000 papers with us since our launch just over two years ago.
That's a lot of papers. Granted, I admit to being part of the problem. Though I do now have a citation.
Origin of Language Pinpointed
Scientists have long debated the evolution of language. Did it emerge along with the appearance of modern Homo sapiens, 130,000-200,000 years ago? Or did it happen as late as 50,000 years ago, explaining the cultural ferment at that time? What are we to make of the fact that Neanderthals may have had the ability to produce sounds similar to those of modern humans?
In a stunning announcement this morning, freelance writer Joshuah Bearman announced that he had pinpointed the exact location, if not the date, of the origin of modern language: Lake Turkana in Kenya.
WTF?
Actually, what Bearman says is:
This is where Homo sapiens emerged. It is the same sunset our ancestors saw, walking through this very valley. To the north is Lake Turkana, where the first words were spoken. To the south is Laetoli, where, in 1978, Mary Leakey's team was tossing around elephant turds when they stumbled across two sets of momentous footprints: bipedal, tandem, two early hominids together...
Since this is in an article about a wedding, I suspect that Bearman was not actually intending to floor the scientific establishment with an announcement; he assumed this was common knowledge. I can't begin to imagine where he got this idea, though. I wondered if perhaps this was some sort of urban legend (like all the Eskimo words for snow), but Googling has turned up nothing, though of course some important archaeological finds come from that region.
Oops
Probably he heard it from a tour guide (or thought he had heard something like that from a tour guide). Then neither he nor his editor bothered to think through the logic: how would we know where the first words were spoken, given that there can be no archaeological record? It's unlikely we'll ever even find the first human(s), given the low likelihood of fossilization.
I have some sympathy. Bearman was simply trying to provide a setting for his story. In one of my first published travel articles, I similarly mentioned in passing that Lake Baikal (the topic of my story) was one of the last strongholds of the Whites in the Russian Revolution. I have no idea where I got that idea, since it was completely untrue. (Though, in comparison with the Lake Turkana hypothesis, at least my unfounded claim was possible.)
So I'm sympathetic. I also had to write a correction for a subsequent issue. Bearman?
How much do professors get paid?
The American Association of University Professors recently released a report on the financial situation of professors. One interesting datum apparently gleaned from the report is a ranking of universities by full-professor salaries. I have heard it said that Harvard pays below market because it pays in prestige, but that doesn't jibe with its industry-leading $192,600/year (keep in mind this is the average for full professors, a rank rarely achieved before one's 40s at best).
One interesting fact shown in figure 2 of the report itself is that while, yes, PhDs do earn less than holders of professional degrees (law, business, medicine, etc.), the difference is, on average, not nearly as large as one might expect. In 2007, the average PhD made around $95,000, while the average professional-school graduate earned about $115,000 (both numbers are down, incidentally, from 1997).
That said, the ceilings are probably different. The average full professor at Harvard -- arguably the pinnacle of the profession for someone with a PhD -- makes, as noted above, just under $200,000/year...or about the same as a typical big-firm lawyer a couple years out of law school (though perhaps not this year).
Should Universities Have Standards?
This is the question asked by the Bologna Process, an alliance of higher-education authorities. The question itself is a bit of Bologna since, at least in the United States, an accreditation process already ensures some minimal standards for higher education. For those who believe that is a low bar, keep in mind that some institutions fail to reach even that standard (cf. Michael "Heckuva-Job" Brown's law school).
The Bologna Process has something more aggressive in mind: "quality assurance" and "easily readable and comparable degrees." As described in a recent New York Times article, this involves establishing "what students must learn" rather than simply "defining degrees by the courses taken or the credits earned":
“Go to a university catalog and look at the degree requirements for a particular discipline,” Mr. Adelman [education policy expert] said. “It says something like, ‘You take Anthropology 101, then Anthro 207, then you have a choice of Anthro 310, 311, or 312. We require the following courses, and you’ve got to have 42 credits.’ That means absolutely nothing.”
The new approach, he said, would detail specific skills to be learned: “If you’re majoring in chemistry, here is what I expect you to learn in terms of laboratory skills, theoretical knowledge, applications, the intersection of chemistry with other sciences, and broader questions of environment and forensics.”
The idea, as I understand it, is to help prospective students choose the best schools and to help prospective employers evaluate applicants. Although I recognize this is a problem in need of a solution, I just don't see how this is supposed to work.
Imagine somebody wanted to set up standards for college football teams, in order to allow prospects to better compare potential schools and also to allow pro football scouts to better evaluate college talent. You can define football fundamentals and even develop a test for them, but if you want to evaluate both Michael Oher and the left tackle at the local community college by the same standards, the standards will be either trivially easy or so steep as to "fail" the vast majority of college football players.
Academia has the same issues. Even leaving aside the fact that different colleges attract students of different abilities, the average student at one state school I know puts in about 5-6 hours of studying per week. At other schools, that's the number of hours per course. The amount you are expected to learn is vastly different.
It's also not clear how you would deal with emerging disciplines. Just looking at my own corner of academia: not long ago, few undergraduate schools had neuroscience courses. A handful of schools (Brown & Johns Hopkins being obvious examples) have "cognitive science" degrees. My alma mater had psychology, neuroscience, and "biopsychology" -- a blend of the former two. MIT has one department spanning cellular, systems, and computational neuroscience, along with cognitive psychology (clinical and social psychology are absent). In contrast, some years back Harvard had the now-disbanded Department of Social Relations, consisting of social psychology, social anthropology, and sociology.
How can we define one set of standards that would apply to all those different departments? Perhaps it is exactly this multitude of department structures that so frustrates the folks at the Bologna Process, but I'm not sure there is an alternative. The late 20th century saw incredible growth and turmoil in the social and neurosciences, and nobody is quite sure what is the appropriate way of carving up the subject matter. Both the Harvard and MIT strategies have something to recommend them, but they are polar opposites.
In the end, I'm just not convinced there is a problem. Ultimately, what employers care about is not the quality of the employee's education, but the quality of his/her work product. So maybe that's what we should be evaluating.
Baseball Models
The Red Sox season opener was delayed yesterday by rain. In honor of Opening Day 2.0 (this afternoon), I point you to an interesting piece in the Times about statistical simulations in baseball. According to the article, the top simulator available to the public is Diamond Mind.