
Steven Pinker on Roberts-speak

If you haven't yet seen it, check out this New York Times editorial by Harvard Professor of Psychology Steven Pinker. It analyzes why (perhaps) Chief Justice Roberts bungled the inaugural swearing-in.

The Assistant Village Idiot has a rather strange rebuttal. The author seems to believe Pinker's editorial was a political commentary. Well, it is, but Pinker is concerned with the politics of language (something he's been worried about for a long time, as anyone who has read his books knows), not Supreme Court politics. The writer continues:

Pinker seems unable to restrain himself from injecting his political opinions into his discussions of language and thought. I wonder what that means?
One might ask the same question right back.

Have you ever shruck?


"Remember the time in 2003 when Bartlett came to work all hung over?" Laughs. "Nothing ever changes."
[Bush] continued: "We never shruck—"
"Shirked!" someone yelled.
"Shirked," Bush corrected, smiling. "You might have shirked; I shrucked. I mean we took the deals head on."
This is an excerpt from an account of George W. Bush's farewell party at the Spanish Ballroom in Glen Echo (which I know better as a middling swing dance venue; apparently the better places were all booked).

A number of people have been making hay about Bush's creative past tense inflection of the verb shirk. This is probably because it fits with the general perception of Bush as barely literate. Not to defend one of the nation's most disastrous presidents, but shirk is actually a hard verb to inflect.

The Psychology of the Past Tense

Most of us were taught in school that to make the past tense of a verb (walk) you add an -ed (walked). Of course it turns out there are some irregular verbs (ran, slept) which have to be memorized as such. A simple theory would just state that these exceptions are on a metaphoric list: when an English speaker wants to put a verb into the past tense, she checks the list of exceptions first. If the verb is on the list, she uses that irregular form; if not, she adds -ed.

This seems like a decent theory, but it doesn't quite work. This is because people are perfectly capable of coming up with new irregular past tense forms. Suppose you heard a verb splink, which means to fall into a pool of water. What do you think the past tense would be? Many people would guess splunk. Our list model can't explain this, since that irregular isn't on the list. However, it seems clear where splunk comes from: it's an analogy to sink-sunk.
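To make the list model concrete, here is a toy sketch in Python (the exception list is just a stand-in for a real mental lexicon); note that nothing in it can ever produce splunk:

```python
# Toy version of the list-plus-rule theory: check a memorized list of
# exceptions first, and fall back on the regular -ed rule otherwise.
IRREGULARS = {"run": "ran", "sleep": "slept", "sink": "sank", "think": "thought"}

def past_tense(verb):
    if verb in IRREGULARS:
        return IRREGULARS[verb]  # irregular form, straight off the list
    return verb + "ed"           # the regular rule

print(past_tense("walk"))    # walked
print(past_tense("sleep"))   # slept
print(past_tense("splink"))  # splinked -- never splunk
```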

In fact, historically some verbs have become irregular. Once upon a time, the past tense of creep was creeped. So clearly people are capable of inventing new irregular forms that aren't on the metaphoric list.

The Past Tense Wars

How to fix this model was the focus of the far-reaching Past Tense Debate, in which I was once a minor participant. Although everybody's theory came to predict new irregular forms, none of the theories were very good at predicting which particular form people would produce (why splunk rather than splought, on analogy with think-thought?). For some very interesting recent work on this problem, check out a series of papers by Adam Albright.

When I was doing this work, I would present participants with made up verbs and ask them to give me a past tense. I got a lot of responses like splunk, but I also got very odd responses. It was not infrequent for a person to add or subtract a consonant (sadly, I don't remember any of the best examples). Many looked a good deal like turning shirk to shruck. Granted, few people made such big mistakes on the real words (other than the infamous brung), but it seems clear Bush has a deficiency in (linguistic) planning and monitoring, so one would expect his irregularizations to be more prominent. (I'm actually sympathetic to the linguistic malady, at least, since I'm similarly inarticulate when speaking off the cuff.)

Parting Thoughts

As usual, Language Log got to this topic first and used much more impressive vocabulary (e.g., "metathetic").

Why People Don't Panic During a Plane Crash


A lot has been made of the crew and passengers of US Airways Flight 1549 and their failure to panic when their plane landed in the Hudson. For instance, here is the Well blog at the New York Times:
Amanda Ripley, author of the book “The Unthinkable: Who Survives When Disaster Strikes — and Why” (Crown, 2008), notes that in this plane crash, like other major disasters, people tend to stay calm, quiet and helpful to others.

“We’ve heard from people on the plane that once it crashed people were calm — the pervading sound was not screaming but silence, which is very typical ... The fear response is so evolved, it’s really going to take over in a situation like that. And it’s not in your interests to get hysterical. There’s some amount of reassurance in that I think.’’
On a different topic, but along the same lines, the paper's Week in Review section discusses the fact that most people are coping with the recent economic collapse reasonably well, all things considered:
Yet experts say that the recent spate of suicides, while undeniably sad, amounts to no more than anecdotal, personal tragedy. The vast majority of people can and sometimes do weather stinging humiliation and loss without suffering any psychological wounds, and they do it by drawing on resources which they barely know they have.
Should we be surprised?
This topic has come up here before. People are remarkably bad at predicting what will make them happy or sad. Evidence shows that while many people think having children will make them happy, most people's level of happiness actually drops significantly after having children and never fully recovers even after the kids grow up. On the other end of the scale, the Week in Review article notes that
In a recently completed study of 16,000 people, tracked for much of their lives, Dr. Bonanno, along with Anthony Mancini of Columbia and Andrew Clark of the Paris School of Economics, found that some 60 percent of people whose spouse died showed no change in self-reported well-being. Among people who’d been divorced, more than 70 percent showed no change in mental health.
This makes a certain amount of sense. Suppose the mafia threatens to burn down your shop if you don't pay protection money, and suppose you don't pay. They actually have very little incentive to follow through on the threat, since they don't actually want to burn down your shop -- what they want is the money. (This, according to psychologist Steven Pinker, is one of the reasons people issue threats obliquely -- "That's a nice shop you have here. It'd be a shame if anything happened to it." -- so that they don't have to follow through in order to save face.)

Similarly, biology requires that we think we'll like having children in order to motivate us to have them. Biology also requires that we think our spouse dying would ruin our lives, in order to motivate us to take care of our spouse. But once we have children or our spouse dies, there is very little evolutionary benefit accrued by carrying through on the threat.

Finding the idea of a plane crash very scary: useful.
Mass panic and commotion during a crash: not so much.

Presidential Words

The New York Times has an interesting feature analyzing inaugural addresses. An interactive tool displays and visually ranks the most frequent words in each presidential inaugural address in US history. Word counts like these are a common way of identifying the themes of a text, and as such the feature captures both a moment in time and the style of a particular president.
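For the curious, the core of such a feature is just word counting; here is a minimal sketch in Python (the snippet of text is a stand-in, not one of the actual addresses, and a real version would also filter out function words like "the" and "of"):

```python
from collections import Counter
import re

# Count word frequencies in a text; the most frequent words stand in for its themes.
text = "ask not what your country can do for you -- ask what you can do for your country"
counts = Counter(re.findall(r"[a-z']+", text.lower()))
print(counts.most_common(5))  # the words that appear twice float to the top
```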

Can Your Brain Force You to Do Something You Don't Want to Do?

I have been reading Jerome Kagan's compelling recent book on emotion. I stumbled on one particular line:

An article in the June 20, 1988, issue of Time magazine, reporting on a woman who murdered her infant, told readers that the hormonal changes that accompany the birth process create emotional states, especially in women unprepared for the care of children, that can provoke serious aggression that women are unable to control. It is thus not fair, the journalist argued, to hold such mothers responsible for their horrendous actions. This conclusion is a serious distortion of the truth. There is no known hormonal change that can force a woman to kill her infant if she does not want to do so!
This raises one of the most difficult problems facing 21st-century ethics. We want to treat criminals differently if they are in control of their actions. For instance, a soldier who is ordered to commit an atrocity is, if still guilty, a bit less guilty than one who does the same thing just for kicks.

When the outside influence constraining your free will actually arises within your own body, it's a bit more difficult. Suppose Alfred goes on a drug-induced killing spree. Again, it's different from the just-for-kicks murderer, but then one might wonder whether Alfred should have thought of the consequences before injecting himself with psychoactive drugs. Or what about somebody who had a psychotic break? Where do we draw the line between that and a bad mood?

Many people used to be comfortable drawing the line between psychosis and a bad mood using medical information. Anyone who acts under the influence of a medical condition is less culpable (or, at least, differently culpable) than somebody who does not. However, as neuroscientists find the brain correlates of conditions like a bad mood and geneticists find that nearly every personality trait is heritable (including being just plain mean), this line is breaking down.

To be fair, this is in essence not a new problem. Certain strains of Christian religious thinkers have spent centuries tying themselves into knots trying to explain how, given that everything is according to God's plan, including sin, it was not sacrilege to punish sinners, who, by definition, were just carrying out God's plan.

Nonetheless, Christian civilizations did not collapse under the weight of this paradox, and I suspect we'll get along for some time without a coherent answer to the Great Question of Free Will. But it would still be nice to have...

----
Kagan (2007) What Is Emotion, p. 80

What is the Longest Sentence in English?

Writers periodically compete to see who can write the longest sentence in literature. James Joyce long held the English record with a 4,391-word sentence in Ulysses. Jonathan Coe one-upped him in 2001 with a 13,955-word sentence in The Rotters' Club. More recently, a single-sentence, 469,375-word novel appeared.

Will they ever run out of words?

No. It's easy to come up with a long sentence if you want to, though typing it out may be a chore. Here's a simple recipe:

1. Pick a sentence you like (e.g., "'Twas brillig and the slithy toves did gyre and gimble in the wabe.")

2. Add "Mary said that" to the beginning of your sentence (e.g., "Mary said that 'twas brillig and the slithy toves did gyre and gimble in the wabe.")

3. Add "John said that" to the beginning of your new sentence (e.g., "John said that Mary said that 'twas brillig and the slithy toves did gyre and gimble in the wabe.")

4. Go back to step #2 and repeat.

If you keep this up long enough, you'll have the longest sentence in English or any other language.
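In fact, if typing it out is the only chore, a few lines of code will run the recipe for you (a minimal sketch; the seed sentence and speakers are just the ones from the example above):

```python
# Each pass wraps the current sentence in another "X said that ..." clause,
# so the result stays one grammatical sentence, however long you make it.
def embed(sentence, rounds, speakers=("Mary", "John")):
    for i in range(rounds):
        sentence = f"{speakers[i % len(speakers)]} said that {sentence}"
    return sentence

seed = "'twas brillig and the slithy toves did gyre and gimble in the wabe."
print(embed(seed, rounds=2))
# -> John said that Mary said that 'twas brillig and the slithy toves did gyre and gimble in the wabe.
# Crank rounds up far enough and you pass any published record.
```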

Why this matters.

There are reasons to care about this other than immortalizing your name. The recipe is a proof by demonstration that language learning is not simply a matter of copying what you have heard others say. If it were, nobody could ever make a longer sentence than the longest one they had ever heard.

However, making longer sentences is not simply a matter of stringing words together. You can't break the longest-sentence record by stringing together the names "John" and "Mary" 469,376 times. That wouldn't be a sentence.

This exercise is one of the most famous proofs that language has structure, and that speakers of a language have an intuitive understanding of that structure (the other famous proof arguably being the sentence Colorless green ideas sleep furiously).

Modern Environmentalism, or Which Would You Rather Save: Your Planet or Your Soul?

According to Johann Hari, writing for Slate, there are at least two kinds of environmentalists: the romantics and the realists.

The romantics—a tradition you can peel back to Wordsworth's daffodils—see environmental crises as primarily spiritual. They believe concrete and cities and factories are fundamentally inhuman, alienated habitats that can only make us sick. They cut us off from the natural rhythms of the land, and encourage us to break up the world into parts and study them mechanistically—when, in fact, everything is connected...

The rational environmentalists ... believe our crisis is not spiritual at all, but physical. Human beings didn't unleash warming gases into the atmosphere out of malice or stupidity or spiritual defect: They did it because they wanted their children to be less cold and less hungry and less prone to disease. The moral failing comes only very late in the story—when we chose to ignore the scientific evidence of where wanton fossil-fuel burning would take us. This failing must be put right by changing our fuel sources, not altering our souls.
We might recast these as those who want to return to nature, and those who want to preserve nature:

Diagnose the problem differently, and you end up with fundamentally different solutions. You can see this most clearly if you look at the environmentalist clash over cities, over how we should live: Is the way forward to build more cities or to try to get people to flee to the countryside?
Personally, I'm a realist (as is Hari). Hari, though, tries to make the case for realism by pointing to the thought processes involved (e.g., "the cities of human beings are as natural ... as are the colonies of prairie dogs or the beds of oysters."). However, I think realism is the only option for those of us who live in the real world -- and I'm not using that phrase metaphorically: I really mean anyone who lives on Planet Earth.

The Reality on the Ground.
Here's the problem: There are over 6,000,000,000 people on Earth. There are only 58,179,688 square miles of land. That is over 103 people per square mile. If we fan out into the countryside, there will be no countryside. And keep in mind that those 58,179,688 square miles of land include the Earth's deserts (14% of land) and high mountains (27% of land).
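A quick back-of-the-envelope check on those figures (using only the numbers quoted above):

```python
# People per square mile of land, with and without deserts and high mountains.
population = 6_000_000_000
land_sq_miles = 58_179_688

print(population / land_sq_miles)  # about 103 people per square mile

usable = land_sq_miles * (1 - 0.14 - 0.27)  # drop deserts (14%) and high mountains (27%)
print(population / usable)  # about 175 people per square mile of the rest
```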

I love Nature. I love spending time in Nature. (I'd love to publish in Nature, too, but that's a different topic.) The only way it will be possible for any sizeable chunk of people to spend time in Nature is for most of us to live in cities -- frankly, in cities more dense than the ones that exist today in America.

It may be true that there are too many people, but before anyone suggests we start reducing the world population, keep in mind that to get to the point where there is only 1 person per square mile, over 99% of humanity must disappear. Personally, that's not my environmentalist fantasy.

Even Experts Don't Know what Brain Scans Mean

For some reason, many people find neuroscience more compelling than psychology. That is, if you tell them that men seem to like video games more than women, they are unconvinced, but if you say that brain scans of men and women playing video games show that the pleasure centers of their brains respond to video games, suddenly it all seems more compelling.

More flavors is more fun, and the world can accept variation in what types of evidence people find compelling -- and we're probably the better for it. In this case, though, there is a problem in that neuroscientific data is very hard to interpret. Jerome Kagan said it perfectly in his latest book, so I'll leave it to him:

A more persuasive example is seen in the reactions to pictures that are symbolic of unpleasant (snakes, bloodied bodies), pleasant (children playing, couples kissing), or neutral (tables, chairs) emotional situations. The unpleasant scenes typically induce the largest eyeblink startle response to a loud sound due to recruitment of the amygdala. However, there is greater blood flow to temporal and parietal areas to the pleasant than to the unpleasant pictures, and, making matters more ambiguous, the amplitudes of the event-related waveform eight-tenths of a second after the appearance of the photographs are equivalent to the pleasant and unpleasant scenes. A scientist who wanted to know whether unpleasant or pleasant scenes were more arousing could arrive at three different conclusions depending on the evidence selected.
Daniel Engber in Slate has more excellent discussion of this problem.

Similarly, many posts ago, I noted that another Harvard psychologist, Dan Gilbert, prefers to simply ask people whether they are happy rather than use a physiological measure, because the only reason we think a particular physiological measure indicates happiness is that it correlates with people's self-reports of being happy. In other words, using any physiological measure (including brain scans) as an indication of a mental state is circular.


----
Kagan (2007) What Is Emotion, pp. 81-82.

----
PS Since I've been writing about Russian lately, I wanted to mention an English-language Russian news aggregator that I came across. This site is from the writer behind the well-known Siberian Light Russia blog.

Fractured Consciousness


The modern scientific consensus is that the 'mind' is just a word we use to describe our experience of our own brains in action. That is, mind and brain are more or less the same thing, just described at different levels (the equivalence gets stuck in semantics because the brain also monitors some nonconscious things, such as heart rate, that are not normally thought of as in the domain of the mind).

That said, some in the scientific community and many in the general community have difficulty buying this 'astonishing hypothesis' (check out the comments to my last post on the subject).

Different people arrive at the hypothesis by their own paths. To me, the most compelling evidence is the range of bizarre consequences of brain damage. For instance, check out this late-December New York Times profile of a recent case of blind-sight, a phenomenon in which a person, due to brain damage, believes herself to be blind, but is clearly able to see. Oliver Sacks's books are full of such cases, such as hemispheric neglect, in which people lose their awareness of half the world, to the extent that they eat from only one side of their plate, shave only one side of their face, and may even only be able to turn in one direction. A recent obituary of a famous amnesic noted how work with amnesics has shown that losing one's ability to form memories is in essence losing part of oneself.

Data like these make it hard to save dualism. If there is a non-material soul, it is not responsible for memory, for having a sense of left or right, or probably even for consciousness itself. That doesn't seem to leave much for the non-material soul to do. This conclusion may be disheartening, but it seems inescapable.

How good is your memory?


The average 20-29 year old scores a 2.5 on my Memory Test. How well can you do?

There are, of course, different types of memory. Most people think of 'memory' as an ability to recall facts and events from days or even years ago. This is what was destroyed in the famous amnesic H. M. However, H. M. was still able to remember new information for at least a few seconds; that is, his short-term ("working") memory was spared. There are also other types of memory, such as iconic memory, also known as "sensory" memory. Moreover, memory for facts seems to dissociate from memory for skills ("know-how").

The Memory Test tests visual working memory.

Before you take the test, please do me one favor. If you want to test yourself multiple times, feel free to do so. But please check off the "have you done this experiment before" box. Failing to do this can screw up the data, so it's important.

What Does the Test Involve?

You try to remember four simple shapes for one second. Afterwards, you are shown a single shape. You have to decide if it is one of the four you were to remember. There are 40 trials, plus some practice trials.

A note about the practice: The practice trials are really, really hard. That is to get you warmed up, just like a runner tying weights to her ankles during her warm-up. The actual test is easier.

How is the Score Calculated?

On any given trial, you get the answer either right or wrong. We could just calculate what percentage you get right, but that would mean getting a score like "80%," which isn't very satisfying. 80% of what?

A formula developed by Nelson Cowan can be used to estimate how many of the shapes, on average, actually make it into your short-term memory store. The formula is this:

(% hits + % correct rejections - 1) × (total number of objects)

A 'hit' means answering 'yes, this is one of the four objects,' when in fact that is the correct answer. A 'correct rejection' is saying 'no, this is not one of the four objects,' when in fact it is not.

From the math, the score can run from -4, if you get every question wrong, to 4, if you get every question right (which has happened, but rarely). If you guessed at random, you should get half the questions right, in which case your score should be 0.
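Here is the same calculation spelled out in a few lines of Python (the 85% and 75% figures below are made up purely for illustration):

```python
# Cowan's formula: estimated number of objects held in visual short-term memory.
def memory_score(hit_rate, correct_rejection_rate, n_objects=4):
    return n_objects * (hit_rate + correct_rejection_rate - 1)

print(memory_score(0.85, 0.75))  # ~2.4 -- close to the 20-29-year-old average
print(memory_score(1.0, 1.0))    # 4.0 -- a perfect score
print(memory_score(0.5, 0.5))    # 0.0 -- what pure guessing gives on average
```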

Keep in mind that this depends completely on the shapes. If the shapes are really hard to remember (as the practice shapes are), scores will be lower. If they are very easy, scores will be higher. What makes a shape easy is not just how complex it is, but how similar it is to the other shapes (how easy the shapes are to confuse with one another).

What Does the Score Mean?

You could have a higher or lower score for a number of reasons. For one thing, you might have guessed abnormally well or abnormally poorly. All tests are subject to a guessing effect. On average, guessing cancels itself out, but if the test is short enough and enough people take it, somebody is likely to get everything right (or wrong) just by chance.

Luck aside, a good score could mean that you have more "room" in your short-term memory. It might also mean you are better at avoiding interference. There are several types of interference in memory, and so you could be better at avoiding any one of them. You might also be better at paying attention, or you might have developed a useful strategy for success on this task. (That said, visual short-term memory does not appear to be anywhere near as susceptible to strategies as verbal short-term memory.)

Remember one thing. This is not a clinical test. Though clinical tests for verbal short-term memory exist, I'm not sure there even are clinical tests for visual short-term memory. This is just for fun. Enjoy it.

Wait. How Do You Know What the Average Score Is?

The Memory Test is nearly identical to an experiment I ran previously. I used the data from that version to estimate what the scores will be on this version.

(Photo served from the National Geographic website)

Androids Run Amok at the New York Times?


I have been reading Steve Pinker's excellent essay in the New York Times about the advent of personal genetics. Reading it, though, I noticed something odd. The Times includes hyperlinks in most of its articles, usually linking to searches for key terms within its own archive. I used to think this linking was done by hand, as I do in my own posts. Lately, I think it's done by an android (and not a very smart one).

Often the links are helpful in the obvious way. Pinker mentions Kareem Abdul-Jabbar, and the Times helpfully links to a list of recent articles that mention him. Presumably this is for the people who don't know who he is (though a link to the Abdul-Jabbar Wikipedia entry might be more useful).

Some links are less obvious. In a sentence that begins "Though health and nutrition can affect stature..." the Times sticks in a hyperlink for articles related to nutrition. I guess that's in case the word stirs me into wondering what else the Times has written about nutrition. That can't explain the following sentence, though:

Another kind of headache for geneticists comes from gene variants that do have large effects but that are unique to you or to some tiny fraction of humanity.

There is just no way any human thought that readers would want a list of articles from the medical section about headaches. This suggests that the Times simply has a list of keywords that are automatically tagged in every article...or perhaps it is slightly more sophisticated and the keywords vary based on the section of the paper.
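Purely as an illustration of what such keyword auto-tagging might look like (the keyword list and link format below are invented, not the Times's actual system):

```python
# Naive auto-linker: wrap any word found on a fixed keyword list in a topic link.
KEYWORDS = {"nutrition", "headache", "stature"}

def auto_link(text):
    linked = []
    for word in text.split():
        bare = word.strip(".,;:").lower()
        if bare in KEYWORDS:
            linked.append(f'<a href="/topics/{bare}">{word}</a>')
        else:
            linked.append(word)
    return " ".join(linked)

print(auto_link("Another kind of headache for geneticists comes from rare gene variants."))
```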

I'm not sure how useful this is even in the best of circumstances. Has anyone ever actually clicked on one of these links and read any of the articles listed? If so, comment away!

(picture from Weeklyreader.com)

Skeptical of the Skeptics

I have complained -- more than once -- that the media and the public believe a psychological fact (some people are addicted to computer games) if a neuroimaging study is somehow involved, even if the study itself is irrelevant (after all, the definition of addiction does not require a certain pattern of brain activity -- it requires a certain pattern of physical activity).

Not surprisingly, I and like-minded researchers were pleased when a study came out last year quantifying this apparent fact. That is, the researchers actually found that people rated an explanation of a psychological phenomenon as better if it contained an irrelevant neuroscience fact.

Neuroskeptic has written a very provocative piece urging us to be skeptical of this paper:

This kind of research - which claims to provide hard, scientific evidence for the existence of a commonly believed in psychological phenomenon, usually some annoyingly irrational human quirk - is dangerous; it should always be read with extra care. The danger is that the results can seem so obviously true ("Well of course!") and so important ("How many times have I complained about this?") that the methodological strengths and weaknesses of the study go unnoticed.
Read the rest of the post here.

Is language just statistics?

Many years ago, I attended a talk in which a researcher (in retrospect, probably a graduate student) was talking about some work she was doing on modeling learning. She mentioned that a colleague was very proud of a model he had put together in which he had a model world populated by model creatures which learned to avoid predators and find food.

She reported that he said, "Look, they are able to learn this without *any* input from the programmer. It's all nurture, not nature." She argued with him at length to point out that he had programmed into his model creatures the structures that allowed them to learn. Change any of those parameters, and they ceased to learn.

There are a number of researchers in the field of language who, impressed by the success of statistical-learning models, argue that much or all of language learning can be accomplished by simply noticing statistical patterns in language. For instance, there is a class of words in English that tend to follow the word "the." A traditional grammarian might call these "nouns," but this becomes unnecessary when using statistics.
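To see the flavor of the idea, here is a toy distributional learner in Python (the corpus is made up, and real models use vastly more text and far richer statistics):

```python
from collections import Counter

# Tally which words follow "the" in a toy corpus. With enough text, the
# words that score high are roughly what a grammarian would call nouns.
corpus = "the dog chased the cat and the cat saw the dog near the tree".split()
after_the = Counter(nxt for prev, nxt in zip(corpus, corpus[1:]) if prev == "the")
print(after_the)  # Counter({'dog': 2, 'cat': 2, 'tree': 1})
```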

There are many variants of this approach, some more successful than others. Some are also more careful in their claims than others (one paper, I recall, stated strongly that the described model did away with not only grammatical rules, but words themselves).

While I am impressed by much of the work that has come out of this approach, I don't think it can ever do away with complex (possibly innate) structure. The anecdote above is an argument by analogy. Here is a great extended quote from Language Learnability and Language Development, Steven Pinker's original, 1984 foray into book writing:

As I argued in Pinker (1979), in most distributional learning procedures there are vast numbers of properties that a learner could record, and since the child is looking for correlations among these properties, he or she faces a combinatorial explosion of possibilities. For example, he or she could record of a given word that it occurs in the first (or second, or third, or nth) position in a sentence, that it is to the left (or right) of word X or word Y or ..., or that it is to the left of the word sequence WXYZ, or that it occurs in the same sentence with word X (or words X, Y, Z, or some subset of them), and so on. Adding semantic and inflectional information to the space of possibilities only makes the explosion more explosive. To be sure, the inappropriate properties will correlate with no others and hence will eventually be ignored, leaving only the appropriate grammatical properties, but only after astronomical amounts of memory space, computation, or both.

In any case, most of these properties should be eliminated by an astute learner as being inappropriate to learning a human language in the first place. For example, there is no linguistic phenomenon in any language that is contingent upon a word's occupying the third serial position in a sentence, so why bother testing for one? Testing for correlations among irrelevant properties is not only wasteful but potentially dangerous, since many spurious correlations will arise in local samples of the input. For example, the child could hear the sentences John eats meat, John eats slowly, and the meat is good and then conclude that the slowly is good is a possible English sentence.

Ultimately, a pure-statistics model still has to decide what regularities to keep track of and what to ignore, and that requires at least some innate structure. It probably also requires fairly complex grammatical structures, whether learned or innate.

The Purpose of Language

A book I'm currently reading quotes the well-known linguist Charles Fillmore as writing
the language of face-to-face conversation is the basic and primary use of language, all others being best described in terms of their manner of deviation from that base... I assume that this position is neither particularly controversial nor in need of explanation.
If only it were so. Uber-linguist Noam Chomsky said in a talk I attended that language is not "for communication." I've never been quite sure what he meant by this, so I decided this was a good time to find out.

Googling turned up this interview, in which his statement is much more mild. He seems to simply state that to the extent language is used socially, it isn't always for the purpose of communication. I can get on board with that.

This other interview, however, makes a stronger claim. Here is a representative quote:

If human language has a function at all it's for expression of thought. So if you just think about your own use of language, a rather small part is used for communication. Much of human language is just used to establish social relations. Suppose you go to a bar in Kyoto and you spend an evening talking to your friends. You're not 'communicating'. You're rarely communicating. You're not presenting them with any information that changes their belief systems. You're simply engaged in a kind of social play.
Perhaps. I'm still with Fillmore that this seems to be derivative of communication, but I'm not even sure what kind of evidence could be found that would favor one position or the other.

Cutting Down Trees to Save the Forest

It took me a while to understand the concept of ecotourism. When I signed up to spend a year with the just-emergent Great Baikal Trail organization from 2003-2004, I honestly picked it for reasons unrelated to its mission: building nature trails.

Baikal is the world's largest (by volume) and deepest lake, with 20% of the world's fresh water and a dizzying array of species found nowhere else (some of them very tasty, I have to admit). It's an incredible place to visit, and I feel lucky to have spent so long there (for reasons, read this and this). It is also one of the world's more pristine habitats, largely by virtue of being in the middle of Siberia. However, the region will eventually develop, and the question is how.

GBT operates on an If-You-Build-It-They-Will-Come principle: namely, if the right infrastructure is put in place, an industry built on tourism will develop, displacing the most likely alternative possibility, which is logging and paper mills (both of which are necessary activities, but would be a shame to see in the Baikal area).

More importantly, if the tourism economy is based on the local natural wonders, there is strong economic pressure to maintain Baikal in its pristine state. It might stay more pristine if all the humans moved elsewhere, but that's not likely to happen. For one thing, if nobody lived there and nobody visited, who would have the motivation to maintain the Lake?

The Great Baikal Trail runs summer trail-building programs. An eco-trail is not just any rut in the ground, but requires sophisticated engineering (to prevent, among other things, erosion). GBT has been running for over half a decade now and has built up considerable expertise. If anybody is interested in volunteering on a two-week trail project, look them up. Since the tourism economy is still nascent, this also represents one of the few opportunities for non-Russian speakers to travel extensively in the region.

P.S. If there are any Russian speakers reading this, please check out my new 5-minute experiment in Russian.

(for once, photographs are my own)