The Beautiful Languages of People
If you are ever lucky enough to visit the foothills of the Himalayas, you may hear a remarkable duet ringing through the forest. To the untrained ear, it might sound like musicians warming up a strange instrument. In reality, the enchanting melody is the sound of two lovers talking in a secret, whistled language.

Joining just a handful of other communities around the world, the Hmong people can speak in whistles. The sounds normally allow farmers to chat across their fields and hunters to call to each other through the forest. But their language is perhaps most beautifully expressed during a now rarely performed act of courtship, when boys wander through the nearby villages at nightfall, whistling their favourite poems between the houses. If a girl responds, the couple then starts a flirty dialogue.

It’s not just the enticing melodies that make it the perfect language of love. Compared with spoken conversation, it is hard to discern the identity of the whistlers – offering the couple some anonymity in an otherwise public exchange. They may even create their own personal code, adding nonsense syllables to confound eavesdroppers – a bit like the Pig Latin used by English schoolchildren to fool their parents. “It gives them some intimacy,” says Julien Meyer of the University of Grenoble, France, who visited the region in the early 2000s.
The practice not only highlights humanity’s amazing linguistic diversity; it may also help us to understand the limits of human communication. In most languages, whistles are used for little more than calling attention; they seem too simple to carry much meaning. But Meyer has now identified more than 70 groups across the world who can use whistles to express themselves with all the flexibility of normal speech. These mysterious languages demonstrate the brain’s astonishing capacity to decode information from new signals – with insights that are causing some neuroscientists to rethink the fundamental organisation of the brain. The research may even shed light on the emergence of language itself. According to one hypothesis, our first words may have sounded something like the Hmong’s courtship songs.
Meyer’s interest in whistled languages began with a 40-year-old Scientific American article about Silbo Gomero – a form of whistled Spanish ‘spoken’ on La Gomera, in the Canary Islands. The trilled sounds allow shepherds to communicate across deep ravines, and they are apparently so close to the local birdsong that blackbirds have been known to learn and mimic the human dialogues. Meyer was instantly fascinated – and ended up completing a PhD on the subject. More than a decade later, he’s still hooked. “I didn’t think that one day it would give me a job,” he says.

Much of Meyer’s research has focused on charting the prevalence of whistled languages around the globe. The ancient history books offered a few pointers. In the 5th century BC, for instance, the Greek historian Herodotus described a group of cave-dwelling Ethiopians. “Their speech is like no other in the world: it is like the squeaking of bats,” he wrote. We can’t know for sure which communities he was describing, but Meyer says that several whistled languages can still be heard in Ethiopia’s Omo Valley.

Indeed, Meyer has now identified whistled languages in every corner of the globe. Given that whistles can travel much further than normal speech – as far as 8km (5 miles) in open conditions – they are most commonly found in the mountains, where they help shepherds and farmers to pass messages down the valley. But the sounds can also penetrate dense forests such as the Amazon, where hunters whistle to locate each other through the thick foliage. “The whistles are good for fighting against reverberation,” says Meyer. And unlike regular speech, they tend not to scare the potential prey. They can also be useful at sea: the Inuit communities of the Bering Strait whistle commands to each other as they hunt for whales. Perhaps unsurprisingly, these cryptic languages can also be a weapon of war.
Meyer says that the indigenous Berber populations (also known as the Amazigh) in the Atlas Mountains used whistles to pass messages during their resistance against the French. The Australian army, meanwhile, recruited Wam speakers from Papua New Guinea to whistle messages across the radio so that they could confound Japanese eavesdroppers. And let’s not forget that whistled speech is often used for less prosaic purposes, such as religion, romance and poetry – as the Hmong show so beautifully. Ancient Chinese texts record people whistling Taoist verses – a practice that was thought to send them into a kind of meditative reverie. Meyer has found that Southern China is still a hot spot for many diverse whistling communities among its ethnic minorities, including the Hmong and the Akha.
Whistled languages are not just the stuff of legend, but a vibrant method of communication for millions of people living today. For the uninitiated, it may seem impossible to imagine how the rising and falling tones could convey meaning. Meyer has found that they typically rely on one of two strategies – both of which use changes in pitch to create a kind of stripped-down skeleton of the spoken language. It all depends on whether the everyday spoken language is “tonal”. In tonal languages, which are particularly common in Asia, the pitch of a single syllable in a word can change its meaning; as a result, the whistles simply follow the melodies that are inherent in any spoken sentence. Other languages – such as Spanish or Turkish – are not naturally tonal. In these cases, the whistles instead mimic the changes in resonance that come with different vowel sounds, while the consonants can be discerned from how abruptly the whistles jump and slide from note to note.

Either way, the whistles lose many of the cues that normally help us to distinguish different words – and outsiders often find it almost impossible to believe they carry intelligible messages. Yet Meyer has found that fluent whistlers can decode the sentences with more than 90% accuracy – around the same intelligibility as speech. Meyer suspects that this relies on the same neural machinery that allows us to hold a conversation in a crowded room, or to make sense of a whispered message. “Our brains are good at reconstructing words that have been a bit destroyed by noise or other distortions,” says Meyer. We can see the same in written messages when the letters are all jumbled up or the vowels removed – yuor biran aumtoacitally flls th gpas.
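The jumbled-letters effect above is easy to reproduce. As a minimal sketch (the function names here are our own, purely for illustration), the following shuffles only the interior letters of each word, leaving the first and last in place – which is roughly the distortion the brain copes with so readily:

```python
import random

def scramble_word(word, rng):
    """Shuffle a word's interior letters, keeping the first and last fixed."""
    if len(word) <= 3:
        return word  # too short to have a scrambleable interior
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_sentence(sentence, seed=0):
    """Scramble each word of a sentence, reproducibly via a seed."""
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in sentence.split())

print(scramble_sentence("your brain automatically fills the gaps"))
```

Despite the shuffling, most readers can decode the output at close to normal speed – the same kind of reconstruction Meyer suggests lets fluent whistlers recover full sentences from stripped-down whistles.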
Further studies of this process are causing some neuroscientists to rethink the way the brain is organised. For decades, researchers had assumed that each side of the brain is highly specialised for particular tasks – with language falling firmly in the left hemisphere. But Onur Gunturkun at Ruhr University Bochum, in Germany, wanted to find out if the same would be true of whistles. “The way you hear or read the language shouldn’t make a difference,” he says.

To find out, he travelled to Kuskoy – literally, ‘the village of birds’ – which sits in a valley near the Black Sea. Like the people of La Gomera, shepherds there whistle messages across the mountain plateau, while fishermen use them to cut through the roar of the river in the valley. Gunturkun still remembers watching a whistled conversation for the first time, as the mayor welcomed him to the village. The experience of hearing something so unlike regular language carry so much meaning “was like magic”, he says.
A brain scanner would have been too hefty to carry all the way from Germany to this isolated village, so Gunturkun improvised with a simple listening task that involved playing slightly different syllables in each ear and asking the participant to report which one they heard. The experiment centres on a peculiarity of the body’s wiring, which means that each ear feeds mainly into the opposite side of the brain. As a result, the syllable coming in from the right tends to grab our attention, since it is fast-tracked to the dominant left hemisphere. If Gunturkun played “pah” in your left ear and “tah” in your right ear, for instance, you would hear the “tah” – since it reaches the language processing centres first.

At least, that was the theory. It was not what the people of Kuskoy heard when Gunturkun played the whistled syllables. Rather than favouring left or right, they were equally likely to discern whistles from either direction – suggesting that both sides of the brain were being co-opted to make sense of the signals. “The asymmetry was gone,” says Gunturkun. “Both hemispheres shared the work.”

Not only does this demonstrate the brain’s flexibility; the results, published in 2015, might even help people rebuild their lives after a stroke. Damage to the left hemisphere can render someone unable to speak – but Gunturkun’s findings suggest that they might still be able to shift their processing to the right hemisphere and talk in whistles instead. As he puts it: “There are many ways to Rome”. He emphasises that this was not the primary aim of the research, however. “It was just curiosity – for the sake of understanding the world around us.”

The team’s own experiences show that outsiders can begin to adapt to the ‘bird language’ with regular exposure – provided they know the spoken language first. Gunturkun is fluent in Turkish, and by the end of the trip he had begun to detect the odd whistled word in the locals’ conversations.
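Results from dichotic listening tasks like this are conventionally summarised as an ear-advantage score: the difference between right-ear and left-ear reports, divided by the total. As a rough sketch (the counts below are hypothetical, chosen only to illustrate the pattern Gunturkun describes, not his published data):

```python
def laterality_index(right_reports, left_reports):
    """Ear-advantage score in [-1, 1].

    Positive values indicate a right-ear (left-hemisphere) advantage;
    values near zero indicate both hemispheres sharing the work.
    """
    total = right_reports + left_reports
    if total == 0:
        raise ValueError("no responses recorded")
    return (right_reports - left_reports) / total

# Hypothetical illustration: spoken syllables show a clear right-ear
# advantage, whistled syllables show almost none.
print(laterality_index(70, 30))  # 0.4
print(laterality_index(52, 48))  # 0.04
```

On this scale, the “asymmetry was gone” finding corresponds to whistled syllables scoring near zero while spoken syllables score well above it.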
His experience would seem to support Meyer’s most recent study, which found that people with no prior knowledge of the whistled languages can soon work out which whistles correspond with which vowels; you do not need to have been born in Kuskoy to learn to speak like a bird.
Whistled languages are also of increasing interest to neuroscientists studying one of humanity’s other unique traits – music. Growing evidence suggests that language and music both lean on many of the same brain regions: we tend to process a song’s chord progression using the same circuits that make sense of a sentence’s syntax, for instance. This may explain why music lessons can alleviate some speech or hearing problems. In 2014, a team at Northwestern University in Chicago found that musical training can even improve a child’s literacy.

Whistled languages – with their entrancing melodies – would appear to be a natural example of this close link. “It seems to be on the border of music and language,” says Aniruddh Patel at Tufts University in Massachusetts. The Hmong, for instance, may even play out their poems on a mouth harp; in this case, it is impossible to separate melody and lyrics. Working out exactly how these languages are processed might therefore offer more precise details about the shared networks, and the ways those brain systems deal with the two types of sound, he says. Tellingly, the right brain hemisphere, which appears to be essential for comprehending whistled syllables, has long been known to process rhythm and melody – potentially offering one example of the way that music processing can aid the understanding of language, and vice versa.
Delve even further, and we might begin to understand how those traits arose in prehistory. Music and language both required extraordinary changes: refined articulation, the capacity to imitate others and the ability to think symbolically. But what set it all in motion? One particularly elegant solution to this conundrum dates back to the father of evolutionary theory, Charles Darwin, who proposed that the two traits arose together as a kind of “musical protolanguage”. According to this view, humans started singing before we could talk – perhaps as a kind of courtship ritual. Like the blackbird’s song, the musical protolanguage would have been a way to show off our virtuosity, forge social bonds, and scare off rivals, without carrying specific meanings. Over time, however, the practice would have pushed us to evolve finer control of our vocal cords, which then laid the foundations for more meaningful utterances.
The idea is attractive to some evolutionary biologists since it suggests a series of small steps, rather than a giant leap, for humankind’s journey to language. But given the cultures of people like the Akha and the Hmong, might that first protolanguage have been whistled, rather than sung? “Perhaps whistling was part of the dynamic that pushed humans to adapt their communication to something more elaborate,” says Meyer, who outlined his hypothesis in a recent monograph on whistled speech.
Meyer points out that although other primates cannot learn to speak like humans, some have mastered whistling. Bonnie, an orangutan at the US National Zoo in Washington DC, was able to mimic the simple tunes of her keeper Erin Stromberg, and orangutans in the wild have even been known to make a high-pitched squeak by sucking air through a leaf. Such displays suggest that whistling may have required fewer adaptations than voiced speech, making it an ideal stepping stone to language.

If so, whistled signals could have begun as a musical protolanguage, and as they became more complex and imbued with meaning, they could also have helped coordinate hunting and foraging. After all, Meyer’s research certainly suggests that whistling is ideal for communicating over distance while avoiding the attention of predators and prey – advantages that would have helped our ancestors survive. Later on, we could have gained control of our vocal cords too, while whistled languages remained a small but crucial element of humanity’s overall repertoire.

The idea is not yet the scientific consensus. But if it is correct, it would mean that the enchanting melodies of the Hmong may be the closest we will ever come to hearing the sounds of humanity’s first words. As modernisation rapidly encroaches on these remote communities, we will need to move quickly to capture these languages, before those echoes from the past are lost forever.
Credit: David Robson for The BBC, 25 May 2017.