Language shapes our perception and our actions, and it accompanies us throughout our lives. With it we can communicate with other people, exchange ideas with them and give expression to our fundamentally human need for community and belonging. Language also inspires our mind, our thinking and our imagination. It gives us concepts for putting our impressions and perceptions into words, for channeling them into thought, for sketching out ideas, living out dreams, formulating opinions and expanding knowledge. In short: language is part of our identity, and it is a key to the world – to our outer world as well as to our inner one. As a tool of thought and communication, it helps us to understand ourselves and others and to name, explore and shape what makes up our reality.
The first words – faded away for all time
But how did humans come to have language? What were their first words? These questions have stirred people since antiquity. One thing up front: we will never know when and where our language in the modern sense came into being. There are no tape recordings from the dim and distant past, nor can we travel back in time, past Caesar, Cleopatra and Socrates, past our Stone Age ancestors, all the way to the era of the so-called animal-human transition, which may lie two million years back, or perhaps seven million. Somewhere in this phase of becoming human, language emerged. And as humans evolved, their language evolved with them: it settled wherever people were drawn, expanded and changed, reinvented itself and adapted to the course of history. But the first words a human being ever spoke have faded away for all time.
On the trail of an original language – experiments, theories and myths
The desire to trace language back to its roots has always prompted humanity to investigate. Emperor Frederick II (1194 to 1250), for example, ordered that newborns not be spoken to and be given only the bare necessities. He wanted to find out which language they would end up speaking, reasoning that this would be the original language. Without linguistic and, above all, emotional attention, however, the babies were quite literally lost for words; they died young. The ruler of the Holy Roman Empire was not the only one to carry out such cruel experiments on children, sometimes for other reasons. For hundreds of years there have been repeated reports of girls and boys who grew up isolated from all social contact or in the wilderness. Needless to say, no original language of any kind could be found in them.
Early linguistics concentrated instead on theories for which, as is all too often the fate of theory, scientific evidence never materialized. Especially once their critics had given them nicknames, the musings of former scholars seem rather quaint. The "bow-wow" theory, for example, holds that humans imitated the sounds of their surroundings, above all animal calls, as onomatopoeia in order to name the objects and situations associated with them. The "pooh-pooh" (or "ouch") theory claims instead that language arose from instinctive sounds of pain, joy, anger or other emotional states, while the "yo-he-ho" theory is firmly convinced that language emerged from rhythmic vocalizations during shared work, expressed first as chants and later as speech. The proof offered: the prosodic features of all present-day languages, that is, speech melody and rhythm.
None of these assumptions has really convinced the scientific community. Onomatopoeic words such as "bow-wow" or "cock-a-doodle-doo" do belong to our linguistic repertoire, but every language has far too few of them to explain the development of the complex vocabulary and system of rules that characterize modern language, whatever culture it belongs to. Similar criticism applies to the other theories, especially since they too are silent on how the roughly 6,000 languages and tens of thousands of dialects that humanity currently speaks came about, even though a dog in China surely barks no differently than one in Brazil. Nor can the biblical legend of the Tower of Babel, according to which the angered Creator confused the language of humankind and thereupon scattered the peoples across the world, offer the inquiring mind a satisfying answer.
Tracing the evolution of language – newer scientific approaches
Whether human language was inspired by the sounds of nature, grew out of cries of joy or pain, or even out of a combination of sounds and gestures, no science will ever find out; as already noted, it simply lacks the method to do so. In any case, hardly anyone today still wants to talk of a single original language. What drives research now is tracing the preconditions, causes and consequences of the capacity for speech and of language development for the biological and cultural evolution of humankind. Natural sciences and humanities work hand in hand on this: paleoanthropology, anthropology, archaeology, neurology, genetics and anatomy, and linguistics too, wiser today, sits in the research boat.
According to newer approaches, language did not develop by chance but in a complex interplay of mutual dependencies. External influences such as climate change, altered ecosystems and the adaptations these demanded of humans played just as much of a role as humanity's development from an upright-walking creature to the modern human type, who could make fire, produced tools and weapons, went hunting, led a sociable life in the group and moved out from the African continent into the farthest corners of the world to conquer it. Mentally and socially, humans were challenged anew at every stage of their development. Biological evolution laid the foundations for the capacity to speak: the development of the pharyngeal cavity following the lowering of the larynx and soft palate, the formation of a finely tuned vocal tract, the neuronal control of speech motor function, and a brain able to regulate and steer all of this. Humankind's cultural evolution, on the other hand, drove language forward in its development of meaning, expanded and transformed it, and in turn let it leave traces in the brain, or rather in the consciousness, of the emerging human being.
Language – our jointly created cultural heritage
Humans have spoken language in today's sense for at most 125,000 years and at least 40,000 years. A kind of proto-language probably existed even earlier, as fossil finds and reconstructions of everyday Stone Age life suggest. The need to coordinate defense against enemies or the search for food, and to establish one's position within the group, may have prompted the early members of our ancestral gallery to vocalize and to cultivate their vocalizations into an instrument of social and mental organization, up to a differentiated language that proved its worth in the struggle for survival.
When, about 50,000 years ago, a small group of the species Homo sapiens set out to leave Africa, it carried a complete language with vocabulary and grammar in its baggage. And presumably this helped the modern human type to hold its own in its new environment against the long-established but linguistically less adept relatives in Asia and Europe, the Neanderthals, and thus to develop its language into a multitude of richly structured languages. So today, everywhere in the world, the child acquiring language enters into a cultural heritage that has been built up over the course of human history and that each generation passes on to the next through its offer of interaction. It is fascinating to watch, and a remarkable achievement of the little ones, how they make (any) language their own. Unlike our ancestors, however, they encounter an already existing language system, and even in the womb they are receptive to its sounds. Language does not have to invent itself first; it is already there. Yet in the future, too, it will keep changing along with us.
Language probably emerged relatively late. But of course there is hardly any archaeological evidence. Still, there are a few clues.
First beginnings: language centers start forming in the brain two million years ago
From paleoanthropology we can say that the capacity for language, or the anatomical capacity for speech, is probably much older than our present-day symbolic language.
Put very generally: two million years ago you can already see that the language centers, Broca's and Wernicke's areas, are beginning to form. This can be seen in endocasts of the brain. However, we do not know their function; we only see the bulges of these brain regions.
We also know today that the cerebellum is much more important for speech – that is where motor control and so on is located. And we know that the cerebellum began to enlarge two million years ago. So this anatomical capacity for speech could be much older than what we regard today as modern language.
Language in the sense of exchanging information is something great apes have as well; that goes back a very long way in the primate family tree. But the capacity for symbolic and abstract language – that is, not just stringing things together additively, which great apes can also do, by the way, but creating something new, being creative – probably arose relatively recently. Perhaps with the appearance of modern Homo sapiens.
Homo sapiens has existed in its present form for 200,000 years, but language is to be dated even later, perhaps 30,000 or 40,000 years ago. It may even coincide with the first works of art, which we likewise do not know from earlier times – symbolic art, symbolic language. And possibly a gene mutation plays a part. The FOXP2 gene is held responsible for the emergence of language. Amusingly, this is being tested on mice, which are rather poor at speaking – so we will have to wait and see what further research brings.
How did the regional diversity of languages come about?
That, of course, comes much later, through the spread of different human groups across the earth. Homo sapiens appears 200,000 years ago; by 60,000 years ago Australia is already settled; then modern humans reached Europe 40,000 years ago – where the Neanderthals already lived, who may also have been able to speak. A Neanderthal hyoid bone has been found that looks relatively modern.
Ultimately, language serves the function of social interaction – so social behavior has to be considered here as well. And Neanderthals had very pronounced social behavior: they buried their dead, cared for their relatives, and so on.
The regionalization of language arose only in the last 20,000 years, through the settlement of the entire earth. North America, reached about 15,000 years ago, was the last. People often speak of "primeval" African languages. Nothing about them is primeval; everything is highly specialized. For the details you have to ask linguists, who trace this more closely.
Language naturally also develops much faster than biological evolution. Biological evolution works through the genes, which are passed on from generation to generation – and that takes thousands of generations.
Then there is cultural evolution. Here language is used as the transport medium, and that is of course much faster. It passes from individual to individual, from one generation to the next and even back up the generations. So simply because of the medium of language it is clear that this development, including its regional differentiation, proceeds much faster than the biological one.
It is hard to find the first words. Anyone searching for the origin of language cannot rely on prehistoric finds, as one can with early tools, for instance. Many researchers suspect that language is a relatively young form of communication, developed by modern humans roughly 50,000 to 100,000 years ago, possibly triggered by a single genetic mutation that led to the capacity for speech.
Most of the studies they evaluated would point in that direction. Modern languages may even still contain elements of these early languages, the researchers believe. The two linguists even think that these made today's linguistic diversity possible in the first place. With this, Dediu and Levinson make a new contribution to a long and intensely fought debate about the origin of human language. By their own account, the researchers draw on all available archaeological, anatomical and genetic evidence that could reveal when our ancestors began to speak.
We now know, the scientists write, that different human forms – Neanderthals, modern humans and, for instance, the recently discovered Denisovans, who still lived in Central Asia around 40,000 years ago – were not only in contact with one another but also interbred. There are, however, regional differences.
Neanderthals share more genetic similarities with people outside Africa than with Africans. The genomes of Europeans and Asians also differ markedly. Overall, the analyses show that modern humans and Neanderthals are genetically very similar. From this the researchers conclude that both possessed similar mental and cultural capacities.
Fossil finds also support these considerations. Neanderthals who lived around 40,000 years ago in the west of present-day France let themselves be influenced by modern humans in making complex tools and body ornaments. They were thus able to take up cultural stimuli. So why should they not also have spoken? "Neanderthals, Denisovans and modern humans living today shared a similar capacity for language and culture," write Dediu and Levinson.
The ancestors of Homo sapiens still used a very simple form of language. They communicated with one another mainly through gestures, grunts and cries.
Primates communicate in a similar way. They use such signals to warn their fellows of danger or to convey their own emotional state. The language of our ancestors was still far removed from words and complex sentences.
From sounds to words
In the course of evolution, the head shape of early humans changed. The skull elongated, the brain grew, and the tongue and larynx took up more space than before, as did the pharynx and nasal cavity. Presumably Homo erectus was already at least anatomically able to produce articulated sounds.
Language researchers suspect that the human capacity for language emerged somewhere between roughly 1.5 million and 40,000 years ago. A more precise period has not yet been determined. From fossils or prehistoric tools, researchers can tell how intelligent our ancestors were, but not whether and how well they could speak.
Possibly Homo erectus, who lived around 1.5 million years ago, could already speak. He did, after all, have a larger brain than the humans before him.
Presumably, however, only modern humans (Homo sapiens) were capable of speech. They appeared only about 150,000 years ago. The shape and position of their larynx enabled them to produce more sounds than any other prehistoric human.
Humans did not learn to speak from one day to the next. They acquired this ability gradually over time. Language gave them an evolutionary advantage: they no longer had to rely solely on gestures to make themselves understood, and their hands were freed for other things.
What distinguishes humans from animals
Whether by scents, sounds or gestures, the creatures of the earth have found their ways of communicating with one another. Their communication, however, is mostly limited to what is vital for survival: reproduction, the search for food or warning of enemies.
In 2005, the British researchers Karen McComb and Stuart Semple found in their studies that bonobos communicate with 38 different calls, mostly screams.
In 2010, biologists at the University of St Andrews discovered that orangutans use 64 different gestures to communicate with their fellows. Some great ape species can combine gestures and calls – but that is not enough for a thousand-page novel or a romantic poem.
There are no limits to creativity
The sound repertoire of the German language consists of only about 40 different phonemes. With these, however, humans can form countless words and so express almost anything they wish.
According to estimates by the Duden, German has between 300,000 and 500,000 words. There is no exact figure, because the vocabulary is constantly changing. New words such as "Flashmob" or "Shitstorm" are added, while others, such as "Stickhusten" or "Mohammedanismus", disappear or are used only very rarely.
On average, a person in Germany can effortlessly understand about 50,000 words. Their active vocabulary contains between 12,000 and 16,000 words: the words whose meaning they know and which they can use confidently.
With this toolkit, a person can easily invent new words that no one has ever said before. They can develop new theories, tell imaginative stories and communicate their wishes and feelings to others.
To acquire a language, a person does not even have to be particularly intelligent. Provided they are not so severely mentally or physically impaired that language development suffers, they will have mastered the language they grow up with by the time they reach puberty.
Grammar brings order to language
Speaking is not just stringing individual words together. A short but incorrect sentence such as "Schöner heute Tag ein ist" (roughly, "nice today day a is") can still more or less be understood. With longer sentences this becomes much harder.
The longer and more nested the sentence, the more it depends on a structure, a grammar. If there were no rules, it would hardly be possible for two people to hold a proper conversation. The other person could not be understood in any reasonable way.
The grammar of a language specifies how words must be arranged for the sentence to make sense in the end. A grammar can be complex, as in Latin, or simple, as in English.
Latin, for example, distinguishes six different cases in its word forms (nominative, genitive, dative, accusative, vocative and ablative). English has a rule of its own only for the genitive – the genitive s.
Grammar determines the order in which words may appear in a sentence and whether, for instance, the endings of individual words must be changed in order to express something as unambiguously as possible.
Languages that share a common origin, that is, belong to the same language family, have similar grammars. Italian, Spanish and French, for example, belong to the family of Romance languages. They all developed from Latin. Someone who speaks Italian usually finds it easier to understand Spanish as well.
German, like Dutch and English, comes from the Germanic language family. Dutch in particular can be understood quite well by many native speakers of German, even if they have never learned it.
Languages from two different language families, on the other hand, usually differ very clearly, both in vocabulary and in grammar.
When one German explains the way to another German, they use words like left and right. An Aboriginal Australian explains the way to another Aboriginal Australian by giving compass directions. Without a compass, this information would be of little use to many of us. An Aboriginal Australian, on the other hand, finds their way perfectly well with it.
Beekeeping also provides an important source of income for many rural livelihoods. According to IPBES, the western honey bee is the most widespread managed pollinator globally, and more than 80 million hives produce an estimated 1.6 million tonnes of honey annually.
When animals and insects pick up the pollen of flowers and spread it, they allow plants, including many food crops, to reproduce. Birds, rodents, monkeys and even people pollinate, but the most common pollinators are insects, and among them, bees.
Bees at risk from pesticides, air pollution
But sadly, bees and other pollinators, such as butterflies, bats and hummingbirds, are increasingly under threat from human activities.
Bee populations have been declining globally over recent decades due to habitat loss, intensive farming practices, changes in weather patterns and the excessive use of agrochemicals such as pesticides. This in turn poses a threat to a variety of plants critical to human well-being and livelihoods.
Air pollution is also thought to be affecting bees. Preliminary research shows that air pollutants interact with scent molecules released by plants which bees need to locate food. The mixed signals interfere with the bees’ ability to forage efficiently, making them slower and less effective at pollination.
While the vast majority of pollinator species are wild, including more than 20,000 species of bees, the mass breeding and large-scale transport of pollinators can pose risks for the transmission of pathogens and parasites. According to the IPBES report, better regulation of their trade can decrease the risk of unintended harm.
Taking urgent action
But there are positive signs.
In May 2018, the European Union upheld a partial ban on three insecticides known as neonicotinoids to mitigate the lethal threat they pose to bees and their trickle-down effect on pollination as a whole.
This August, when world leaders gather in Kunming, China, at the United Nations Biodiversity Conference (COP 15), they are expected to finalize the post-2020 biodiversity framework, which calls for, amongst other things, the reduction of pesticides by at least two thirds by 2030.
“Increasing crop and regional farm diversity as well as targeted habitat conservation, management or restoration, is one way of combating climate change and promoting biodiversity,” says UN Environment Programme (UNEP) biodiversity specialist Marieta Sakalian. “Governments need to take the lead.”
It is precisely to encourage governments, organizations, civil society and concerned citizens to protect pollinators and their habitats that the UN has declared 20 May World Bee Day.
World Bee Day raises awareness of the essential role bees and other pollinators play in keeping people and the planet healthy. The date coincides with the birthday of Anton Janša, who in the 18th century pioneered modern beekeeping techniques in his native Slovenia and praised the bees for their ability to work so hard while needing so little attention.
For further information please contact Marieta Sakalian, Senior Programme Management Officer and Coordinator for Healthy and Productive Ecosystems at UNEP.
Here are some actions you can take to help preserve bees and other pollinators:
Plant nectar-bearing flowers such as marigolds or sunflowers for decorative purposes on balconies, terraces, and gardens
Buy honey and other hive products from your nearest local beekeeper
Raise awareness among children and adolescents on the importance of bees and express your support for beekeepers
Set up a pollinator farm on your balcony, terrace, or garden
Preserve old meadows, which feature a more diverse array of flowers, and sow nectar-bearing plants
Cut grass on meadows only after the nectar-bearing plants have finished blooming
Use pesticides that do not harm bees, and spray them in windless weather, either early in the morning or late at night, when bees withdraw from blossoms
With its black and gold stripes, translucent wings and signature furry body, the bee is a common sight in woods and gardens around the world. But do you know how important this creature is to our planet, and the threats it is facing?
Often seen buzzing from flower to flower, bees have been around for millions of years. But they’re in severe decline and a world without these winged invertebrates is sadly becoming more of a possibility. These tiny creatures are an integral part of most ecosystems and if they became extinct, the planet could be in serious trouble.
Why are bees important?
Bees have an important part to play in maintaining our planet. We need them to pollinate the food we need to survive and many of the trees and flowers that provide habitats for lots of other wildlife.
Perfect pollinators
According to the Food and Agricultural Organisation of the United Nations, approximately 80% of all flowering plants are specialised for pollination by animals, mostly insects like bees. Pollination is crucial because many of our vegetables, fruits and the crops that feed our livestock rely on it to be fertilised, so without it, we could go hungry.
While there are other methods of pollination, including by other animals and wind, wild bees can pollinate on a much bigger scale. Estimates suggest it would cost UK farmers an incredible £1.8bn a year to manually pollinate their crops.
Honey makers
Honey bees, Apis mellifera, produce honey. This sickly sweet golden liquid is valuable not only for its saccharine taste, but also due to its medicinal properties and the fact it is so energy dense.
Why are bees disappearing?
Bees are in decline on a global scale as they face many threats, some of which threaten our trees and woods too.
Habitat loss
An increase in urban developments and invasive farming methods means that many of the areas bees once called home no longer exist. In the wild, several species of bee nest in hollow trees, so as more trees are destroyed, so are the bees’ homes. Wildflower meadows and other areas abundant in flowering plants are also in serious decline, reducing an important food resource for bees.
Climate change
Climate change and the extreme weather it can cause disrupts bee nesting behaviour and alters the normal seasonal timings, so flowers may bloom earlier or later than expected. Planting more trees is helping to mitigate some of the effects of climate change, but it’s still a serious issue that could prove deadly for many of our bees.
Parasites and diseases
Parasites and diseases are another big threat. The varroa mite, Varroa destructor, is a parasitic mite which clings to the back of the honey bee, passing diseases and viruses to it and gradually draining its strength.
Invasive species
Some non-native species can cause havoc for our native species – the Asian hornet, Vespa velutina nigrithorax, for example, eats honey bees.
How you can help bees
Grow bee-friendly flowers
One of the easiest ways to help out bees is by planting lots of bee-friendly flowers. Dedicate an area of your garden if you have one – even a single window box or pot on a balcony can be a lifeline. Bees favour a wide range of flowering plants, including foxglove, birdsfoot trefoil and red clover. Plant them easily with our seedball mix tailored to attract bees – you don’t need to be green-fingered, just scatter them and watch them grow!
Stop using pesticides
You can make a big difference to bees and other wildlife by stopping using pesticides in your garden. Some pests provide food for crucial pollinators, so leaving them to be controlled naturally is the best choice if you want to help save bees.
Help a bee in need
Often during summer you can spot a solitary bee unmoving on the ground. Chances are it’s exhausted and in need of a quick pick-me-up. You can help out a tired bee by mixing two tablespoons of white, granulated sugar with one tablespoon of water and placing it near the bee so it can help itself to this homemade energy drink.
Provide shelter for bees
Like most invertebrates, bees need shelter to nest and hibernate in. You can create your own or buy a ready-made bee hotel – just hang it up in a sunny sheltered spot and watch bees filling the tubes during the spring and summer months.
Save the bees is a common plea — not just from Friends of the Earth. You’ve likely seen it plastered on t-shirts, on mugs, and on images online. But why are bees so important?
The simplest answer is that bees pollinate food crops. Pollination occurs when insects — like bees and other pollinators — travel from one plant to the next, fertilizing the plant. Bees have little hairs all over their body that attract pollen. When the pollen from a flower sticks to a bee, it then travels with the bee to its next destination. When a bee lands on the next plant or flower, the pollen is distributed which results in cross-pollination. This process is what later yields fruits, vegetables, and seeds.
Bees are one of the world’s most important pollinators for food crops — each day we rely on bees and other pollinators. In fact, roughly one out of every three bites we consume relies on pollination. Yet bee populations continue to decline due to the rampant use of pesticides and other environmental factors — like climate change. We cannot stress enough how important bees are — not only to people, but to the overall environment.
Reasons Why Bees Are Important
A vast majority of flowering plants rely on pollinators to reproduce. As we mentioned above, fruits and vegetables all rely on pollinators transporting pollen from one flower to the next. But the value of bees goes beyond just supplying countless fruits and vegetables. Bees are also responsible for nuts, coffee, and even spices — but we’ll get to that shortly. Here are the top five reasons why bees are important. They:
Help produce 1/3 of our food supply
Help provide ½ of the world’s fibers, oils, and other raw materials
Help create many medicines
Provide food for wildlife
Help prevent soil erosion
Why Are Bees Important To Humans
Could you imagine walking into your grocery store and not seeing the shelves lined with food? There would be few fresh fruits or vegetables. There would be no products containing honey, almonds, coffee, and chocolate! Even milk and beef would be in short supply. That’s right — pollinators like bees are responsible for plants that feed cattle — like alfalfa. Pollinators are responsible for a large share of the foods that we consume. Pollinators (including moths, flies, beetles, birds, bats and more) are also part of maintaining the lungs of our planet. Pollinators help the growth of many trees as well as other plants! They are truly a keystone species – a vital part of many ecosystems.
So taking a step back to look at why bees are important to humans, the answer is very clear. Bees are a cornerstone of our food system and along with other pollinators, they help support the plants that provide the air we breathe.
Why Are Bees Important To The Environment
Nearly 90% of flowering plant species on the planet require pollinators to aid in reproduction. This not only provides a food system for wildlife, but it also creates shelter for animals.
Why Are Honey Bees Important
There are over 20,000 bee species across the globe with 4,000 of them being native to the U.S. One of the most notable bees is the honey bee.
The honeybee is responsible for nearly a third of crop pollination. But these hardworking insects produce more than that! They have six hive products — pollen, honey, beeswax, royal jelly, propolis, and venom. These are all collected and used for nutritional and medical purposes by humans.
What Will Happen If Bees Keep Dying?
There is no way to overstate the importance of bees.
The wind does play a role in pollination, so we cannot overlook the fact that we would have some food to eat — just nothing that would be tasty to eat day in and day out. The wind is responsible for the pollination of corn and wheat. But the majority of our nutritious foods are bee-assisted. No almonds. No apples. No cherries. The list goes on and on. Humans would lose a healthy, vibrant diet.
Then what? It’s possible that advancements in technology would find ways to pollinate by “hand” — or by drone — we really don’t know what the future holds. But hand-pollinating is expensive. And guess where those costs would be absorbed? By consumers! If you could get your hands on fresh fruits and vegetables — it would cost you an arm and a leg.
Bee declines warn us that we are on a toxic track. The same pesticides that are killing bees threaten many other insects, which are the basis of our ecosystems. And these toxic chemicals contaminate the air and water. They even linger in soil for months or years, harming the vital soil communities we need to grow our food. If we step up to save the bees, it means that we’ll be saving so many other creatures, too. But we know that if we keep losing our bees, we’ll be living in a world with troubles greater than paying $50 for an orange.
The vast majority of plant species – almost 90%, in fact – rely on pollinators to reproduce. Pollination is the process by which pollinators help plants to produce fruit (technically anything with seeds on the inside, so that includes things we normally think of as vegetables, like cucumbers, green beans and tomatoes) by transporting pollen from one flower to another. There are approximately 200,000 different species of animals around the world that act as pollinators. Of these, about 1,000 are vertebrates, such as birds, bats, and small mammals, and the rest are invertebrates, including flies, beetles, butterflies, moths, and bees. Pollinators provide pollination services to over 180,000 different plant species.
Pollinators help plants survive, and plants:
Produce ⅓ of our food supply by giving us countless fruits, vegetables, and nuts
Provide ½ of the world’s oils, fibers (such as the cotton used to make clothes), and other raw materials
Are used to create many medicines
Provide food and cover for wildlife
Keep waterways clean
Prevent soil erosion
Produce the oxygen we breathe
Absorb CO2, counteracting global climate change
Globally, pollinators are responsible for pollinating more than 1,200 crops. 87 of the leading 115 food crops, or about 75%, depend on pollinators. Every year, pollinators contribute more than $217 billion to the global economy, and $24 billion to the US economy. If we consider the indirect products of plants, such as milk and beef from cows fed on alfalfa, the value of pollinator services in the US would increase to an incredible $40 billion.
THE SIGNIFICANCE OF HONEY BEES
Honey bees are among the most numerous and efficient pollinator species in the world. Considering that the average honey bee can visit more than 2,000 flowers in one day, these bees greatly increase the chances of a plant producing a fruit or vegetable.
Honey bees are the species most commonly used as commercial pollinators in the US. They are managed and used to pollinate over 100 crops grown in North America, and contribute $15 billion to the US economy every year. Many crops, such as almonds, which contribute $4.8 billion to the US industry each year, rely on honey bees for more than 90% of their pollination.
But honey bees don’t only pollinate crops– they also pollinate wild and native plants, thus contributing to all the environmental and societal benefits attributed to pollinators in general above.
HONEY BEE DECLINE
Honey bees are clearly vital parts of our ecosystem, acting as highly efficient pollinators of our food crops as well as for wild flora. We need bees to keep our crops and earth healthy, but in recent years their numbers have been decreasing by the billions. This decline has been linked to several factors, including parasites such as varroa mites, which bite bees and infect them with fatal viruses (read more about varroa mites here!), the use of pesticides which poison bees, and monoculture farming, which prevents them from having a varied diet.
In 2016, 44% of managed beehives in the US died. The number of managed honey bee colonies in the United States has declined steadily over the past 60 years, from 6 million colonies (beehives) in 1947 to 4 million in 1970, 3 million in 1990, and just 2.5 million today. Overwintering loss rates have increased from the historical rate of 10-15% to approximately 30%, and beekeepers have collectively lost approximately 10 million beehives.
THE ROLE OF NATIVE BEES
Did you know that there are 4,000 different bee species native to North America? These bees vary widely, from cuckoo bees to bumble bees. Some are smaller than an eighth of an inch, while others are more than an inch long. They range in color from metallic green or blue to dark brown or black to striped red or orange.
Native bees are often overlooked because they aren’t domesticated, or because some of them don’t look like «traditional» bees (fuzzy, black and yellow). But these bees are the original residents of North America, who quietly and industriously pollinate our crops side by side with the honey bee.
Native bees might not spend much time in the spotlight, but they make a huge contribution to our environment and our economy. In 2009, the crop benefits from native insect pollination in the United States were valued at more than $9 billion dollars.
Sadly, native bees are struggling just as much as honey bees. Many species are endangered (read more about their endangerment here), and a few have already gone extinct. The factors that harm managed honey bees also harm wild bees, such as parasites, pathogens, and poor nutrition due to monoculture farms. A study published by the National Academy of Sciences last year found that wild bees may be disappearing in California’s Central Valley, the Midwest’s corn belt, the Mississippi River Valley and other key farm regions. Between 2008 and 2013, modeled bee abundance declined across 23% of US land area.
Now more than ever we must find new and innovative ways to protect these national treasures and preserve the balance of our ecosystem.
Are you feeling cooped up and restless during the COVID-19 outbreak? One way to take a break and make a difference is by helping the bees and the environment… from your home! Research shows that being outside can help improve your physical health and mental outlook. Even better, helping our furry friends during their peak season of busy-bee activity will substitute positive vibes for the constant negative news stream and relieve tinges of cabin fever – a win-win! Luckily, you can do three simple things from your backyard (or even your small apartment balcony) to help our local pollinators!
Put Out a Water Feeder for Bees
Bees are extremely busy at this time of year. Forager bees visit up to 2,000 flowers a day and can fly up to six miles a day! Bees drink water like all creatures, but they also need water to bring back to their hive or nest. Native bees need water for mud, which they nest with. Honey bees use water to dilute honey, feed babies, and regulate the temperature within the hive. They do the renowned waggle dance to communicate to their sisters where good nectar sources are located, but they can also spread the word on a good water source that way as well.
One great way to help hydrate our thirsty pollinator friends is to provide a bee bath or bee water feeder! Take a shallow dish or bowl, put some rocks or marbles in it, and add just enough fresh water that the tops of the stones are not submerged. If you have a garden, you can put your bowl or dish on top of an upside-down flower pot somewhere protected and shady. If you don’t have a garden, set your dish outside wherever you are able. Change the water frequently and clean the bee bath weekly! Here is a great example.
Materials:
Shallow dish or bowl
Stones or marbles
Fresh water
Optional: plant pot (as a base for the dish)
Plant Pollinator-Friendly Plants
Another way to help is to cultivate a bee-friendly garden. Bees are at their busiest right now and they can use all the nectar and pollen they can find! To start, learn about your area’s microclimate. Check out the USDA Plant Hardiness Map to see where plants will thrive in your space. Planting native plants and flowers is key for attracting bees!
Do you want to ONLY eat kale all day every day? Like us, bees love a diversity of food to eat! Planting a variety of plants is a great way to attract many types of pollinators. This resource from the University of California Agriculture and Natural Resources has all you need to know on how to plant a successful bee garden in California! Some fan-favorite plants are lavender, lilacs, clover, honeysuckle, bee balm, aster, buddleia, (aka butterfly flowers, but bumblebees love them too), sunflowers, and poppies. Bees also love easy-to-grow herbs like sage, rosemary, oregano, mint, chives, fennel, and thyme!
Low on space or resources? Even a balcony in a busy urban area is a promising location for flowers! To maximize space and minimize cost, consider plants that have both nectar and pollen such as coneflower, thyme, or blanketflower. Your local pollinators will appreciate whatever you can provide!
Another thing to do is to relax on the weeding! Some weeds do an excellent job of supporting wildlife; lawn clovers and dandelions are quite popular for pollinators. Oh, and remember to stay away from using pesticides!
Find out how to avoid seeds with pesticides and where to acquire seeds here!
Provide Homes for Native/Wild Bees
Honey bees get a lot of media attention, but native or wild bees are struggling quite a lot too. North America has around 4,000 native bee species and, according to the Center for Biological Diversity, 1 in 4 native bee species in North America is currently at risk of extinction! One of the greatest hurdles that native bees face is habitat loss. Honey bees can live with up to 50,000 bees in a hive, whereas almost every native bee species is solitary and prefers to live and nest alone.
You can easily provide a place for native bees to live! Most native bee habitats are in the ground or in other cavities, like a rolled-up leaf or a hollowed-out piece of wood. Leave a small section of your available landscape unmulched for ground-nesting bees! Another great way to help these important creatures is by making a bee house. One popular way to make these is by drilling holes in a block of wood, but make sure to replace them every year, as the bees nest in them by creating mud walls to keep their larvae safe. You could also leave a small area of the garden completely undisturbed to let nature take its course!
The body is never silent. When people come together, they talk to one another – even when they are not speaking. A puffed-out chest is as much a message as a small shift in sitting posture or an open palm, but so are the color of a tie or a subtle perfume.
Facial expression, gesture, posture and movement, spatial relationships, touch and clothing are important means of nonverbal communication – an ancient form of interpersonal understanding. In this way we sort out among ourselves whether we like each other and whether we can trust each other.
The body reveals our true feelings, who we are and what we actually want. Nonverbal messages are often unconscious, and precisely because of that they are so powerful. Without body language, everyday social relationships would be unthinkable.
Researchers have found that 95 percent of the first impression of a person is determined by appearance, clothing, posture, gesture and facial expression, speaking speed, pitch, intonation and dialect – and only five percent by what someone says in words.
And this assessment of a person happens in less than a second. Because bodily behavior is harder to control and master than verbal statements, body language is considered truer and more genuine.
But doesn't that invite many misunderstandings? Is our impression correct? Are our messages unambiguous, and are we understood? Science assumes that certain basic emotions such as anxiety, fear, happiness, sadness, surprise and disgust produce particular nonverbal expressions in all human beings.
Frowning, for example, is regarded as a sign of anger in virtually all human cultures. The smile is used worldwide as a positive signal and a sign of sympathy. The interpretation of such signals is universal too; they are understood everywhere.
But there are also many body signals that have developed culturally and are therefore as open to misunderstanding as the various spoken languages. A posture that is perfectly ordinary for us can cause outrage in other parts of the world. Crossing one's legs, for example, is an insult to many Arabs and Asians, because it exposes the soles of the feet and shoes – and in some cultures these are considered unclean.
Groups of people, societies and cultures develop their own system of nonverbal messages, their own code. Only those who know this code can understand and use it correctly.
So there are body signals that we all understand and use, and others that are specific to a culture or region. In any case, it is helpful to know the possibilities of body language well and to learn to read and use it.
The gaze of the eyes leaves an intense impression, and not only when flirting. When we are looked at, we feel noticed. Looking at someone can mean attention, affection or friendliness. Avoiding eye contact, by contrast, often signals lack of interest, indifference or shame. And staring for too long is usually perceived as intrusive and aggressive.
Eye movement is an important component of what is called facial expression, the term for the expressive movements of the face. It is in the face that we can best read a person's inner states. That is why poker players try, with a rigid expression, to prevent their face from revealing how good or bad their cards are.
Scientists, on the other hand, try to read even the best liars' faces. Researchers in California have intensively studied the tiny, unconscious muscle movements involved in changes of facial expression. Their aim is to establish a clear relationship between the movement of the facial muscles and the underlying feelings of the person.
A fist with the thumb pointing upwards is understood in many parts of the world as a sign of approval. But in some places it is an obscene gesture: in Sardinia, for example, and in parts of West Africa, Colombia and the Middle East.
The same is true of many deliberately formed hand signs. They are part of the communication of a particular culture and can only be understood correctly there.
These conscious gestures, however, make up only part of gesture as a whole, the term for the entirety of our hand movements.
Our hands move more frequently and in more varied ways while we speak. These gestures are mostly unconscious. They reinforce and accompany verbal speech. Even people who believe they keep their hands still underline their words with hand movements.
We even gesture on the telephone. Researchers have found that the brain centers for speech and hand movements are located in the same region, and therefore suspect an almost inevitable connection between word and hand.
Someone who stands firmly has a strong sense of reality, says folk wisdom. And an upright posture is said to show an upright character. Posture, in this view, reveals a person's character traits.
Scientific theory does not go that far, but it too finds a connection between mental and physical states. When we grieve, we slump, our shoulders droop, and we appear drained and withdrawn.
An open posture in the chest and neck area, on the other hand, signals fearlessness and self-confidence. Something similar applies to movements. Leaning forward in a conversation shows attentiveness. Someone who tensely fiddles with their clothes and sits only on the edge of the chair is regarded as insecure.
A person's gait also mirrors their emotional state. Experiments have shown that we can tell whether the person walking in front of us is male or female, and also whether they are walking along cheerfully or sadly.
Postures can also be trained and used deliberately to achieve a particular effect. A man puffs out his chest to appear strong and self-assured. A woman crosses her legs because she wants to appear graceful, and a teenager slouches in a chair to express protest.
"Do not disturb my circles!" Archimedes is said to have shouted this at the advancing Romans, and to have been slain for it. The presence and closeness of another person, up to and including physical contact, have a direct and powerful effect. A slap in the face or a kiss are bodily messages that everyone understands.
We have a fine sense for the right distance to other people, and in a room we instinctively take the spot that feels comfortable to us. When we are forced into closeness, for instance in an elevator, we try to ignore the others and avoid all eye contact.
The sense of distance is culturally shaped. In Japan, for example, a greater distance is felt to be comfortable than in Europe. A Japanese person might therefore find a European intrusive in conversation if the European keeps trying to come a little closer than the Japanese person would like. The European, in turn, may think the Japanese person aloof because they keep stepping back a little.
Cultural differences can also be observed in touch. In Western countries, touching between friends and acquaintances, hugs and kisses on the cheek or mouth have become largely accepted. Nevertheless, Europe is a region where physical contact is rather rare compared with other cultures.
At carnival you can see whole groups of people dressed up as Martians, clowns, witches – or even cola cans. By wearing the same costume, people show that they belong to a club.
Everyday life is no different. Every community or society has a dress code. Before a job interview we think carefully about what to wear. We know how to show mourning through our clothing or how to impress our friends with unusual accessories.
Even those who refuse to conform to prevailing dress norms send a clear message. Every day we decide, consciously or unconsciously, how we want our outward appearance to come across: by wearing make-up, choosing a skirt or trousers, picking the color of a tie or the jewelry we put on.
Dress codes differ greatly between cultures – especially views on how much bare skin may be shown in public. Different parts of the body are also subject to taboo. In many European countries, women appear in public with their hair uncovered, which is unthinkable in countries shaped by Islam.
By contrast, among some African and South American peoples it is still customary today for neither women nor men to cover their upper bodies in everyday life – for example among the Himba in Namibia, the Nyangatom and the Hamar in Ethiopia and the Huaorani in Ecuador – which in turn would cause a scandal in Western countries.
Clothing and jewelry are thus forms of expression within body language that follow cultural conventions like no other means.
Some people have made body language their profession. Pantomime is a very old performing art in which plot and character are expressed solely through facial expression, gesture and movement. Pantomime is documented as an art form in Greece as early as around 400 BC.
Clown performers, too, mostly do without words. Since they want to make people laugh, they usually use body language in an exaggerated way, for instance by pulling faces or stumbling. Charlie Chaplin was one of the most famous wordless performers of the last century.
Another special form of body language is dance. Movement is its means of expression. These professionals have mastered the secrets of nonverbal communication to perfection.
How our body speaks and why we know nothing about it
When we talk to one another, we choose our words carefully. We try to phrase everything we say appropriately: kindly, aggressively or angrily. But something about us speaks much louder, without our counterpart consciously registering it: our body.
Marietta and Ole are sitting facing each other during the lunch break. They are talking about their lessons and what they think of the teacher. Marietta props her right elbow on the table in front of her. She smiles. Ole nods. He is already looking forward to the next lesson. On the surface the two are only talking about school. But anyone who looks more closely will notice a second language: the language of the body. Ole has also propped his elbow on the table, but his left one. He is smiling too, and his upper body is turned towards Marietta. The two are sitting side by side on the gray wooden bench. It almost looks as if Marietta were looking into an imaginary mirror, because Ole mirrors Marietta's posture in many respects. What does that have to do with the conversation? Little to do with its content, but on another level, the nonverbal one, the two are also talking to each other. They are saying: "Hey, I think you're nice. I like you."
Unconscious messages
Body language is not only something we can see. Humans have five senses: hearing, sight, taste, smell and touch. With these senses we perceive the body language of the person opposite us. Everything that is nonverbal, that is, happens without words, counts as body language. Communication between two people takes place on three levels. The seemingly most obvious is the verbal level: the content of what is said. The tonal level concerns the how: how I say something. On the nonverbal level our body then speaks through facial expression, gesture, posture, clothing and much more. "These three levels have to work as a unit," explains Meike Fabian. She is the deputy head of the Akademie für Darstellende Kunst in Regensburg and, among other things, trains her students in the perception of body language. "Body language already starts with things I can influence myself: my jewelry, my clothes, my make-up," Meike Fabian lists. "My posture, my facial expressions and gestures I can still influence to some extent. But that is already harder." Things that come from within, such as breathing or body odor, are accordingly also part of body language.
The first impression decides
But characteristics that are not in our hands also count as body language. For example: whether I am a man or a woman, whether I am fat or thin. From such things, she says, the other person immediately draws conclusions about a person's life experience. "Everyone tells their story long before they have opened their mouth," is how Meike Fabian puts it.
Andrea Nitzsche confirms this. She is a social pedagogue by training and a body language coach. The first impression forms within the few seconds in which we perceive someone. "That is our instinct, which is still there. In earlier times it was especially important to know immediately whether the person opposite you was a threat or not."
Nitzsche also clears up a common preconception: of course our body can give us away when we are fibbing, but a single sign, such as a hand in front of the mouth, is not enough. According to Andrea Nitzsche, a further sign can be a frozen smile, for example. "Here only the mouth smiles. It has a bit of a baring-teeth quality. With a genuine smile you can see it in the eyes as well. They really light up," the expert explains. Nervous stress blotches or hectic stumbling over words can also point in that direction – but they do not have to.
Den Körper programmieren
Wer nervös ist, neige übrigens auch zu Schattenbewegungen. Es kann sein, dass sich jemand gern die Haare aus dem Gesicht streicht, obwohl sie gar nicht stören. Diese Bewegung gibt demjenigen Sicherheit in einer Situation, in der er sich gerade überfordert fühlt. Das kann bei Referaten in der Schule oder auch beim ersten Date sein. Andrea Nitzsche hat für solche Situationen einen besonderen Tipp: «Mehr ausatmen als einatmen kann helfen, etwas ruhiger zu werden.» Ansonsten helfe es, seinen Körper positiv zu programmieren. «Das geht. Ich muss von dem überzeugt sein, was ich gerade mache. Dann wirkt auch mein Körper souveräner», erklärt Andrea Nitzsche. Will ich also dieses Referat für eine gute Note halten und will ich das für mich selbst, strahlt auch mein Körper mehr Souveränität aus, als wenn ich mir sage: «Hilft ja nicht, da muss ich durch.»
Für besonders Nervöse hat Andrea Nitzsche noch einen Geheimtipp: «Wer seine Lieblingsklamotten anzieht, fühlt sich schon viel wohler. Auch das wirkt auf mein Gegenüber. Außerdem hilft es, sich am Morgen schon seine Lieblingssongs vorzusingen und sich zu sagen: Jetzt geht’s mir gut. Was ich heute mache, ist etwas, wofür es sich lohnt.»
Wer ein wenig aufmerksam ist und auch darauf achtet, was seine Mitmenschen sagen, obwohl sie eigentlich nichts sagen, versteht sein Gegenüber oft besser. Das kann auch bei Streitereien helfen. Aber keine Angst: Völlig durchschaubar werden wir deshalb nicht für andere: Körpersprache wirkt genauso wie Wortsprache und Stimmlage nur als Gesamtpaket. Gedankenlesen können auch Körpersprache-Experten nicht.
So wirkt deine Körpersprache auf andere
Selbstbewusst
Wie viel Platz wir einnehmen, also welchen Anspruch wir auf unser Territorium erheben, zeigt, wie selbstsicher wir sind.
Hier nimmt Marietta viel Platz ein durch die weit auseinanderstehenden Beine, ihren offenen Oberkörper und ihre Hände, die sie in die Hüfte gestemmt hat.
Schüchtern
Hier ist das Gegenteil zu sehen: Marietta nimmt so wenig Platz ein wie nur möglich. Sie verschränkt ihre Arme vor dem Körper, genauso wie ihre Beine. Außerdem hat sie den Kopf leicht eingezogen.
Misstrauisch
Verschränkte Arme, vom anderen abgewandter Oberkörper, hochgezogene Augenbrauen
Die Emotion steckt im Detail und benötigt einen geübten Blick, um decodiert zu werden: Gefühle drücken sich oft in Mimik und Gestik aus. Forschern gibt diese wortlose Sprache Rätsel auf.
Das Lächeln, das die Mundwinkel umspielt, der leicht zurückgeneigte Kopf, die sich unmerklich aufrichtende Haltung des Oberkörpers – es handelt sich um die typischen Ausdrücke von Stolz. Auch Scham entfaltet sich innerhalb von nur vier bis fünf Sekunden, in denen eine Reihe von kleinsten Gesten aufeinanderfolgt: Der Blick wird abgewendet, ein Lachen geht in ein Lächeln und wieder in kontrolliertes Lachen über, der Kopf neigt sich nach unten, die Hände fassen unwillkürlich ins Gesicht.
Für Gestikforscher sind solche Körperreaktionen leicht entschlüsselbar. Die Fragen, die sich an das menschliche Gestikrepertoire anschließen, sind indes mannigfaltig und beschäftigen Neurowissenschaftler, Anthropologen und Linguisten gleichermaßen. Wie entsteht gestische Bedeutung? Wie setzen sich verschiedene Gesten zusammen, um eine Emotion abzubilden? Welche Bedeutung haben Gesten für Alltagskonversationen? Welche Gesten sind erlernt, welche gehören zum Grundrepertoire menschlicher Affekte? Sind sie universell oder unterscheiden sich bestimmte Gesten innerhalb der Kulturen?
Es braucht nicht nur Interdisziplinarität, sondern auch ein ganzes Arsenal an Geisteskraft, diesen Fragen nachzugehen, und so kamen jetzt über 300 Wissenschaftler der „Internationalen Gesellschaft für Gestikforschung“ (ISGS) zu einer einwöchigen Konferenz an der Europa-Universität Viadrina in Frankfurt/Oder zusammen. Unterteilt in mehrere Themenkomplexe (Zeichensprache, Kunst und Film, Neurobiologie oder Kognitionswissenschaft) widmeten sich insgesamt knapp 200 Vorträge den neuesten Ergebnissen der Gestikforschung. „Nach dieser Konferenz wird es schwierig für die Linguisten zu behaupten, dass Sprache nur aus Wörtern besteht. Vielmehr sind komplexe Körpergesten am Prozess der Bedeutungsproduktion mit beteiligt“, resümiert Cornelia Müller, Professorin für Angewandte Sprachwissenschaft an der Viadrina und Herausgeberin der Zeitschrift „Gesture“.
Seit in den frühen 80er Jahren zum ersten Mal eine Gruppe von Berliner Wissenschaftlern Gesten auch von einem linguistischen Standpunkt aus untersuchte und die Freie Universität Berlin im Jahr 2000 unter der Leitung von Müller das „Berlin Gesture Project“ ins Leben rief, hat die Gestikforschung als interdisziplinäres Paradigma par excellence sämtliche Fachbereiche affiziert. Laut Müller hat sich Deutschland international als besonders prominenter Standort für Gestikforschung etabliert. Entsprechend hoch war die Fördersumme der Volkswagen-Stiftung, die das mehrjährig angelegte Projekt „Towards a Grammar of Gesture: Evolution, Brain and Linguistic Structures“ (ToGoG) an der Viadrina mit fast einer Million Euro fördert.
Den Erfolg all dieser Unternehmen sieht Müller nicht zuletzt darin begründet, dass Gesten einerseits ein universales Phänomen sind, also für alle Menschen gleichermaßen Relevanz besitzen. Andererseits seien Gesten auch abhängig von kulturellen Neuentwicklungen, die es zu untersuchen gelte. „Jüngere Kulturtechniken wie das Telefonieren mit dem Handy gehen innerhalb relativ kurzer Zeit in unser Gestenrepertoire über und schaffen neue Codes“, erklärt Müller. Sie hält ihre Faust ans Ohr, Daumen und kleinen Finger abgespreizt, ein Mobiltelefon imitierend – eine Geste, die vor einem Jahrhundert noch unverständlich gewesen wäre.
Zu den Aufgaben der Gestikforschung zählt heute, nicht mehr nur einzelne Gesten auf ihre Bedeutung zu befragen, sondern auch die Wechselwirkung zwischen Sprechakt, Gestik und individueller Körperdisposition zu analysieren. So untersuchten Mary Copple, Mone Welsche und Cornelia Müller vom Exzellenzcluster „Languages of Emotion“ der Freien Universität Berlin das Phänomen der Alexithymie, die sogenannte Gefühlsblindheit: Menschen mit Alexithymie haben Schwierigkeiten, Gefühle adäquat zu beschreiben. „Etwa zehn Prozent der deutschen Bevölkerung ist alexithymisch“, so Copple. „Mithilfe der Gestikforschung wollten wir herausfinden, ob diese Menschen bestimmte Gefühle tatsächlich nicht empfinden, oder ob es sich um ein kognitives Problem handelt, sie zu artikulieren.“ 50 Stunden Videomaterial mit Interviews von 100 Versuchsteilnehmern – die Hälfte davon alexithymisch – sollten Aufschluss über das Auftreten sogenannter Posture-Gesture-Mergers (PGMs) geben, der spontan und intuitiv erfolgenden Verschmelzung von Körperbewegung und Gestik beim Sprechen. „PGMs sind nicht intentional erlernbar, sondern unmittelbare Ausdrücke einer Persönlichkeit, die sich in einem Gesprächsmoment besonders engagiert“, sagte Copple. So beugten sich beispielsweise manche Menschen plötzlich nach vorne, wenn sie etwas ausriefen, oder fielen in sich zusammen, wenn sie verunsichert würden.
Die Analyse des Videomaterials ergab, dass Menschen mit Alexithymie deutlich weniger PGMs produzierten – auffälligerweise jedoch nur dann, wenn sie zu ihren Gefühlen oder emotional besetzten Themen befragt wurden. Sollten sie Fragen aus einem Intelligenztest beantworten, zeigten sie eine normal hohe Anzahl von PGMs. „Das weist darauf hin, dass alexithymische Menschen bei geistiger Arbeit entspannter sind und entsprechend mit einer größeren Selbstverständlichkeit intuitiv gestikulieren“, schlussfolgerte Copple. Bei Alexithymie handele es sich also wahrscheinlich eher um eine kognitive Unzulänglichkeit, Emotionen und deren Ausdruck intuitiv synchronisieren zu können. Daran knüpften sich auch Fragestellungen für zukünftige Forschung: „Wir wollen untersuchen, ob PGMs bei Männern und Frauen unterschiedlich auftreten.“
Die Art und Anzahl der Gesten hängt indes nicht nur vom einzelnen Sprecher ab. Vielmehr müsse auch der kulturelle und sprachliche Raum betrachtet werden, in dem sich jemand bewege, so Tasha Lewis vom Marianopolis College im kanadischen Montreal. Sie stellte die Ergebnisse ihrer Studie vor, in der sie sechs englische Muttersprachler in einem Sprachkurs in Barcelona beobachtet hatte, um herauszufinden, ob sich ihre Gestik verändern würde. Der Erwerb des Spanischen bedeutete auch einen Wechsel der Sprachfamilie, denn Englisch ist eine germanische, Spanisch eine romanische Sprache, in der meist beim Aussprechen des Verbs gestikuliert wird. „Ältere Studien haben behauptet, man behalte sein muttersprachliches Gestikmuster bei Erwerb einer Fremdsprache bei“, so Lewis. Die Auswertung ihres Videomaterials habe jedoch ergeben, dass die Teilnehmer im Verlaufe ihres Sprachkurses ihre Gesten zunehmend gemäß der spanischen Satzstruktur platziert hätten. „Dieses Ergebnis stützt die hohe Bedeutung des Lernens im fremden Land“, bilanzierte Lewis. „Die subtilen Aspekte der Kommunikation, wie Gestik, fördern den umfassenden Erwerb einer Fremdsprache.“
Die nächste Konferenz der ISGS findet 2011 in Lund (Schweden) statt. „Bis dahin wird eine weitere beachtliche Zahl an Publikationen zur Gestikforschung erschienen sein“, so Müller. Vielleicht, so hofft sie, schlage die Gestikforschung auch außerhalb des universitären Rahmens Wellen. Nicht zuletzt für Schauspieler dürfte ein detailliertes Wissen über Geschichte und Funktionsweisen von Gesten außerordentlich interessant sein.
Die Körpersprache ist reich an versteckten Botschaften: Mit Armen und Beinen, Händen und Füßen geben Menschen so manches über sich preis. Ausladende Gesten und Selbstberührungen sind besonders vielsagend.
Team-Meeting: Ein Kollege kratzt sich am Kopf, ein anderer wippt beständig mit den Füßen, und eine Kollegin zwirbelt versonnen eine Haarsträhne um den Finger. Ob mit Händen oder Füßen: In den meisten Fällen laufen solche Bewegungen völlig unbewusst ab. Körpersprache gilt deshalb als echter, unverfälschter und verlässlicher als die gesprochene Sprache. Stimmt das? Und was verraten Gesten wirklich über das Gegenüber?
Lange hielt man die Körpersprache für bloßes Beiwerk. Dass sie einen Grundpfeiler der Kommunikation darstellt, erkannte als einer der Ersten der Psycholinguist David McNeill von der University of Chicago Anfang der 1990er Jahre. Für ihn waren Gesten »in Form gegossene Gedanken«. Wer genau auf sie achte, könne beinahe in die Köpfe hineinsehen, erklärt er in seinem Buch »Hand and Mind«.
Einstudierte Körpersprache hinkt hinterher
Noch bevor sie zu sprechen beginnen, teilen sich Babys mit Gesten mit. Typischerweise zeigen sie schon mit einem Jahr gezielt auf Dinge in ihrer Umgebung. Ob unsere Vorfahren Gesten benutzten, bevor sie sich mit Lauten ausdrückten, oder ob sich beide Formen der Kommunikation im Lauf der Evolution parallel entwickelt haben, ist noch unklar. Gewiss ist hingegen: Auch wenn wir uns längst verbal ausdrücken können, reden wir weiter mit Händen und Füßen. Und das sogar, wenn niemand zuschaut, denn die Bewegungen helfen beim Denken.
Wir betonen damit zum Beispiel, was uns wichtig ist. Etwa mit der Taktstockgeste, die Politiker häufig nutzen, wenn sie eine flammende Rede halten: Daumen und Zeigefinger formen dabei einen Ring, und wie ein Dirigent verleiht der Sprecher dem Gesagten mit dem Auf- und Abschnellen des unsichtbaren Stabs einen Beat. Sind solche Gesten einstudiert, erkennen wir das recht schnell. Sie wirken nicht spontan und hinken dem Gesagten leicht hinterher.
Südländer reden angeblich besonders viel mit den Händen. Doch das stimmt so nicht: Deutsche und Südeuropäer fuchteln beim Reden gleich viel. Der entscheidende Unterschied: »Südeuropäer neigen zu ausladenderen Gesten«, sagt Cornelia Müller von der Europa-Universität Viadrina in Frankfurt an der Oder. Die Professorin für Sprachgebrauch und Multimodale Kommunikation hat die Gestik deutscher und spanischer Sprecher miteinander verglichen: »Nordeuropäer gestikulieren aus dem Handgelenk, Südeuropäer eher aus Schulter und Ellenbogen.« Deswegen spielen sich diese Gesten weiter weg vom Körper auf Kopfhöhe ab, während Deutsche eher verhalten vor der Brust gestikulieren.
Auch das Gegenüber beeinflusst die Gestik. Unbewusst verhalten wir uns zuweilen wie soziale Chamäleons: Wir lehnen uns nach vorne, wenn die andere Person das tut, oder schlagen wie sie die Beine übereinander. Passt sich jemand in seiner Gestik und Körperhaltung auffallend an, so deutet das auf Sympathie hin.
Die Körpersprache lässt aber auf mehr als das schließen. Gesten können verraten, was im Gegenüber gerade vorgeht. Ein Hinweis darauf, dass jemand angespannt, gestresst oder verlegen ist, sind spontane, unbewusste Selbstberührungen. Der Impuls, sich kurz an den Hals, das Kinn, die Nase oder Wange zu fassen, lässt sich nur schwer unterdrücken.
Selbstberührungen wirken beruhigend
Der Psychologe Martin Grunwald vom Haptik-Forschungslabor der Universität Leipzig hat untersucht, warum dieser Impuls vor allem in Stresssituationen auftritt. Er und sein Team gaben Versuchspersonen eine Gedächtnisaufgabe. Während diese sich anstrengten, das Gelernte im Kopf zu behalten, fassten sie sich häufiger ins Gesicht, und die im EEG vor und nach der unbewussten Berührung gemessenen Hirnströme unterschieden sich stark. »Wir erklären diese Veränderungen damit, dass der kurze Berührungsreiz jene Hirnaktivität verstärkt, die für eine Stabilisierung des emotionalen Zustands und eine Stabilisierung des Arbeitsgedächtnisses verantwortlich ist«, sagt Martin Grunwald. Das heißt: Spontane Selbstberührungen helfen offenbar, sich zu beruhigen und zu konzentrieren.
Gesten liefern also Anhaltspunkte zur momentanen Verfassung des Gegenübers. Aber offenbaren sie noch mehr über seine Person? Eine 2021 veröffentlichte Metaanalyse beschäftigte sich mit dieser Frage.
Die Forschungsgruppe um den Psychologen Simon Breil von der Universität Münster analysierte dafür 32 Studien zum Zusammenhang zwischen nonverbalen Signalen und der Persönlichkeit, erhoben mit Fragebogen zu den »Big Five«, den fünf zentralen Persönlichkeitsdimensionen. Zusätzlich erfassten manche Studien noch die Intelligenz. Zu den Merkmalen der Körpersprache zählten Handbewegungen, Haltung, die Breite des Stands und die Schrittlänge. Die große Frage: Spiegelt sich in ihnen der Charakter eines Menschen wider?
Die kurze Antwort: Ja. Den stärksten Zusammenhang fanden die Forschenden für das Merkmal Extraversion. Wer als extravertiert gilt, ist herzlich, gesellig, durchsetzungsfähig, aktiv, abenteuerlustig und fröhlich. Diese Kontaktfreudigkeit sieht man entsprechenden Zeitgenossen offenbar relativ leicht an. Neben einer ausdrucksstarken Mimik, einer lauten Stimme, einem gepflegten und modischen Äußeren wiesen auch eine entspannte, dem Gegenüber zugewandte Haltung und ausholende Gesten auf Extraversion hin.
»Nicht jeder, der gerade wild gestikuliert, ist extravertiert«, stellt Simon Breil klar. »Aber von allen Charaktermerkmalen, die wir uns angeschaut haben, schlug sich Extraversion am stärksten in der Gestik nieder. Wer geselliger ist und gerne auf andere zugeht, gestikuliert tendenziell mehr.« Zudem neigten extravertierte Menschen weniger dazu, sich kleinzumachen oder nervös herumzunesteln. Insgesamt nahmen sie mehr Raum ein und zeigten in der Regel eine entspannte und offene Körpersprache.
Für die anderen Charaktermerkmale fanden sich weniger Hinweise: Verträglichere Menschen machten im Schnitt etwas kleinere Schritte; gewissenhafte berührten sich etwas seltener am Körper und im Gesicht, hatten einen breiteren Stand und eine aufrechtere Haltung. Eine solche Haltung zeugte außerdem auch von Offenheit für neue Erfahrungen. Emotionale Labilität spiegelte sich ähnlich wie Introvertiertheit in einer steiferen Körperhaltung und nervösem Zappeln wider.
Die gefundenen Zusammenhänge waren allerdings nicht sehr groß. »Ja, es gibt Hinweise auf die Validität der Körpersprache im Hinblick auf die Persönlichkeitsdeutung. Die sind aber auf einem sehr, sehr niedrigen Niveau«, sagt Uwe Kanning. Er ist Professor für Wirtschaftspsychologie an der Hochschule Osnabrück und beschäftigt sich kritisch mit unwissenschaftlichen Methoden in der Personalauswahl. Ihm zufolge lässt sich nur ein kleiner Anteil der Persönlichkeitsunterschiede aus der Körpersprache vorhersagen.
»Wenn man einzelne körpersprachliche Merkmale betrachtet, bewegt sich das zwischen null und fünf Prozent. Die höchsten Zusammenhänge findet man für Extraversion. Für Intelligenz zum Beispiel gibt es gar keine«, berichtet Kanning. »Fügt man verschiedene körpersprachliche Merkmale zu einem Gesamtbild zusammen, steigt die Zahl wahrscheinlich maximal auf zehn Prozent«, schätzt er. Das heißt umgekehrt: 90 Prozent der Charakterunterschiede lassen sich nicht aus der Gestik herauslesen.
Die Bedeutung der Körpersprache wird überschätzt
An der Idee, dass sich das Innerste in der Gestik offenbart, ist also durchaus etwas dran – nur eben nicht so viel wie vermutet. »Menschen überschätzen die Bedeutung von Körpersprache«, sagt Simon Breil. »Gerade beim ersten Eindruck, wenn wir noch nichts über die Person wissen, verlassen wir uns stark darauf, etwa beim Dating oder im Bewerbungsprozess.«
Nicht nur was wir sagen, sondern auch das, was in unserer Mimik, im Blickkontakt, in Gestik und Körperbewegung mitschwingt, spiegelt unsere Persönlichkeit wider. Wie wir mithilfe unserer Körpersprache – nicht nur im Vorstellungsgespräch – nonverbale Signale senden und warum sich diese nur schwer steuern lassen, erläutert der Psychologe, Autor und Coach Markus Väth.
Herr Väth, wir kommunizieren, auch wenn wir gerade nichts sagen. Wie das?
Markus Väth: Jeder Mensch sendet neben dem, was er sprachlich mitteilt, bestimmte Signale. Wir sprechen zusätzlich zu inhaltlichen Äußerungen nonverbal mit unserem Körper – durch Mimik, Gestik, Körperhaltung und -bewegung.
Viele haben die Sorge, dass sich ihre Körpersprache – etwa in Vorstellungsgesprächen – negativ auf das Gesagte auswirkt, weil sie mit dem Fuß wippen oder die Arme verschränken. Beides gilt ja als No-Go, oder?
Markus Väth: Man sollte sich nicht zu viele Sorgen darüber machen, wie bestimmte Verhaltensweisen gedeutet werden könnten. Zuschreibungen wie „No-Go“ empfinde ich als problematisch. Da geistert viel Pseudowissen umher – im Internet, aber auch durch Personaler-Köpfe. Es ist schwierig, Körpersignale zu interpretieren, gerade wenn man dem Gesprächspartner das erste Mal gegenübersitzt. So müssen verschränkte Arme nicht zwangsläufig Zurückweisung signalisieren. Ich selbst etwa nehme diese Haltung ein, wenn ich intensiv nachdenke. Das hat nichts mit Abwehr zu tun. Sitzt ein Bewerber beispielsweise etwas schief da, ist das nicht zwingend mangelndem Respekt und Desinteresse geschuldet, sondern kann einfach nur bedeuten, dass das Hotelbett unbequem war.
Kann man auf seine Körpersprache überhaupt einwirken?
Markus Väth: Körpersprache lässt sich nur äußerst schwer trainieren. Und in Vorstellungsgesprächen schaltet der Stress einstudierte Körpersprache oft schlicht aus. Daher ist es schwierig, seine nonverbale Kommunikation bewusst zu beeinflussen.
Man kann sich also positiv wirkende Signale nicht antrainieren?
Markus Väth: Klar kann man versuchen, Gestik und Mimik gezielt einzusetzen – verbal auf den Gesprächspartner einzugehen und gleichzeitig all das Nichtgesagte, das nebenher mitschwingt, zu kontrollieren und zu steuern, erfordert jedoch jahrelanges konsequentes Üben. Sonst wirkt es schnell künstlich und wenig überzeugend. Es dauert, bis sich solche Verhaltensweisen einschleifen und in Situationen, in denen wir unter Druck stehen, abgerufen werden können.
Also darf die Mimik Ihrer Meinung nach auch mal entgleisen und das Lächeln verrutschen?
Markus Väth: Meiner Meinung nach ja. Ein eingefrorenes, angespanntes Passfotolächeln wirkt wenig authentisch. Da lächelt nur der Mund, die Augen jedoch nicht, das bleibt dem Gesprächspartner nicht verborgen und verwirrt eher. Ein Funke springt so nicht über.
Und wie verhält es sich mit nervösem Zappeln oder wildem Gestikulieren?
Markus Väth: Gesten unterstreichen ja im besten Fall das Gesagte. Nimmt das Herumfuchteln und Zappeln jedoch überhand, kann es helfen, die Bewegung zu kanalisieren. Zum Beispiel indem man einen Stift in den Händen hält.
Und was wollen Sie jungen Menschen sonst noch mitgeben, die vor ihrem ersten Vorstellungsgespräch stehen?
Markus Väth: Seid einfach, wie ihr seid. Viel wichtiger als einstudierte körpersprachliche Verhaltensweisen sind die Grundregeln der Höflichkeit. Ein Händedruck zur Begrüßung, dem Gegenüber dabei in die Augen schauen – das kann man in der Familie oder im Supermarkt üben – und sich auf einen kurzen Smalltalk einlassen ist die halbe Miete für einen gelungenen Gesprächsbeginn. Das beste Mittel, die Körpersprache zu verbessern, ist, voller Selbstvertrauen in das Gespräch zu gehen. Wenn man von seinen Fähigkeiten überzeugt ist, dann strahlt man auch leichter Souveränität aus.
1. Her most famous novel, Frankenstein, is widely considered the first science fiction novel. Brian Aldiss certainly thinks so. It’s worth mentioning here that two other leading science (fiction) writers, Carl Sagan and Isaac Asimov, argued that the honour of ‘first science-fiction novel’ should go to a much earlier book: Johannes Kepler’s Somnium (‘The Dream’), first published in 1634. But Frankenstein: Or, the Modern Prometheus (Wordsworth Classics) is considered the first work of what we can confidently label modern SF. It was published in 1818, when Shelley (1797-1851) was just 20, and came out of the famous ghost-story competition at Lake Geneva, which involved Shelley and her husband (the poet, Percy), Lord Byron, and Byron’s physician and travelling companion, John Polidori. Polidori’s contribution, The Vampyre (1819), claims the honour of the first vampire novel. One of Mary Shelley’s early influences was one of Samuel Taylor Coleridge’s poems: on 24 August 1806, Coleridge was visiting Mary’s father, William Godwin, and gave a reading of his poem ‘The Rime of the Ancient Mariner’. Unbeknownst to the adults, a nine-year-old Mary Shelley had concealed herself behind the parlour sofa, and was transfixed by Coleridge’s poem.
2. The ultimate ‘message’ of her most famous book is often missed. Mary Shelley’s Frankenstein may be one of the most misread novels in the whole of English literature. What is the book about? The dangers of playing God or the need to be good parents? Shelley herself came from a strong family but also an unconventional one: her mother was influential feminist philosopher Mary Wollstonecraft, and her father the radical writer William Godwin. Mary’s mother died a few weeks after her daughter’s birth and Mary had an overly dependent, and sometimes strained, relationship with her father. Then there is her relationship with her husband, Percy Shelley, who is often seen as the model for Victor Frankenstein. (Curiously, Mary’s second novel, Mathilda (1820), would feature a father confessing incestuous desire for his daughter, followed by his death by drowning, thus prefiguring Percy Shelley’s death two years later. Wordsworth Classics recently brought out a cheap reprint of this story along with some other Mary Shelley works: Mathilda and Other Stories (Wordsworth Classics).)
3. As well as inventing modern SF with Frankenstein, Mary Shelley also wrote the first work of modern apocalyptic fiction. Mary Shelley’s favourite among her own books was a later novel, The Last Man (Wordsworth Classics), published in 1826. It tells of a future world where plague has killed off the human population – with, ultimately, one exception. There is, as the title suggests, only one human survivor, Lionel Verney. (There are in fact a number of other characters in the novel: Lionel only becomes the last man right at the end of the narrative.) The book is the progenitor of all later stories in this vein, such as Richard Matheson’s I Am Legend and Cormac McCarthy’s The Road.
4. Shelley also wrote historical novels later in her career. In 1830, Mary Shelley published The Fortunes of Perkin Warbeck, about the fifteenth-century pretender to the throne during Henry VII’s reign. Mary was also a prolific writer of biographical and historical non-fiction, and wrote large portions of the Lives of the Most Eminent Literary and Scientific Men, a ten-volume sequence in a much bigger 133-volume encyclopedia, the Cabinet Cyclopaedia. Shelley continued writing until she died in 1851, probably of a brain tumour, aged just 53.
5. Frankenstein was Shelley’s first novel, but not the first book she published. In 1817, a year before her most famous novel appeared, Mary Shelley and her husband Percy published History of a Six Weeks’ Tour through a part of France, Switzerland, Germany, and Holland; with Letters Descriptive of a Sail Round the Lake of Geneva and of the Glaciers of Chamouni which … well, the title gives a pretty detailed account of its contents. But we’ll add that the volume also included Percy’s celebrated poem ‘Mont Blanc’, and that besides this the book was largely Mary’s work, meaning it should take the mantle as her first book.
10 Surprising Facts About Mary Shelley’s Frankenstein
1. Frankenstein was written by a teenager.
Mary Shelley’s teenage years were eventful, to say the least. At age 16, she ran away with the poet Percy Bysshe Shelley. Over the next two years, she gave birth to two children. In 1816, the couple traveled to Switzerland and visited Lord Byron at Villa Diodati. While there, 18-year-old Mary started Frankenstein. It was published in 1818, when she was 20 years old.
2. The novel came out of a ghost story competition.
The Shelleys visited Switzerland during the “year without a summer.” The eruption of Mount Tambora in modern Indonesia had caused severe climate abnormalities and a lot of rain. Stuck inside, the group read ghost stories from the book Fantasmagoriana. It was then that Lord Byron proposed that they have a competition to see who could come up with the best ghost story: Byron, Mary, Percy, or the physician John Polidori.
In the end, neither Byron nor Percy finished a ghost story, although Polidori later wrote The Vampyre—which influences vampire stories to this day—based on Byron’s offering.
3. Mary Shelley said she got the idea from a dream.
At first, Mary had writer’s block, unable to come up with a good idea for a ghost story. Then she had a waking dream—“I did not sleep, nor could I be said to think,” she said. In the introduction to the 1831 edition of Frankenstein [PDF], she described the vision as follows:
“I saw the pale student of unhallowed arts kneeling beside the thing he had put together. I saw the hideous phantasm of a man stretched out, and then, on the working of some powerful engine, show signs of life. … He sleeps; but he is awakened; he opens his eyes; behold, the horrid thing stands at his bedside, opening his curtains, and looking on him with yellow, watery, but speculative eyes.”
Mary opened her eyes and realized she’d found her story. “What terrified me will terrify others,” she thought. She began working on it the next day.
4. Mary Shelley wrote Frankenstein in the shadow of tragedy.
Before she started Frankenstein, Mary gave birth to a daughter, who died just days later. (In fact, only one of Mary’s four children lived to adulthood.) Soon after the baby died, she wrote in her journal, “Dream that my little baby came to life again—that it had only been cold & that we rubbed it by the fire & it lived—I awake & find no baby—I think about the little thing all day.” This circumstance, as well as the suicide of her half-sister, must have contributed to the novel.
5. Frankenstein was the name of the scientist, not the monster.
In the novel, Victor Frankenstein is the scientist. The monster remains unnamed and is referred to as “monster,” “creature,” “dæmon,” and “it.” But if you’ve made the mistake of calling the monster Frankenstein, you’re not alone. As early as 1890, The Scots Observer complained that Frankenstein “presented the common pressman with one of his most beloved blunders”—confusing the two.
6. The novel shares its name with a castle.
Mary made up the name Frankenstein. However, Frankenstein is a German name that means Stone of the Franks. What’s more, historian Radu Florescu claimed that the Shelleys visited Castle Frankenstein on a journey up the Rhine River. While there, they must have learned about an unbalanced alchemist named Konrad Dippel, who used to live in the castle. He was trying to create an elixir, called Dippel’s Oil, which would make people live for over a hundred years. Like Victor Frankenstein, Dippel was rumored to dig up graves and experiment on the bodies. Not all historians are convinced there’s a link, however, pointing out that there’s no indication Frankenstein had a castle in the novel, and that Shelley never mentioned visiting the castle herself in any of her writing about her trip up the Rhine.
7. Many thought Percy Shelley wrote Frankenstein.
Frankenstein was first published anonymously. It was dedicated to William Godwin, Mary’s father, and Percy Shelley wrote the preface. Because of these connections, many assumed that Percy Shelley was the author. This myth continued even after Frankenstein was reprinted in Mary’s name. In fact, some people are still arguing that Percy authored the book. While he edited the book and encouraged Mary to expand the story into a novel, actual authorship is a stretch.
8. Frankenstein was originally slammed by critics.
When Frankenstein came out in 1818, many critics bashed it. “What a tissue of horrible and disgusting absurdity this work presents,” John Croker, of the Quarterly Review, wrote. But gothic novels were all the rage, and Frankenstein soon gained readers. In 1823, a play titled “Presumption; or, The Fate of Frankenstein” cemented the story’s popularity. In 1831, a new version of the book was published, this time under Mary’s name.
9. Frankenstein is widely considered the first science fiction novel.
With Frankenstein, Shelley was writing the first major science fiction novel, as well as inventing the concept of the “mad scientist” and helping establish what would become horror fiction. The influence of the book in popular culture is so huge that the term Frankenstein has entered common speech to mean something unnatural and horrendous.
Mary went on to write other science fiction, such as her short story Roger Dodsworth: The Reanimated Englishman, about a man who has been frozen in ice, and, from the same year, her novel The Last Man, about a survivor in a world destroyed by plague.
10. Thomas Edison adapted Frankenstein for film.
In 1910, Thomas Edison’s studio made a one-reel, 15-minute film of Frankenstein, one of the first horror movies ever made. It was thought lost until it was rediscovered in the 1980s.
Frankenstein: how Mary Shelley’s sci-fi classic offers lessons for us today about the dangers of playing God
Frankenstein, or, The Modern Prometheus, is an 1818 novel by Mary Wollstonecraft Shelley. Set in the late 18th century, it follows scientist Victor Frankenstein’s creation of life and the terrible events that are precipitated by his abandonment of his creation. It is a Gothic novel in that it combines supernatural elements with horror, death and an exploration of the darker aspects of the psyche.
It also provides a complex critique of Christianity. But most significantly, as one of the first works of science-fiction, it explores the dangers of humans pursuing new technologies and becoming God-like.
The celebrity story
Shelley’s Frankenstein is at the heart of what might be the greatest celebrity story of all time. Shelley was born in 1797. Her mother, Mary Wollstonecraft, author of the landmark A Vindication of the Rights of Women (1792), was, according to that book’s introduction, “the first major feminist”.
Shelley’s father was William Godwin, political philosopher and founder of “philosophical anarchism” – he was anti-government in the moment that the great democracies of France and the United States were being born. When she was 16, Shelley eloped with radical poet Percy Shelley, whose Ozymandias (1818) is still regularly quoted (“Look on my Works, ye Mighty, and despair!”).
Their relationship seems to epitomise the Romantic era itself. It was crossed with outside love interests, illegitimate children, suicides, debt, wondering and wandering. And it ultimately came to an early end in 1822 when Percy Shelley drowned, his small boat lost in a storm off the Italian coast. The Shelleys also had a close association with the poet Lord Byron, and it is this association that brings us to Frankenstein.
In 1816 the Shelleys visited Switzerland, staying on the shores of Lake Geneva, where they were Byron’s neighbours. As Mary Shelley tells it, they had all been reading ghost stories, including Coleridge’s Christabel (Coleridge had visited her father at the family house when Shelley was young), when Byron suggested that they each write a ghost story. Thus 18-year-old Shelley began to write Frankenstein.
The myth of the monster
The popular imagination has taken Frankenstein and run with it. The monster “Frankenstein”, originally “Frankenstein’s monster”, is as integral to Western culture as the characters and tropes from Lewis Carroll’s Alice’s Adventures in Wonderland.
But while reasonable continuity remains between Carroll’s Alice and its subsequent reimaginings, much has been changed and lost in the translation from Shelley’s novel into the many versions that are rooted in the popular imagination.
There have been many varied adaptations, from Edward Scissorhands to The Rocky Horror Picture Show (see here for a top 20 list of Frankenstein films). But despite the variety, it’s hard not to think of the “monster” as a zombie-like implacable menace, as we see in the trailer to the 1931 movie, or a lumbering fool, as seen in the Herman Munster incarnation. Further, when we add the prefix “franken” it’s usually with disdain; consider “frankenfoods”, which refers to genetically modified foods, or “frankenhouses”, which describes contemporary architectural monstrosities or bad renovations.
However, in Shelley’s novel, Frankenstein’s creation is far from being two-dimensional or contemptible. To use the motto of the Tyrell Corporation, which, in the 1982 movie Blade Runner, creates synthetic life, the creature strikes us as being “more human than human”. Indeed, despite their dissimilarities, the replicant Roy Batty in Blade Runner reproduces Frankenstein’s creature’s intense humanity.
Some key elements in the plot
The story of Victor Frankenstein is nested within the story of scientist-explorer Robert Walton. For both men, the quest for knowledge is mingled with fanatical ambition. The novel begins towards the end of the story, with Walton, who is trying to sail to the North Pole, rescuing Frankenstein from sea ice. Frankenstein is being led northwards by his creation towards a final confrontation.
The central moment in the novel is when Frankenstein brings his creation to life, only to be immediately repulsed by it:
I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.
Victor Frankenstein, like others in the novel, is appalled by the appearance of his creation. He flees the creature and it vanishes. After a hiatus of two years, the creature begins to murder people close to Frankenstein. And when Frankenstein reneges on his promise to create a female partner for his creature, it murders his closest friend and then, on Frankenstein’s wedding night, his wife.
More human than human
The real interest of the novel lies not in the murders or the pursuit, but in the creature’s accounts of what drove him to murder. After the creature murders Frankenstein’s little brother, William, Frankenstein seeks solace in the Alps – in sublime nature. There, the creature comes upon Frankenstein and eloquently and poignantly relates his story.
We learn that the creature spent a year secretly living in an outhouse attached to a hut occupied by the recently impoverished De Lacey family. As he became self-aware, the creature reflected that, “To be a great and virtuous man appeared the highest honour that can befall a sensitive being.” But when he eventually attempted to reveal himself to the family to gain their companionship, he was brutally driven from them. The creature was filled with rage. He says, “I could … have glutted myself with their shrieks and misery.” More human than human.
After Victor Frankenstein dies aboard Walton’s ship, Walton has a final encounter with the creature, as it looms over Frankenstein’s body. To the corpse, the creature says:
“Oh Frankenstein! Generous and self-devoted being! What does it avail that I now ask thee to pardon me? I, who irretrievably destroyed thee by destroying all thou lovedst.”
The creature goes on to make several grand and tragic pronouncements to Walton. “My heart was fashioned to be susceptible of love and sympathy; and when wrenched by misery to vice and hatred, it did not endure the violence of the change, without torture such as you cannot even imagine.” And shortly after, about the murder of Frankenstein’s wife, the creature says: “I knew that I was preparing for myself a deadly torture; but I was the slave, not the master, of an impulse, which I detested, yet could not disobey.”
These remarks encourage us to ponder some of the weightiest questions we can ask about the human condition:
What is it that drives humans to commit horrible acts? Are human hearts, like the creature’s, fashioned for ‘love and sympathy’, and when such things are withheld or taken from us, do we attempt to salve the wound by hurting others? And if so, what is the psychological mechanism that makes this occur?
And what is the relationship between free will and horrible acts? We cannot help but think that the creature remains innocent – that he is the slave, not the master. But then what about the rest of us?
The rule of law generally blames individuals for their crimes – and perhaps this is necessary for a society to function. Yet I suspect the rule of law misses something vital. Epictetus, the stoic philosopher, considered such questions millennia ago. He asked:
What grounds do we have for being angry with anyone? We use labels like ‘thief’ and ‘robber’… but what do these words mean? They merely signify that people are confused about what is good and what is bad.
Unintended consequences
Victor Frankenstein creates life only to abandon it. An unsympathetic interpretation of Christianity might see something similar in God’s relationship with humanity. Yet the novel itself does not easily support this reading; like much great art, its strength lies in its ambivalence and complexity. At one point, the creature says to Frankenstein: “Remember, that I am thy creature; I ought to be thy Adam, but I am rather the fallen angel, whom thou drivest from joy for no misdeed.” These and other remarks complicate any simplistic interpretation.
In fact, the ambivalence of the novel’s religious critique supports its primary concern: the problem of technology allowing humans to become God-like. The subtitle of Frankenstein is “The Modern Prometheus”. In the Greek myth, Prometheus steals fire – a technology – from the gods and gives it to humanity, for which he is punished. In this myth and many other stories, technology and knowledge are double-edged. Adam and Eve eat the apple of knowledge in the Garden of Eden and are ejected from paradise. In 2001: A Space Odyssey, humanity is born when the first tool is used – a tool that augments humanity’s ability to be violent.
The novel’s subtitle refers to Kant’s 1755 essay, “The Modern Prometheus”. In it, Kant observes:
There is such a thing as right taste in natural science, which knows how to distinguish the wild extravagances of unbridled curiosity from cautious judgements of reasonable credibility. From the Prometheus of recent times Mr. Franklin, who wanted to disarm the thunder, down to the man who wants to extinguish the fire in the workshop of Vulcanus, all these endeavors result in the humiliating reminder that Man never can be anything more than a man.
Victor Frankenstein, who suffered from an unbridled curiosity, says something similar:
A human being in perfection ought always to preserve a calm and peaceful mind … If the study to which you apply yourself has a tendency to weaken your affections, and to destroy your taste for those simple pleasures in which no alloy can possibly mix, then that study is certainly unlawful, that is to say, not befitting the human mind.
And also: “Learn from me … how dangerous is the acquirement of knowledge and how much happier that man is who believes his native town to be the world, than he who aspires to become greater than his nature will allow.”
In sum: be careful what knowledge you pursue, and how you pursue it. Beware playing God.
Alas, history reveals the quixotic nature of Shelley’s and Kant’s warnings. There always seems to be a scientist somewhere whose dubious ambitions are given free rein. And beyond this, there is always the problem of the unintended consequences of our discoveries. Since Shelley’s time, we have created numerous things that we fear or loathe, such as the atomic bomb, cigarettes and other drugs, chemicals such as DDT, and so on. And as our powers in the realms of genetics and artificial intelligence grow, we may yet create something that loathes us.
It all reminds me of sociobiologist Edward O. Wilson’s relatively recent (2009) remark that, “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology.”
One night during the strangely cool and wet summer of 1816, a group of friends gathered in the Villa Diodati on the shores of Lake Geneva. “We will each write a ghost story,” Lord Byron announced to the others, who included Byron’s doctor John Polidori, Percy Shelley and the 18-year-old Mary Wollstonecraft Godwin.
“I busied myself to think of a story,” Mary wrote. “One which would speak to the mysterious fears of our nature and awaken thrilling horror.” Her tale became a novel, published two years later as ‘Frankenstein, or The Modern Prometheus’, the story of a young natural philosophy student, who, burning with crazed ambition, brings a body to life but rejects his horrifying ‘creature’ in fear and disgust.
Frankenstein is simultaneously the first science-fiction novel, a Gothic horror, a tragic romance and a parable all sewn into one towering body. Its two central tragedies – one of overreaching and the dangers of ‘playing God’, the other of parental abandonment and societal rejection – are as relevant today as ever.
Are there any characters more powerfully cemented in the popular imagination? The two archetypes Mary Shelley brought to life, the ‘creature’ and the overambitious or ‘mad scientist’, lurched and ranted their way off the page and on to stage and screen, electrifying theatre and filmgoers as two of the lynchpins, not just of the horror genre, but of cinema itself.
Frankenstein spawned interpretations and parodies that reach from the very origins of the moving image in Thomas Edison’s horrifying 1910 short film, through Hollywood’s Universal Pictures and Britain’s Hammer series, to The Rocky Horror Picture Show – and it foreshadowed others, such as 2001: A Space Odyssey. There are Italian and Japanese Frankensteins and a Blaxploitation film, Blackenstein; Mel Brooks, Kenneth Branagh and Tim Burton all have their own takes. The characters or themes appear in or have inspired comic books, video games, spin-off novels, TV series and songs by artists as diverse as Ice Cube, Metallica and T’Pau: “It was a flight on the wings of a young girl’s dreams/ That flew too far away/ And we could make the monster live again…”
As a parable, the novel has been used as an argument both for and against slavery and revolution, vivisection and the Empire, and as a dialogue between history and progress, religion and atheism. The prefix ‘Franken-’ thrives in the modern lexicon as a byword for any anxiety about science, scientists and the human body, and has been used to shape worries about the atomic bomb, GM crops, strange foods and stem cell research, and to both characterise and assuage fears about AI. In the two centuries since she wrote it, Mary’s tale, in the words of Bobby Pickett’s comedy song, Monster Mash, has truly been “a graveyard smash” that “caught on in a flash”.
‘Mysterious fears of our nature’
“All them scientists – they’re all alike. They say they’re working for us but what they really want is to rule the world!” – Young Frankenstein (Mel Brooks, 1974).
Why was Mary’s vision of ‘science gone wrong’ so ripe a vessel to carry our fears? She certainly captured the zeitgeist: the early 19th Century teetered on the brink of the modern age, and although the term ‘science’ existed, a ‘scientist’ didn’t. Great change brings fear, as Fiona Sampson, author of a new biography of Mary Shelley, tells BBC Culture: “With modernity – with the sense that humans are what there is, comes a sense of anxiety about what humans can do and particularly an anxiety about science and technology.” Frankenstein fused these contemporary concerns about the possibilities of science with fiction for the very first time – with electrifying results. Far from an outrageous fantasy, the novel imagined what could happen if people – and in particular overreaching or unhinged scientists – went too far.
Several points of popular 19th Century intellectual discourse appear in the novel. We know from Mary Shelley’s writings that in that Villa Diodati tableau of 1816, Shelley and Byron discussed the ‘principle of life’. Contemporary debates raged on the nature of humanity and whether it was possible to raise the dead. In the book’s 1831 preface, Mary Shelley noted ‘galvanism’ as an influence, referring to Luigi Galvani’s experiments using electric currents to make frogs’ legs twitch. Galvani’s nephew Giovanni Aldini would go further in 1803, using a newly-dead murderer as his subject. Many of the doctors and thinkers at the heart of these debates – such as the chemist Sir Humphry Davy – were connected to Mary’s father, the pre-eminent intellectual William Godwin, who himself had developed principles warning of the dangers and moral implications of ‘overreaching’.
Despite these nuggets of contemporary thought, though, there’s little in the way of tangible theory, method, or scientific paraphernalia in Frankenstein. The climactic moment of creation is described simply: “With an anxiety that almost amounted to agony, I collected the instruments of life around me, that I might infuse a spark of being into the lifeless thing that lay at my feet.” The ‘science’ of the book is rooted in its time and yet timeless. It is so vague, therefore, as to provide an immediate linguistic and visual reference point for moments of great change and fear.
Monster mash-up
But surely the reason we turn to Frankenstein when expressing an anxiety about science is down to the impression the ‘monster’ and ‘mad scientist’ have had on our collective brains. How did this happen? Just as the science is vague in the book, so is the description of the creature as he comes to life. The moment is distilled into a single, bloodcurdling image:
“It was already one in the morning; the rain pattered dismally against the panes, and my candle was nearly burnt out, when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs.”
With his ‘yellow skin’, ‘watery eyes’, ‘shrivelled complexion’ and ‘straight black lips’ the creature is far from the beautiful ideal Frankenstein intended. This spare but resonant prose proved irresistible to theatre and later film-makers and their audiences, as Christopher Frayling notes in his book, Frankenstein: The First Two Hundred Years. The shocking novel became a scandalous play – and of course, a huge hit, first in Britain and then abroad. These early plays, Frayling argues, “set the tone for future dramatisations”. They condensed the story into basic archetypes, adding many of the most memorable elements audiences would recognise today, including the comical lab assistant, the line “It lives!” and a bad-brained monster who doesn’t speak.
It’s a double-edged sword that the monstrous success of Hollywood’s vision (James Whale’s 1931 film for Universal starring Boris Karloff as the creature) in many ways secured the story’s longevity but obscured Shelley’s version of it. “Frankenstein [the film] created the definitive movie image of the mad scientist, and in the process launched a thousand imitations,” Frayling writes. “It fused a domesticated form of Expressionism, overacting, an irreverent adaptation of an acknowledged classic, European actors and visualisers – and the American carnival tradition – to create an American genre. It began to look as though Hollywood had actually invented Frankenstein.”
Making a myth
And so, a movie legend was born. Although Hollywood may have cherry-picked from Mary Shelley to cement its version of the story, it’s clear she also borrowed from historical myths to create her own. The subtitle of Frankenstein, ‘The Modern Prometheus’, namechecks the figure of ancient Greek and Latin mythology who variously steals fire from the gods and gives it to man (or makes a man out of clay) and represents the dangers of overreaching. But the other great myth of the novel is of God and Adam, and a quote from Paradise Lost appears in the epigraph to Frankenstein: “Did I request thee, Maker, from my clay / To mould me man?”. And it is above all the creature’s tragedy – and his humanity – that in his cinematic transformation into a mute but terrifying monster, has been forgotten.
Shelley gave him a voice and a literary education in order to express his thoughts and desires (he is one of three narrators in the book). Like The Tempest’s Caliban, to whom Shakespeare gives a poetic and poignant speech, the creature’s lament is haunting: “Remember that I am thy creature; I ought to be thy Adam, but I am rather the fallen angel, whom thou drivest from joy for no misdeed. Everywhere I see bliss, from which I alone am irrevocably excluded. I was benevolent and good; misery made me a fiend. Make me happy, and I shall again be virtuous.”
If we think of the creature as a badly made and unattractive human, his tragedy deepens. His first, catastrophic rejection is by his creator (man, God), which Christopher Frayling calls “that post-partum moment”, and is often identified as a parental abandonment. If you consider that Mary Shelley had lost her mother Mary Wollstonecraft at her own birth, had just buried her baby girl and was looking after her pregnant step-sister as she was writing the book – which took exactly nine months to complete – the relevance of birth (and death) makes even more sense. The baby/creature is alienated further as society recoils from him; he is made good, but it is the rejection that creates his murderous revenge. As an allegory of our responsibility to children, outsiders, or those who don’t conform to conventional ideals of beauty, there isn’t a stronger one.
“The way that we sometimes identify with Frankenstein, as we’ve all taken risks, we’ve all had hubristic moments, and partly with the creature; they are both aspects of ourselves – all our selves,” Fiona Sampson says, “they both speak to us about being human. And that’s incredibly powerful.”
Some modern interpretations, such as Nick Dear’s 2011 play (directed by Danny Boyle for the National Theatre), have highlighted the question of who is the monster and who is the victim, with the lead actors Jonny Lee Miller and Benedict Cumberbatch alternating roles each night. And in this shapeshifting context, it’s fitting that the creature is widely mistaken as ‘Frankenstein’, rather than his creator.
So could a new, cinematic version of Frankenstein be on the cards? One which brings together the creature’s humanity, the mirroring of man and monster and contemporary anxieties? Just like the Romantics, we edge towards a new modern age, but this time, of AI, which brings its own raft of fears and moral quandaries. A clutch of recent films and TV shows have channelled Frankenstein, exploring what it means to be human in the context of robotics and AI – Blade Runner, Ex Machina, AI, Her, Humans and Westworld among them. But there is one film director (rumoured to have been developing the story for a while) who might be able to recapture the creature’s lament as a parable for our time.
Collecting a Bafta for a different sci-fi monster fable, The Shape of Water, this year, Guillermo del Toro thanked Mary Shelley, because “she picked up the plight of Caliban and she gave weight to the burden of Prometheus, and she gave voice to the voiceless and presence to the invisible, and she showed me that sometimes to talk about monsters, we need to fabricate monsters of our own, and parables do that for us”.
When the then-Mary Godwin thought up her chilling parable that summer of 1816, she couldn’t have imagined how far it would go to shape culture and society, science and fear, well into the 21st Century. “And now, once again, I bid my hideous progeny go forth and prosper,” she wrote in the preface to the 1831 edition. The creator and creature, parent and child, the writer and her story – they went forth, and did they prosper? Two hundred years since its publication, Mary Shelley’s Frankenstein is no longer just a tale of “thrilling horror” but its own myth, sent out into the world.
Félix Rodríguez de la Fuente (Poza de la Sal, Burgos, 14 de marzo de 1928 – Shaktoolik, Alaska, Estados Unidos, 14 de marzo de 1980) fue un famoso naturalista y divulgador ambientalista español, pionero en el país en la defensa de la naturaleza, y realizador de documentales para radio y televisión, destacando entre ellos la exitosa e influyente serie El Hombre y la Tierra (1974-1980). Licenciado en medicina por la Universidad de Valladolid y autodidacta en biología, fue un personaje polifacético de gran carisma cuya influencia ha perdurado a pesar del paso de los años. Su saber abarcó campos como la cetrería y la etología, destacando en el estudio y convivencia con lobos. Estuvo casado con Marcelle Geneviève Parmentier Lepied. Rodríguez de la Fuente ejerció además como expedicionario, guía de safaris fotográficos en África, conferenciante y escritor, además de contribuir en gran medida a la concienciación ecológica de España en una época en la que el país todavía no contaba con un movimiento de defensa de la naturaleza. Su repercusión no fue sólo a nivel nacional, sino también internacional, y se calcula que sus series de televisión, emitidas en numerosos países y plenamente vigentes hoy en día, han sido vistas por varios cientos de millones de personas. Murió en Alaska, Estados Unidos, junto con dos colaboradores y el piloto al accidentarse la aeronave que los transportaba mientras realizaban una filmación aérea para uno de sus documentales.
Biografía
Primeros años
Félix Samuel Rodríguez de la Fuente nació en el número 18 de la calle Mayor de Poza de la Sal, en la provincia de Burgos, el 14 de marzo de 1928, hijo de Samuel Rodríguez y Marcelina de la Fuente Ibáñez; tuvo una hermana menor, Mercedes. Su padre era notario de profesión, gran aficionado a la lectura y amante del castellano, por lo que en la casa se respiraba un ambiente intelectual. Debido a la Guerra Civil (1936-1939) y a que su padre no era partidario de una escolarización demasiado temprana, se ocupó él mismo de educar a sus hijos en casa, por lo que las incursiones de Félix en la naturaleza fueron continuas hasta los diez años, siendo marcado por una naturaleza virgen apenas hollada por el hombre. Él mismo describiría su lugar de nacimiento como una «comunidad humana» en «convivencia armónica con los paisajes» que configuraron su «universo zoomórfico». En este ambiente maduró sus experiencias infantiles, que repercutirían en su sensibilidad y pensamiento para crear en el futuro sus hipótesis y propuestas tanto biológicas y antropológicas como de corte filosófico que se reflejarían en su obra divulgativa. Veraneaba habitualmente en Santander (Cantabria), ciudad en la que llegó a ejercer profesionalmente su padre. La afición de Félix por la naturaleza le lleva a convertirse en un gran conocedor de la zoología y en una de sus excursiones campestres, al observar cómo un halcón captura un pato, comienza su afición por la cetrería.
Comenzó su educación reglada en 1938 en los Sagrados Corazonistas de Vitoria como interno, época que vivió con añoranza por la libertad perdida. En 1946, por consejo de su padre, que aunque siempre respetó su afición por la naturaleza desconfiaba de sus inclinaciones naturalistas, comienza a estudiar medicina en la Universidad de Valladolid. El primer año, atraído por la libertad recuperada tras sus siete años en el internado y las nuevas experiencias que le brindaba la ciudad, no fue un buen estudiante y sólo aprobó las tres asignaturas más fáciles. En años posteriores, solía encerrarse un mes antes de los exámenes para estudiar y desde el principio destacó en las pruebas orales dada su facilidad de palabra, logrando así las más altas calificaciones. Fue un gran aficionado al deporte, logrando ganar el campeonato universitario de 400 metros lisos. En esta época causa una gran influencia sobre él el biólogo José Antonio Valverde, que alcanzaría una enorme repercusión internacional a finales de los años cincuenta al enfrentarse a los planes del Ministerio de Agricultura para desecar las marismas del Guadalquivir, lo que llevaría a la creación en la zona del Parque de Doñana. Además, Valverde compartía su pasión por la cetrería, arte que por aquel entonces llevaba siglo y medio sin practicarse en España, pero que Félix se propuso recuperar con la ayuda de los escritos medievales sobre el tema, especialmente el Libro de la caza de las aves, de Pero López de Ayala, y el Libro de la caça, de Don Juan Manuel.
En 1954 es uno de los firmantes del acta de fundación de la Sociedad Española de Ornitología. En 1957 se gradúa en estomatología en Madrid, consiguiendo el Premio Extraordinario Landete Aragó, nombrado en honor del pionero de la especialidad en España. Durante dos años ejerce como odontólogo en la madrileña clínica del doctor Baldomero Sol, aunque siempre a media jornada para poder seguir dedicándose diariamente a la cetrería.
Sin embargo, en 1960, tras fallecer su padre, abandona el oficio de dentista para dedicarse definitivamente a la cetrería y a la divulgación científica. En 1961 trabaja como asesor de cetrería en la película El Cid, rodada en España. En 1964, gracias a sus cada vez mayores contactos internacionales con científicos de toda Europa, presenta en el Congreso Internacional para la Protección de las Aves de Presa, celebrado en Caen (Francia), un estudio sobre la situación del halcón peregrino en España. Publica su primer libro, El arte de la cetrería.
Salto a la fama
En 1962 el Gobierno español le encarga capturar dos halcones peregrinos para ofrecérselos como regalo al rey Saud de Arabia Saudita, y viaja a ese país para entregárselos. En octubre de 1964 organiza las Jornadas Internacionales de Cetrería, que por vez primera se celebraban en España, en la provincia de Guadalajara. El diario ABC le dedica su foto de portada del 21 de octubre con el título de «Cetrero Mayor del Reino», en la que aparece lanzando a Durandal, una hembra de halcón entrenada por él. Al resultar ésta ganadora de la competición unos días después, es invitado a un programa de Televisión Española, donde comenzaría a ser conocido y admirado por el gran público. Félix entró en los estudios con un halcón en el puño enguantado y, aunque se trataba de una simple entrevista de tres minutos para explicar los rudimentos de la cetrería, demostró ante la audiencia sus amplísimos conocimientos con tal pasión y oratoria que más tarde el popular periodista Joaquín Soler Serrano pidió para él un puesto en la Real Academia de la Lengua por ser «el español de mejor prosodia». A los pocos días se recibieron centenares de cartas solicitando nuevas intervenciones suyas en pantalla, empezando así a colaborar en el programa Fin de semana, donde, en un breve espacio de unos cinco minutos, cada dos semanas, habla de caza, pesca, excursionismo y temas relacionados con los animales en general. Su colaboración en ese programa dura cuatro años. En 1966 se inicia en TVE el espacio Televisión Escolar y Félix es encargado de la clase de Zoología, presentado como Félix, el amigo de los animales, con el que se transforma en un personaje tremendamente popular. Es el primero que empieza a hablar de la fauna y la flora del país por sus valores intrínsecos, al margen del valor económico, el único que primaba entonces. Además, en una época de desarrollismo industrial, consigue conectar con una población trabajadora que está emigrando desde el campo a la ciudad y que siente como cercanas, por haberlas vivido en carne propia, las experiencias y conocimientos que Félix les transmite. Y todo ello adornado con una oratoria magistral y con un sentido del tiempo televisivo, ajustándose siempre, en intervenciones improvisadas y que se emitían en directo, a los pocos minutos de que disponía, pero logrando acabar siempre con la frase apropiada para mantener vivo el interés del espectador. Consigue rodar su primer documental, Señores del espacio, dedicado a la cetrería y realizado gracias al apoyo del rey de Arabia Saudí y de varios aristócratas. El 5 de agosto de 1966 se casa con la francesa Marcelle Geneviève Parmentier Lepied (París, 1937),6 con la que tendría tres hijas: María de las Mercedes Geneviève (1967), Leticia Jimena (1969) y Odile Patricia (1973).7 Continúa sus colaboraciones en varios programas de televisión, como Imágenes para saber (1966) y A toda plana (1967), donde muestra su interés por los pueblos indígenas. En 1966 consigue la protección en España del halcón peregrino y de las rapaces nocturnas, lo que convierte al país en un referente, pues es el primero en que se aprueba una normativa de este tipo. En 1967 comienza a escribir artículos en la revista Blanco y Negro, dominical del diario ABC, englobados bajo los epígrafes de Serie ibérica (1967) y Serie africana (1968), que consiguen aumentar en gran medida la tirada de la revista.
También redacta en esta época cuatro entusiastas series en La Actualidad Española, revista que amplía su tirada con dichos artículos, y comienza su etapa de viajes y expediciones. Estos éxitos le permiten dedicarse a otra de sus pasiones, el estudio de los lobos, para lo que, tras obtener en 1965 dos lobeznos que salvó de morir apaleados en un pueblo, los crio ayudado por su mujer y consiguió convertirse en el lobo alfa (jefe), lo que luego repetiría con varias manadas en los montes aledaños al barranco del río Dulce, en la localidad de Pelegrina (cerca de Sigüenza, Guadalajara). Comienza así a divulgar lo que consideraba «la verdad del lobo», en una época en que era un animal perseguido y acosado por considerársele enemigo del hombre y, concretamente, de la ganadería y las especies cinegéticas. Sus estudios sobre este mítico animal profundizaron en la etología de esta especie, que, como el hombre, fue un cazador social que compartió con los humanos la cúspide trófica durante los últimos 100.000 años de la era del Paleolítico, pacto entre estos dos depredadores que llevó hace 30.000 años a la domesticación del lobo, que se transformó en perro, y más tarde a la enemistad del hombre con los cánidos salvajes cuando los humanos domesticaron a los herbívoros para entrar en el Neolítico hace 10.000 años. Esta original visión del pasado y del futuro de Félix, junto con su rebeldía personal a ser domesticado por la sociedad actual, es el eje de su última biografía, publicada en el XXX aniversario de su muerte por la editorial La Esfera de los Libros bajo el título Félix Rodríguez de la Fuente, su vida, mensaje de futuro. En ella se recogen muchos documentos de este divulgador de las Ciencias Naturales y «agitador de conciencias», que profundizan en estas sugerentes ideas. En 1966 dirige y presenta la película Alas y garras, de la que también es guionista, que cosechará varios premios, como el Arquero de Bronce del Festival de Cine de Gijón. Mayor oportunidad de expandir su mensaje de aproximación a la naturaleza le llega en 1968, cuando los directivos de TVE le encargan la responsabilidad de ponerse al frente de un programa propio, Fauna. Ese mismo año se le encarga, gracias a sus conocimientos en cetrería, un plan inédito en España: utilizar aves rapaces para el control de las aves potencialmente peligrosas en los aeropuertos. El éxito cosechado por el programa de televisión Fauna no impide que, al coincidir esa cabecera con la de la enciclopedia del mismo nombre, que empieza a publicar en Salvat, los directivos de TVE le cambien el nombre del programa, de forma unilateral y sin consultarlo con el autor, por el de Animalia (1970), que pocos capítulos más tarde Félix logró cambiar por el de Vida Salvaje (1970). Aunque Félix se interesaba especialmente por la educación de los niños, consiguió llegar a todos los públicos.
Reconocimiento mundial
Entre 1970 y 1974 realiza la primera de sus grandes series que le darían reconocimiento mundial, especialmente en el ámbito hispanohablante, Planeta Azul. En diciembre de 1973 comienza su colaboración en la radio con el programa La aventura de la Vida, que se emitiría semanalmente, todos los jueves, durante los siguientes siete años, alcanzando más de 350 emisiones. Para la radio también colaboraría con Planeta agua y Objetivo: salvar la naturaleza. Paralelamente, en estos años se entrega a diversas causas conservacionistas de relevancia, como el salvamento de distintas especies animales en peligro de extinción, muy especialmente el lobo, que probablemente le debe su supervivencia en la península ibérica, al contrario de la mayoría de países de Europa Occidental, donde sí se ha extinguido, y para el que consiguió el respeto y el aprecio por parte de la sociedad, de manera similar a como lo había conseguido años antes con las aves rapaces, aunque a costa del enfrentamiento con pastores y cazadores. Otros animales que se esforzó en proteger fueron el oso ibérico, el lince, el águila real o el águila imperial. También trabajó en la preservación de diferentes ámbitos de la geografía española, como las dunas de El Saler, el Parque de Doñana, las Tablas de Daimiel, el Monte del Pardo o la laguna de Gallocanta. Además, durante toda la década, emprende diversos proyectos editoriales, como la coordinación de la Enciclopedia Salvat de la Fauna (1970-73), realizada con un equipo de jóvenes biólogos entre los que se encontraban Miguel Delibes, Javier Castroviejo, Cosme Morillo y Carlos Vallecillo, entre otros. La enciclopedia supuso un verdadero reto, ya que durante tres años se publicó un fascículo semanal de 24 páginas, vendiéndose sólo en España dieciocho millones de volúmenes. Posteriormente sería traducida a catorce idiomas y publicada en los cinco continentes, transformándose en una obra de referencia (Delibes recordaría años después haber visto la enciclopedia entre los libros técnicos de la mayoría de los museos de ciencias naturales de Europa). También publicó la Enciclopedia Salvat de la Fauna ibérica y europea, coordinada por Joaquín Araujo, Los libros de El Hombre y la Tierra, Los cuadernos de Campo y la enciclopedia La aventura de la vida, publicada tras su fallecimiento. Entre 1973 y 1980 realiza para televisión la que sin duda es su serie más famosa, El Hombre y la Tierra, dividida en tres partes: las series ibérica, suramericana y norteamericana. La serie ibérica constó de tres partes y de una cuarta inconclusa. La serie suramericana se filmó en 1973 en Venezuela, en Los Llanos, el Orinoco y el Amazonas, y aunque en principio sólo se iban a rodar ocho capítulos, se ampliaron finalmente a dieciocho. Por último, de la serie norteamericana sólo se pudo filmar la parte canadiense y dos capítulos en Alaska. El rodaje de la serie, que abarcó 124 capítulos, la mayoría rodados en España, supuso todo un reto, ya que se rodó en 35 milímetros, para lo que se tenían que transportar los pesados equipos de filmación de la época. También es de destacar su sintonía, compuesta por Antón García Abril. La serie se convirtió en un referente mundial y filmó algunos animales por primera vez, como el desmán de los Pirineos.
Utilizando animales troquelados (acostumbrados a la presencia humana pero que conservan sus pautas naturales de comportamiento, es decir, que no han sido domesticados8), se consiguieron imágenes impactantes que dieron la vuelta al mundo, entre las que cabe destacar la caza de diversos animales por parte de las manadas de lobos de las que Félix era el jefe o, quizá la más espectacular y recordada, la caza de un muflón por un águila real. La serie se emitió en numerosos países con gran éxito de audiencia y cosechó premios tanto en España (Ondas, Antena de Oro) como en el extranjero (Festival de Televisión de Montecarlo). La serie llegó a venderse incluso más allá del telón de acero. Es de destacar que la serie se hacía sin guion y Félix improvisaba el desarrollo de cada capítulo. En abril de 1980, el Ayuntamiento de Burgos le otorga la Medalla de Oro de la Ciudad a título póstumo.9
Muerte
El 4 de marzo de 1980, ante los reyes de España, Félix presentó en el Centro Cultural de la Villa de Madrid un documento titulado Estrategia mundial para la conservación de los recursos vivos y el logro de un desarrollo sostenido, propuesta de la Unión Internacional para la Conservación de la Naturaleza y de los Recursos Naturales. El día 10 se trasladó junto con un equipo de El Hombre y la Tierra a Alaska, al círculo polar ártico, para filmar la «Iditarod Trail Sled Dog Race», la carrera de trineo con perros más importante del mundo. Para ello contrataron los servicios de un piloto llamado Tony Oney y de su socio, Warren Dobson, cuyo hijo llegaría a ser comandante de aviación y a contraer matrimonio, casualmente, con una piloto española. Aunque la mayor parte del equipo viajaba habitualmente en la avioneta de Oney, una pequeña Cessna, ésta sufre una pequeña pérdida de aceite y Félix, que tenía miedo a volar, decide cambiar de aparato, y comenta poco antes de montar «qué lugar más hermoso para morir». Tras despegar de Unalakleet, las dos avionetas vuelan casi juntas y poco después, la que pilota Dobson se estrella como consecuencia del desprendimiento de uno de los hidropatines, que desequilibró el aparato. Este volteo no pudo ser corregido por la baja altura de vuelo del rodaje. Quizá la experiencia del piloto hubiera podido salvar el contratiempo de haber sucedido a mayor distancia del suelo. Con él fallecen, además de Félix y Dobson, el cámara de Televisión Española Teodoro Roa y el ayudante Alberto Mariano Huéscar. Oney aterriza y es el primero en alcanzar la avioneta siniestrada. El lugar exacto de la catástrofe fue Shaktoolik, población de esquimales a unos 25 kilómetros de la costa del mar de Bering, no lejos de Klondike, lugar adorado por Félix desde sus adolescentes lecturas de Jack London. La policía de Alaska recogió los cadáveres, que fueron depositados en la morgue de Nome, desde donde fueron repatriados a España. Según una persona de Nome que ayudaba a los españoles a transportar sus cámaras y sus utensilios, Rodríguez de la Fuente había estado ligeramente enfermo a principios de semana a consecuencia de fuertes dolores de muelas pero doce horas antes de su fallecimiento se hallaba en plena forma y haciendo planes para dos nuevas filmaciones, una sobre los albatros de Cordova, localidad de Alaska, y otra sobre las islas Aleutianas. Durante su estancia en Norteamérica, Rodríguez de la Fuente y su equipo habían conseguido una gran popularidad en la región canadiense de Yukón, en las ciudades de Dawson City, Whitehorse y Yellowknife; y en Alaska en Nome, Anchorage y Fairbanks. El principal diario del estado publicó en portada con grandes letras «Adiós a nuestro Jack London español». El accidente, según consta en el registro de accidentes de aviación norteamericano, ocurrió exactamente a las 12.30 del 14 de marzo de 1980 hora local de Alaska. En aquel momento existían 11 horas de diferencia con España. Por tanto, el accidente se produjo cuando eran las 23.30 del 14 de marzo en España. La noticia del accidente fue dada a conocer en España unas horas después, a primera hora de la mañana del 15 de marzo, por lo que a veces se cree equivocadamente que murió ese día, aunque en realidad fue el 14, precisamente el día en que cumplía 52 años. Su muerte conmocionó el país. Félix fue enterrado en el sencillo cementerio de su localidad natal de Poza de la Sal (Burgos) en un acto multitudinario el miércoles día 19 de marzo de 1980 sobre las 3 y media de la tarde. 
En junio de 1981, y por iniciativa de su viuda, Marcelle Parmentier, sus restos mortales fueron exhumados para ser trasladados al cementerio de Burgos, donde descansan desde entonces en un panteón realizado por el arquitecto Miguel Fisac, junto con una escultura en su memoria, obra del artista Pablo Serrano. El polémico traslado al cementerio de Burgos se realizó durante la madrugada para evitar enfrentamientos con los habitantes y autoridades de Poza de la Sal, que se oponían frontalmente a que los restos del famoso naturalista fueran alejados de su lugar de nacimiento.
Su legado
Filosofía y propuestas
La filosofía de Félix Rodríguez de la Fuente parte de una visión de la vida según la cual los seres vivos, mediante la evolución, se perfeccionan, embellecen y adaptan. Pero no incluye necesariamente al ser humano moderno, pues Félix cree que el hombre ideal y feliz es el de la cultura de los cazadores superiores del Magdaleniense (Paleolítico superior, de hace 15.000 años, el de quienes pintaron la cueva de Altamira), dotado ecológica, artística, biológica e incluso comunitaria y culturalmente, en armonía con la naturaleza que le da todo y de la que es parte. Según sus palabras, «la entrada del neolítico es la del abuso y la del sojuzgamiento, y en ella seguimos, inadaptados». Su prédica constante propone no regresar a un pasado paleolítico imposible, pero sí incorporar a la actualidad sus elementos perdidos «positivos», que fueron los que nos dieron forma y que, por tanto, en el fondo de nuestra especie anhelamos. Inspirado en autores como Teilhard de Chardin y Remi Chauvin, llegó a concebir un mundo futuro donde el hombre viviría en armonía con la naturaleza y consigo mismo tras alcanzar la capacidad de una comunicación instantánea y universal en la que la palabra actuaría como una feromona capaz de transmitir el conocimiento y, por ser un conocimiento empático, una suerte de argamasa cultural que permitiera superar las deficiencias actuales. La creación de un pensamiento colectivo (del que la Wikipedia es un buen exponente), el aumento del tiempo libre y la promulgación de espacios naturales protegidos eran, en su opinión, una esperanza de la humanidad para superar los retos ambientales e incluso sociales a los que nos enfrentamos. Su filosofía podría resumirse en un humanismo vitalista que produzca individuos sanos, a ser posible destacados, que mejoren la sociedad mediante una capacitación constante. Parte de esta mejora es el recuerdo y la reivindicación de la Naturaleza tal como fue. Para Félix el ser humano no es una especie más, sino una síntesis de la naturaleza, con todo lo peor y todo lo mejor de ella, creada «con las nieblas del amanecer, con el aullido del lobo, el rugido del león», en una estrecha y «compleja trama palpitante» muy interdependiente y frágil. Félix cree que la compartimentación moderna es enemiga del necesario cooperativismo y aboga por la vida en comunidades de menos de 5.000 habitantes. Contra la dispersión por egoísmos, Félix propuso el orden y el cumplimiento de las leyes. En su prédica se siente solidario con el mundo en que vive y pide a todos el compromiso, siendo un profundo rompedor de tópicos. Pero muchos de los mitos no comprobados que propone rechazar no son los del pensamiento animista, «mítico» o «infantil», sino los impulsados artificialmente por los intereses creados por nuestra sociedad mercadotécnica, biocida y separada de la naturaleza y del empirismo. Profundo amante de la ciencia, de la investigación, de los progresos culturales y del conocimiento, cree que la unión de las ciencias nos hará regresar a la base de los postulados del pensamiento animista: una tierra viviente sentida, comprendida y vivida como comunidad por el hombre, el Hombre y la Tierra (título de su mejor programa televisivo). En este contexto cabría entender su pasión por la cetrería.
Para él, este tipo de caza, al ejercerse con animales (halcones, azores y águilas) «sacados de la propia evolución natural», no suponía el engaño, el artificio ni la suprema ventaja con respecto a la naturaleza que sí tendría la caza con escopeta, denunciada por él como logro del «neolítico» y, por tanto, de la esclavitud del hombre o de la domesticación del animal («El halcón falla muchas veces el lance y las presas escapan», Félix, el amigo de los animales).
Repercusión
En una época en la que, especialmente en España, no existía aún una clara conciencia ecológica, la influencia del Dr. Félix Rodríguez de la Fuente fue decisiva en la creación de esa conciencia de defensa medioambiental y conservacionista que en sus programas de radio y televisión se hizo cada vez más patente y acuciante. En torno a su artístico y apasionado modo de mostrar la naturaleza se fue creando el denominado «fenómeno Félix», una corriente de amor a la naturaleza y a su defensa que consiguió por ejemplo cambiar la muy criticada política del ICONA, Instituto para la Conservación de la Naturaleza, o acabar con las Juntas de Extinción de Animales Dañinos y Protección a la Caza. Además, ayudó a crear la delegación española del Fondo Mundial para la Vida Silvestre, siendo vicepresidente de Adena y su máximo promotor, auspició la promulgación de Parques Naturales y Nacionales y logró la preservación por ley del halcón peregrino y el lobo. También creó, en 1975, el Refugio de Rapaces de Montejo de la Vega, con la mayor población de buitres leonados de Europa, donde organizó campamentos infantiles por los que pasaron cientos de niños. Este compromiso conservacionista le llevó a la defensa del equilibrio ecológico por encima de cada animal individualmente considerado, lo que le llevó a polémicas que siguen vigentes y a oponerse a las políticas medioambientales de diversas instituciones, como las escuelas de ingenieros de montes y de caminos o el IRYDA, lo que le valió diversas enemistades. También, según la biografía «Félix Rodríguez de la Fuente, el hombre y su obra», su defensa del lobo le acarreó diversas amenazas de muerte. De su gran popularidad dan ejemplo dos datos. Según las encuestas que recoge su biógrafo, Miguel Pou, entre 1971 y 1974 en España se le consideró «el personaje más famoso después de Franco». Según Joaquín Araújo, en una información también recogida por Pou, en 1983 el 70% de los estudiantes de biológicas entrevistados decían hacer la carrera por la influencia de Rodríguez de la Fuente. A nivel internacional, la serie El Hombre y la Tierra se ha visto en los cinco continentes, incluyendo países como la república Popular China, por lo que su mensaje es susceptible de haber llegado a varios cientos de millones de espectadores. La biografía Félix, el amigo de los animales detalla pormenorizadamente su vida. La conciencia planetaria de Félix Rodríguez de la Fuente recoge sus pensamientos y teorías, que engloban sus conocimientos de antropología antigua, así como sus «adelantados avisos e interesantes propuestas (actuales) a la sociedad». En 2010, vio la luz una nueva biografía titulada «Félix Rodríguez de la Fuente, su vida, mensaje de futuro», que recoge buena parte de sus documentos personales, correspondencia y reflexiones vertidas en su programa de Radio Nacional de España, hasta ese momento inéditos, que corroboran su compromiso con la conservación de la naturaleza y la vida desde muy joven.
Críticas
Félix Rodríguez de la Fuente obtuvo críticas por la filmación de la muerte de animales para sus documentales, aunque su intención fuera concienciar a un país de la importancia del equilibrio ecológico. Se valió de la muerte de animales para salvar a las especies y sus ecosistemas. Afirmó: «¿De qué se quejan los ignorantes de la ecología si muestro cómo un águila mata a un chivo para que amen a las águilas, que están en peligro de extinción y nadie las protege, para que al amarlas las salven y defiendan todo su ecosistema?». También fue criticado por su fuerte personalidad, vehemente y apasionada, que le llevaba a exigir a sus colaboradores que mantuvieran el mismo nivel de trabajo que él era capaz de desarrollar. A este respecto cabe recordar que durante los años setenta desarrolló un frenético ritmo de trabajo, dirigiendo y presentando programas de radio y televisión y escribiendo libros, además de involucrarse en un sinfín de causas de defensa de la naturaleza. Tras su muerte se criticó que muchas secuencias de sus documentales, sobre todo de El Hombre y la Tierra, fueron rodadas, principalmente en la hoz de Pelegrina, con animales troquelados (acostumbrados a la presencia humana)8 que mantenía en cautividad, falseando tomas mediante animales inmovilizados o pieles rellenas de paja.10 Sin embargo, los defensores de esta forma de trabajo sostienen que, si no se hubiera hecho así, habría sido imposible conseguir tales imágenes en plena naturaleza y que, en cualquier caso, los animales troquelados no sufrían ningún tipo de daño, atrayendo por contra el interés del público, lo que a la larga suponía su apoyo y concienciación en la defensa del medio ambiente.
Monumentos en su honor
Buena prueba de la influencia de Félix Rodríguez de la Fuente en la sociedad española y del recuerdo que aún perdura de su figura es el gran número de monumentos, placas conmemorativas y parques en su honor distribuidos por toda la geografía de España.11 A su muerte siguió recibiendo premios, trofeos, galardones y condecoraciones hasta un total de más de sesenta y se creó en su honor la Reserva Natural de Cabrera (Baleares). El dúo de música infantil Enrique y Ana le dedicó el tema «Amigo Félix», que se convertiría en un gran éxito.
Veinte curiosidades sobre la vida y la muerte de Félix Rodríguez de la Fuente
¿Sabías que era dentista, odiaba volar, y trabajó como asesor en la película ‘El Cid’?
Muchos son los que conocen al dedillo la obra de Félix Rodríguez de la Fuente, su amor por los animales, su pasión por la cetrería y su conciencia ecológica. Falleció hace hoy 35 años y tanto su figura como su muerte siguen despertando tanta atención e interés como generó su vida. Su historia está repleta de logros, de esfuerzo y de talento. A continuación, os ofrecemos una lista con las veinte curiosidades más interesantes del hombre del que dicen que era el español más conocido de su tiempo después de Franco.
-Su segundo nombre era Samuel, como su padre, que era notario, gran aficionado a la lectura y amante del castellano. No era partidario de la escolarización temprana, por lo que educó él mismo a sus dos hijos: Mercedes y Félix. En el año 1938, ya con diez años, comenzó la educación reglada en los Sagrados Corazonistas de Vitoria.
– Su afición por la naturaleza comenzó en las excursiones campestres que realizaba en su localidad natal, Poza de la Sal. En una de ellas observó como un halcón capturaba un pato, y ahí nació su pasión por la cetrería.
-Gracias a la cetrería, y seguramente a su desparpajo, logró enamorar a la que sería su esposa, la madre de sus tres hijas y una fiel colaboradora, amiga y admiradora. La conoció en una fiesta y, según relató ella misma en una entrevista en ABC, «me dijo que tenía halcones, yo le contesté que no me lo creía. Me llevó a su casa de campo y me los enseñó. Yo no sabía nada de eso, era una mujer de asfalto, una mujer de París».
-Félix Rodríguez de la Fuente se licenció en Medicina por la Universidad de Valladolid. Tal vez la falta de interés académico le llevó a no ser un buen estudiante, pero su inteligencia y su facilidad de palabra le hicieron destacar en las pruebas orales, donde logró las más altas calificaciones. En 1957 se graduó en Estomatología en Madrid y consiguió el Premio Extraordinario Landete Aragó, que lleva el nombre del pionero de esta especialidad en España. Su única incursión en el ámbito laboral dentro de la especialidad que había estudiado fue un trabajo que realizó en 1958 en una consulta odontoestomatológica del doctor Baldomero Sol, primero en prácticas y luego como colaborador. Dejó este empleo al poco de morir su padre.
-Fue un gran aficionado al deporte. Llegó a ganar el campeonato universitario de 400 metros lisos.
-En 1961 trabajó como asesor de cetrería en la película ‘El Cid’, rodada en España y con Charlton Heston y Sofía Loren como protagonistas.
-En 1962 el Gobierno español le encargó capturar dos halcones peregrinos para ofrecérselos como regalo al rey Saud de Arabia Saudita, donde viajó para entregárselos. Este monarca le financiaría años más tarde el rodaje de su primer documental, titulado Señores del espacio y dedicado, cómo no, a la cetrería.
-En 1964, y tras ganar una competición de cetrería, Televisión Española le invita a un programa. La pasión y oratoria que demostró en la entrevista, mientras portaba sobre su puño enguantado al imponente halcón, enamoraron al público y a la cadena, que le contrató para colaborar en el programa Fin de semana.
-Con esta hembra de halcón había sido portada de ABC el 21 de octubre de 1964. Este ave rapaz había obtenido la máxima puntuación en las Jornadas Internacionales de Cetrería, que se celebraron ese año en el coto guadalajareño de Loranca de Tajuña. El burgalés llamó a su halcón Durandal, tal vez para rendir un homenaje a la espada de Roldán, paladín y sobrino de Carlomagno. En El Bierzo existe la leyenda de que la espada de Roldán se encuentra en el lago de Carucedo, cerca de Las Médulas. Otra versión apunta a que el caballero leonés Bernardo del Carpio, tras vencer a Roldán, se quedó con la espada con la que sería enterrado a su muerte en Peña Longa, en la localidad palentina de Aguilar de Campoo.
-Fue un defensor a ultranza del lobo. En 1965 consiguió dos lobeznos a los que salvó de morir apaleados en un pueblo. Los crio y estudió ayudado por su mujer. Los llamaron Rómulo y Remo y, según relata su viuda, «con ellos aprendí a ser madre, porque les daba el biberón cada dos horas. Fueron mis primeros hijos». Félix intentó que este animal dejara de ser visto como un enemigo natural del hombre y de la ganadería.
-El alcalde de Poza de la Sal levantó en el municipio una estatua en el lugar donde el naturalista observaba los halcones. «No te preocupes», le dijo, «porque aunque te mueras en alguna de tus aventuras ya tienes aquí una estatua que ha de permanecer siempre. Ya te puedes lanzar a todas tus aventuras, puedes dormir tranquilo».
-Odiaba volar y se cambió de avioneta porque en la que tenía previsto rodar la carrera de trineos tirados por perros más importante de Alaska había sufrido una pérdida de aceite. Antes de subir al aparato que finalmente sufrió el accidente dicen que comentó en alto: «Qué lugar más bello para morir».
-La noticia de su muerte la dio en Televisión Española Isabel Tenaille, presentadora en ese momento del programa Siete días.
-Su viuda aseguró días más tarde que antes de salir de viaje hacia Alaska, Félix le había firmado un poder. «Fue la primera vez que hizo algo así. Yo creo que tuvo un presentimiento. Anteriormente se había ido en un sinfín de viajes y nunca había pasado nada parecido», argumentó refiriéndose a la firma del documento.
-Marcelle Parmentier relató pocos días después del accidente que su marido estaba amargado con los problemas económicos que sufría la serie. Incluso aseguró que el burgalés le había comentado que el de Alaska sería el último viaje que acometía y que quería tomarse la vida con más tranquilidad, ir al campo, escribir y pasar más tiempo con su familia.
-Uno de los cámaras que fallecieron al estrellarse la avioneta, concretamente Alberto Mariano Huéscar, ya había sufrido un accidente anterior grabando otro programa con Félix Rodríguez de la Fuente. Estuvo un año de baja.
-Las agencias que informaron de la noticia se confundieron con el nombre del piloto fallecido. Aseguraban que era Peter Lang, de 36 años, cuando su nombre era Warren Dobson, uno de los mejores pilotos de Alaska. Quienes le conocían aseguraban que, tal vez, a una mayor altura se hubiera podido hacer con el aparato, evitando el accidente. También hubo una pequeña confusión con la fecha de la muerte. El accidente se produjo a las 12.30 horas del 14 de marzo, hora local de Alaska, y a las 23.30 en España (existen once horas de diferencia). Unas horas después se dio a conocer la muerte en su país natal. Como ya era un nuevo día, llegó a existir la confusión de que había muerto en la jornada del 15 de marzo.
-En el lugar donde cayó la avioneta se erigió una estatua en su memoria. Sin embargo, años más tarde fue destruida para construir una carretera.
-Teodoro Roa y Alberto Huéscar, los compañeros cámaras de TVE que también fallecieron en el accidente junto a Félix Rodríguez de la Fuente, fueron enterrados el 19 de marzo en Fuencarral. Cuando la comitiva llegó al cementerio, comprobaron que por las normas sanitarias de traslados de cadáveres entre países, los ataúdes eran más grandes de lo normal. Familiares, amigos y compañeros tuvieron que esperar durante más de hora y media, y bajo una lluvia torrencial, a que dos enterradores pudieran agrandar las fosas para poder llevar a cabo el enterramiento.
-El mismo año en el que fallecía Félix Rodríguez de la Fuente, el dúo Enrique y Ana lanzaba una de las canciones más exitosas de su carrera: Mi amigo Félix. Este tema siempre estará relacionado con el naturalista, aunque sin lugar a dudas, la música que puso la banda sonora de su vida fue la sintonía de El hombre y la Tierra, creada por el compositor y músico español Antón García-Abril ; autor, entre otras muchas, de las bandas sonoras de series míticas como Anillos de oro, Fortunata y Jacinta o Ramón y Cajal, y películas como Los santos inocentes, Segunda enseñanza o La ciudad no es para mí.
Félix Rodríguez de la Fuente, el divulgador más mediático
Fue el divulgador medioambiental por excelencia en la España de los años setenta, además de un humanista y un activista que con su gran carisma logró concienciar a la audiencia del deber de proteger y salvar a las especies que se encuentran en peligro de extinción.
Aquel 14 de marzo de 1980, cuando estaba a punto de subirse a la avioneta en Unalakleet (Alaska) para rodar la carrera de trineos con perros más famosa del mundo para la serie El hombre y la Tierra, Félix Rodríguez de la Fuente les dijo a sus compañeros de rodaje que «aquel era un lugar hermoso para morir». Casualmente era el día de su cumpleaños, y a los pocos minutos de levantar el vuelo, y por causas que aún se desconocen, la avioneta se estrelló llevándose la vida de Félix Rodriguez de la Fuente, magistral divulgador del mundo natural y un héroe para muchos niños a los que había transmitido su amor por la naturaleza y su compromiso con el medioambiente.
Comunicador polifacético
Félix Rodríguez de la Fuente fue mucho más que un gran comunicador que convirtió su afición por la naturaleza en una forma de entender la vida. Su capacidad para cambiar la visión que la sociedad española de la década de 1970 tenía de los animales y la naturaleza lo convirtió en un fenómeno mediático. Con su talento, Rodríguez de la Fuente consiguió poner a la fauna ibérica en el foco de atención. Zoólogo, biólogo autodidacta, expedicionario, guía, realizador de documentales y de programas radiofónicos, y escritor, Rodríguez de la Fuente fue posiblemente el primer ecologista español en convencer con su oratoria a cientos de miles de personas.
Rodríguez de la Fuente era una persona cercana que transmitía una gran pasión y seguridad. Su característico timbre de voz, su tono y la forma de acentuar las palabras, sus explicaciones didácticas, sus reflexiones y sus documentales pioneros no han pasado de moda a pesar de los años transcurridos. Pero ni siquiera un personaje legendario como él se ha librado de las críticas. Algunos le acusaron de rodar escenas con animales «troquelados», es decir, acostumbrados a la presencia humana.
Pasión por la cetrería
Licenciado en Medicina por la Universidad de Valladolid, Félix Rodríguez de la Fuente se graduó en Estomatología en Madrid con el Premio Extraordinario Landete Aragó. A pesar de tan alta titulación, Félix apenas trabajó dos años en una consulta y, a la muerte de su padre, lo dejó todo para dedicarse a sus grandes pasiones: la cetrería y la divulgación científica.
Sus conocimientos de cetrería le valieron ejercer como asesor, en 1961, en la película El Cid, rodada en España, con Charlton Heston y Sofía Loren como protagonistas. En 1962, el Gobierno español le encargó capturar dos halcones peregrinos para ofrecérselos como regalo al rey Saud de Arabia Saudí, país al que viajó para entregárselos personalmente. El monarca saudí, a su vez, como agradecimiento le financiaría años más tarde el rodaje de su primer documental, titulado Señores del espacio, dedicado a la cetrería. En 1964, Rodríguez de la Fuente participó en el Congreso Internacional para la Protección de las Aves de Presa celebrado en Caen, Francia, donde presentó un estudio sobre la situación del halcón peregrino en España y también publicó su primer libro, El arte de la cetrería.
El gran amigo del lobo
Su carrera mediática empezó en 1962 en un programa de Televisión Española al que había acudido como invitado. En 1968, directivos de la televisión pública le pusieron al frente del programa Fauna, que más tarde se convertiría en una enciclopedia del mismo nombre. En ese mismo año, sus conocimientos de cetrería lo convirtieron en el encargado de desarrollar un plan inédito en España y que también resultó un éxito: utilizar aves rapaces para controlar a las aves potencialmente peligrosas para los motores de los aviones en los aeropuertos. Entre 1970 y 1972 inició su trayectoria como productor televisivo con el programa Planeta Azul, una serie que le daría reconocimiento y fama mundial. Un año después, en 1973, su inquietud sin límites lo llevó a hacer una incursión en la radio con el programa La aventura de la vida y otros espacios, como Planeta agua y Objetivo: salvar la naturaleza.
El amor de Rodríguez de la Fuente por la naturaleza y los animales fue tal que muy pronto su imagen se asoció con la del lobo ibérico, un animal tan temido como repudiado en las zonas rurales y que con sus programas ayudó a preservar de la extinción. En 1965, Rodríguez de la Fuente había conseguido salvar a dos lobeznos de morir apaleados en un pueblo. El divulgador crio y estudió a los animales, a los que bautizó con los nombres de Rómulo y Remo. En Fauna, Rodríguez de la Fuente dijo lo siguiente de la relación entre lobos y humanos: «Todo parece indicar que hasta la aparición de la agricultura y el pastoreo el hombre y el lobo compartieron el hemisferio Norte sin hacerse una verdadera guerra. El lobo se convirtió en un proscrito, en un animal fuera de la ley, cuando el hombre se hizo agricultor y pastor».
El hombre y la Tierra
Entre 1973 y 1980, Rodríguez de la Fuente realizó su serie más famosa y por la que siempre será reconocido y recordado: El hombre y la Tierra. Una serie que pasará también a la historia de la televisión gracias a su sintonía de tambores compuesta por Antón García Abril, sus imágenes de un sol incandescente que surge por el horizonte y la interacción entre humanos y animales.
La serie, de 124 capítulos, estuvo dividida en tres partes: ibérica, sudamericana y norteamericana. La serie ibérica constó de tres partes y de una cuarta que quedó inacabada. La dedicada a la fauna de Sudamérica se filmó en 1973 en Venezuela, concretamente en Los Llanos, el Orinoco y el Amazonas, y, aunque en principio solo se iban a rodar ocho capítulos, finalmente se rodaron 18. Fue precisamente en el capítulo 3 de la etapa venezolana, titulado Operación anaconda, cuando Félix Rodríguez de la Fuente estuvo a punto de perder la vida durante el rodaje: mientras él y su equipo trasladaban una gigantesca anaconda entre el barro para salvarla, la enorme serpiente se volvió hacia él y a punto estuvo de propinarle un mordisco que habría sido letal. Por último, de la serie norteamericana sólo se pudo filmar la parte canadiense y dos capítulos en Alaska. La serie fue todo un reto cinematográfico debido al peso de los equipos y a la dificultad para transportarlos, al filmarse todos los capítulos en 35 milímetros.
Comprometido con la Naturaleza
A pesar de todas las dificultades, El hombre y la Tierra se convirtió en un referente mundial, y no sólo por su temática y su pedagogía, sino también porque logró captar imágenes de animales que nunca había sido posible conseguir hasta entonces, lo que le valió numerosos premios, tanto en España como en el extranjero. Félix Rodríguez de la Fuente nos acercó como nadie al mundo animal. Filmaba imágenes impactantes de la vida cotidiana del azor, de la hiena, el buitre leonado o el cocodrilo que en esos momentos parecían imposibles.
La popularidad de Félix Rodriguez de la Fuente le permitió comprometerse con todo tipo de causas como la protección y el salvamento de distintas especies animales en peligro de extinción, como el ya mencionado lobo ibérico, que probablemente le debe su supervivencia, pero también se comprometió con el salvamento del oso ibérico, el lince, el águila real, el águila imperial y el muflón. La fauna de nuestro país siempre estará en deuda con él.
Una figura irrepetible. Naturalista, humanista y activista, Félix Rodríguez de la Fuente fue una figura excepcional. Su pasión y carisma personal lo convirtieron en un referente público, en un imán para los más jóvenes y en un maestro de la comunicación capaz de agitar conciencias, de influir en la sociedad de su época y de convencer a los gobiernos de entonces para aprobar nuestras primeras leyes de protección de la naturaleza.
Cuando en 1967 Peter Scott, fundador de WWF, recibió la propuesta de crear en España una delegación de esta organización internacional, recomendó contar con personas reconocidas por la sociedad española y que dominaran los nuevos medios de comunicación, para impulsar rápidamente la organización y la causa de la defensa de la naturaleza.
Rápidamente surgió el nombre de Félix Rodríguez de la Fuente, un joven naturalista conocido como el «Amigo de los Animales» que cautivaba con sus intervenciones a la audiencia del programa Fin de Semana, uno de los más populares de la televisión, y que además publicaba regularmente artículos sobre animales y naturaleza en Blanco y Negro, la influyente revista dominical de ABC.
Pocos meses después, el 30 de julio de 1968, nacía ADENA con Félix Rodríguez de la Fuente como vicepresidente. Ese mismo año Félix recibió el encargo de dirigir Fauna, su propio programa de televisión. Después vendrían Planeta Azul y, en 1973, El Hombre y la Tierra, que le convertirían en una de las personas más populares y queridas de España.
INCANSABLE DEFENSOR DE LA NATURALEZA
El reconocimiento de ADENA creció en paralelo a la popularidad de Félix Rodríguez de la Fuente, llegando a contar con miles de socios por todo el país, algo inaudito en aquellos años.
Con su extraordinaria capacidad de trabajo, Félix se convirtió en un activista integral, capaz de sensibilizar a la sociedad a través de los medios de comunicación y, con ADENA como arma, de denunciar la destrucción de la naturaleza y activar a la sociedad para defenderla en cada rincón de España.
Durante el tiempo en que Félix fue vicepresidente de ADENA, la organización se enfrentó a las políticas del controvertido ICONA y del IRYDA, declaró la guerra al uso masivo de la estricnina y a las Juntas de Extinción de Alimañas y Protección de la Caza, que desde 1953 y organizadas en muchas provincias de España diezmaban por orden gubernativa águilas, buitres, linces, nutrias, lobos y cualquier especie que no fuera considerada útil o cinegética. Finalmente las campañas de denuncia y la presión pública dieron resultado y la Ley de Caza de 1970 introdujo por primera vez el concepto de especie protegida, retiró la recompensa por matar animales “dañinos” y terminó por fin con las juntas provinciales de extinción.
AMIGO DEL LOBO Y DE LAS ESPECIES MALDITAS
Una de las mayores pasiones de Félix fueron las rapaces, pero sin duda era el lobo la especie más odiada y perseguida en la España rural de la época y a la que Félix Rodríguez de la Fuente y ADENA dedicaron mayor esfuerzo. Mientras en televisión contaba “la verdad del lobo”, desde ADENA se hacía campaña hasta convencer al gobierno de que la especie gozara por primera vez de un mínimo estatus legal. Gracias a ello, el lobo, acorralado entonces en el noroeste de la Península Ibérica y en escasos enclaves del sur, no llegó a extinguirse, como ya había ocurrido en casi toda Europa Occidental.
Félix Rodríguez de la Fuente trabajó sin descanso para formar a la nueva generación de jóvenes españoles en el amor y el respeto a la naturaleza y para activarles para defenderla, lo que consiguió gracias a la creación del Club de Linces de ADENA, que en muy poco tiempo contó con la participación entusiasta de miles de niños y niñas y grupos en toda España que se convertían así en vigilantes y protectores de su entorno.
Se dirigía a sus ‘cachorros’ a través de cartas publicadas en la revista de ADENA animándoles a organizarse y a sumarse a la batalla por la vida natural, “… a formar parte de nuestra gran familia, la familia del amor y la protección de las criaturas salvajes, que tiene miembros en todo el mundo, y que un día no muy lejano cambiará la faz de la Tierra…”, “…hacia eso debemos caminar nosotros, queridos linces, esa debe ser nuestra misión, incidir en la Sociedad para que los hombres comprendan que si acaban con la naturaleza acabarán con ellos mismos”.
Además Félix creía firmemente en la necesidad de que los niños y niñas experimentaran el contacto directo con la naturaleza, para lo que ADENA puso en marcha los primeros campamentos de verano en las Hoces del Río Riaza, ejemplo pionero de la educación ambiental, donde chavales venidos de toda España se impregnaban de la vida en el campo y aprendían en directo a conocerla y respetarla. Todos los veranos Félix pasaba unos días con los acampados, haciendo excursiones y compartiendo historias junto a la hoguera.
UNA VISIÓN GLOBAL Y ANTICIPADA A SU TIEMPO
Gracias al estudio y a sus viajes, Félix Rodríguez de la Fuente desarrolló una visión global e integradora del ser humano y la naturaleza adelantada a su tiempo. En un momento en que sólo algunos expertos y entidades como el Club de Roma eran capaces de predecir nuestro impacto sobre la Tierra, él ya hablaba del Planeta Azul y de la Tierra como un solo organismo vivo y alertaba desde ADENA de amenazas a escala global como la contaminación, los pesticidas, la destrucción de los ecosistemas, el consumo desmedido de recursos naturales, la deshumanización de las ciudades o la desaparición del mundo y la cultura rural y el acoso a etnias y pueblos indígenas.
Justo antes de viajar a Alaska, donde se estrellaría su avioneta, Felix presentó en Madrid la primera Estrategia Mundial para la Conservación de la Naturaleza junto con los reyes de España, el presidente Adolfo Suárez y varios ministros de su gobierno para darle la relevancia política que merecía. El documento, elaborado por el PNUMA, UICN y WWF, recogía por primera vez las prioridades y estrategias necesarias para salvar la naturaleza ante la presión creciente de la especie humana, sentando las bases de lo que hoy conocemos como desarrollo sostenible.
Félix Rodríguez de la Fuente: un pensamiento indomable
“Hoy apenas si se escucha ya el canto del lobo”. La legendaria frase de Félix Rodríguez de la Fuente denunciaba la situación de una especie amenazada. Pero iba mucho más allá. Analizada a la luz de su pensamiento, esta frase se refería también a la pérdida casi definitiva de la libertad ancestral del hombre. La libertad de vivir en y con la naturaleza y ser dueño de sí mismo.
Es sabido que nuestro naturalista más internacional dedicó buena parte de sus energías a eliminar la leyenda negra que existía sobre el lobo. Él demostró que no era un animal perverso ni sanguinario, sino que cazaba para subsistir y que su presencia era necesaria para mantener el equilibrio biológico de los ecosistemas naturales. Una frase de Félix expresa claramente su identidad con este animal y resume buena parte de su pensamiento: “Yo quiero ser un lobo y vivir en una tierra no contaminada, con bisontes pastando en las praderas como aquellos que quedaron pintados en la cueva de Altamira; y cantaría a la luna por la felicidad infinita de vivir en un mundo así”.
La idea central del pensamiento de Rodríguez de la Fuente es que no se puede disociar el hombre de la naturaleza. De ahí el nombre de su serie más conocida: El Hombre y la Tierra. Él lo dijo de una forma mucho más poética: “El hombre es síntesis del Universo, el planeta es síntesis del Universo, entre el hombre y la Tierra hay el abrazo profundo, el cordón umbilical irrompible, que puede haber entre el niño y la madre, cuando el niño está en el claustro materno. Si el cordón se rompe, el niño muere, y la propia madre está en peligro”. La defensa de la Vida, el respeto a los demás y a la Naturaleza son las tres líneas argumentales que se repetirán de forma constante en todos sus planteamientos.
EL AMIGO DE TODOS LOS ESPAÑOLES
Rodríguez de la Fuente apareció por primera vez en pantalla en una entrevista a finales de 1964. En ella habló con entusiasmo de la estrecha comunión del hombre con la naturaleza, del sentido cósmico de la vida y de su noción de pertenencia a un todo. Los telespectadores se quedaron enamorados de su vitalidad, su voz y su pasión. Ése fue el inicio de una conexión inseparable entre Félix y los españoles. El público quería saber más sobre ese burgalés y TVE entendió el mensaje. Sus programas ‘Fauna’ en 1968, ‘Planeta Azul’ en 1970 y ‘El Hombre y la Tierra’ en 1973 le convirtieron en el hombre más popular y querido de España.
En estos programas puede entreverse el pensamiento de Rodríguez de la Fuente. Ha dejado para la posteridad cientos de horas de grabación y textos que condensan su visión del mundo. En su biografía, Benigno Varillas realizó un excelente trabajo de compilación y análisis (Félix Rodríguez de la Fuente. Su vida, mensaje de futuro). Félix fue un visionario al anticipar la actual crisis ecológica y criticar un sistema consumista (¡de los años 60 y 70!) que conducía inevitablemente a la insatisfacción permanente. Identificó la raíz de la alienación del hombre actual en el hecho de haberse desgajado de la naturaleza. La senda de la sociedad moderna sólo puede llevar a la avaricia, la banalidad y la pérdida de libertad: “El mundo es espantoso para el ciudadano medio que vive en colmenas, urbes monótonas y horrísonas, calles sucias recibiendo cultura como píldoras y mensajes que no se ha demostrado que sean perfectos. Nuestra era se recordará en un futuro feliz, si es que se llega, con verdadero terror. El hombre tiene necesidad de libertad, del campo, del cielo, de tiempo para no hacer cosas… y aprender e imaginar. Hoy no lo puede hacer”.
UNA VISIÓN ORIGINAL DEL PASADO Y DEL FUTURO
Rodríguez de la Fuente era un espíritu rebelde que no se dejó domesticar por nadie. Recuperó el arte de la cetrería, la caza con halcón, cuando hacía dos siglos que nadie lo hacía. Para ello exploró bibliotecas de toda España y llegó a estudiar textos medievales como el libro de la caza de las aves del canciller López de Ayala y el libro de las aves del príncipe Don Juan Manuel. Muchos vieron en ello una extravagancia folclórica. Pero no lo era. Para él era una conexión con el pasado. Leyendo tomos polvorientos averiguó que la cetrería se había practicado en muchas culturas de la antigüedad. Para nuestro naturalista, la caza con halcón era una de las últimas manifestaciones de la forma de vida libre de los cazadores nómadas, del paraíso que para él representaba el paleolítico.
Afirmaba que “quizás en el pasado se encuentren las claves del futuro”. Los depredadores del cielo le conectaban al misterio de la vida. Al recuperar la alianza entre el hombre y el halcón Félix creía volver a una época olvidada, en la que el hombre había sido libre y feliz. Según nos cuenta su biógrafo, Félix se sentía portador de una antorcha antigua que llevaba mucho tiempo apagada.
La originalidad de su pensamiento llegaba al considerar que el paleolítico había sido la edad de oro de la humanidad. En el neolítico el hombre buscó el asentamiento y la acumulación de bienes y buscó la seguridad aun a costa de perder libertad. El predominio de la caza cedió ante la implantación de la agricultura y la ganadería. “La cultura de los cazadores superiores fue barrida por una poderosa ola, al parecer procedente de Oriente, cuya característica era la modificación de la naturaleza en provecho del hombre”. Lo que el hombre no supo medir es que al domesticar la naturaleza, el hombre acabaría por domesticar al propio hombre.
En la obra de Rodríguez de la Fuente encontramos el embrión de toda una serie de corrientes de pensamiento que están cristalizando en la actualidad. De una forma intuitiva supo enlazar ideas aparentemente antagónicas. Su pensamiento cuestiona los cimientos mismos de una concepción ideológica que entiende el futuro como una huida del pasado y el progreso como una dominación de la naturaleza para maximizar el crecimiento material. Y, si se analiza bien, el productivismo y la carrera tecnológica estaban tan presentes en el bloque occidental como en el soviético. Esta lucidez de ideas, en una época en la que el mundo estaba dividido por un telón de acero, sitúa a Rodríguez de la Fuente, sin pretenderlo, en la vanguardia de una nueva síntesis de pensamiento ecológico.
A él le gustaba considerarse un agitador de conciencias. Y por supuesto que lo fue. No tenía problema en hablar a favor del lobo en una España que todavía era altamente rural. El pueblo demostró que sabía reconocer y apreciar las palabras cargadas de verdad y de nobleza, aunque en ocasiones resultaran incómodas. Félix hablaba al corazón de las personas y sus palabras consiguieron despertar un instinto indómito que parecía dormido. Tal y como señala su biógrafo, Benigno Varillas, “no aprendió de nadie, aprendió de muchos. No fue el clásico seguidor de una escuela o una filosofía muy concreta; era una persona integradora y que intentaba conciliar la izquierda con la derecha y el pasado con el futuro”.
UNA MISIÓN PARA LAS GENERACIONES FUTURAS
Hoy, aniversario de su muerte, es un buen día para recordar a Félix Rodríguez de la Fuente. Nos dejó el 14 de marzo de 1980, el mismo día que cumplía 52 años. Se encontraba en su mejor momento vital y profesional. Un accidente de avioneta segó su vida mientras rodaba un documental sobre una carrera de trineos en Alaska.
En cualquier plataforma de contenidos podemos encontrar muchos de sus reportajes y entrevistas. Una de las cosas que más impresiona al verle hablar es comprobar que hay momentos en los que desconecta de la audiencia adulta y se dirige solo a los niños. Félix quería que su mensaje llegara, sobre todo, a las generaciones futuras. Él sabía que eran los jóvenes del mañana los que podrían traer los cambios que él imaginaba. Por eso, también fue un pionero en el activismo cultural a largo plazo.
Todos conocemos su faceta como naturalista incansable. Pero Félix fue mucho más. Fue un estudioso que recuperó un arte que llevaba dos siglos olvidado. Y un vitalista que aspiró a recuperar una forma de ver el mundo que todos daban por superada. Para Félix no había nada imposible. Solo hacía falta un grupo de indomables que quisieran volver a recuperar la libertad perdida.
Figure incontournable qui a marqué notre planète et ses océans, Jacques-Yves Cousteau et ses équipes ont mené à bien de nombreuses expéditions à bord de la célèbre Calypso, visant à explorer le monde sous-marin, alors très peu connu à cette époque.
Certaines des expéditions de Jacques-Yves Cousteau ont donné lieu à des documentaires visant à sensibiliser le public sur la préservation des océans et des profondeurs marines.
Mais le saviez-vous ?
1. Personnalité
Né le 11 juin 1910 à St André de Cubzac en Gironde, Cousteau entre à l’école navale en 1930, où il se destine à l’origine à une carrière d’aviateur dans la Marine, projet qui prend fin à la suite d’un grave accident de voiture. Aventurier dans l’âme, il se tourne alors vers l’océan. Avec l’aide d’Émile Gagnan et l’invention du « détendeur » de plongée, il perfectionne le principe du scaphandre autonome.
L’origine de son célèbre bonnet rouge, devenu symbole de son personnage, est due aux essais des premiers scaphandres du XIXe siècle. Pour l’occasion, des bagnards étaient désignés volontaires et se voyaient attribuer des couleurs vives et un bonnet rouge pour les différencier des autres marins. En portant un bonnet rouge, Cousteau souhaitait leur rendre hommage.
Le commandant a toujours été une personnalité très appréciée des Français. Le Journal du Dimanche relèvera que Jacques-Yves Cousteau a atteint la première place de son classement des personnalités préférées 20 fois entre 1988 et 1996, devançant ainsi l’Abbé Pierre, 16 fois lauréat.
2. La Calypso
Lorsque le film « Aventure en Mer Rouge » de Hans Hass est primé lors d’un évènement en 1950, Cousteau se lance dans un nouveau projet visant à créer un film sous-marin en couleur. Pour s’en donner les moyens, il entame une campagne de mécénat. C’est à cette occasion que le millionnaire Loël Guinness lui offre, le 19 juillet 1950, un bateau sur lequel il pourra parcourir les océans. Son nom : la Calypso. Avec son navire, Cousteau naviguera près de 40 ans sur une grande partie des mers du globe, ce qui lui permettra de réaliser sa longue série de documentaires « L’Odyssée sous-marine du commandant Cousteau », rendant ainsi l’équipage et la Calypso mondialement connus.
La Calypso coule dans le port de Singapour en 1996 après avoir été heurtée accidentellement par une barge. Renflouée et rapatriée en France pour restauration, elle est laissée à l’abandon plusieurs années en raison d’un litige qui oppose un chantier naval et la Fondation Cousteau. En 2012, elle est classée « bateau d’intérêt patrimonial » par la fondation patrimoine maritime et fluvial. Le 1er avril 2016, la Calypso arrive en Turquie pour sa phase de restauration et de modernisation où les travaux sont toujours en cours.
3. Inventeur et découverte
Le commandant Jacques-Yves Cousteau est à l’origine de nombreuses améliorations et inventions marines. La constante amélioration du scaphandre ainsi que les soucoupes plongeantes portent la griffe Cousteau. Il étudie, avec l’aide du professeur Malavard et de l’ingénieur Charrier, la Turbovoile : un tout nouveau système complémentaire de propulsion éolienne. L’invention est testée en 1985 sur l’Alcyone, qui est aujourd’hui le navire d’expédition de l’Équipe Cousteau. Il est entretenu par des bénévoles dans le port de la ville de Caen en Normandie.
Parmi ses nombreuses découvertes, lors d’une exploration en mer Égée, Jacques-Yves Cousteau retrouve le 3 décembre 1975, l’épave du Britannic : le navire jumeau du Titanic qui avait sombré 59 ans plus tôt. Il s’agit de la plus grande épave de paquebot au monde, découverte à 120 mètres de profondeur. Un an après (le temps d’obtenir l’autorisation des autorités grecques) Cousteau et son équipe plongent à l’intérieur de l’épave. L’exploration est filmée et suivie depuis la surface par l’une des survivantes du naufrage. Certains éléments de l’épave sont remontés à la surface et un documentaire est produit pour la série télévisée « L’Odyssée sous-marine de l’équipe Cousteau ».
4. Écologiste dans l’âme
Défenseur de l’écosystème marin, Cousteau s’engage en 1960 contre l’immersion de deux mille tonnes de déchets radioactifs en méditerranée. Pour défendre la préservation de la vie marine, il créé en 1974 The Cousteau Society : une organisation de droit américain à but non lucratif. En 1981 la Fondation Cousteau, aujourd’hui l’Équipe Cousteau voit le jour.
Ses films documentaires tournés lors de ses expéditions, ont pour but d’attirer l’attention du public et de les sensibiliser sur le désastre écologique dû à la négligence humaine.
Save
Jacques-Yves Cousteau in 1972 // Photo via Wiki Commons by Peters, Hans / Anefo
DID YOU KNOW? COMMANDER JACQUES-YVES COUSTEAU
An iconic figure who left his mark on our planet and its oceans, Jacques-Yves Cousteau, together with his teams, carried out numerous expeditions aboard the famous Calypso to explore an underwater world that was still largely unknown at the time.
Several of Jacques-Yves Cousteau's expeditions gave rise to documentaries intended to raise public awareness of the need to preserve the oceans and the deep sea.
But did you know?
1. Personality
Born on 11 June 1910 in Saint-André-de-Cubzac, in the Gironde, Cousteau entered the naval academy in 1930, originally intending to become a naval aviator, a career that ended after a serious car accident. An adventurer at heart, he turned instead to the ocean. With the help of Émile Gagnan and the invention of the diving regulator (the “détendeur”), he perfected the principle of the self-contained diving apparatus.
His famous red cap, which became the symbol of his public persona, has its origins in the trials of the first diving suits in the 19th century. For those trials, convicts were designated as volunteers and were given bright colours and a red cap to distinguish them from the other sailors. By wearing a red cap, Cousteau wanted to pay tribute to them.
The commander was always a personality much loved by the French. Le Journal du Dimanche noted that Jacques-Yves Cousteau reached first place in its ranking of the public's favourite personalities 20 times between 1988 and 1996, ahead of Abbé Pierre, who topped it 16 times.
2. The Calypso
When Hans Hass's film “Adventure in the Red Sea” (“Aventure en Mer Rouge”) won an award in 1950, Cousteau launched a new project of his own: an underwater film in colour. To give himself the means, he set about finding patrons, and it was on this occasion that the millionaire Loel Guinness gave him, on 19 July 1950, a ship on which he could roam the oceans. Its name: the Calypso. Aboard her, Cousteau sailed a large part of the world's seas for nearly 40 years, producing his long documentary series “The Undersea Odyssey of Commander Cousteau” (“L'Odyssée sous-marine du commandant Cousteau”), which made the crew and the Calypso famous the world over.
The refloated wreck of the Calypso, awaiting a hypothetical museum restoration. Image via Wikipedia.
The Calypso sank in the port of Singapore in 1996 after being accidentally struck by a barge. Refloated and brought back to France for restoration, she was left derelict for several years because of a legal dispute between a shipyard and the Cousteau Foundation. In 2012 she was classed as a “vessel of heritage interest” by the maritime and river heritage foundation. On 1 April 2016 the Calypso arrived in Turkey for restoration and modernisation, work that is still under way.
3. Inventor and discoverer
Commander Jacques-Yves Cousteau was behind numerous marine improvements and inventions. The constant refinement of the diving apparatus, as well as the diving saucers, bear the Cousteau stamp. With the help of Professor Malavard and the engineer Charrier, he studied the Turbosail, an entirely new auxiliary wind-propulsion system. The invention was tested in 1985 on the Alcyone, which today is the expedition vessel of the Équipe Cousteau and is maintained by volunteers in the port of Caen, in Normandy.
Among his many discoveries: during an exploration of the Aegean Sea, Jacques-Yves Cousteau located on 3 December 1975 the wreck of the Britannic, the sister ship of the Titanic, which had sunk 59 years earlier. Lying at a depth of 120 metres, it is the largest ocean-liner wreck in the world. A year later, the time it took to obtain permission from the Greek authorities, Cousteau and his team dived inside the wreck. The exploration was filmed and followed from the surface by one of the survivors of the sinking. Some items from the wreck were brought to the surface, and a documentary was produced for the television series “The Undersea Odyssey of the Cousteau Team”.
4. An ecologist at heart
A defender of the marine ecosystem, Cousteau campaigned in 1960 against the dumping of two thousand tonnes of radioactive waste in the Mediterranean. To defend the preservation of marine life, he created The Cousteau Society in 1974, a non-profit organisation under American law. In 1981 the Cousteau Foundation, today the Équipe Cousteau, came into being.
The documentary films shot during his expeditions were intended to attract the public's attention and raise awareness of the ecological disaster caused by human negligence.
In 1990, with the help of several NGOs, he launched a worldwide petition to save Antarctica from mining. The operation was a success: the decision was taken to protect that untouched land for the following fifty years. On the strength of his many environmental actions, he was an official guest at the United Nations Conference on Environment and Development in Rio de Janeiro in 1992.
On 25 June 1997, at the age of 87, Commander Jacques-Yves Cousteau passed away. Although the man is no longer with us, his message remains as relevant as ever. On his death he entrusted his two organisations, the Équipe Cousteau and The Cousteau Society, with the mission of continuing to pass on his message and raise the world's awareness.
5. Legacy
Today the Cousteau “brand” has practically disappeared. The reason is the family rift between the famous commander's two main heirs: Francine Cousteau, the commander's second wife, and Jean-Michel Cousteau, his eldest son. The war between the two camps quickly took its toll on the former commander's two organisations, whose membership fell sharply, whereas in the 1990s the Cousteau Foundation had nearly 100,000 members in France and the Cousteau Society 250,000. These organisations were a real weapon for defenders of the environment: they had helped save Antarctica in the 1990s.
The true symbol of the shipwreck of the Cousteau legacy remains his famous boat, the Calypso, abandoned for many years at La Rochelle and then at Concarneau during the legal battle between the two sides over the vessel's future.
Commander Cousteau, brilliant inventor and hands-on ecologist
At the Rio summit in 1992, celebrities such as Jane Fonda and Pelé could be seen rubbing shoulders with figures such as the Dalai Lama and George Bush. But when the time came for the official photograph, presidents, prime ministers and leaders from around the world had eyes for only one public figure. As they gathered, they insisted that Jacques-Yves Cousteau join them.
The explorer, oceanographer and conservationist, nicknamed “Captain Planet”, had come to ask those leaders to recognise the right of future generations to enjoy an unpolluted planet, a responsibility the UN eventually took on. Nearly thirty years later, Jacques Cousteau is still remembered for his indelible contributions to the way we see the world and the environment. Here is how he went about putting the environment on the international agenda.
1. HE SHOWED US THE UNDERWATER WORLD
Commander Cousteau is known as one of the pioneers of diving in the mid-20th century. A former naval officer, he gradually turned to ocean exploration. In 1943, while looking for a way to make the depths accessible to independent divers, he modified existing underwater breathing equipment so that divers could stay below the surface longer without being tethered to an air hose running up to a boat.
That is how the diving regulator, or “Aqua-Lung”, was born. It delivers air on demand and at the required pressure. The device freed divers from their boats and let them go and see what was happening beneath the surface, where a vast, untouched world teeming with unknown life awaited them.
Diving made it possible to carry out scientific research, discover new species, clear mines and even conduct archaeological excavations under water. Yet Jacques Cousteau, for all his genius, dispelled only a tiny part of the mystery of the deep: to this day, 80% of the oceans remain unexplored.
2. HE SHOWED US WHY WE MUST TAKE CARE OF THE OCEANS
Cousteau was fascinated by the idea of taking photographs under water and soon began modifying and inventing photographic equipment, such as the “underwater sled” that allowed him to film along the ocean floor. After improving diving gear, he worked with Harold Edgerton, a professor at the Massachusetts Institute of Technology (MIT), to develop lights suited to extreme underwater conditions. With strobe lights that lit up the animals and floodlights capable of turning deep water into daylight, Cousteau revealed what was going on in the depths.
He was also behind the first true underwater camera, the Calypso, named after his famous ship. This 1961 camera could be used down to 180 metres and also worked at the surface. Thanks to this and other innovations, it became possible to take photographs and even to film under water. Cousteau's striking images were published in National Geographic, which had the honour of funding some of his explorations.
With his breathtaking photographs and influential films such as The Silent World (Le Monde du silence, 1956), World Without Sun (Le Monde sans soleil, 1964) and Voyage to the Edge of the World (Voyage au bout du monde, 1977), Cousteau turned what had originally been a mere medium into an art. The advent of underwater photography did not benefit only explorers and scientists; it allowed the public to grasp the wonders of the ocean and to feel concerned about protecting them.
3. HIS FIGHT FOR CORAL REEFS
A life spent among marine creatures took him from “adventure films”, as he put it, to the defence of every form of marine life. Living under water made him grasp the full importance of coral reefs and of underwater plants and animals. Cousteau was among the first to document dolphins' use of sonar; he also discovered new species and even identified previously unknown volcanic basins on the ocean floor.
Although Cousteau served as an ambassador for the oceans throughout his long life, his record has been tarnished by accusations of cruelty towards animals early in his career. In a biography of his father, Jean-Michel Cousteau wrote that the explorer used and killed captured marine animals and treated them callously. In one of his films, Cousteau demonstrates blast fishing, a technique used by early divers that consisted of throwing dynamite into the water to take stock of the animal life in a given spot by bringing to the surface creatures that would otherwise flee from divers.
Yet by the end of his life Cousteau was warning that humans were exhausting and devastating the oceans, threatening animal life and on the verge of destroying the coral reefs. In 1974 he confided to Paul Dustan that he suspected humans were responsible for their degradation. According to the scientific community, we are ten years away from the destruction of coral through bleaching and climate change, among other threats.
4. HE RAISED THE ALARM ON NUCLEAR WASTE
As he spent more and more time in the water, Cousteau grew alarmed at governments' intention to dump nuclear waste in the world's oceans and seas. In 1959, after learning that the French government wanted to dispose of its nuclear waste in the Mediterranean, Cousteau took part in a massive public information campaign and became a fierce opponent both of nuclear power and of the use of the oceans as waste dumps.
His campaign was a success, and he carried on his relentless fight against ocean pollution through his foundation, his books and his public appearances. “We want for everyone the right to decide which risks we do and do not take, and to preserve the quality of life for the generations to come,” he wrote in a pamphlet in 1990.
5. WE ARE ALL RESPONSIBLE TO FUTURE GENERATIONS
His tireless exploration of the deep led him to feel a great responsibility not only towards the humans and animals of his own time, but also towards life to come. In 1991 he began gathering signatures for a petition on the rights of future generations, aiming to press the United Nations to amend its founding texts to include the right of those not yet born to enjoy nature.
Cousteau eventually collected nine million signatures around the world and, in 1997, UNESCO adopted a declaration on the responsibilities of the present generations towards future generations. It states, among other things, that they have the responsibility to “bequeath to future generations an Earth which will not be irreversibly damaged by human activity”.
Cousteau recalled these obligations at the United Nations conference he attended in 1992. “Future generations will not forgive us for having deliberately spoiled their last opportunity,” he said. “Let us put an end to this delayed-action genocide! Let us stop thinking only of ourselves and reasoning only in the short term [...] It is our responsibility, for we hold in our hands the future of the demanding generations of tomorrow.”
A kindly friend of nature, the popular Jacques-Yves COUSTEAU? Hollywood films, lucrative business deals, a lawsuit against his eldest son... At 84, Cousteau is still raking in the millions.
Born under the sign of the Zodiac, Commander Cousteau is a showman who for forty years has been playing the part of the friend of nature: a nice character who films nice animals. In summer, a Lanvin safari jacket; in winter, a humanoid outfit with a knight's collar, ordered for the past thirty years from Vernet, 116 Champs-Élysées. He took up the role in 1956 thanks to a cult documentary, The Silent World (Le Monde du silence), made with Louis Malle, which won the Palme d'Or (heh) at Cannes. For the first time the public, and above all children, discovered the sea floor. A new world. “We drank that film in,” says Bernard Violet, an iconoclastic biographer (1), amusingly. Lieutenant Commander Cousteau promptly obtained the post of director of the Oceanographic Museum of Monaco and left the French Navy. Les Requins Associés, his production company, began its commercial ascent.
Today, when you watch The Silent World with well-brought-up children, old enough to be the commander's great-grandchildren, they are stunned: “Savages! They don't respect the animals.” Ecological rhetoric has scrubbed those young brains. The Calypso's crew massacres a tribe of sharks in a scene worthy of Cronenberg, shoots a baby sperm whale mangled by the ship's propellers in the head, dynamites a coral garden while the camera lingers on the tragicomic agony of a porcupinefish. The sailors ride turtles while puffing on cigarettes that are seriously harmful to their health. The children of 1956 dreamed; those of 1995 are scandalised. “They are right, and I am proud of it,” says the commander, slyly. “It proves that attitudes have changed, thanks to me.” Gaunt-faced, he looks like a retired schoolmistress. Like an Art Média regular, Cousteau only receives the press to do business. At the moment he is promoting a book on Madagascar (Plon) and a documentary produced with Ted Turner.
In the new headquarters of his foundation (250,000 members), a private mansion guarded by diving suits, Cousteau communicates while sipping Perrier. You do not interview the commander; you listen to a sales pitch. When you dare ask a question, he says: “You are rushing me a little.” The in-house press officer takes notes. He labours to defend the scientific character of his expeditions. The Calypso's divers have told how, to obtain seductive images, they manipulated the animals, for instance by using a chlorax solution on octopuses (Pieuvre, petite pieuvre, 1972) (1). Beautiful images come at a price: five to six million francs per shoot. Cousteau does not make documentaries but films scripted in Hollywood, which demand trickery. “Ted Turner has the rights for North and South America. With the help of the Worms bank, we have set up a production company in Majorca for the rights in the other countries,” says JYC. Pronounced “Jique”, as Madame Cousteau II does, a buxom figure swaying about nearby on panther-print stilettos.
“Mistreating animals? Me, who avoids stepping on ants?” JYC moves on to the great threats that feed his films and the foundation's working capital. Population explosion. Global warming. And he talks about Cousteau Junior, the magazine the Hachette group is launching in November. If he has succeeded commercially, his other ventures have been less fortunate. The Argyronète, a futuristic submarine project that failed, cost French taxpayers 57 million francs in 1972. After a financial storm recorded by the Cour des comptes, Cousteau left France and began a career in global television in the United States. His celebrity earned him a slot promoting frozen fish croquettes on British television. The Cousteau ocean park at Les Halles, for its part, dug an abyssal deficit of 10 million francs before going under. Aquaboulevard bought the papier-mâché whale. When you love, you don't count the cost.
Cousteau enjoys an all-terrain right to speak, on the sole strength of his popularity. In the popularity stakes (2) he vies with Abbé Pierre for first place. The grown-ups' media are open to him, for reasons more sentimental than rigorous. When he wants to speak out on the Mururoa tests, Le Monde gives him its front page. In January 1986, in Punta Arenas (Chile), Cousteau declared himself totally opposed to the tests in the South Pacific. In 1988, in France-Soir, he declared the French tests harmless. Coincidence? A few months later Pierre Joxe, the defence minister, gave the foundation a million francs towards the construction of the Calypso II. In 1995, Cousteau denounced the very tests he had declared harmless.
An ancestor of “political correctness”, Cousteau is today a victim of his own success. Like Walt Disney, his history and his private life have been swept by the sonar. The Occupation sequence: JYC spent the war holding his breath, on the Îles de Lérins. His first short film, Par 18 mètres de fond, made in 1943, was praised by Lucien Rebatet in Je suis partout, whose editor-in-chief was his elder brother, Pierre-Antoine Cousteau. Sentenced to death after the war, the latter left Clairvaux ten years later, estranged from JYC, whom he blamed for having abandoned him in prison. The two men were not on the same wavelength.
The dynastic sequence: these days he is suing his eldest son for improper use of their name. The son has set up a holiday resort in the Fiji islands called Cousteau les Flots Bleus. “My son is charming, but he is not capable. Just because a kid was born of your sperm does not mean he has the qualities needed to replace you,” he once told Le Nouvel Économiste. Jean-Michel is the son of Madame Cousteau I, who died in 1990 after fifty years of service aboard the Calypso. “Simone was the only sailor's wife to wait for her husband at sea,” said Falco, one of the ship's divers. The commander's second son, Philippe, died in 1979 in the crash of a seaplane. Two years ago the commander remarried, to Francine, a former flight attendant, and presented two Cousteau juniors, aged 13 and 15.
When you ask the commander what generates his popularity, he says: “I have never told tall tales.” There is one right there... and it is precisely what made his charm. For four decades, Cousteau supplied city children with aquatic fairy tales. His biography is not cut to that pattern. But why, on learning it, do grown-ups react like children?
Commander Cousteau: a modern defender of the oceans
The “One Ocean Summit”, the first summit devoted to the protection of the oceans, ended on 11 February. Several days of conferences between specialists were held to set out measures for marine conservation. Until then, the oceans had been the great forgotten element of the ecological awakening; it took until 2022 for an event finally to be devoted to this part of the planet. And if state and international measures were a long time coming, warnings about the marine emergency had been sounded well before. One figure stands out among them: Commander Cousteau.
A pioneer of oceanography
Jacques-Yves Cousteau was born in 1910 and died in 1997, leaving behind some of the century's major oceanographic discoveries.
At the age of 20 he entered the École Navale and became a gunnery officer. A serious car accident put an end to the aviation career he had begun in the forces. In 1936 he tried out a pair of underwater goggles; his passion shifted from the skies to the sea and never left him. He put his inventive instinct to work and, with the engineer Émile Gagnan, developed the self-contained diving apparatus in 1943. The underwater world finally came within reach.
His inventions were not limited to the diving apparatus. In 1946 he improved the “constant-volume” suit that made it possible to swim in the coldest waters, the ancestor of today's drysuits. In 1950 he created the “diving saucer” (SP-350) with the engineer Jean Mollard, a two-seat submarine capable of descending to a depth of 350 metres. By 1965, improvements to the craft allowed it to reach depths below 500 metres. Finally, with Professor Lucien Malavard and the engineer Charrier, he developed the principle of the “Turbosail”, a distinctive wind-propulsion system with which he equipped a vessel as famous as he was: the Alcyone.
Whistle-blower
Jacques-Yves Cousteau and his friends Philippe Tailliez, a naval officer and diver, and Frédéric Dumas, a pioneer of diving, formed the group known as the “Mousquemers”, the musketeers of the sea. With them he set about roaming the seas aboard his Calypso, filming every voyage. The resulting films highlighted the problems encountered at sea: human impact, pollution, over-exploitation of marine resources, degradation of the coasts... the whole of marine ecology.
In 1974 he created The Cousteau Society, a non-profit organisation under American law devoted to oceanographic exploration and the protection of marine fauna. In 1981 the Cousteau Foundation was born in France; it later became the Équipe Cousteau, which still roams the seas today. The commander was also behind the worldwide petition of 1990 to save Antarctica from mining.
Cousteau thus became a symbol of this ecological awareness. In 1992 he was an official guest at the United Nations Conference on Environment and Development in Rio de Janeiro. More than 70 films and 50 books built his reputation. If his military service had earned him the rank of Chevalier of the Légion d'honneur, his scientific work saw him promoted to Officier and then Commandeur. A member of the United States Academy of Sciences and director of the Oceanographic Museum of Monaco, he received the United Nations international environment prize in 1977 and the United States Presidential Medal of Freedom in 1985, was inscribed in 1988 on the roll of honour of individuals who have distinguished themselves in protecting the environment, and became a member of the Académie française in 1989. An exceptional, international record that cannot be forgotten today.
Even though the commander is gone, the questions he raised have never been more topical. They resonate with particular force in the climate emergency in which we now find ourselves. He would surely have loved to explore the fifth ocean recognised last year, but would he have been satisfied with the conclusions drawn at the “One Ocean Summit”?
How Commander Cousteau opened the doors of the oceans to humanity
“Look!” my son exclaimed.
We were floating in the shade of a pier on the Puerto Rican island of Vieques, sheltered from the tropical sun by the wooden slats above our heads, while the water washed over eroded pilings. This man-made spot, where we were taking a short break during our first snorkelling outing, had nothing going for it except its coolness.
Eyes wide behind his mask, Will pointed at the water and dived in head first. I did the same.
We had just entered another world, a thousand leagues from the pier, a drab structure of warped wood and peeling paint. Below the surface the water teemed with life: orange and yellow corals wrapped the columns, lush aquatic plants swayed with the current, schools of silvery fish darted between the posts. This cramped spot beneath a dock built decades earlier for American warships was as fertile as a jungle. But unlike a jungle, we could float through it and examine it from every angle.
We had never imagined swimming among such rich plant and animal life. And yet Will wanted more. “That was so cool,” he said as we rode back to the hotel in our guides' old pickup, before adding: “I'd like to try scuba diving.” He did not want to be stuck at the surface by our rented snorkels. He dreamed of going deeper, of exploring more of the ocean, of seeing its wonders for himself.
Although Jacques-Yves Cousteau learned to swim at the age of four, it was the sky, not the sea, that drew him first. He joined the French Navy in 1930 to become a pilot, a dream that vanished when he narrowly escaped death in a car accident that left him with two broken arms. As part of his rehabilitation, the naval officer Philippe Tailliez suggested he take up open-water swimming, lent him a pair of diving goggles and took him spearfishing near Toulon, in the Mediterranean.
Swimming with those goggles was a revelation. “As soon as I put my head under water, I had a shock,” he later recounted. He had just discovered “a vast, completely untouched domain to explore”.
“I understood that from that day on I would devote all my free time to underwater exploration,” he said.
In time he came to dive to a depth of 18 metres, holding his breath for 70 to 80 seconds. But that was neither deep enough nor long enough for Commander Cousteau. “I have always rebelled against the limits imposed by a single breath,” he wrote in 1952 in an article for National Geographic, his first for the magazine.
Jacques-Yves Cousteau had to find his own solution. “I became an inventor by necessity,” he quipped.
To dive deeper he needed a device that could supply breathable air while coping with the water pressure: the deeper a diver descends, the higher the pressure, which compresses the air in the body and can cause the lungs to collapse. Commander Cousteau's father-in-law put him in touch with the engineer Émile Gagnan, a specialist in high-pressure pneumatic design.
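The scale of the problem Cousteau brought to Gagnan is easy to put rough numbers on. The short sketch below uses nothing but textbook hydrostatics and Boyle's law; the constants and function names are illustrative and are not drawn from the article or from Cousteau's own notes. It shows how quickly ambient pressure climbs with depth, and how severely a fixed lungful of surface-pressure air would be squeezed if a regulator did not deliver air at the surrounding pressure.

```python
# Rough, illustrative figures only: textbook hydrostatics plus Boyle's law.
RHO_SEAWATER = 1025      # kg/m^3, typical seawater density
G = 9.81                 # m/s^2, gravitational acceleration
P_SURFACE = 101_325      # Pa, atmospheric pressure at sea level

def pressure_at_depth(depth_m):
    """Absolute pressure (Pa) at a given depth in seawater."""
    return P_SURFACE + RHO_SEAWATER * G * depth_m

def relative_gas_volume(depth_m):
    """Boyle's law: a fixed quantity of gas shrinks as pressure rises."""
    return P_SURFACE / pressure_at_depth(depth_m)

for depth in (0, 10, 40, 120):
    atm = pressure_at_depth(depth) / P_SURFACE
    vol = relative_gas_volume(depth)
    print(f"{depth:>3} m: about {atm:.1f} atm, unvented air squeezed to {vol:.0%} of its surface volume")
```

At 10 metres the pressure has already roughly doubled, and at 120 metres it is around thirteen times the surface value, which is why a valve that supplies air on demand at the ambient pressure was the decisive step.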
At the time, the Second World War was raging and Germany controlled most of French territory. Émile Gagnan, who worked in Paris for the country's main gas company, had designed, among other things, a valve that regulated fuel flow; it allowed cars to run on cooking oil, an essential wartime adaptation while the Nazis requisitioned petrol for their own vehicles.
Cousteau travelled to the capital in 1942 to explain his air-pressure problem to the engineer. Gagnan thought his fuel regulator might be the answer. The two men tinkered together until they had something they could test: a regulator connected by hoses to two cylinders of compressed air. The commander tried the prototype in the Marne river, east of Paris.
“I breathed normally at a slow rhythm, bowed my head and slid gently down to nine metres,” he recalled.
The device worked, but only in a horizontal position; when the diver was upright, air leaked out of it. Jacques-Yves Cousteau and Émile Gagnan repositioned the intake and exhaust hoses so that they sat at the same level. They eventually arrived at a version the French explorer was ready to try at sea.
For several months in 1943, Cousteau, Tailliez and their friend Frédéric Dumas cautiously tested the device, christened the Aqua-Lung. They made more than 500 dives in the Mediterranean, going deeper each time. The 40-metre mark was passed in early autumn, before Dumas dived to 67 metres in October.
“The best way to observe a fish is to become a fish,” Jacques-Yves Cousteau wrote in his first article for National Geographic. “And the best way to become a fish, or a reasonable facsimile of one, is to don an underwater breathing device called the Aqua-Lung. The Aqua-Lung gives man the chance to probe the depths of the ocean at leisure and without danger.”
Nearly 80 years after the device was invented, the same basic concept is still in use. “It's as simple and elegant as a doorknob,” says David Doubilet, a long-time National Geographic underwater photographer. “And so reliable! In 65 years of diving I have never had a problem with one.”
The ability to probe the depths exposed divers to other dangers, however. The Aqua-Lung made breathing easier by balancing internal and ambient pressure, but it could not prevent nitrogen narcosis, the “rapture of the deep” as early divers called it, in which nitrogen breathed under the high pressure of depth acts on the nervous system like an anaesthetic. Commander Cousteau described it as “a feeling of euphoria, a gradual loss of reflex control, a loss of the survival instinct”. “The air takes on a strange taste and you get drunk on your own breath,” said Albert Falco, who sailed with Cousteau for nearly 40 years.
Nitrogen narcosis can be fatal. After the war, Jacques-Yves Cousteau, then a member of the Navy's undersea research group, organised scuba trials at Toulon in 1947. He wanted to show that the Aqua-Lung could take divers beyond 100 metres. But the first man to make the attempt, first mate Maurice Fargues, died after losing consciousness at 120 metres. The crew hauled him desperately back to the surface but could not revive him.
“I am starting to wonder whether what I am doing makes any sense,” a shaken Commander Cousteau declared.
In the Navy's eyes, it did. It deployed the undersea research group to clear away the scars the Second World War had left in the Mediterranean. Navy divers carried out mine-clearance operations near busy ports. They recovered the bodies of downed pilots. They saw the destruction that a war which had not spared the Mediterranean coast had inflicted on the underwater world.
“I put on the scuba gear and found myself at the bottom of the pool,” remembers David Doubilet, who has photographed the Sargasso Sea, the Great Barrier Reef and almost every corner of the ocean for more than 70 National Geographic stories. “I was pinned to the bottom, but I was breathing, and it was divine.”
“The Aqua-Lung regulator was a passport that gave us access to 70% of our planet,” the photographer continues. “Commander Cousteau is someone whose essential role for the planet can never be forgotten or underestimated.”
The photographer Laurent Ballesta, who spent his childhood swimming, snorkelling and diving along France's Mediterranean coast, was also influenced by Commander Cousteau. When he was 16, he and his friends were out on a boat when they found themselves surrounded by sharks. A devoted fan of the explorer's documentaries, Ballesta realised they were basking sharks, a harmless species, and jumped into the water to swim with them.
When he told his parents what had happened on returning home, they did not believe him. “That is when I decided to learn photography.”
Ballesta has since discovered a new species of fish, named the Andromeda goby, and was the first to photograph the coelacanth, a prehistoric fish, under water. He recently recounted for National Geographic how he and his team spent 28 days in a pressurised capsule that allowed them to dive for hours at a time in the depths of the Mediterranean.
Jacques-Yves Cousteau remained active in underwater exploration until his death in 1997, at the age of 87. “My job was to show what was in the sea, to show its beauty, so that people would come to know it and love it,” the explorer wrote.
Despite his pioneering contributions and his international influence, that world remains largely unknown. According to the United States' National Oceanic and Atmospheric Administration, more than 80% of our planet's oceans remain unexplored.
In the 78 years since Jacques-Yves Cousteau and Émile Gagnan invented the Aqua-Lung, more than 28 million people have followed the explorer into the oceans and taken up scuba diving.
Next spring, my son and I will add ourselves to that list. For his 17th birthday, Will wants a passport to another world.
Social Learning Leads to Our Greatest Achievements and Worst Errors
In 1845, Sir John Franklin, a Fellow of the Royal Society and experienced Arctic traveler, set out with a large, well-equipped expedition to find the Northwest Passage. His ship was ice-bound through two winters, and the entire crew of 129 perished, mainly from starvation and scurvy. Yet the rich animal resources of the region where these men died have allowed the Central Inuit to survive and raise their children for at least 700 years. Why couldn’t the British find enough food?
Modern technology allows us to dominate the world like no other species in the history of life. In the Pleistocene, our hunter-gatherer ancestors did the same—long before farming, cities, or the industrial revolution. Modern humans emerged from Africa about 50,000 years ago and soon occupied almost every terrestrial habitat, from extreme desert to tropical rain forest to arctic tundra. Of course there are species adapted to each of these environments: desert rodents conserve water so well that they never need to drink; forest primates swing through the canopy never coming to ground, and arctic musk ox have hair and fat deposits that allow them to survive winter above the arctic circle. What makes humans unique in the natural world is that they live in all of these environments.
Humans can rapidly adapt to much wider range of circumstances than any other creature. Two million years ago, our ancestors were a fairly typical ape species limited to a small part of Africa. Now we are the world’s champion adapters. What explains this change?
You’re probably thinking, that’s easy—humans evolved big brains and got a lot smarter. We can adapt to such a wide range of environments because we use these smarts to solve the problems of each one. We invent tools, clothing, and shelter. Other animals can’t do this because they aren’t as clever as we are. People are indeed smarter than the average bear (or primate), but we are not nearly smart enough individually to solve the number of problems that we need to solve.
Instead I believe that culture is the key. Humans are much better at learning from each other than any other animal. This ability allows human populations to gradually accumulate knowledge over generations that no individual human could invent alone. Such cultural adaptation may be slow when measured in human lifetimes, but it is blindingly fast compared to genetic adaptation. The British explorers starved not because they lacked intelligence but because they did not have the necessary local knowledge and two years was not enough time to acquire it.
The people who left the Horn of Africa 50,000 years ago were tropical foragers. Twenty thousand years later, their descendants were living on the shores of the Arctic Ocean. To get a feeling for how much knowledge was required to make this transition, let's take a look at a people who lived in a similar environment, the Central Inuit of the Canadian Arctic.
These foragers depended utterly on a toolkit crammed with complex, highly refined tools. Winter temperatures average about −25° C, so survival required warm clothes. In the winter, the Central Inuit wore beautifully designed clothing. Making clothing from caribou skins requires a host of complex skills: hides must be cured, thread and needles made, and clothing designed, cut, and stitched. Even the best clothing is not enough during winter storms; shelter is mandatory. The Central Inuit made snow houses so well designed that interior temperatures were about 10° C. No wood is available, so they lit and heated houses, cooked food, and melted ice for water using carved soapstone lamps fueled with rendered seal fat.
During the winter, the Central Inuit hunted seals, mainly by ambushing them at their breathing holes using multi-piece toggle harpoons. During the summer, the Inuit used a clever three-pronged spear with a sharp central spike and two hinged, backward-facing points, to harvest Arctic char caught in stone weirs. They also hunted seals and walrus in open water from kayaks. Later in summer and the fall, they shifted to caribou hunting, using sophisticated recurved, composite bows. An Inuit Instruction Manual for Technology would run to thousands of pages. To become a competent Inuit, you'd also need to master the Natural History Handbook, Social Policies and Procedures, Grammar and Dictionary, and Beliefs, Stories, and Songs.
Do you think you could acquire all the knowledge necessary to create these books on your own? This is not a ridiculous question. That is pretty much the way that all other animals have to learn about their environments. They rely mainly on innate information and personal experience to figure out how to find food, make shelter, and in some cases make tools. Many other species have simple, culturally transmitted traditions, but in every case, the behaviors acquired can be and are learned by individuals. Only humans are able to accumulate knowledge and technology that no single individual could invent.
I hope you'll agree that you wouldn't have much of a chance. Inventing a kayak or acquiring an Inuit's knowledge of seal behavior is a hard problem, beyond the capacity of any individual. Inventing the whole package is impossible. We don't have to rely on introspection, because this kind of experiment has been repeated many times when European explorers were stranded in unfamiliar habitats. Despite desperate efforts and ample learning time, these hardy men and women suffered or died because they lacked crucial information about how to adapt to local conditions.
How does culture allow populations to acquire adaptations beyond the ken of individuals? You might think the answer is obvious: culture allows knowledge to be shared. Suppose a clever Inuit devises a new stitch that makes seams waterproof; she can share this knowledge with her friends and relatives without reducing its value to her. Her family's clothing and shoes are still waterproof no matter how many people she tells about her discovery. As a result, culture allows the costs of innovation to be spread over all individuals who adopt the innovation. But such cost-spreading alone cannot explain cumulative cultural evolution because it provides no reason to innovate. It's cheaper to just copy somebody else. Culture must also reduce the cost of innovation in some way.
There are at least two ways that this can happen. First, cultural learning can allow individuals to learn selectively. Sometimes experience provides accurate information at low cost. Think of Goodyear accidentally spilling rubber onto a hot stove or Fleming observing his mold-contaminated petri dishes. Such rare cues allow accurate, low-cost inferences about the environment. However, most individuals won’t observe these cues, so making the same inference will be much more difficult for them. Animals that can’t learn from others are stuck with whatever information nature offers. In contrast, cultural learning allows us to be choosy, learning individually when it’s cheap and accurate and relying on cultural learning when environmental information is costly or inaccurate.
Second, the ability to learn culturally can also raise the average fitness of a population by allowing acquired improvements to accumulate from one generation to the next. For example, bows vary in many dimensions that affect performance—such as length, width, cross section, taper, and degree of recurve. It is typically more difficult to make large improvements by trial and error than small ones. Even if small changes are made at random, half of them will increase the payoff, unless the design is already at the optimum. Large changes will improve things only if they are in the direction that includes a distant optimum. Thus, we expect it to be much harder to design a useful bow from scratch than to tinker with the dimensions of a reasonably good bow. Cultural learners start their search closer to the best design than pure individual learners and can invest in further improvements. Then, they can transmit those improvements to the grandkids, and so on down through the generations until sophisticated artifacts are produced.
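To make the point about small versus large changes concrete, here is a minimal, hypothetical simulation; the three "design dimensions", the payoff function and the step sizes are invented for the example and are not taken from any study of real bows. A learner who inherits a reasonably good design and tinkers with small random changes, keeping only those that help, ends up far closer to the optimum than a learner who must search from scratch with exactly the same effort.

```python
import random

def payoff(design, optimum):
    # Higher is better; the maximum (0) is reached when every dimension
    # of the design matches the (unknown to the learner) optimum.
    return -sum((d - o) ** 2 for d, o in zip(design, optimum))

def tinker(design, optimum, steps=50, step_size=0.05):
    # Trial-and-error: make a small random change, keep it only if it helps.
    best = list(design)
    for _ in range(steps):
        trial = [d + random.uniform(-step_size, step_size) for d in best]
        if payoff(trial, optimum) > payoff(best, optimum):
            best = trial
    return best

random.seed(0)
optimum   = [1.70, 0.04, 0.30]   # hypothetical ideal length, taper, recurve
scratch   = [0.00, 0.00, 0.00]   # lone inventor with no tradition to copy
inherited = [1.50, 0.05, 0.25]   # cultural learner starting from a copied design

print("from scratch:", round(payoff(tinker(scratch, optimum), optimum), 3))
print("inherited   :", round(payoff(tinker(inherited, optimum), optimum), 3))
```

With the same fifty tweaks, the learner who starts from an inherited design sits almost on top of the optimum while the lone inventor remains far away; repeating that inheritance step generation after generation is what lets a population converge on designs no single learner could have reached.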
These mechanisms imply that cultural adaptation comes with an inherent tradeoff. It allows the rapid evolution of superb habitat-specific knowledge and technologies. But these mechanisms only work if people are credulous, mainly doing what people around them are doing. Learning selectively means that you only believe your own senses when they clearly contradict cultural norms. Otherwise you copy others. The accumulation of small improvements means that people are motivated to make small changes to cultural norms; large changes are suspect.
As a result, false—even harmful—ideas can spread. If everyone you know believes that tiny, invisible organisms cause disease, then you believe that. If everyone you know believes that malevolent witches cause disease, you believe that instead. Culture is a powerful mechanism for generating adaptive knowledge on average, but for certain traits in particular circumstances, it can generate wildly wrong beliefs and highly maladaptive outcomes. You don’t have to travel to the Arctic to find yourself committed to approaches that aren’t appropriate to your local environment. Sometimes you don’t even have to leave the house.
In Kubrick’s masterpiece, 2001: A Space Odyssey, a group of our ape-like ancestors encounter a towering black rectangle somewhere in an African desert. Something in them changes. A seed is sown. Everything from tool use to space travel is now inevitable.
Human beings have always felt that we are fundamentally different from other animals. We have done countless things that no other species could do. We have reshaped the world around us. But anyone who has spent time with animals can see that we also have much in common.
Defining exactly what separates us from other animals has proven surprisingly difficult. Large brains, tool use and language may each seem like attractive answers, but each turns out to be less clear-cut. Maybe the most notable thing about humanity is that we are able to communicate ideas well enough for each generation to maintain and build upon the understanding of the previous one, giving us a cumulative culture like no other.
That still leaves open the question of what gives us this potential. Ruling out an intervention by Space Odyssey’s monoliths, was there a single trigger that kickstarted the human revolution?
Undoubtedly genetic mutation played a part. But did those mutations occur before the revolution started or after? In a paper published in Science today, Simon Fisher of the Max Planck Institute for Psycholinguistics in the Netherlands and Matt Ridley, formerly a science journalist, try to answer the question.
Genes before beans?
We share 99% of our DNA with our closest living relative, the chimpanzee. So any genetic component of being human must be found in the differences between the two. Recent advances in technology mean that DNA sequences from some of our most recent extinct relatives are also becoming available to us, allowing a finer understanding of who evolved what, and when.
The best known mutations are those that change the sequence of a gene, which in turn changes the properties of the proteins made from that gene. We now know that a surprising number of key variations are caused by mutations that leave the protein sequence unchanged but alter the amount of a protein that is made, or the situations in which it is made. The relationships between genes and proteins form complex networks, in which an apparently small change in the DNA can lead to many changes in the proteins produced and therefore profound changes in the nature of the organism. In this way, relatively few changes can lead to big differences in, say, brain size and structure.
But the brain is shaped by both genes and experience, and that makes it hard to be sure which came first. Questions of genetic and behavioural evolution sometimes put scientists in a chicken or egg situation.
What came first
If a genetic mutation and a behavioural change appear to be associated, scientists can be tempted to assume that the genetic mutation caused the change. But Fisher and Ridley argue that it is also worth considering that a change in lifestyle could suddenly make a particular mutation advantageous, and therefore allow it to spread throughout the population via natural selection. For example, a bigger brain is advantageous only if the body has enough resources to support it. It is possible that the mutations that caused the increase in brain size in humans happened only after the behavioural revolution of cooking, which made the nutrients in food more accessible.
Sometimes it is even possible to identify genes that may have had specific roles in the evolution of specific behaviour. For example, the FOXP2 gene is believed to be involved in language. Modern individuals who have faulty copies of this gene have difficulty speaking and understanding language. FOXP2 is also the gene whose sequence has the most differences between chimps and humans, and the amount of FOXP2 protein produced probably differed between modern humans and Neanderthals.
It is important, however, not to assume that this is the origin of language. In many animals, versions of the gene exist and they have roles in vocal and motor skills. It is likely that the evolution of the FOXP2 gene honed such pre-existing skills. Such changes would be most advantageous to humans when some level of language was already being practised.
Examples of genetic changes following behavioural ones have been observed before, but the possibility is not always given due consideration. Much later in our evolution, after farming had developed, evidence shows that humans developed an increased ability to tolerate lactose (a key component of milk) only after they began consuming milk. More recently, Europeans developed a higher tolerance for alcohol than Asians because of their greater alcohol consumption.
Being human
So there are many things that separate modern humans from chimps and fewer that separate us from our closer extinct relatives. These include genetic, physical, behavioural and cultural differences, and they came into being over time, causing each other or allowing each other to become advantageous. This doesn’t imply that the transition was smooth. Successful populations may exist for long periods relatively unchanged, while one change can quickly trigger a cascade of changes.
It does, however, make it difficult to say that any specific change turned an ape into a human. A definitive answer to what made us “human” requires you to decide quite arbitrarily on a definition of human, and in doing that, you have probably answered your own question.
Modern human culture could have emerged 44,000 years ago
When did human behaviour as we know it begin? Work conducted by an international team of researchers suggests that modern culture emerged 44,000 years ago. Their analysis of archaeological material discovered at Border Cave in South Africa has demonstrated that much of the material culture that characterized the lifestyle of San hunter-gatherers in southern Africa was part of the culture and technology of the inhabitants of Border Cave 44,000 years ago. This research, funded by an ERC Advanced Grant, is published in the online edition of the Proceedings of the National Academy of Sciences.
A key question in human evolution is to determine when human cultures like ours emerged. Most archaeologists believe that the oldest traces of San hunter-gatherer culture in southern Africa date back 20,000 years. A multidisciplinary group led by Francesco d'Errico, from the French National Centre for Scientific Research (CNRS) and the University of Bergen, has revealed that modern cultural behaviour could be much older than this. The team dated and analysed objects from archaeological layers at Border Cave. Francesco d'Errico, who is working on the TRACSYMBOLS project as a co-investigator with ERC grantee Prof. Christopher Henshilwood, said: “The results of our research undoubtedly show that the people at Border Cave were using digging sticks weighted with perforated stones, like those used traditionally by the San, 44,000 years ago.”
Located in the foothills of the Lebombo Mountains in KwaZulu-Natal, South Africa, the site has yielded exceptionally well-preserved organic material. The Border Cave inhabitants adorned themselves with ostrich eggshell and marine shell beads, and notched bones for notational purposes. They fashioned fine bone points for use as awls and poisoned arrowheads. One point was decorated with a spiral groove filled with red ochre, which closely parallels marks that the San people made to identify their arrowheads when hunting. Chemical analysis of residues on a wooden stick decorated with incisions also reveals that it was used to apply a poison containing ricinoleic acid. People at Border Cave also used a lump of beeswax, mixed with the resin of toxic Euphorbia, and possibly egg, wrapped in vegetal twine made from the inner bark of a woody plant.
Francesco d'Errico added: “This complex compound used for hafting arrowheads or tools, which is 40,000 years old, is the oldest known evidence of the use of beeswax.” The custom of arming hunting weapons with small pieces of stone is confirmed by the discovery of resin residue still clinging to some of the tools, which has been identified as a suberin (waxy substance) produced from the sap of Podocarpus. The study of stone tools discovered in the same archaeological layers as the organic remains, and from older deposits, shows a gradual evolution in lithic technology.
These findings support the view that what is perceived today as “modern behaviour” is the result of non-linear trajectories in human development. Researchers believe these archaeological processes could be better understood when they are documented on a regional scale. These future investigations could also shed light on key behavioural innovations that emerged in southern Africa.
Professor Magnus Enquist looks around at an exhibition on the origin of humankind at the Museum of Natural History in Stockholm. He fixes his gaze on photographs from the Lascaux Caves in France. The cave paintings of prehistoric oxen, wild horses and deer were made between 17,000 and 15,000 years B.C., and are among the oldest works of art in the world.
“Look at the lion’s nose,” he says. “Whoever painted that had the same artistic abilities as modern-day humans.”
The human being – a unique animal
Professor Enquist has only a short walk back to his place of work at the Centre for the Study of Cultural Evolution at Stockholm University, which he heads. The center was founded in 2007 to study the factors shaping and changing human culture.
“The human being is a unique animal. This is reflected in our mental abilities, such as language and thought,” Enquist says.
His research team has been awarded a grant by the Knut and Alice Wallenberg Foundation to examine what it was that caused the evolutionary paths of humans and chimpanzees to diverge some six or seven million years ago. The project is an interdisciplinary one. The researchers represent highly diverse specialist fields at four higher education institutions in Sweden, the U.K. and the U.S.
He himself is an ethologist – a specialist in animal behavior.
“There is a big difference between chimpanzees and humans, but it is not easy to understand how it occurred,” he comments. The first step will be to identify and describe various learning mechanisms using mathematical models and data simulations. This will be followed by analyses of how these characteristics influence the spread of information between individuals.
“We will be combining the latest research in the fields of AI (artificial intelligence), learning theory and cultural evolution with the latest findings in Paleolithic archaeology and human genetics to find out what it is that makes the human being unique,” Professor Enquist explains.
Animals cannot remember sequences
Much of the work will involve “metastudies” of findings from studies already carried out, mostly by other researchers.
One sub-project has already been completed. It confirms that animals lack the ability to remember sequences of sound and light. About one hundred studies of 13 bird and mammal species show that they are unable to distinguish a red-green sequence from a green-red one. The experiments were repeated thousands of times, but the animals could not learn to see any difference. For humans it is easy.
“Sequences are a feature of many human thought processes. When we plan something we create an order. Language is also about order,” Enquist comments.
He explains that animals also find it difficult to distinguish between rising and falling tones. He demonstrates by whistling a few notes, first with rising pitch, then falling. The difference is easy for the human ear to detect.
Cultural evolution
Many people believe that heredity can explain why humans began to evolve differently from other mammals. But Professor Enquist points out that genetic changes occur very gradually, whereas the leap from animal to human occurred relatively quickly. He believes the explanation lies in cultural development. And by “culture” the researchers here mean implements such as axes and other tools, as well as more abstract constructions such as values or outlook on life.
“We want to examine how organisms learn things they can then pass down from one generation to another. We also want to understand how biological evolution and cultural change continue to affect the way people acquire and disseminate knowledge,” Professor Enquist explains.
Why is it only humans who undergo this transition? Strangely enough, one of the reasons may be that human children are helpless for much longer than the young of other animals.
“Taking care of a child is a form of investment. A human child becomes a highly capable organism once it has absorbed all the information available to it,” he points out.
Tools made by thinkers
The next task facing the researchers will be to examine the first stone tools, which began to be used about three million years ago.
“The ability to make simple tools was an advantage for a thinking organism. We will be trying to describe the intellectual functions and the behavior needed to use them. It is fairly easy to make tools out of flint. A single blow is all it takes to get a sharp edge,” Enquist explains.
Chimpanzees also use tools, for example when searching for food. They are able to remove leaves and twigs from the branches of trees, and coax termites out of their nests.
“They do make tools, but on a very limited scale,” Professor Enquist comments.
Uniquely Human: Understanding Our Cultural Evolution
Humans are not unique as a species in our ability to use tools or form complex social groups. So what happened in the evolution of Homo sapiens that made our species stand out in comparison to other hominids? Several theories point to our unique ability to develop culture via social learning—the capacity to teach and learn from others—as an evolutionary turning point in human history.
Gene-Culture Coevolution
When Marcus Feldman started his PhD in biology at Stanford in 1971, there was a debate raging over the publications of Arthur Jensen and William Shockley, scientists who contended that differences in IQ measurements between racial groups were almost entirely based on genetics.
Feldman, now a professor of biology at Stanford University, described how this racist debate over the mechanisms of IQ heritability led him to become interested in seeing whether cultural transmission might also play a role in shaping the characteristics of human populations. He and his longtime collaborator Luigi Luca Cavalli-Sforza, a population geneticist at the Stanford University School of Medicine, began to investigate whether behavioral processes like social learning might exert as great an influence on human evolution as genetics.
“The two of us sat down to try to see whether cultural transmission, as opposed to genetic transmission, could explain our high heritability,” Feldman said in an Integrative Science Symposium at the 2019 International Convention of Psychological Science (ICPS) in Paris.
Feldman and Cavalli-Sforza, described as founders of the field of cultural evolutionary research and gene-culture coevolution, began applying quantitative mathematical models from the field of population genetics to understand how a combination of genetics, culture, and behavior contributes to shaping human evolution.
In 1981, the two published a landmark book, Cultural Transmission and Evolution: A Quantitative Approach, paving the way for the new field of quantitative cultural evolutionary theory. Their book laid out a framework for how the transmission of nongenetic, socially learned traits across individuals and groups can impact human diversity in ways similar to genetic transmission.
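To make the flavor of such quantitative models concrete, here is a minimal sketch, in Python, of how the frequency of a socially learned trait can be tracked across generations when children acquire it either vertically (from a parent) or obliquely (from other adults). The recursion and all parameter values are illustrative assumptions for this document, not Cavalli-Sforza and Feldman's actual equations.

# Minimal sketch of cultural transmission dynamics. Illustrative only: the
# recursion and the parameter values are assumptions, not the published model.

def next_generation_frequency(p, b_vertical=0.9, b_oblique=0.3):
    """Return the frequency of a socially learned trait in the next generation.

    p          -- current frequency of the trait among adults
    b_vertical -- probability that a child of a trait-bearing parent acquires it
    b_oblique  -- probability that a child who missed it vertically picks it up
                  from a randomly encountered adult (so the chance scales with p)
    """
    acquired_from_parent = p * b_vertical
    still_without_trait = 1.0 - acquired_from_parent
    acquired_from_others = still_without_trait * p * b_oblique
    return acquired_from_parent + acquired_from_others

p = 0.05  # the trait starts out rare
for generation in range(1, 11):
    p = next_generation_frequency(p)
    print(f"generation {generation}: trait frequency = {p:.3f}")

Even this toy recursion shows the qualitative point of the book: a nongenetic trait can spread through a population, and its fate depends on transmission parameters rather than on Mendelian inheritance.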
In the 1990s, Cavalli-Sforza started the Human Genome Diversity Project, a collaborative international project designed to study the richness of human genetic diversity. As part of this project, he, Feldman, and dozens of collaborators compiled the Human Genome Diversity Cell Line Panel, a resource of uniquely diverse genetic information collected from more than 1,000 individuals across 52 global populations representing most of the world’s major geographic regions.
Although Feldman regrets that the team was not able to collect a more complete and representative panel of samples, the project has provided an unprecedented opportunity to study human evolution and genetic diversity.
The analyses resulting from this body of work have provided an enormous number of cross-disciplinary insights into fields ranging from archaeology and anthropology to epidemiology and linguistics.
In one example of the impact of a cultural phenomenon on population genetics, Feldman described how marriage customs and taboos can influence the frequency of genetic diseases. When the cultural practice of marriage between blood relations is more common, very rare alleles associated with genetic diseases become more frequent within a population, Feldman observed.
“That is a cultural choice [that] dictates the pattern of the DNA variation,” Feldman explained.
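The population-genetics logic behind this observation can be shown with the textbook inbreeding formula: with inbreeding coefficient F and a recessive disease allele at frequency q, the probability of an affected child is q^2 + F*q*(1-q). The numbers below are hypothetical round figures for illustration, not values from Feldman's analyses.

# Standard inbreeding calculation; the allele frequency and F values are
# hypothetical illustrations, not data from the studies discussed above.

def affected_probability(q, F):
    """Probability that a child is homozygous for a rare recessive allele.

    q -- frequency of the disease allele in the population
    F -- inbreeding coefficient (0 for unrelated parents, 1/16 for first cousins)
    """
    return q**2 + F * q * (1 - q)

q = 0.005  # a rare disease allele
print(affected_probability(q, F=0))       # ~0.000025 (unrelated parents)
print(affected_probability(q, F=1 / 16))  # ~0.00034  (first cousins: roughly 13x higher)

The rarer the allele, the larger this relative increase, which is why a marriage custom can leave a visible signature in the pattern of DNA variation.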
Since the publication of Feldman’s work on the Human Genome Diversity Cell Line Panel in Science in 2002, these types of DNA studies have become increasingly common. However, Feldman cautions researchers to be wary of how DNA variation can be misused.
“We now have a very heavy hammer, which is the ability to find DNA variation everywhere we look—in humans in particular—and associate it with phenotypes. And it’s being done all the time, 20 or 30 articles a day,” Feldman warned. “I think it’s necessary for psychologists, as well as other social scientists, to look carefully and critically at the inferences that are being made about complex human traits, especially behaviors.”
Excavating Culture
Although we have no remnants of behavioral data from our early human ancestors, we do have archaeological artifacts dating back as far as 3.3 million years. Cognitive archaeology researchers like Miriam Haidle use these physical objects to trace the cognitive and cultural evolution of humans and other hominids. Haidle is the scientific coordinator of the Role of Culture in Early Expansions of Humans project at the Heidelberg Academy of Sciences and Humanities at Senckenberg Research Institute in Frankfurt, Germany.
When closely studied, stone tools and other material artifacts can provide researchers with a rich window into the cognitive capabilities, as well as the cultural practices, of our prehistoric ancestors. For example, the ability to shape stones through flaking, or to turn a small tree into a tool for hunting, can reveal a whole scope of cognitive processes, such as causal reasoning.
In a 2015 paper published in the Journal of Anthropological Science, Haidle and a diverse group of coauthors ranging from archaeologists to psychological scientists described how culture emerges through the development of three dimensions.
To start, there is a necessary biological dimension, including how biological factors such as genes, anatomy, and physiology both enable and restrict behavior. Human hands, for example, allow different forms of cultural development than the flippers of a dolphin or the wings of a bat. There is also a dimension of individual development that encompasses a given individual’s capacity, skills, and experiences. Finally, there is the historical-social dimension, which is particularly important for the development of culture. This dimension includes knowledge and skills that are shared socially.
Although there is evidence that some other animal species are capable of rudimentary aspects of culture, such as using basic tools, the historical-social dimension is uniquely well developed in humans. In fact, at some point in our evolutionary history, the biological dimensions—our genetically heritable traits—became less important, and the historical-social dimension increased dramatically in importance, Haidle explained at the ICPS symposium.
“There is an increase of the social and material engagement, and this is very important because you cannot learn everything on your own,” she said.
The production of clothing is an example of the importance of social learning in cultural expansion. Making the simplest clothing, just a piece of leather wrapped around the body, requires only a few tools to scrape, soften, and cut the hide. Even in this simple setting, raw material procurement, tool production, and application require various skills and knowledge that have to be learned and maintained. Producing more complex clothing requires not only more steps—making holes in the hide, tanning the hide, and threading pieces of the hide together—but also more specialized tools, combinations of practices, and thus knowledge and skills.
Tailored clothing sewn with eyed needles has existed for at least 30,000 years, representing further advances in materials and tools, along with the practices needed to acquire them and learn to use them. The development of such increasingly complex performances goes hand in hand with the development of capacities for transmitting knowledge and skills, Haidle said. In cultural species, individuals do not invent behaviors over and over again; information is shared within and between groups and passed on to new generations. And “in humans, the intensification of social interactions and the expansion of a socially formed physical environment resulted in new transfer trajectories and the unfolding of new performances.”
Self-Conscious Learners
Henrike Moll, an associate professor of psychology at the University of Southern California, studies the cultural transmission of knowledge: How do we pass information from one person to another and from one generation to the next?
Children are actually self-conscious learners, rather than blank slates or passive sponges that simply absorb whatever information happens to be present in their environment, Moll argues.
“I believe that we have good evidence to think that children understand their need to learn. You can observe this very early on in children when they show so-called social referencing,” she said at ICPS. “They look up at other people in order to better understand what they should be doing in an ambiguous or ambivalent situation.”
Although other animals are capable of social learning, humans are uniquely adept at this skill. Part of Moll’s current research examines which mechanisms make human social learning so successful.
“We believe that there are different kinds of social learning mechanisms that are unique, and one of the most fascinating ones is the case of teaching,” she explained.
In a paper published in the Review of Philosophy and Psychology in 2018, Moll described a series of experiments that demonstrated how children’s problem-solving abilities benefit from teaching far more than they benefit from simply observing someone else solve a similar problem.
When asked to get a peanut out of a tall thin plastic tube using nothing but water, almost all 4-year-olds failed to realize they could use the water as a tool. When shown a video demonstrating puppets completing a similar task, children still largely failed to improve their problem-solving. However, when the same video was shown before the task with a teaching framing (“Look, I want to show you something!”), most children succeeded.
Not only do children seek out opportunities for teaching, but they also seem to understand that teaching serves to spread knowledge of a certain kind—namely, general knowledge.
In a series of experiments that are currently under review, Moll and colleagues found that even 4-year-olds have an innate understanding of the value of teaching and how to choose information most conducive to teaching others.
In the study, children were initially taught some new facts about an animal. The facts always included one generic fact that applied to the entire species (“Hummingbirds can fly backward”) and one episodic fact specific to an individual animal (“This hummingbird is flying backward”).
When children were asked to teach what they’d learned about animals to a pretend classroom, Moll and colleagues observed a pattern: The children consistently recounted generic rather than episodic information.
They seemed to inherently understand that conveying generic facts, which everyone can benefit from, is a key component of teaching. Episodic facts, on the other hand, tend to concern only specific people or situations.
Moll and her team of researchers think that this is because children understand that the goal of teaching is to acquire—and possibly further propagate—knowledge that is general and objective.
After conducting an extensive review of the literature and evidence of long-term human evolution, scientists Tim Waring and Zach Wood concluded that humans are experiencing a «special evolutionary transition» in which the importance of culture, such as learned knowledge, practices and skills, is surpassing the value of genes as the primary driver of human evolution.
Culture is an under-appreciated factor in human evolution, Waring says. Like genes, culture helps people adjust to their environment and meet the challenges of survival and reproduction. Culture, however, does so more effectively than genes because the transfer of knowledge is faster and more flexible than the inheritance of genes, according to Waring and Wood.
Culture is a stronger mechanism of adaptation for a couple of reasons, Waring says. It’s faster: gene transfer occurs only once a generation, while cultural practices can be rapidly learned and frequently updated. Culture is also more flexible than genes: gene transfer is rigid and limited to the genetic information of two parents, while cultural transmission is based on flexible human learning and effectively unlimited with the ability to make use of information from peers and experts far beyond parents. As a result, cultural evolution is a stronger form of adaptation than genetic evolution alone.
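One way to see why update frequency matters is a toy calculation: suppose each adaptive update (a round of selection, or a round of social learning) closes a fixed fraction of the remaining gap to an environmental optimum, and that cultural learning simply happens many times per generation while genetic change happens once. This is a deliberately crude illustration of the speed argument, not Waring and Wood's model; the step size and the number of learning episodes per generation are made-up values.

# Toy comparison of adaptation speed; all numbers are illustrative assumptions.
import math

def generations_to_adapt(step_per_update, updates_per_generation, gap_remaining=0.05):
    """Generations needed until only `gap_remaining` of the original distance to
    the optimum is left, if each update closes `step_per_update` of whatever
    distance remains."""
    updates_needed = math.log(gap_remaining) / math.log(1 - step_per_update)
    return math.ceil(updates_needed / updates_per_generation)

print(generations_to_adapt(0.02, updates_per_generation=1))   # genetic:  ~149 generations
print(generations_to_adapt(0.02, updates_per_generation=10))  # cultural: ~15 generations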
Waring, an associate professor of social-ecological systems modeling, and Wood, a postdoctoral research associate with the School of Biology and Ecology, have just published their findings in a literature review in the Proceedings of the Royal Society B, the flagship biological research journal of The Royal Society in London.
«This research explains why humans are such a unique species. We evolve both genetically and culturally over time, but we are slowly becoming ever more cultural and ever less genetic,» Waring says.
Culture has influenced how humans survive and evolve for millennia. According to Waring and Wood, the combination of both culture and genes has fueled several key adaptations in humans such as reduced aggression, cooperative inclinations, collaborative abilities and the capacity for social learning. Increasingly, the researchers suggest, human adaptations are steered by culture, and require genes to accommodate.
Waring and Wood say culture is also special in one important way: it is strongly group-oriented. Factors like conformity, social identity and shared norms and institutions — factors that have no genetic equivalent — make cultural evolution very group-oriented, according to researchers. Therefore, competition between culturally organized groups propels adaptations such as new cooperative norms and social systems that help groups survive better together.
According to researchers, «culturally organized groups appear to solve adaptive problems more readily than individuals, through the compounding value of social learning and cultural transmission in groups.» Cultural adaptations may also occur faster in larger groups than in small ones.
With groups primarily driving culture and culture now fueling human evolution more than genetics, Waring and Wood found that evolution itself has become more group-oriented.
«In the very long term, we suggest that humans are evolving from individual genetic organisms to cultural groups which function as superorganisms, similar to ant colonies and beehives,» Waring says. «The ‘society as organism’ metaphor is not so metaphorical after all. This insight can help society better understand how individuals can fit into a well-organized and mutually beneficial system. Take the coronavirus pandemic, for example. An effective national epidemic response program is truly a national immune system, and we can therefore learn directly from how immune systems work to improve our COVID response.»
This is the impact that culture has had on our evolution
Is there an evolutionary explanation for humanity’s greatest successes – technology, science, and the arts – with roots that can be traced back to animal behavior? I first asked this question 30 years ago, and have been working to answer it ever since.
Plenty of animals use tools, emit signals, imitate one another, and possess memories of past events. Some even develop learned traditions that entail consuming particular foods or singing a particular kind of song – acts that, to some extent, resemble human culture.
But human mental ability stands far apart. We live in complex societies organized around linguistically coded rules, morals, and social institutions, with a massive reliance on technology. We have devised machines that fly, microchips, and vaccines. We have written stories, songs, and sonnets. We have danced in Swan Lake.
Developmental psychologists have established that when it comes to dealing with the physical world (for example, spatial memory and tool use), human toddlers’ cognitive skills are already comparable to those of adult chimpanzees and orangutans. In terms of social cognition (such as imitating others or understanding intentions), toddlers’ minds are far more sophisticated.
The same gap is observed in both communication and cooperation. Vaunted claims that apes produce language do not stand up to scrutiny: animals can learn the meanings of signs and string together simple word combinations, but they cannot master syntax. And experiments show that apes cooperate far less readily than humans.
Thanks to advances in comparative cognition, scientists are now confident that other animals do not possess hidden reasoning powers and cognitive complexity, and that the gap between human and animal intelligence is genuine. So how could something as extraordinary and unique as the human mind evolve?
A major interdisciplinary effort has recently solved this longstanding evolutionary puzzle. The answer is surprising. It turns out that our species’ most extraordinary characteristics – our intelligence, language, cooperation, and technology – did not evolve as adaptive responses to external conditions. Rather, humans are creatures of their own making, with minds that were built not just for culture, but by culture. In other words, culture transformed the evolutionary process.
Key insights came from studies on animal behavior, which showed that, although social learning (copying) is widespread in nature, animals are highly selective about what and whom they copy. Copying confers an evolutionary advantage only when it is accurate and efficient. Natural selection should therefore favor structures and capabilities in the brain that enhance the accuracy and efficiency of social learning.
Consistent with this prediction, research reveals strong associations between behavioral complexity and brain size. Big-brained primates invent new behaviors, copy the innovations of others, and use tools more than small-brained primates do. Selection for high intelligence almost certainly derives from multiple sources, but recent studies imply that selection for the intelligence to cope with complex social environments in monkeys and apes was followed by more restricted selection for cultural intelligence in the great apes, capuchins, and macaques.
Why, then, haven’t gorillas invented Facebook, or capuchins built spacecraft? To achieve such high levels of cognitive functioning requires not just cultural intelligence, but also cumulative culture, in which modifications accumulate over time. That demands transmission of information with a degree of accuracy of which only humans are capable. Indeed, small increases in the accuracy of social transmission lead to big increases in the diversity and longevity of culture, as well as to fads, fashions, and conformity.
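A back-of-envelope calculation shows why fidelity matters so much. If a trait survives each episode of transmission independently with probability f, its expected lifetime is 1/(1-f) transmission episodes, so pushing fidelity from 90% to 99% multiplies expected longevity tenfold. This is a simplified illustration of the qualitative claim, not the formal models the research cited here relies on.

# Simplified illustration: the expected persistence of a trait transmitted with
# fidelity f per generation is geometric with mean 1 / (1 - f).

def expected_lifetime(fidelity):
    """Mean number of generations a trait persists at the given fidelity."""
    return 1.0 / (1.0 - fidelity)

for f in (0.50, 0.90, 0.95, 0.99):
    print(f"fidelity {f:.2f} -> ~{expected_lifetime(f):.0f} generations")
# fidelity 0.50 -> ~2 generations
# fidelity 0.90 -> ~10 generations
# fidelity 0.95 -> ~20 generations
# fidelity 0.99 -> ~100 generations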
Our ancestors were able to achieve such high-fidelity information transmission not just because of language, but also because of teaching – a practice that is rare in nature, but universal among humans (once the subtle forms it takes are recognized). Mathematical analyses reveal that, while it is generally difficult for teaching to evolve, cumulative culture promotes teaching. This implies that teaching and cumulative culture co-evolved, producing a species that taught relatives across a broad range of circumstances.
It is in this context that language appeared. Evidence suggests that language originally evolved to reduce the costs, increase the accuracy, and expand the domains of teaching. That explanation accounts for many properties of language, including its uniqueness, power of generalization, and the fact that it is learned.
All of the elements that have underpinned the development of human cognitive abilities – encephalization (the evolutionary increase in the size of the brain), tool use, teaching, and language – have one key characteristic in common: the conditions that favored their evolution were created by cultural activities, through selective feedback. As theoretical, anthropological, and genetic studies all attest, a co-evolutionary dynamic – in which socially transmitted skills guided the natural selection that shaped human anatomy and cognition – has underpinned our evolution for at least 2.5 million years.
Our potent capacity for imitation, teaching, and language also encouraged unprecedented levels of cooperation among individuals, creating conditions that not only promoted longstanding cooperative mechanisms such as reciprocity and mutualism, but also generated new mechanisms. In the process, gene-culture co-evolution created a psychology – a motivation to teach, speak, imitate, emulate, and connect – that is entirely different from that of other animals.
Evolutionary analysis has shed light on the rise of the arts, too. Recent studies of the development of dance, for example, explain how humans move in time to music, synchronize their actions with others, and learn long sequences of movements.
Human culture sets us apart from the rest of the animal kingdom. Grasping its scientific basis enriches our understanding of our history – and why we became the species we are.
As with any other species, human populations are shaped by the usual forces of natural selection, like famine, disease or climate. A new force is now coming into focus. It is one with a surprising implication — that for the last 20,000 years or so, people have inadvertently been shaping their own evolution.
The force is human culture, broadly defined as any learned behavior, including technology. The evidence of its activity is the more surprising because culture has long seemed to play just the opposite role. Biologists have seen it as a shield that protects people from the full force of other selective pressures, since clothes and shelter dull the bite of cold and farming helps build surpluses to ride out famine.
Because of this buffering action, culture was thought to have blunted the rate of human evolution, or even brought it to a halt, in the distant past. Many biologists are now seeing the role of culture in a quite different light.
Although it does shield people from other forces, culture itself seems to be a powerful force of natural selection. People adapt genetically to sustained cultural changes, like new diets. And this interaction works more quickly than other selective forces, “leading some practitioners to argue that gene-culture co-evolution could be the dominant mode of human evolution,” Kevin N. Laland and colleagues wrote in the February issue of Nature Reviews Genetics. Dr. Laland is an evolutionary biologist at the University of St. Andrews in Scotland.
The idea that genes and culture co-evolve has been around for several decades but has started to win converts only recently. Two leading proponents, Robert Boyd of the University of California, Los Angeles, and Peter J. Richerson of the University of California, Davis, have argued for years that genes and culture were intertwined in shaping human evolution. “It wasn’t like we were despised, just kind of ignored,” Dr. Boyd said. But in the last few years, references by other scientists to their writings have “gone up hugely,” he said.
The best evidence available to Dr. Boyd and Dr. Richerson for culture being a selective force was the lactose tolerance found in many northern Europeans. Most people switch off the gene that digests the lactose in milk shortly after they are weaned, but in northern Europeans — the descendants of an ancient cattle-rearing culture that emerged in the region some 6,000 years ago — the gene is kept switched on in adulthood.
Lactose tolerance is now well recognized as a case in which a cultural practice — drinking raw milk — has caused an evolutionary change in the human genome. Presumably the extra nutrition was of such great advantage that adults able to digest milk left more surviving offspring, and the genetic change swept through the population.
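The logic of such a sweep can be sketched with the standard single-locus selection recursion, in which carriers of the favored variant leave (1 + s) times as many surviving offspring each generation. The selection coefficient below is a made-up round number, not an estimate for the real lactase-persistence allele.

# Textbook genic-selection recursion; s = 0.05 is an illustrative value only.

def next_frequency(p, s):
    """One generation of selection: carriers of the favored allele leave
    (1 + s) times as many surviving offspring as non-carriers."""
    mean_fitness = 1 + p * s
    return p * (1 + s) / mean_fitness

p, s = 0.01, 0.05  # start rare, with a 5% reproductive advantage
generations = 0
while p < 0.9:
    p = next_frequency(p, s)
    generations += 1
print(f"frequency exceeds 90% after {generations} generations")  # roughly 140

At 25 to 30 years per generation, that is on the order of a few thousand years, which is why a practice adopted some 6,000 years ago can already have left a strong genetic signature.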
This instance of gene-culture interaction turns out to be far from unique. In the last few years, biologists have been able to scan the whole human genome for the signatures of genes undergoing selection. Such a signature is formed when one version of a gene becomes more common than other versions because its owners are leaving more surviving offspring. From the evidence of the scans, up to 10 percent of the genome — some 2,000 genes — shows signs of being under selective pressure.
These pressures are all recent, in evolutionary terms — most probably dating from around 10,000 to 20,000 years ago, in the view of Mark Stoneking, a geneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Biologists can infer the reason for these selective forces from the kinds of genes that are tagged by the genome scans. The roles of most of the 20,000 or so genes in the human genome are still poorly understood, but all can be assigned to broad categories of likely function depending on the physical structure of the protein they specify.
By this criterion, many of the genes under selection seem to be responding to conventional pressures. Some are involved in the immune system, and presumably became more common because of the protection they provided against disease. Genes that cause paler skin in Europeans or Asians are probably a response to geography and climate.
But other genes seem to have been favored because of cultural changes. These include many genes involved in diet and metabolism and presumably reflect the major shift in diet that occurred in the transition from foraging to agriculture that started about 10,000 years ago.
Amylase is an enzyme in the saliva that breaks down starch. People who live in agrarian societies eat more starch and have extra copies of the amylase gene compared with people who live in societies that depend on hunting or fishing. Genetic changes that enable lactose tolerance have been detected not just in Europeans but also in three African pastoral societies. In each of the four cases, a different mutation is involved, but all have the same result — that of preventing the lactose-digesting gene from being switched off after weaning.
Many genes for taste and smell show signs of selective pressure, perhaps reflecting the change in foodstuffs as people moved from nomadic to sedentary existence. Another group under pressure is that of genes that affect the growth of bone. These could reflect the declining weight of the human skeleton that seems to have accompanied the switch to settled life, which started some 15,000 years ago.
A third group of selected genes affects brain function. The role of these genes is unknown, but they could have changed in response to the social transition as people moved from small hunter-gatherer groups a hundred strong to villages and towns inhabited by several thousand, Dr. Laland said. “It’s highly plausible that some of these changes are a response to aggregation, to living in larger communities,” he said.
Though the genome scans certainly suggest that many human genes have been shaped by cultural forces, the tests for selection are purely statistical, being based on measures of whether a gene has become more common. To verify that a gene has indeed been under selection, biologists need to perform other tests, like comparing the selected and unselected forms of the gene to see how they differ.
Dr. Stoneking and his colleagues have done this with three genes that score high in statistical tests of selection. One of the genes they looked at, called the EDAR gene, is known to be involved in controlling the growth of hair. A variant form of the EDAR gene is very common in East Asians and Native Americans, and is probably the reason that these populations have thicker hair than Europeans or Africans.
Still, it is not obvious why this variant of the EDAR gene was favored. Possibly thicker hair was in itself an advantage, retaining heat in Siberian climates. Or the trait could have become common through sexual selection, because people found it attractive in their partners.
A third possibility comes from the fact that the gene works by activating a gene regulator that controls the immune system as well as hair growth. So the gene could have been favored because it conferred protection against some disease, with thicker hair being swept along as a side effect. Or all three factors could have been at work. “It’s one of the cases we know most about, and yet there’s a lot we don’t know,” Dr. Stoneking said.
The case of the EDAR gene shows how cautious biologists have to be in interpreting the signals of selection seen in the genome scans. But it also points to the potential of the selective signals for bringing to light salient events in human prehistory as modern humans dispersed from the ancestral homeland in northeast Africa and adapted to novel environments. “That’s the ultimate goal,” Dr. Stoneking said. “I come from the anthropological perspective, and we want to know what the story is.”
With archaic humans, culture changed very slowly. The style of stone tools called the Oldowan appeared 2.5 million years ago and stayed unchanged for more than a million years. The Acheulean stone tool kit that succeeded it lasted for 1.5 million years. But among behaviorally modern humans, those of the last 50,000 years, the tempo of cultural change has been far brisker. This raises the possibility that human evolution has been accelerating in the recent past under the impact of rapid shifts in culture.
Some biologists think this is a possibility, though one that awaits proof. The genome scans that test for selection have severe limitations. They cannot see the signatures of ancient selection, which get washed out by new mutations, so there is no base line by which to judge whether recent natural selection has been greater than in earlier times. There are also likely to be many false positives among the genes that seem favored.
But the scans also find it hard to detect weakly selected genes, so they may be picking up just a small fraction of the recent stresses on the genome. Mathematical models of gene-culture interaction suggest that this form of natural selection can be particularly rapid. Culture has become a force of natural selection, and if it should prove to be a major one, then human evolution may be accelerating as people adapt to pressures of their own creation.
People often speak of culture in the plural (“cultures”) because they believe that there are many different cultures in the world. At one level, this is of course true; the American culture is different from the Chinese culture, both of which are different from the Egyptian culture, and so on. However, all the cultural differences are on the surface; deep down, at the most fundamental level, all human cultures are essentially the same.
Yes, culture and socialization do matter for human behavior, to a certain extent. But the grave error of traditional sociologists and others under the influence of the Standard Social Science Model (a term attributable to the co-founders of evolutionary psychology, Leda Cosmides and John Tooby) is to believe that human behavior is infinitely malleable, capable of being molded and shaped limitlessly in any way by cultural practices and socialization. Available evidence shows that this view is false. Human behavior, while malleable, is not infinitely malleable by culture, because culture is not infinitely variable. In fact, despite all the surface and minor differences, evolutionary psychologists have shown that all human cultures are essentially the same.
To use a famous metaphor, coined by the cultural anthropologist Marvin Harris, it is true that, at the surface level, people in some societies consume beef as food and worship pigs as sacred religious objects, while those in others consume pork as food and worship cows as sacred religious objects. So there is cultural variety at this concrete level. However, both beef and pork are animal proteins (as are dogs, whales, and monkeys), and both pigs and cows are animate entities (as are Buddha, Allah, and Jesus). And people in every human society consume animal proteins and worship animate entities (as I explained in an earlier post). At this abstract level, there are no exceptions, and all human cultures are the same. There is no infinite variability in human culture, in the sense that there are no cultures in which people do not consume animal protein or worship animate entities.
To use another example, it is true that languages spoken in different cultures appear completely different, as anyone who ever tried to learn a foreign language knows. English is completely different from Chinese, neither of which is anything like Arabic. Despite these surface differences, however, all natural human languages share what the linguist Noam Chomsky calls the “deep structure” of grammar. In this sense, English and Chinese are essentially the same, in the sense that beef and pork are essentially the same.
Any developmentally normal child can grow up to speak any natural human language. Regardless of what language their genetic parents spoke, all developmentally normal children are capable of growing up to be native speakers of English, Chinese, Arabic, or any other natural human language. In fact, when a group of children grow up together with no adults to teach them a language, they will invent their own natural human language complete with grammar. This does not mean, however, that the human capacity for language is infinitely malleable. Human children cannot grow up to speak a non-natural language like FORTRAN or symbolic logic, despite the fact that these are far more logical and easier to learn than any natural language (no irregular verbs, no exceptions to rules). Yes, a developmentally normal child can grow up to speak any language, as long as the language is a product of human evolution, not a recent invention of computer scientists or logicians.
Pierre van den Berghe, a pioneer sociobiologist at the University of Washington, puts it best when he says:
Certainly we are unique, but we are not unique in being unique. Every species is unique and evolved its uniqueness in adaptation to its environment. Culture is the uniquely human way of adapting, but culture too evolved biologically.
Despite all the surface differences, there is only one culture, because culture, like our body, is an adaptive product of human evolution. The human culture is a product of our genes, just like our hands and pancreas are.
Biologically, human beings are very weak and fragile; we do not have fangs to fight predators and catch prey or fur to protect us from extreme cold. Culture is the defense mechanism with which evolution equipped us to protect ourselves, so that we can inherit and then pass on our knowledge of manufacturing weapons (to fight predators and catch prey) or clothing and shelter (to protect us from extreme cold). We don’t need fangs or fur, because we have culture. And just like — despite some minor individual differences — all tigers have more or less the same fangs and all polar bears have more or less the same fur, all human societies have more or less the same culture. Fangs are a universal trait of all tigers; fur is a universal trait of all polar bears. So culture is a universal trait of all human societies. Yes, culture is a cultural universal.
Humans are not the only animals that create their own culture
Animals are able to create their own cultures just like humans, according to a study led by Professor Andrew Whiten at the University of St. Andrews. The research suggests that animals pick up cultural traditions from their parents.
“The all-pervading cultural nature of our species was long thought to define what it is to be human, separating us from the rest of the natural world and the evolutionary processes that shape it,” said Professor Whiten.
“Other species were thought to live by instinct and some ability to learn, but only humans had culture. Over recent decades, a rapidly growing body of research has increasingly revealed a very different picture.”
Professor Whiten argues that if culture is seen as traditions that are passed on by learning from others, culture is far from unique to humans. In this case, there is evidence of culture among a growing variety of animals, including mammals, fish, birds, and insects.
According to the study, examples of animal culture range from a group of chimpanzees that have a tradition of wearing a blade of grass in one ear to female fruit flies that copy the mate preferences of other females.
“Culture even pervades animals’ lifetimes, from infancy to adulthood. The young of many species may first learn much from their parents, but increasingly learn from the different skills of others (as we humans do), even coming to focus on those in their group who display the greatest expertise, for example in using tools,” explained Professor Whiten.
“Learning from others continues to be important into adulthood. Monkeys and apes dispersing as adults into new groups, avoiding inbreeding, have been found to adopt local habits different to those back home – ‘when in Rome, do as the Romans do’ appears a useful rule of what can be learned from the locals in an environment new to these animal immigrants.”
Professor Whiten said it must be recognized that culture is not a uniquely human capacity that emerged out of the blue, but that it has deep evolutionary roots.
“Recognition of such practical implications of the reach of animal culture, along with implications for the broad range of scientific disciplines, should help assure a bright future for researchers in this field,” said Professor Whiten.
“A new generation of scientists will now surely pursue the wider reaches of culture in animals’ lives, aided by the substantial armoury of methodological advances developed over the past two decades.”
Whales and dolphins have rich ‘human-like’ cultures and societies
Whales and dolphins (Cetaceans) live in tightly-knit social groups, have complex relationships, talk to each other and even have regional dialects – much like human societies.
A major new study, published today in Nature Ecology & Evolution (Monday 16th October), has linked the complexity of Cetacean culture and behaviour to the size of their brains.
The research was a collaboration between scientists at The University of Manchester, The University of British Columbia, Canada, The London School of Economics and Political Science (LSE) and Stanford University, United States.
The study is the first of its kind to create a large dataset of cetacean brain size and social behaviours. The team compiled information on 90 different species of dolphins, whales, and porpoises. It found overwhelming evidence that Cetaceans have sophisticated social and cooperative behaviour traits, similar to many found in human culture. The study demonstrates that these societal and cultural characteristics are linked with brain size and brain expansion – also known as encephalisation.
The long list of behavioural similarities includes many traits shared with humans and other primates such as:
complex alliance relationships – working together for mutual benefit
social transfer of hunting techniques – teaching how to hunt and using tools
cooperative hunting
complex vocalizations, including regional group dialects – ‘talking’ to each other
vocal mimicry and ‘signature whistles’ unique to individuals – using ‘name’ recognition
interspecific cooperation with humans and other species – working with different species
alloparenting – looking after youngsters that aren’t their own
social play
Dr Susanne Shultz, an evolutionary biologist in Manchester’s School of Earth and Environmental Sciences, said: “As humans, our ability to socially interact and cultivate relationships has allowed us to colonise almost every ecosystem and environment on the planet. We know whales and dolphins also have exceptionally large and anatomically sophisticated brains and, therefore, have created a similar marine-based culture.
“That means the apparent co-evolution of brains, social structure, and behavioural richness of marine mammals provides a unique and striking parallel to the large brains and hyper-sociality of humans and other primates on land. Unfortunately, they won’t ever mimic our great metropolises and technologies because they didn’t evolve opposable thumbs.”
The team used the dataset to test the social brain hypothesis (SBH) and cultural brain hypothesis (CBH). The SBH and CBH are evolutionary theories originally developed to explain large brains in primates and land mammals.
They argue that large brains are an evolutionary response to complex and information-rich social environments. However, this is the first time these hypotheses have been applied to ‘intelligent’ marine mammals on such a large scale.
Dr Michael Muthukrishna, Assistant Professor of Economic Psychology at LSE, added: “This research isn’t just about looking at the intelligence of whales and dolphins, it also has important anthropological ramifications as well. In order to move toward a more general theory of human behaviour, we need to understand what makes humans so different from other animals. And to do this, we need a control group. Compared to primates, cetaceans are a more “alien” control group.”
Dr Kieran Fox, a neuroscientist at Stanford University, added: “Cetaceans have many complex social behaviours that are similar to humans and other primates. They, however, have different brain structures from us, leading some researchers to argue that whales and dolphins could not achieve higher cognitive and social skills. I think our research shows that this is clearly not the case. Instead, a new question emerges: How can very diverse patterns of brain structure in very different species nonetheless give rise to highly similar cognitive and social behaviours?”
Chimpanzees and humans share a surprising number of similarities. For starters, chimpanzees and people share nearly identical DNA. More recently, another similarity has been discovered: the presence of culture in chimpanzees.
Culture has historically been regarded as a distinguishing feature of what makes humans unique. Culture is defined as “the integrated pattern of human knowledge, belief, and behavior that depends upon the capacity for learning and transmitting knowledge to succeeding generations.” However, scientists have discovered evidence of culture in different animal species. The strongest and most diverse evidence of culture in animals has been found in chimpanzees.
Jane Goodall’s long-term research on the chimpanzees of Gombe contributed to a comprehensive study that identified almost 40 different behaviour patterns in chimpanzees that are an indication of significant cultural variation. These behaviours include differences in tool use, aimed throwing, nest building, grooming, rain dances, and courtship rituals, and were found to be customary in some communities while absent in others. Researchers ruled out the possibility of the natural environment as influencing such differences. Instead the variation in behaviour was traced to multi-generational social learning and customs — in other words, culture.
Even chimpanzee populations with the same tools at their disposal show different behavioural patterns that stem from multi-generational social learning – lessons that a mother may pass on to her offspring that she learned from her mother. For example, in Uganda, chimpanzees in the Kibale forest use sticks to extract honey from a log, while chimpanzees from the Budongo forest use chewed leaves as sponges to collect honey from logs. The difference in tools used by chimpanzees in different locations is significant as it demonstrates the extended impact of social learning and how it can shape populations through their social norms over multiple generations. This gives us some insight into how our cultures may have begun millions of years ago.
A recent study found a significant correlation between the number of female chimpanzees within a group and the number of cultural traits, suggesting that female chimpanzees play an important role in transmitting and maintaining cultural diversity. Infant chimpanzees spend approximately eight years in close proximity to their mothers, suggesting that much of their learned behaviour is the result of the mother’s influence.
More research and long-term studies on chimpanzees are important for identifying new cultural traits and how these cultures evolve over time. Many believe that chimpanzees have not evolved culture on the scale of human beings due to their lack of advanced language. Continued research may produce new findings that not only change how we think of culture, but also how it has been used to define us as superior beings of the animal kingdom.
Writing is the physical manifestation of a spoken language. It is thought that human beings developed language c. 35,000 BCE as evidenced by cave paintings from the period of the Cro-Magnon Man (c. 50,000-30,000 BCE) which appear to express concepts concerning daily life. These images suggest a language because, in some instances, they seem to tell a story (say, of a hunting expedition in which specific events occurred) rather than being simply pictures of animals and people.
Written language, however, does not emerge until its invention in Sumer, southern Mesopotamia, c. 3500-3000 BCE. This early writing was called cuneiform and consisted of making specific marks in wet clay with a reed implement. The writing system of the Egyptians was already in use before the rise of the Early Dynastic Period (c. 3150 BCE) and is thought to have developed from Mesopotamian cuneiform (though this theory is disputed); it came to be known as hieroglyphics.
The phonetic writing systems of the Greeks («phonetic» from the Greek phonein – «to speak clearly»), and later the Romans, came from Phoenicia. The Phoenician writing system, though quite different from that of Mesopotamia, still owes its development to the Sumerians and their advances in the written word. Independently of the Near East or Europe, writing was developed in Mesoamerica by the Maya c. 250 CE with some evidence suggesting a date as early as 500 BCE and, also independently, by the Chinese.
Writing & History
Writing in China developed from divination rites using oracle bones c. 1200 BCE and appears to also have arisen independently as there is no evidence of cultural transference at this time between China and Mesopotamia. The ancient Chinese practice of divination involved etching marks on bones or shells which were then heated until they cracked. The cracks would then be interpreted by a Diviner. If that Diviner had etched ‘Next Tuesday it will rain’ and ‘Next Tuesday it will not rain’, the pattern of the cracks on the bone or shell would tell him which would be the case. In time, these etchings evolved into the Chinese script.
History is impossible without the written word as one would lack context in which to interpret physical evidence from the ancient past. Writing records the lives of a people and so is the first necessary step in the written history of a culture or civilization. A prime example of this problem is the difficulty scholars of the late 19th/early 20th centuries CE had in understanding the Maya Civilization, in that they could not read the glyphs of the Maya and so wrongly interpreted much of the physical evidence they excavated. The early explorers of the Maya sites, such as Stephens and Catherwood, believed they had found evidence of an ancient Egyptian civilization in Central America.
This same problem is evident in understanding the ancient Kingdom of Meroe (in modern-day Sudan), whose Meroitic script has yet to be deciphered, as does the so-called Linear A script of the ancient Minoan culture of Crete.
The Invention of Writing
The Sumerians first invented writing as a means of long-distance communication which was necessitated by trade. With the rise of the cities in Mesopotamia, and the need for resources which were lacking in the region, long-distance trade developed and, with it, the need to be able to communicate across the expanses between cities or regions.
The earliest form of writing was pictographs – symbols which represented objects – and served to aid in remembering such things as which parcels of grain had gone to which destination or how many sheep were needed for events like sacrifices in the temples. These pictographs were impressed onto wet clay which was then dried, and these became official records of commerce. As beer was a very popular beverage in ancient Mesopotamia, many of the earliest records extant have to do with the sale of beer. With pictographs, one could tell how many jars or vats of beer were involved in a transaction but not necessarily what that transaction meant.
In order to express concepts more complex than financial transactions or lists of items, a more elaborate writing system was required, and this was developed in the Sumerian city of Uruk c. 3200 BCE. Pictograms, though still in use, gave way to phonograms – symbols which represented sounds – and those sounds were the spoken language of the people of Sumer. With phonograms, one could more easily convey precise meaning and so, in an example involving two sheep and the temple of Inanna, one could now make clear whether the sheep were going to or coming from the temple, whether they were living or dead, and what role they played in the life of the temple. Previously, one had only static images in pictographs showing objects like sheep and temples. With the development of phonograms one had a dynamic means of conveying motion to or from a location.
This new means of communication allowed scribes to record the events of their times as well as their religious beliefs and, in time, to create an art form which was not possible before the written word: literature. The first writer in history known by name is the Mesopotamian priestess Enheduanna (2285-2250 BCE), daughter of Sargon of Akkad, who wrote her hymns to the goddess Inanna and signed them with her name and seal.
The so-called Matter of Aratta, four poems dealing with King Enmerkar of Uruk and his son Lugalbanda, were probably composed between 2112-2004 BCE (though only written down between 2017-1763 BCE). In the first of them, Enmerkar and The Lord of Aratta, it is explained that writing developed because the messenger of King Enmerkar, going back and forth between him and the King of the city of Aratta, eventually had too much to remember and so Enmerkar had the idea to write his messages down; and so writing was born.
The Epic of Gilgamesh, considered the first epic tale in the world and among the oldest extant literature, was composed at some point earlier than c. 2150 BCE when it was written down and deals with the great king of Uruk (and descendant of Enmerkar and Lugalbanda) Gilgamesh and his quest for the meaning of life. The myths of the people of Mesopotamia, the stories of their gods and heroes, their history, their methods of building, of burying their dead, of celebrating feast days, were now all able to be recorded for posterity. Writing made history possible because now events could be recorded and later read by any literate individual instead of relying on a community’s storyteller to remember and recite past events.
So important was writing to the Mesopotamians that, under the Assyrian King Ashurbanipal (r. 668-627 BCE), over 30,000 clay tablet books were collected in the library of his capital at Nineveh. Ashurbanipal was hoping to preserve the heritage, culture, and history of the region and understood clearly the importance of the written word in achieving this end. Among the many books in his library, Ashurbanipal included works of literature, such as the tale of Gilgamesh or the story of Etana, because he realized that literature articulates not just the story of a certain people, but of all people.
The role of the poet in preserving heroic legends would become an important one in cultures throughout the ancient world. The Mesopotamian scribe Shin-Leqi-Unninni (wrote 1300-1000 BCE) would help preserve and transmit The Epic of Gilgamesh. Homer (c. 800 BCE) would do the same for the Greeks and Virgil (70-19 BCE) for the Romans. The Indian epic Mahabharata (written down c. 400 BCE) preserves the oral legends of that region in the same way the tales and legends of Scotland and Ireland do. All of these works, and those which came after them, were only made possible through the advent of writing.
The early cuneiform writers established a system which would completely change the nature of the world in which they lived. The past, and the stories of the people, could now be preserved through writing. The Phoenicians’ contribution of the alphabet made writing easier and more accessible to other cultures, but the basic system of putting symbols down on paper to represent words and concepts began much earlier.
Early writing systems, imported to other cultures, evolved into the written language of those cultures, so that Greek and Latin would serve as the basis for European script in the same way that the Semitic Aramaic script would provide the basis for Hebrew, Arabic, and possibly Sanskrit. The materials of writers have evolved as well, from the cut reeds with which early Mesopotamian scribes marked the clay tablets of cuneiform to the reed pens and papyrus of the Egyptians, the parchment of the scrolls of the Greeks and Romans, the calligraphy of the Chinese, on through the ages to the present day of computerized composition and the use of processed paper.
In whatever age, since its inception, writing has served to communicate the thoughts and feelings of the individual and of that person’s culture, their collective history, and their experiences with the human condition, and to preserve those experiences for future generations.
Most writing systems that have been invented through the ages took inspiration from another writing system: the Latin alphabet was inspired by the Greek alphabet; the Greek alphabet was inspired by the Phoenician abjad; the Phoenician abjad was inspired by Egyptian hieroglyphs. In another line of transmission, the Phoenician abjad (which, with the exception of the Chinese script, is the ancestor of all writing systems in use today) also inspired the Old Hebrew script (ca. 1000 BCE), which inspired the Aramaic script, which inspired the Syriac script (ca. 500 CE), which inspired the Sogdian script, which inspired the Uighur script (ca. 800 CE), which inspired the Mongolian script (1200 CE).
The details of most of these relationships of inspiration and imitation are lost in history and must be credited to anonymous traders, missionaries, or soldiers. Individual inventors of a writing system are rare exceptions, such as King Sejong, who invented the Korean script. King Sejong took inspiration from the Chinese script.
Creating a new writing system for a language by drawing on an existing model from another language, as King Sejong did for Korean, is undoubtedly an enormous achievement. However, it pales in comparison to the achievement of those inventors who created writing from scratch, at a time when writing did not exist anywhere else in their known world.
Why was writing invented?
Living in a highly literate society, it is tempting to imagine that those first inventors wanted to write down stories and transmit them to posterity. Unfortunately, that assumption would be mistaken. The transmission of stories worked very well orally. Our ancestors had much better memories than we have (how literacy has affected our brains is another story), as is evidenced by the great epics or the extensive Aboriginal Dreamtime stories that were transmitted orally over thousands of years.
This means that in a preliterate society no one had any need to write down the knowledge that was encoded in stories, myths, legends, or genealogies. And we can be sure that no one just thought one day, “Hey, wouldn’t it be cool if we could write down spoken language?”
Writing is a technology that emerged together with urbanization. The first city states constituted a new form of social organization that created specific problems of record keeping: how to account for the surplus created by agriculture and trade, and the activities it resulted in. As humans founded city states and empires, practical problems such as these arose: How much arable land is there? How many heads of cattle can be kept on a particular plot of land? How much tax should be extracted from a farming household of a particular composition? How can we be sure that Farmer So-and-so has already paid his taxes and does not just claim to have paid? How many slaves need to be captured to build a new temple? How many soldiers need to be kept in the army to protect the city, and what provisions and equipment will they need to invade the next city down the river and incorporate it into one’s kingdom?
These are not exactly pretty questions to have inspired the invention of writing! Writing was not invented for some lofty intellectual pursuits but as a technology of power. Writing was invented as a means of record keeping. It is an information technology that emerged in the domains of state administration and bureaucracy, trade and commerce, and religion.
Early writing had little to do with language and everything to do with keeping a quantitative record of something. Think of it this way: our writing-inventing ancestors needed spreadsheets. It was only over time that these “spreadsheets” became writing: a visual form of language associated with a particular spoken language.
Who invented writing?
In fact, not all “spreadsheet systems” became fully-fledged writing systems. So, who invented writing? The answer you’re probably familiar with is: the Sumerians of ancient Mesopotamia. That’s true but it’s not the whole story because writing was invented multiple times, in response to social developments similar to those I outlined above.
To the best of our knowledge, writing was invented independently at least three times: Sumerian cuneiform in Mesopotamia (ca. 3400 BCE), Chinese characters in China (ca. 1200 BCE) and Mayan glyphs in Mesoamerica (ca. 300 BCE). Of these, only the Chinese script is an unbroken living tradition.
I’m saying “at least three times” because it may well have been more often. Our knowledge is limited in three ways.
First, the archaeological record is incomplete: only the most durable early writing (pressed in clay or chiseled in stone) has survived, while writing on less durable materials (drawn on paper, vellum or bark in natural colors, scratched in bone) has disintegrated, leaving at best accidental fragments.
Second, the relationship between different writing systems is unclear. For instance, there is debate over whether Egyptian hieroglyphs (the earliest of which date back to ca. 3250 BCE) constitute an independent invention or were inspired by Sumerian cuneiform. Similar uncertainties surround the Indus Valley script (ca. 2600 BCE) and Linear B from the island of Crete in Greece (ca. 1450 BCE).
Third, the history of writing has largely been written by Europeans and is embedded in colonial epistemologies. This limits our knowledge in various ways.
These limitations are well illustrated by our scant knowledge of Mayan writing. To begin with, very little research effort is dedicated to that striking writing system, which only survives in a small number of stone inscriptions and four book manuscripts. This small number is not only due to natural degradation but is the result of active destruction by the Spanish colonizers. “We burned them all”, as Bishop Diego de Landa reported in 1566. Not only were the products of Mayan writing destroyed, but its transmission was suppressed, and eventually knowledge of Mayan writing disappeared.
We travel back to the ancient Middle East, to look at why, over 5,000 years ago, we first came to write.
While not all human cultures have needed to develop writing, we appear to have been using signs to communicate with each other, or simply to help remember things, for tens of thousands of years. In ancient times we find engraved or painted marks on the walls of human dwellings, portable objects and mnemonic devices. And today, we engage in similar mark-making.
People developed writing to communicate across time and space, carrying it with them as they traded, migrated and conquered. From its first uses for counting and naming things and communicating beyond the grave, humans have altered and enriched writing to reflect their complicated needs and desires.
Keeping count
It has been argued that writing in Mesopotamia (present-day Iraq) grew from a counting system of clay tokens used to record transactions of goods. The goods represented were indicated by the shape of the tokens: spheres, cones and discs stood for measurements of grain, while cylinders stood for livestock. This counting system was used from 7500 BC onwards across the Fertile Crescent, from the Mediterranean coast down to the Persian Gulf.[1]
Around 3350 BC we find simple tokens supplemented by more complex ones with incised marks inside a temple complex of Inanna (the goddess of love), situated in Uruk, one of the most important cities in Mesopotamia.
Tokens were sometimes stored in clay ‘envelope’ balls: the tokens inside were impressed into the damp clay of the envelope so that the contents might remain known even when the ball was sealed. Eventually, the tokens were replaced by the signs made by their impressions onto the clay balls, or tablets.
However, whereas simple tokens could be pressed into the surface and easily recognised, complex tokens with their incised patterns could not be so clearly distinguished. So their shape and markings were drawn directly onto the surface of the clay ball or tablet with a cut reed.
Naming and claiming
From around 2900 BC onwards, temple scribes in Mesopotamia appear to have begun recording the names of individuals featured in their transactions. Compared to simple account-keeping of things coming and going, registering names presented a new challenge. Some of the names would have contained complicated sounds that were not represented by their current writing system.
In order to overcome this, they extended the use of some of their signs from their basic semantic value to represent their sound value too. This principle could work especially well in Sumerian (the language spoken in Sumer, Southern Mesopotamia) as it contained an unusually high number of homophones, words that sound the same but have different meanings.
Using a sign such as a circle that originally stood for an object like the sun, to stand in addition for the word ‘son’, is a concept that we call the rebus principle. Rebus in Latin means ‘by things’ so the meaning here is that sounds can now be represented by pictograms or pictures of things.
If we applied the principle today to a surname like Sontag, we could write the name using a circular disc and then add a drawing of a label for ‘tag’, and we could pronounce the whole word by ‘reading’ just two symbols. It was a kind of visual game of punning, but it represented an important cognitive flip.
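To make this cognitive flip concrete, here is a minimal illustrative sketch in Python (the sign inventory and its sound values are invented for the example, not an attested script): each picture sign is read purely for the sound of the word it depicts, and chaining those sounds spells an unrelated name.

```python
# Illustrative sketch of the rebus principle: picture signs are reused
# for their sound values, so unrelated words and names can be "spelled".
# The sign inventory below is invented for this example, not an attested script.
pictograms = {
    "SUN_DISC": "son",   # a circle drawn for the sun, read for its sound
    "LABEL": "tag",      # a drawing of a label, read as 'tag'
}

def spell(signs):
    """Read a sequence of picture signs purely for their sound values."""
    return "".join(pictograms[s] for s in signs)

print(spell(["SUN_DISC", "LABEL"]))  # -> "sontag"
```

Read this way, the signs no longer point at the sun or at a label at all; they point at sounds, which is exactly the shift the rebus principle describes.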
Speaking beyond the grave
Evidence from the royal tombs of Ur dating from between 2700 and 2600 BC shows that writing was extended to a new use, with phonetic signs now being engraved on objects: precious golden bowls, statues and the grave goods of the dead. This new context for inscriptions brought other factors into play.
Sumerians believed in an afterlife, but it was a grim, dark and dusty place. The worst fate was to be forgotten by those alive in the world above. There were monthly ceremonies where the dead were remembered, fed with offerings and most significantly, their names were spoken aloud. This pronouncing of the name of the deceased saved them from oblivion; it gave them a satisfactory afterlife.
The fact that a king and queen were now sent into that afterlife with bowls, lamps and lapis lazuli seals that had the phonetic symbols for their names engraved on them in perpetuity meant that funerary goods had become eloquent: they spoke their owners’ names. Perhaps they could accomplish even more than that, though?
Between 2600 and 2500 BC funerary statues inscribed with their owners’ names also began to include more extended phonetic writing including short prayers and titles. It has been suggested that it was in the context of funerary rituals – where sound and spoken words carry special weight – that writing first became really important as a vehicle for recording speech and not just a system of marks that signified certain objects or numbers.[1]
The script continued to evolve. From around 2900 BC, pictograms had begun to become more stylised, leading to the cuneiform writing system.
From Mesopotamia to the Americas, discover how different regions around the world adopted writing at different times and for different reasons.
Full writing systems appear to have been invented independently at least four times in human history: first in Mesopotamia (present-day Iraq), where cuneiform was used between 3400 and 3300 BC, and shortly afterwards in Egypt at around 3200 BC. By 1300 BC we have evidence of a fully operational writing system in late Shang-dynasty China. Sometime between 900 and 600 BC writing also appears in the cultures of Mesoamerica.
There are also several places such as the Indus River valley and Rapa Nui (Easter Island) where writing may have been invented but it remains undeciphered.
Although these dates suggest that writing could have spread out from one central point of origin, there is little evidence of any links between these systems, with each possessing unique qualities.
Mesopotamian origins
Scholars generally agree that the earliest form of writing appeared almost 5,500 years ago in Mesopotamia (present-day Iraq). Early pictorial signs were gradually substituted by a complex system of characters representing the sounds of Sumerian (the language of Sumer in Southern Mesopotamia) and other languages.
From 2900 BC, these began to be impressed in wet clay with a reed stylus, making wedge-shaped marks which are now known as cuneiform.
The process of writing cuneiform stabilised over the next 600 years. Curves were eliminated, signs simplified and the direct connection between the look of pictograms and their original object of reference was lost.
Sometime during this same period, the symbols – which were initially read from top to bottom – came to be read from left to right in horizontal lines (vertical alignments were kept for more traditional pronouncements). In keeping with this, the symbols were also realigned, rotated 90 degrees anti-clockwise.
Eventually, in 2340 BC, Sumer fell to the armies of Sargon, King of the Akkadians, a northern Semitic people who had previously co-existed with the Sumerians. By this time, cuneiform had, for several centuries, been used bilingually to write Akkadian too. Sargon, the latest in a line of expansive Akkadian leaders, built an empire that ran from present-day Lebanon down to ‘the nether sea’ (the Persian Gulf). Eventually, as many as 15 languages would use cuneiform-inspired characters.
Sumerian lingered on as the language of learning until at least 200 BC. Cuneiform, the system invented to record it, however, outlived it by almost three centuries: it lasted as a writing system for other languages well into the Christian era. The last datable document in cuneiform is an astronomical text from 75 AD.
Egypt
New discoveries have pushed back the date for writing in Egypt close to that of Mesopotamia. Discoveries of large-scale incised ceremonial scenes at the rock art site of El-Khawy in Egypt date to around 3250 BC. They show features similar to early hieroglyphic forms. Some of these rock-carved signs are nearly half a metre in height.
From 3200 BC onwards Egyptian hieroglyphs appeared on small ivory tablets used as labels for grave goods in the tomb of the pre-dynastic King Scorpion at Abydos and on ceremonial surfaces used for grinding cosmetics, such as the Narmer Palette.
Writing in ink using reed brushes and pens is first found in Egypt. This ink writing came to be known in Greek as hieratic (‘priestly’ script), whilst the carved and painted letters we see on monuments are called hieroglyphs (‘sacred carvings’).
Carved and written characters are close in date. This suggests that from the earliest times, writing in Egypt had two functions: one was ceremonial, a display script (carved), the other was in the service of royal and temple administrations (written).
Within four centuries of the finds in King Scorpion’s tomb, hieroglyphs and Hieratic (a cursive writing system used for Ancient Egyptian) developed a full range of characters. This included:
24 uni-consonantal symbols (an ‘alphabet’ containing consonants only); phonetic components representing combinations of sounds; and determinative signs (signs with no phonetic value, used only to determine which of several alternative meanings of a word is meant in a particular context). It is from this Egyptian writing that an alphabet would first evolve, sometime from 1850 BC onwards.
China
The earliest examples of writing in China were found near present-day Anyang, on a tributary of the Yellow River, 500km south of Beijing. Here, the kings of the late Shang dynasty (1300–1050 BC) had founded their capital and carried out divination rituals using animal bones.
For centuries, fragments of bones had been found by farmers and sold for use in Chinese medicine as ‘dragon bones’. It was not until 1899 that politician and scholar Wang Yirong (1845–1900) recognised characters carved into the surface of some of these bones and realised their significance. As the earliest written records of Chinese civilization found to date, these inscriptions extended Chinese historical and linguistic knowledge by several centuries.
These ‘oracle’ bones (the shoulder blades of oxen and turtle plastrons) record questions that were posed to the royal ancestors about topics as diverse as crop rotation, warfare, childbirth and even toothache. To date, nearly 150,000 examples of such bones have been found, containing over 4,500 different symbols, many of which can be identified as the ancestors of Chinese characters still in use today.
But Shang readers would not be able to read present-day Chinese, and the majority of the characters on the oracle bones remain undeciphered. Even the characters that can be identified have evolved considerably in terms of their function and form. Not only did pictographic characters become gradually more abstract, but as the written vocabulary expanded, more compound forms developed.
Basic components were shared between characters to reflect similarities in pronunciation or meaning. In this way, since ancient times, Chinese characters have been able to represent both concepts and the sounds of spoken language to varying degrees.
The bones show a fully developed writing system which must have been formed many years – perhaps centuries – earlier, although earlier materials have not yet been discovered and may not have survived.
Mesoamerica
Recent discoveries have pushed the evidence for writing in this area – which runs from southern Mexico to Costa Rica – close to 900 BC.
The discoveries have also widened the range of cultures and languages that we know used writing from the Maya, Mixtecs and Aztecs to include the earlier Olmecs and Zapotecs.
There were two types of writing systems in pre-colonial Mesoamerica:
Open systems were means of recording texts that were not linked to the grammatical and sound structures of specific languages. They functioned as mnemonic devices, guiding readers through the narratives of texts without relying on the linguistic background of the given audience. These were common among the Aztecs and other Mexica communities of central Mexico.
Closed systems were tied to the sound and grammatical structures of specific languages. These were targeted at particular linguistic communities and functioned similarly to the writing we know today. Examples of these closed systems can be found among the Maya.
The position of scribe was of high status. Maya artists were often younger sons of the royal family. The Keepers of the Holy Books, the highest scribal office, acted as librarians, historians, genealogists, tribute recorders, marriage arrangers, masters of ceremonies, and astronomers.
Just four Maya books survive from the pre-colonial period and fewer than 20 from the entire region. These codices are painted onto deer skin and tree bark, the writing surface coated (as were many of the buildings) with a polished lime paste or gesso.
Indus River valley (Pakistan and northwest India)
In the Indus River valley of Pakistan and northwest India, symbols have been found on objects that may be writing. The society that used these symbols was the culmination of a long history of settlement in the Indus region going back to at least 7000 BC. A high urban culture flourished for 700 years, between 2600 and 1900 BC, at which point the cities declined.
We have about 5,000 known inscribed artefacts, and although the longest inscription consists of 26 symbols, most are just three or four signs long.
The 400 unique symbols that have been identified are too low in number for a viable logographic word-based writing system. This number of characters is similar to that found in pre-dynastic Egyptian hieroglyphs and early Sumerian script. Scholars have therefore suggested that like these two systems, the Indus River Valley script may contain a mixture of logographic and syllabic components.
Rapa Nui (Easter Island, Polynesia)
Around two dozen wooden tablets inscribed with glyphs were discovered on Rapa Nui in the 19th century. Rongorongo, a term the Rapa Nui themselves applied to these objects, was interpreted by missionaries at that time to mean ‘lines incised for chanting out’. But knowledge of how to use the tablets had already been lost by that time.
The characters reflect human, animal and plant motifs. There are 120 elementary (un-joined) glyphs, which have been used to write texts as long as 2,320 characters and as short as just two.
Whether rongorongo is purely a mnemonic device or a system of logographic and syllabic symbols remains an open question, as does its claim to be a unique sixth point of origin for a writing system.
Archaeological discoveries in ancient Mesopotamia (now mostly modern Iraq) show the initial power and purpose of writing, from administrative and legal functions to poetry and literature.
Mesopotamia was a region comprising many cultures over time speaking different languages. The earliest known writing was invented there around 3400 B.C. in an area called Sumer near the Persian Gulf. The development of a Sumerian script was influenced by local materials: clay for tablets and reeds for styluses (writing tools). At about the same time, or a little later, the Egyptians were inventing their own form of hieroglyphic writing.
Even after Sumerian died out as a spoken language around 2000 B.C., it survived as a scholarly language and script. Other peoples within and near Mesopotamia, from Turkey, Syria, and Egypt to Iran, adopted the later version of this script developed by the Akkadians (the first recognizable Semitic people), who succeeded the Sumerians as rulers of Mesopotamia. In Babylonia itself, the script survived for two more millennia until its demise around 70 C.E.
Writing began with pictographs (picture words) drawn into clay with a pointed tool. This early administrative tablet was used to record food rations for people, shown by a person’s head and bowl visible on the lower left side. Pictographs and numbers show amounts of grain allotted to cities and types of workers, including pig herders and groups associated with a religious festival.
Tablets like these helped local leaders organize, manage, and archive information. This tablet reflects bureaucratic accounting, but similar lists were used in the following centuries by individuals to keep track of personal property and business agreements.
From Pictures to Writing in Everyday Life
Writing evolved when someone decided to replace the pointed drawing tool with a triangular reed stylus. The reed could be pressed easily and quickly into clay to make wedges. At first, the wedges were grouped to make pictures, but slowly the groups evolved into more abstract signs and became the sophisticated script we call cuneiform (“wedge-shaped” in Latin). About one thousand signs represented the names of objects and also stood for words, syllables, and sounds (or parts of them).
Cuneiform records provide information about bureaucracy and authority, but they also document many fascinating aspects of daily life. Written texts reveal how individuals and families expressed their wishes, married and had children, did business, and worshipped. People wrote mainly on clay, but also on more expensive materials such as the golden plaque shown above.
In this clay marriage contract, which includes an oath to the chief god of Kish where the marriage would have taken place, a father gives his daughter to her new husband. In turn the husband pays a bride-price of silver to three men, perhaps her brothers. The document is enclosed in a clay envelope. Witnesses each rolled personal seals, inscribed cylinders like small rolling pins, across the left side of the envelope to impress a form of signature in relief.
Cylinder Seals as Signatures on Clay
To sign a clay document and sometimes to guarantee that it was officially closed, Mesopotamians used seals, mostly of durable and sometimes expensive materials. Many could be worn or pinned on like jewelry.
The cylinder seal above is inscribed with the name of a palace baker. He shows himself standing before an important seated divinity, being introduced by a lesser goddess. In the impression made by rolling the seal, you can see the text and first standing figure start to repeat on the right side.
Seals required special care. Image and text were reversed when pressed into clay, so on the seal a scribe and artist had to create mirror images and inscriptions. In addition, writing on hard materials required totally different techniques from writing directly on clay.
Who Wrote Cuneiform?
Professional writers of cuneiform were called “tablet writers”—scribes. In slow stages of schooling, they learned hundreds of cuneiform signs and memorized texts and templates in different languages. Most were men, but some women could become scribes.
Students’ interests and skills varied, and a proverb noted: “A disgraced scribe becomes a man of magical spells.” This was a pointed reminder that less-committed students might end up making an uncertain living writing common incantations. Working harder could lead to a prosperous life composing legal documents—or even writing correspondence for a royal court. Those who persevered could become scholars with knowledge of mathematics, medicine, religious ritual, divination, laws, and mythology, or even authors of literature.
This tablet is one of more than 20 similar tablets (nicknamed “Schooldays”) that present the life of a young student in a scribal school. The days were long, filled with copying and memorizing. Older scribes oversaw these efforts, while the school was led by a headmaster.
On this day the boy feels successful, but on the next, his teachers repeatedly beat him for infractions such as tardiness, talking, and poor handwriting. In the end, the boy’s father invites the headmaster to dinner and gives him gifts and money. Appeased (and bought off, although such payments may have been expected), the headmaster declares to the boy: “You have carried out well the school’s activities. You are a man of learning!”
Many people may have learned the basics of reading and writing, including royals. The first known author was Enheduanna, the daughter of Sargon, king of Akkad, the first king to conquer all of Mesopotamia. She was a priestess who composed religious poetry. Later, the Neo-Assyrian king Ashurbanipal praised his own literacy and scholarship. He is sometimes shown in royal art with a writing stylus stuck in his belt.
Although cuneiform endured for over three thousand years, as simpler alphabets became common the script was eventually used only for scholarly documents, and it faded away completely in the late first century A.D. Within a few centuries, all understanding of the once-dominant writing was lost for about 1,800 years.
While spoken or signed language is a more or less universal human competence that has been characteristic of the species from the beginning and that is commonly acquired by human beings without systematic instruction, writing is a technology of relatively recent history that must be taught to each generation of children. Historical accounts of the evolution of writing systems have until recently concentrated on a single aspect, increased efficiency, with the Greek invention of the alphabet being regarded as the culmination of a long historical evolution. This efficiency is a product of a limited and manageable set of graphs that can express the full range of meanings in a language.
The Polish American Assyriologist Ignace Gelb distinguished four stages in this evolution, beginning with picture writing, which expressed ideas directly; followed by word-based writing systems; then by sound-based syllabic writing systems, including unvocalized syllabaries or consonantal systems; and concluding with the Greek invention of the alphabet.
The invention of the alphabet is a major achievement of Western culture. It is also unique; the alphabet was invented only once, though it has been borrowed by many cultures. It is a model of analytic thinking, breaking down perceptible qualities like syllables into more basic constituents. And because it is capable of conveying subtle differences in meaning, it has come to be used for the expression of a great many of the functions served by speech. The alphabet requires little of the reader beyond familiarity with its orthography. It allows the reader to decipher words newly encountered and permits the invention of spellings for new patterns of sound, including proper names (a problem that is formidable for nonalphabetic systems). Finally, its explicitness permits readers to make a relatively sharp distinction between the tasks of deciphering and interpreting. Less explicit orthographies require the reader first to grasp the meaning of a passage as a whole in order to decide which of several possible word meanings a particular graphic string represents.
It must be remembered, however, that efficiency depends not only on the nature of the writing system but also on the functions required of it by its users, for orthographies are invented to serve particular cultural purposes. Furthermore, an orthography invented to satisfy one purpose may acquire new applications. For instance, writing systems invented to serve mnemonic purposes were subsequently elaborated and used for communicative and archival purposes. Orthographies were not invented as art forms, but, once invented, they could serve aesthetic functions.
Notions of explicitness of representation depend on the morphophonemic structure of the language. An alphabet was a notable advance for representing the Greek language but not necessarily for representing a Semitic language. Moreover, for languages such as Chinese and Japanese, which have simple syllabic structures and a great number of homophones, a writing system that depended on phonological structure, such as a syllabary or an alphabet, would be extremely inefficient. It is with such factors in mind that late 20th-century accounts of writing systems stressed how many different orthographies may function efficiently, given the particular language they are used to represent. Just as linguists have abandoned the notion of progressive evolution of languages, with some languages ranking as more primitive than others, so historians of writing have come to treat existing orthographies as appropriate to the languages they represent.
Nonetheless, all contemporary orthographies have a history of development, and there are many common features in these histories. It is unlikely that writing was invented only once and then borrowed by different cultural groups. While all Western writing systems may be traced back to the beginnings of symbol making in Sumer, there is no reason to believe that Asian writing systems were borrowed from the Sumerian form. Consequently, there are two quite separate histories of writing, that of the writing system developed by the Sumerians and that of the one developed by the Chinese.
The outline of the development of the Sumerian writing system has been worked out by paleographers. It has long been known that the earliest writing system in the world was Sumerian script, which in its later stages was known as cuneiform. The earliest stages of development are still a matter of much speculation based on fragmentary evidence. The French American archaeologist Denise Schmandt-Besserat, building on a hypothesis advanced by the Assyriologist Pierre Amiet of the Louvre, demonstrated a series of small steps leading from the use of tokens for simple bookkeeping purposes to the development of written tablets on which graphs of the script stand for morphemes of spoken Sumerian. Archaeologists have discovered in lower Mesopotamia (now southern Iraq) large numbers of small, distinctively shaped clay objects. These are thought to date back to as early as 8000 BCE, about the time that hunter-gatherer societies were giving way to an agricultural way of life. A greatly elaborated set of these clay shapes—some shaped like jars and some like various animals and occasionally inserted in clay envelopes—dates from 3500 BCE, about the time of the rise of cities. Some of the envelopes have markings that correspond to the clay shapes inside. Moreover, these markings are more or less similar to the shapes drawn on clay tablets that date back to about 3100 BCE and that are unambiguously related to the Sumerian language. These markings are thought to constitute a logographic form of writing consisting of some 1,200 different characters representing numerals, names, and such material objects as cloth and cow.
The theory advanced by Schmandt-Besserat to explain this transformation is that the clay shapes are tokens representing agricultural goods such as grain, sheep, and cattle and that they were used as a form of bookkeeping. The multiplication of types of tokens could correspond to the increase in the number of kinds of goods that were exchanged with the rise of urbanization in the 4th millennium BCE. Tokens placed in an envelope might have constituted a sort of “bill of lading” or a record of indebtedness. To serve as a reminder of the contents of the envelope so that every reader would not need to break open the envelope to read the contents, corresponding shapes were impressed upon the envelope. But if the content was marked on the envelope, there was no need to put the tokens in an envelope at all; the envelope could be flattened into a convenient surface and the shapes impressed on it. Now that there was no need for the tokens at all, their message was simply inscribed into the clay. These shapes, drawn in the wet clay with a reed stylus or a pointed stick, constituted the first writing.
The historical record is much more explicit after 3200 BCE and reveals clearly the stages involved in the evolution from a limited system of notation suitable for recording particular events into a full general-purpose orthography. Archaic Sumerian used mostly graphs representing numerals, names for objects, and names of persons. Graphs for numerals were geometric shapes, while those for objects were often stylized pictures of the things they represented. Yet the system was a genuine logographic writing system generally adequate to economic and administrative purposes. With the substitution of a blunt writing stylus for a pointed one, the symbols become less picturelike and more conventionalized. The writing system takes the name cuneiform from the shape of the strokes that form the symbols (from Latin cuneus, “wedge”).
The next major stage in the evolution of Sumerian writing was the adoption of the phonographic principle, the use of a sign to represent a common sound rather than a common meaning. For example, the graph representing “water” appears to have been used also to represent the locative suffix “in,” because the latter sounded the same as, or similar to, the word water. It is as if in English a person used the word ball to stand for a person named Bill on the grounds that it is easy to represent the ball with a circular graph while there is no obvious way to represent Bill, and the two words sound similar. The Sumerian script, however, remained primarily logographic and resorted to phonographic signs only when forced to, for representing unpicturable words and for distinguishing ambiguous graphs.
Sumerian script was adopted in the 3rd millennium BCE by the Akkadians, who greatly expanded the phonographic properties of the script. The Assyrians and the Babylonians, both speaking dialects of the Akkadian language, were responsible for most of the cuneiform writing in a form known today as Akkadian cuneiform.
Alphabetic systems
While cuneiform had many graphs that represented syllables, many syllables were not represented. The methods used for representing syllables that did not have distinctive graphs were quite unsystematic. The first writing system consistently based on the sound structure of a language was Linear B, a Mycenaean Greek orthography developed about 1400 BCE and deciphered in 1952 by Michael Ventris, an English architect and cryptographer. The script is strictly syllabic; each consonant-vowel pair is given a distinctive graph. As an example, a set of syllables that an alphabetic system would represent with the consonant p plus a vowel are all represented in Linear B by different graphs. Although the script is highly systematic, it provides a limited representation of the phonology of Mycenaean Greek. Greek contains many syllables that are not simple consonant-vowel combinations, and not all consonantal sounds are followed by vowels. Linear B is thus an incomplete script for representing the phonological structures of the spoken language. Hence, there are usually several ways of reading a series of Linear B graphs, and a correct reading depends upon the reader’s knowing what the text is about.
The final stage in the evolution of writing systems was the discovery of the alphabetic principle, the procedure of breaking the syllable into its constituent consonantal and vowel sounds. (See also alphabet.) According to the British linguist Geoffrey Sampson, “Most, and probably all, ‘alphabetic’ scripts derive from a single ancestor: the Semitic alphabet, created sometime in the 2nd millennium [BCE].” The Semitic script was invented by speakers of some Semitic language, possibly Phoenician, who lived in the northern part of the Fertile Crescent. Modern versions of Semitic script include the Hebrew script and the Arabic script. Their most prominent characteristic is that they have graphs for consonants but not for vowels.
The inventors of the Semitic orthography apparently took the acrophonic principle, that of representing sounds by pictures of things whose names begin with that sound, from Egyptian hieroglyphic, a form of writing not different in principle from Akkadian cuneiform. The hieroglyphic sign depicting waves of water, for example, represented the sound /n/, the first sound of the spoken word for water. By means of this principle a 22-graph system was constructed with a memorized order, beginning alef, bet, gimel, that was suitable for representing a full range of meanings. These graphs represented the consonants of the language, vowels remaining unrepresented. This fact has led some scholars, notably Gelb and Havelock, to claim that Semitic scripts are not true alphabets but rather unvocalized syllabaries. Other scholars, noting that the graphs represent consonants rather than syllables—for example, pa, pe, pi, po, and pu would all be represented by the same character—insist that the script is an alphabet. The controversy is circumvented by referring to Semitic scripts, following Sampson, as consonantal writing systems. While such a script would be greatly limited in explicitness or completeness for a language with complex syllable structure such as English, it is relatively complete for Semitic languages in which vowel differences are rarely contrastive.
To illustrate, the following oral forms have in common the three consonantal phonemes /k/, /t/, and /b/ with different vowel sounds interdigitated. The meanings all contain the root meaning “write,” and the vowel differences mark subject, tense, and aspect: katab ‘he wrote,’ katabi ‘I wrote,’ katebu ‘they wrote,’ ketob ‘write,’ koteb ‘writing,’ katub ‘being written.’ All are written simply ktb.
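To see why the single written form remains readable, here is a minimal sketch in Python (the transliterations above are taken as given, and the vowel inventory is a simplified assumption for illustration, not a statement about any real Semitic orthography): stripping the vowels collapses every inflected form onto the same consonantal skeleton.

```python
# Simplified sketch of consonantal writing: drop the vowels and all the
# inflected forms of the root meaning "write" collapse to one written form.
# The vowel set is a simplified assumption for this illustration.
VOWELS = set("aeiou")

def consonantal_spelling(word):
    """Keep only the consonants of a transliterated word."""
    return "".join(ch for ch in word if ch not in VOWELS)

forms = ["katab", "katabi", "katebu", "ketob", "koteb", "katub"]
print({form: consonantal_spelling(form) for form in forms})
# Every form is written simply "ktb"; the reader supplies the vowels
# from grammatical context, as the next paragraph explains.
```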
Because vowel sounds generally distinguish grammatical rather than lexical meaning, some Semitic writing systems never developed any device for representing them. This is not necessarily a flaw in the orthography. Indicating the vowels could cause some confusion for the reader because, instead of a single root, there would now be a multiplicity of written words, each reflecting a particular grammatical context. Nonetheless, ignoring the vowels does result in an orthography that is far from explicit or complete; many ambiguities in decoding remain. Consequently, some scripts, such as Hebrew, added matres lectionis, literally “mothers of reading,” a pointing system to distinguish the vowel sounds. These were used especially for preserving the precise reading of sacred texts. To this day they are used in books written to be read by beginning readers and in poetry and other writings of which the prior knowledge of the reader may not be sufficient to reduce the residual ambiguity.
The transition from consonantal writing to alphabetic writing, writing with full representation of both consonants and vowels, occurred when the Semitic script was adapted to the Greek language. This occurred about 1000–900 BCE. Scholars have traditionally considered the Greek invention as a stroke of genius. While not minimizing the significance of the Greek invention, it is now recognized that the invention of the alphabet was in fact the rather straightforward consequence of applying a script invented for representing one kind of language to a quite different kind.
The letters used by the Greeks to represent consonantal sounds were borrowed rather directly from the Semitic script. What was distinctive was that the Greeks used six of the Semitic letters, those that represented sounds that did not occur in Greek, to represent vowel sounds. Greek, like English, is an Indo-European language that uses vowel distinctions to make lexical contrasts. Moreover, words may consist simply of vowels, words may begin with vowels, and words with adjacent vowels are not uncommon. Such forms are rare in Semitic languages in which simple consonant-vowel syllable structures predominate and in which vowel differences usually mark only grammatical inflections. Sampson suggested that in the Semitic language some of the consonants that preceded a vowel sound may have been nonphonemic to the Greeks, who thus in hearing the syllable would have heard only a vowel corresponding to a vowel already prominent in the Greek language.
The Romans borrowed the Greek alphabet (along with many Greek words and much of Greek culture) to form the Roman, or Latin, alphabet. Written “learned” Latin was the language of state and of scholarship in Europe until the end of the Middle Ages. Further developments of the alphabet resulted from changes in the phonology of Latin and of the Romance languages that evolved from it. For English, the differentiation of all the 26 letters was completed only in the 19th century.
While the invention of logographic writing, the later invention of the principle of phonetization, the analysis of syllables into a consonantal writing system, and the addition of vowels to make a full alphabet do constitute progress toward an efficient, economical, explicit, and complete writing system, this progress was not simply a matter of increasing insight. Advances resulted from attempts to apply a writing system invented for one language to another language for which it was not completely appropriate. Yet the accumulated discoveries yielded an analysis of deeper and deeper levels of linguistic structure of the type associated with discoveries in the natural sciences. For this reason, writing has almost always been the means not only for transcribing speech but also for uncovering its underlying structure. That is, to a large extent, writing is what has made people conscious of the properties of speech.
Observation of children learning to read and write an alphabetic orthography suggests that children pass through some of the same stages in interpreting the code that the writing system itself passed through in the course of its development. The youngest child’s hypothesis about writing is that words must be similar in some way to the objects they represent. Thus, at the earliest stage, children think that the word train must be represented by a long word because it is a long thing. Similarly, they think that two little pigs must be represented by two words, one for each pig, and so on. Later they invent the hypotheses that writing represents words rather than things and that these words are series of sounds. At this point children may write the word with a series of consonants: cat becomes kt. Only later do they recognize the alphabetic principle that words must be written with both consonants and vowels.
Yet the evolution of the alphabet, an invention of enormous importance for Greek and for all Indo-European languages, was of little use for Semitic languages, in which the vowels played a smaller role than in Greek. And it was of no use at all for Chinese, which is a monosyllabic language with a great many homophones.
Chinese writing
At about the time the Semitic alphabet was being developed, the Chinese were working on their very different writing system, one that best suited their language. Chinese is a language with clearly distinguished syllables, each of which corresponds to a meaningful unit, a morpheme. As it is an “isolating” language, rather than an inflected language like Latin or, to a lesser degree, English, each morpheme is represented separately by a separate syllable. Whereas in English one word (for example, make) yields, when inflected, a family of related words (make, makes, making, made, etc.), in Chinese one character would represent one morpheme (e.g., make). Because each morpheme is represented by a different character and because the number of morphemes in a language is far larger than the number of syllables, such a writing system needs an extremely large number of characters or graphs. For a more detailed history of writing in Chinese, see Chinese writing.
As mentioned above, the system that developed for Chinese is logographic: basically, symbols represent meaningful units of the language. As in cuneiform writing, simple signs based on pictures soon gave way to complex signs that included reference to sound. Still, a very large number of characters were needed, and by 1400 BCE the script included some 2,500 to 3,000 characters, most of which can be read to this day. To resolve the remaining problem of ambiguity, characters were modified so that sounds and meaning together could differentiate them. Although spoken Chinese continues to include many possible meanings for a given syllable, the written form became unambiguous. The correspondence between morpheme and graph resulted in about 40,000 characters; a literate Chinese person needs to know perhaps 4,000. Attempts at simplification tend to reintroduce ambiguity and make the language more difficult to read; the existing written system has endured intelligibly through many changes in the spoken language. For a more thorough treatment of the relationship between writing and language in Chinese, see Chinese languages: Historical survey of Chinese.
Japanese writing
The Japanese came into contact with Chinese culture during the Chinese Han dynasty (206 BCE–220 CE), and they began to write their own language in the 5th century CE, basing their writing system on the Chinese model. But the two languages are fundamentally different in structure: whereas Chinese words are monosyllables, Japanese words often consist of several syllables, and, whereas Chinese is an isolating language, Japanese is an inflected language. To write such a language, the Japanese developed a mixed system, partly logographic, based on the Chinese system, and partly syllabic, using the same characters in a second way for their sound values. In kun writing Chinese characters were used to represent Japanese words that have a similar meaning, while other characters were adopted to represent sounds.
In the 8th century the phonographic principle was applied more systematically in a writing system called man’yōgana, a syllabary very similar in form to the Semitic alphabet. However, given the large number of homophones and the fact that man’yōgana was combined with kun writing, it was almost impossible to establish a single correct reading of a text. Indeed, scribes took pride in being able to read the same text in various ways.
In the 9th or 10th century two sets of syllabic signs evolved: hiragana, or “plain” kana, which consists of simplified outlines, written cursively, of Chinese characters, and katakana, or “partial” kana, which consists of carefully written parts of the original Chinese characters. Writing with the full Chinese characters is called kanji. The two sets of kana characters are limited as are other syllabaries in that they are not unambiguous; kanji are unambiguous but are very complex visually. Consequently, modern Japanese writing uses a combination of characters from all three of these systems. In 1946 a standardizing reform established a limited list of 1,850 kanji (enlarged to 1,945 in 1981) and encouraged the use of kana for all other words. Modern written Japanese uses many more hiragana graphs than kanji in a piece of text.
Even with modern reforms, written Japanese is difficult to read unambiguously because of the great degree of homophony in the vocabulary. The word kan, for example, is the equivalent of “sweet,” “be affected,” “print,” “be accustomed to,” “view,” “investigate,” “slow,” “tube,” “enjoy,” “a volume,” “Chinese,” and “Korean,” among other meanings. As a result, a reader must know rather precisely what is being discussed in order to read a text accurately. Poetry in particular takes quite a different form in Japanese than in Indo-European languages. (For more on the relationship between the language and the writing, see Japanese language: Linguistic characteristics of modern Japanese.)
Korean writing
Korea too was greatly influenced by Chinese institutions and culture. Until the 20th century the normal medium of written communication was Chinese, using the Chinese writing system. But beginning about the 6th century, the Chinese script was adapted to write Korean. The application of Chinese script to the Korean language created problems almost identical to those that arose in using Chinese to write the Japanese language. Yet the borrowed Chinese-character script (known in Korean as hanja) continues to be used for some purposes to this day. The most remarkable development in Korean writing was the invention of Hangul by King Sejong in 1446. It is a featural script consisting of some 24 letters that have a systematic visual structure directly related to the phonetic features of the phonemes. This writing system owes nothing to the Chinese orthography. The development of Korean writing is discussed in more detail in Korean language: Linguistic history and writing systems.
Because the principles employed by various writing systems vary greatly and because the languages they represent are organized so differently, it is difficult to state any general principles of the evolution of writing systems. Yet it appears that they all began with motivated pictorial signs representing objects. To turn such signs into a general orthography required the recognition that the signs must represent sound patterns and the consequent invention of the phonographic principle. Depending on the language, such sound-based systems developed in two directions. Western scripts went farthest in the phonographic direction, representing words by means of syllables and syllables by means of consonantal writing systems and eventually developing a full vocalic alphabet. Eastern scripts preserved the logographic principle even though some of the logographs were sound-based; each word was represented by a distinctive visual character. Only one practical orthography, Korean, adopted a featural system, and that invention bore little or no relation to neighbouring orthographies.
Laughter is part of the universal human vocabulary. All members of the human species understand it. Unlike English or French or Swahili, we don’t have to learn to speak it. We’re born with the capacity to laugh.
One of the remarkable things about laughter is that it occurs unconsciously. You don’t decide to do it. While we can consciously inhibit it, we don’t consciously produce laughter. That’s why it’s very hard to laugh on command or to fake laughter. (Don’t take my word for it: Ask a friend to laugh on the spot.)
Laughter provides powerful, uncensored insights into our unconscious. It simply bubbles up from within us in certain situations.
Very little is known about the specific brain mechanisms responsible for laughter. But we do know that laughter is triggered by many sensations and thoughts, and that it activates many parts of the body.
When we laugh, we alter our facial expressions and make sounds. During exuberant laughter, the muscles of the arms, legs and trunk are involved. Laughter also requires modification in our pattern of breathing.
We also know that laughter is a message that we send to other people. We know this because we rarely laugh when we are alone (we laugh to ourselves even less than we talk to ourselves).
Laughter is social and contagious. We laugh at the sound of laughter itself. That’s why the Tickle Me Elmo doll is such a success — it makes us laugh and smile.
The first laughter appears at about 3.5 to 4 months of age, long before we’re able to speak. Laughter, like crying, is a way for a preverbal infant to interact with the mother and other caregivers.
Contrary to folk wisdom, most laughter is not about humor; it is about relationships between people. To find out when and why people laugh, several undergraduate research assistants and I went to local malls and city sidewalks and recorded what happened just before people laughed. Over a 10-year period, we studied over 2,000 cases of naturally occurring laughter.
We found that most laughter does not follow jokes. People laugh after a variety of statements such as “Hey John, where ya been?”, “Here comes Mary,” “How did you do on the test?” and “Do you have a rubber band?” These certainly aren’t jokes.
Laughter clearly serves a social function. It is a way for us to signal to another person that we wish to connect with them. In fact, in a study of thousands of examples of laughter, the speakers in a conversation were found to be 46 percent more likely to laugh than the listeners.
We’re also 30 times more likely to laugh in a group than when we are alone. Young children between the ages of 2.5 and 4 were found to be eight times more likely to laugh at a cartoon when they watched it with another child, even though they were just as likely to report that the cartoon was funny whether they watched it alone or not.
Evolutionarily speaking, this signal of connection likely played an important role in survival. Upon meeting a stranger, we want to know: What are your intentions with me? And who else are you aligned with?
In a study that spanned 24 different societies and included 966 participants, scientists played short sound bites of pairs of people laughing together. In some cases, the pair were close friends, in others, the pair were strangers.
Participants in the study were asked to listen to the simultaneous laughter and determine the level of friendship shared by the laughers. Using only the sound of the laughter as cues, they could reliably tell the difference between people who had just met and those who were long-time friends. These results suggest not only the link between true laughter and friendship but also that we aren’t fooling anyone when we pretend to laugh at another person’s joke.
Another theory, which takes the person-to-person connection provided by laughter a step further, is that laughter may be a replacement for the act of grooming each other. Grooming another is a behavior seen in primates. To groom someone else is a generous, one-sided act. Because it requires trust and investment of time, it bonds the groomer and groomee as friends.
As our communities got larger, we couldn’t all go around grooming each other to establish bonds. So this is no longer our preferred method of offering friendship. (And that’s probably a good thing.) But laughter, like the commitment offered through grooming, is also hard to fake, at least without being obvious. And, unlike grooming, it can be done in a larger group and gives a more immediate impression. When we genuinely laugh, we signal that we are comfortable and feel like we belong.
According to the Mayo Clinic, there are also a multitude of physical health benefits to laughter. Laughter can increase your oxygen intake, which can in turn stimulate your heart, lungs, and muscles. Laughing further releases endorphins, the feel-good chemicals our bodies produce to make us feel happy and even relieve pain or stress. The act of increasing and then decreasing our heart rate and blood pressure through laughter is also ultimately calming and tension-relieving. Laughter can even boost our immune system response through the release of stress- and illness-reducing neuropeptides.
So laughter signals cooperation, a key aspect of human survival, and promotes a healthier body to boot. That’s the best excuse I’ve heard to make sure to take the time to enjoy a few laughs over dinner and drinks with friends.
I was sitting in my kitchen one day and made an attempt at being funny, but nobody laughed. My kids began to laugh when I bemoaned that nothing I say is funny. Perhaps ever since we began to emerge from our ancestral lineage, humor has been part of who we are as human beings.
I believe we are always doing the best we can. I call this our I-M: “This is who I am and I Matter.” Our I-M is always adapting to four domains: our home domain, our social domain, our biological domain (brain and body), and our IC domain (how I see myself and how I think other people see me). Using the I-M lens there is no pathology. There is only our I-M—doing the best we can at this moment in time—while adapting to a shift in any of the domains to another I-M.
Humor serves remarkable survival purposes, spanning over all four domains of our I-M. In the biological domain, humor and laughing relieve stress. In the home domain, humor and laughter create an environment of trust, a no-judgement zone. In the social domain, humor binds communities together with shared values. And in the IC domain, well, it feels great to be able to share a laugh.
When is the last time you laughed? What about the last time you chuckled or laughed so hard you cried? I had a laugh-so-hard-you-cry moment recently. I was playing a board game when one of the players asked if his girlfriend had been to a local hospital. He explained that they use a certain kind of soap in the bathrooms: “So I can tell when someone has gone to the bathroom at the hospital.”
Without taking a beat, another person responded: “Strange brag but okay.” The tone of the response, the cadence of the words, the soft and slight resignation resonated in such a way that I started laughing, and the thought of it makes me smile even as I write this now. It was not funny for everyone, at least not as funny. But for me, this brief interaction captured one of the reasons people laugh: incongruity.
Our brains are designed to compare bits of information. We are always comparing things. From a survival perspective, an ancestor who noticed a new rustling in a bush that a moment before was still, and then ran away or prepared for a fight, survived more often than an ancestor who didn’t notice the difference and was then eaten by a tiger. Both did the best they could in their I-M, but one was less successful.
Incongruity can be funny. An unexpected twist in a story, or in real life, can make us laugh simply because it was unexpected and posed no danger. Like this dark-humor joke: a woman is digging a hole in her backyard when she unexpectedly uncovers a treasure chest full of gold and jewels, runs to tell her husband, and then remembers why she was digging the hole. Is this funny to you or not?
Our sense of humor is influenced by our home and social domains. Things in my family may not seem funny to someone from another family. Perhaps an ancestor who could share a joke with another created more social bonding, and with greater bonding came greater protection. Group humor extends to larger and larger groups, encompassing cultures and points in history. Humor can be transient. Jokes from my parents’ generation may seem politically incorrect today. Humor shifts from era to era.
Sometimes we laugh because we feel joy in being superior to someone else. Some humor is mean and derisive, laughter at a person’s or group’s expense. Superiority humor can be traced back to the ancient Greeks like Socrates and Plato, but it probably has its roots long before the written word. Perhaps this was also adaptive at some time in our history, and we see examples of it in our world every day. Sometimes we laugh at another person’s misfortune: “schadenfreude.”
Superiority humor may actually mask deep insecurity. Insecurity is founded on an IC domain that worries other people will see one as less-than, with less value and at greater risk of being kicked out of their protective group. While this, too, is an I-M, we don’t have to like it, but we can try to understand it.
And then there is that nervous kind of laughter we all have when faced with a difficult or awkward situation. This laughter is the result of feeling relief, perhaps when danger has passed. From an IC domain, we all fear that we will be seen as less valuable, increasing the biological domain stress response from being rejected and kicked out of our protective group. In relief, we may giggle and feel less stressed out.
Laughter is the enactment of humor, turning a perception into an action. Laughter has all sorts of healing properties. Is that why humor evolved? Did early humans survive better than their counterparts if they could laugh when faced with adversity?
How can you use humor today to make a small change in any of your four domains? What kind of influence do you want to be on the I-M of those in your home or social domains?
I laugh every day. I find the humor around me and am grateful for that ability. What sort of things make you laugh? What do you find funny? In my family, it is often irony, something I got from both my parents. And while my home domain was not always funny growing up, my folks could find humor even in the midst of their divorce. As my mom once said, she was a “divorcée but always wanted to be a widow.”
The evolutionary origins of laughter are rooted more in survival than enjoyment
Laughter plays a crucial role in every culture across the world. But it’s not clear why laughter exists. While it is evidently an inherently social phenomenon – people are up to 30 times more likely to laugh in a group than when alone – laughter’s function as a form of communication remains mysterious.
A new study published in the Proceedings of the National Academy of Sciences, and involving a large group of researchers led by Gregory Bryant from UCLA, suggests that laughter may indicate to listeners the friendship status of those laughing. The researchers asked listeners to judge the friendship status of pairs of strangers and friends based on short snippets of their simultaneous laughter. With listeners drawn from 24 different societies, they found that the listeners were able to reliably distinguish friends from strangers, based on specific acoustic characteristics of the laughter.
Laughter’s evolutionary past
Spontaneous laughter, which is unintentionally triggered by conversation or events, emerges in the first few months of life, even in children who are deaf or blind. Laughter not only transcends human cultural boundaries, but species boundaries, too: it is present in a similar form in other great apes. In fact, the evolutionary origins of human laughter can be traced back to between 10 and 16 million years ago.
While laughter has been linked to higher pain tolerance and the signalling of social status, its principal function appears to be creating and deepening social bonds. As our ancestors began to live in larger and more complex social structures, the quality of relationships became crucial to survival. The process of evolution would have favoured the development of cognitive strategies that helped form and sustain these cooperative alliances.
Laughter probably evolved from laboured breathing during play such as tickling, which encourages cooperative and competitive behaviour in young mammals. This expression of the shared arousal experienced through play may have been effective in strengthening positive bonds, and laughter has indeed been shown to prolong the length of play behaviours in both children and chimpanzees, and to directly elicit both conscious and unconscious positive emotional responses in human listeners.
Laughter as a social tool
The emergence of laughter and other primal vocalisations was at first intimately tied to how we felt: we only laughed when aroused in a positive way, just as we cried only when distressed, or roared only when angry. The key development came with the ability to vocalise voluntarily, without necessarily experiencing some underlying pain, rage, or positive emotion. This increased vocal control, made possible as our brains grew more complex, was ultimately vital in the development of language. But it also allowed us to consciously mimic laughter (and other vocalisations), providing a deceptive tool to artificially quicken and expand social bonds – and so increase survival odds.
The idea that this volitional laughter also has an evolutionary origin is reinforced by the presence of similar behaviour in adult chimpanzees, who produce laugh imitations in response to the spontaneous laughter of others. The fake laughter of both chimpanzees and humans develops during childhood, is acoustically distinct from its spontaneous counterpart, and serves the same social bonding function.
Today, both spontaneous and volitional laughter are prevalent in almost every aspect of human life, whether sharing a joke with a mate or during polite chitchat with a colleague. However, they’re not equivalent in the ear of the beholder. Spontaneous laughter is characterised by higher pitch (indicative of genuine arousal), shorter duration and shorter laugh bursts compared to volitional laughter. Researchers recently demonstrated that human listeners can distinguish between these two laugh types. Fascinatingly, they also showed that if you slow down and adjust the pitch of volitional laughter (to make it less recognisable as human), listeners can still distinguish it from animal vocalisations, whereas they cannot do the same for spontaneous laughter, whose acoustic structure is far more similar to nonhuman primate equivalents.
Friend or stranger?
It’s this audible difference that is demonstrated in the paper by Bryant and his colleagues. Friends are more likely to produce spontaneous laughs, while strangers who lack an established emotional connection are more likely to produce volitional laughter.
The fact that we can accurately perceive these distinctions means that laughter is to some extent an honest signal. In the neverending evolutionary arms race, adaptive strategies for deception tend to co-evolve with strategies to detect that deception. The acoustic characteristics of authentic laughter are therefore useful cues to the bonds between and status of members of a group. This is something that may have aided decision-making in our evolutionary past.
However, the study found that judgement accuracy was on average only 11% higher than chance. Perhaps this is partially because some strangers may have produced spontaneous laughs and some friends volitional laughs, but it’s clear that imitating authentic emotional laughter is a valuable deceptive tool for social lubrication. One need only witness the contagious effects of canned laughter to see how true this is.
In the complex reality of modern human social interaction, laughs are often aromatic blends of the full-bodied spontaneous and dark but smooth volitional types, further blurring the boundaries. Regardless, the goal is the same and we will most likely find ourselves becoming fonder of those we share the odd chuckle with.
John Cleese once said: “Laughter connects you with people. It’s almost impossible to maintain any kind of distance or any sense of social hierarchy when you’re just howling with laughter.” He might just have hit the nail on the head – even when we’re faking it.
It’s true: laughter is strong medicine. It draws people together in ways that trigger healthy physical and emotional changes in the body. Laughter strengthens your immune system, boosts mood, diminishes pain, and protects you from the damaging effects of stress. Nothing works faster or more dependably to bring your mind and body back into balance than a good laugh. Humor lightens your burdens, inspires hope, connects you to others, and keeps you grounded, focused, and alert. It also helps you release anger and forgive sooner.
With so much power to heal and renew, the ability to laugh easily and frequently is a tremendous resource for surmounting problems, enhancing your relationships, and supporting both physical and emotional health. Best of all, this priceless medicine is fun, free, and easy to use.
As children, we used to laugh hundreds of times a day, but as adults, life tends to be more serious and laughter more infrequent. But by seeking out more opportunities for humor and laughter, you can improve your emotional health, strengthen your relationships, find greater happiness—and even add years to your life.
Laughter is good for your health
Laughter relaxes the whole body. A good, hearty laugh relieves physical tension and stress, leaving your muscles relaxed for up to 45 minutes after.
Laughter boosts the immune system. Laughter decreases stress hormones and increases immune cells and infection-fighting antibodies, thus improving your resistance to disease.
Laughter triggers the release of endorphins, the body’s natural feel-good chemicals. Endorphins promote an overall sense of well-being and can even temporarily relieve pain.
Laughter protects the heart. Laughter improves the function of blood vessels and increases blood flow, which can help protect you against a heart attack and other cardiovascular problems.
Laughter burns calories. Okay, so it’s no replacement for going to the gym, but one study found that laughing for 10 to 15 minutes a day can burn approximately 40 calories—which could be enough to lose three or four pounds over the course of a year.
Laughter lightens anger’s heavy load. Nothing defuses anger and conflict faster than a shared laugh. Looking at the funny side can put problems into perspective and enable you to move on from confrontations without holding onto bitterness or resentment.
Laughter may even help you to live longer. A study in Norway found that people with a strong sense of humor outlived those who didn’t laugh as much. The difference was particularly notable for those battling cancer.
Laughter helps you stay mentally healthy
Laughter makes you feel good. And this positive feeling remains with you even after the laughter subsides. Humor helps you keep a positive, optimistic outlook through difficult situations, disappointments, and loss.
More than just a respite from sadness and pain, laughter gives you the courage and strength to find new sources of meaning and hope. Even in the most difficult of times, a laugh–or even simply a smile–can go a long way toward making you feel better. And laughter really is contagious—just hearing laughter primes your brain and readies you to smile and join in the fun.
Laughter stops distressing emotions. You can’t feel anxious, angry, or sad when you’re laughing.
Laughter helps you relax and recharge. It reduces stress and increases energy, enabling you to stay focused and accomplish more.
Laughter shifts perspective, allowing you to see situations in a more realistic, less threatening light. A humorous perspective creates psychological distance, which can help you avoid feeling overwhelmed and defuse conflict.
Laughter draws you closer to others, which can have a profound effect on all aspects of your mental and emotional health.
Laughter brings people together and strengthens relationships
There’s a good reason why TV sitcoms use laugh tracks: laughter is contagious. You’re many times more likely to laugh around other people than when you’re alone. And the more laughter you bring into your own life, the happier you and those around you will feel.
Sharing humor is half the fun—in fact, most laughter doesn’t come from hearing jokes, but rather simply from spending time with friends and family. And it’s this social aspect that plays such an important role in the health benefits of laughter. You can’t enjoy a laugh with other people unless you take the time to really engage with them. When you care about someone enough to switch off your phone and really connect face to face, you’re engaging in a process that rebalances the nervous system and puts the brakes on defensive stress responses like “fight or flight.” And if you share a laugh as well, you’ll both feel happier, more positive, and more relaxed—even if you’re unable to alter a stressful situation.
Shared laughter is one of the most effective tools for keeping relationships fresh and exciting. All emotional sharing builds strong and lasting relationship bonds, but sharing laughter also adds joy, vitality, and resilience. And humor is a powerful and effective way to heal resentments, disagreements, and hurts. Laughter unites people during difficult times.
Humor and playful communication strengthen our relationships by triggering positive feelings and fostering emotional connection. When we laugh with one another, a positive bond is created. This bond acts as a strong buffer against stress, disagreements, and disappointment. Humor and laughter in relationships allow you to:
Be more spontaneous. Humor gets you out of your head and away from your troubles.
Let go of defensiveness. Laughter helps you forget resentments, judgments, criticisms, and doubts.
Release inhibitions. Your fear of holding back is pushed aside.
Express your true feelings. Deeply felt emotions are allowed to rise to the surface.
Laughter is an especially powerful tool for managing conflict and reducing tension when emotions are running high. Whether with romantic partners, friends and family, or co-workers, you can learn to use humor to smooth over disagreements, lower everyone’s stress level, and communicate in a way that builds up your relationships rather than breaking them down.
How to bring more laughter into your life
Laughter is your birthright, a natural part of life that is innate and inborn. Infants begin smiling during the first weeks of life and laugh out loud within months of being born. Even if you did not grow up in a household where laughter was a common sound, you can learn to laugh at any stage of life.
Begin by setting aside special times to seek out humor and laughter, as you might with exercising, and build from there. Eventually, you’ll want to incorporate humor and laughter into the fabric of your life, finding it naturally in everything.
Here are some ways to start:
Smile. Smiling is the beginning of laughter, and like laughter, it’s contagious. When you look at someone or see something even mildly pleasing, practice smiling. Instead of looking down at your phone, look up and smile at people you pass in the street, the person serving you a morning coffee, or the co-workers you share an elevator with. Notice the effect on others.
Count your blessings. Literally make a list. The simple act of considering the positive aspects of your life will distance you from negative thoughts that block humor and laughter. When you’re in a state of sadness, you have further to travel to reach humor and laughter.
When you hear laughter, move toward it. Sometimes humor and laughter are private, a shared joke among a small group, but usually not. More often, people are very happy to share something funny because it gives them an opportunity to laugh again and feed off the humor you find in it. When you hear laughter, seek it out and ask, “What’s funny?”
Spend time with fun, playful people. These are people who laugh easily–both at themselves and at life’s absurdities–and who routinely find the humor in everyday events. Their playful point of view and laughter are contagious. Even if you don’t consider yourself a lighthearted, humorous person, you can still seek out people who like to laugh and make others laugh. Every comedian appreciates an audience.
Bring humor into conversations. Ask people, “What’s the funniest thing that happened to you today? This week? In your life?”
So, what if you really can’t “find the funny?” Believe it or not, it’s possible to laugh without experiencing a funny event—and simulated laughter can be just as beneficial as the real thing. It can even make exercise more fun and productive. A Georgia State University study found that incorporating bouts of simulated laughter into an exercise program helped improve older adults’ mental health as well as their aerobic endurance. Plus, hearing others laugh, even for no apparent reason, can often trigger genuine laughter.
To add simulated laughter into your own life, search for laugh yoga or laugh therapy groups. Or you can start simply by laughing at other people’s jokes, even if you don’t find them funny. Both you and the other person will feel good, it will draw you closer together, and who knows, it may even lead to some spontaneous laughter.
Tips for developing your sense of humor
An essential ingredient for developing your sense of humor is to learn not to take yourself too seriously and laugh at your own mistakes and foibles. As much as we’d like to believe otherwise, we all do foolish things from time to time. Instead of feeling embarrassed or defensive, embrace your imperfections. While some events in life are clearly sad and not opportunities for laughter, most don’t carry an overwhelming sense of either sadness or delight. They fall into the gray zone of ordinary life—giving you the choice to laugh or not. So, choose to laugh whenever you can.
Laugh at yourself. Share your embarrassing moments. The best way to take yourself less seriously is to talk about times when you took yourself too seriously.
Attempt to laugh at situations rather than bemoan them. Look for the humor in a bad situation, and uncover the irony and absurdity of life. When something negative happens, try to make it a humorous anecdote that will make others laugh.
Surround yourself with reminders to lighten up. Keep a toy on your desk or in your car. Put up a funny poster in your office. Choose a computer screensaver that makes you laugh. Frame photos of you and your family or friends having fun.
Remember funny things that happen. If something amusing happens or you hear a joke or funny story you really like, write it down or tell it to someone to help you remember it.
Don’t dwell on the negative. Try to avoid negative people and don’t dwell on news stories, entertainment, or conversations that make you sad or unhappy. Many things in life are beyond your control—particularly the behavior of other people. While you might view carrying the weight of the world on your shoulders as admirable, in the long run it’s unrealistic and unhealthy.
Find your inner child. Pay attention to children and try to emulate them—after all, they are the experts on playing, taking life lightly, and laughing at ordinary things.
Deal with stress. Stress can be a major impediment to humor and laughter, so it’s important to keep your stress levels in check. One great technique to relieve stress in the moment is to draw upon a favorite memory that always makes you smile—something your kids did, for example, or something funny a friend told you.
Don’t go a day without laughing. Think of it like exercise or breakfast and make a conscious effort to find something each day that makes you laugh. Set aside 10 to 15 minutes and do something that amuses you. The more you get used to laughing each day, the less effort you’ll have to make.
Using humor to overcome challenges and enhance your life
The ability to laugh, play, and have fun not only makes life more enjoyable but also helps you solve problems, connect with others, and think more creatively. People who incorporate humor and play into their daily lives find that it renews them and all of their relationships.
Life brings challenges that can either get the best of you or become playthings for your imagination. When you “become the problem” and take yourself too seriously, it can be hard to think outside the box and find new solutions. But when you play with the problem, you can often transform it into an opportunity for creative learning.
Playing with problems seems to come naturally to children. When they are confused or afraid, they make their problems into a game, giving them a sense of control and an opportunity to experiment with new solutions. Interacting with others in playful ways helps you retain this creative ability.
As laughter, humor, and play become integrated into your life, your creativity will flourish and new opportunities for laughing with friends, coworkers, acquaintances, and loved ones will occur to you daily. Laughter takes you to a higher place where you can view the world from a more relaxed, positive, and joyful perspective.
Why Is Art Important? – Understanding Why Humans Make and Value Art
Art Explained
Art is not something that can easily be classified, especially when it comes to contemporary art. Art is a form of expression that has been influenced and molded by the entirety of the human species’ existence, and by all of our cultural and social developments. Art is an inspiration: it has shaped whole eras, influenced our actions, and even stirred up resistance when political times are rough. Art is also a means of enabling seeing: a method for showing the world, ideas, and experiences in ways that encourage people to regard them anew.
One could argue that simply creating art from an idea inside your head, with the intention that what you are creating is a piece of art, will produce an impetus, which is the word we use for the feeling that is ignited when you view art or make something that is deeply satisfying to you. This impetus can also be inspired by the act of crying, laughing, or using an appliance that works as it is meant to.
Defining Art
As we have mentioned before, art can be pretty much anything if you have created it to be so. Therefore, art is defined according to our perceptions. Some people might look at a statue or a display and laugh it off, whereas others might have a deeper feeling inspired from their mind or their gut. If you were to go online and search for the definition of art, you might come across the Merriam-Webster dictionary, which offers a few definitions for the term. Some of them are as follows:
“skill acquired by experience, study, or observation – the art of making friends”
“a branch of learning: humanities”
“an occupation requiring knowledge or skill – the art of organ building”
“the conscious use of skill and creative imagination especially in the production of aesthetic objects – the art of painting landscapes”
From these definitions, we can see that art refers more to the skill than to the actual artwork. “The art of organ building” is a great example of this because it shows how complicated the task is, and that complexity or intricacy makes it similar to an art form.
Art has had different definitions across the different eras. It has always been used to record significant events or portray them in a certain light, and those works have served as markers in our timelines. During the Renaissance period, art was far more strict than free-flowing, and artists were educated in the arts, which formed a distinction between craftsmen and artists.
The 20th century and onwards saw a great change in the acceptance of art and in the reliance we place on it as a conduit for our ideas and expressions. There are fewer restrictions, and in some artforms there are none at all. Art is a form of communication, protest, and raising awareness. The rigidity of criteria slackened, and now there are pretty much no limitations when it comes to creating art. More mediums were invented and accepted beyond the standard oil paints and marble sculptures.
With the rise of the industrial revolution, electricity and factories made all mediums, including canvases for painting, far more accessible. All facets of art became more approachable because materials were more affordable. Art went from being a noble affair, out of reach of the poor, to a practice for the people.
The Various Forms of Art
When you listen to music, you can hear the difference between all the genres, right? Classical music is not the same as rock-n-roll or country-folk music, is it? No. Art, too, has genres into which it can be classified. These are the different forms that it comes in, like drawing, painting, sculpting, and more.
Dancing is also a beautiful expression of art. The different movements might stir various feelings within those who are dancing and those who are watching. There are different styles of dance, reflecting the many cultures we have in our world. The costumes that the dancers wear can also fall under the costume-as-art definition, expressing the emotions and contexts of the meaning behind the dance.
The Importance of Art
The discussion of art has commenced, but the question remains: why is art important? Or better yet, why is art important to culture? In this next section, we will go into a little more detail about the importance of the arts.
The Universal Language: Art
The creative written word is also considered a deeply significant artform. In poetry and prose, metaphors and similes are employed to create mental images unique to every person who hears or reads them. These words and the mental imagery they evoke will be influenced by individual experiences, triggering a vast variety of responses.
Visual art can draw on more universal communicative strategies than the verbal arts. It does not require its audience to have fluency in a specific language to fully comprehend what is being expressed. Visual artists can rely on formal elements such as color, line, form, and pattern to evoke a range of feelings and thoughts. Symbolism, however, functions as a kind of visual language in that symbols are specific to particular cultures, even though some images (like the sun and moon) can signify similar ideas across various artistic and cultural traditions.
Self-Expressive Art
Why is art important for the artist? Just as the viewer of art, in whatever form, has emotions and thoughts triggered by the artist who intended the art to do so, the artist finds a deep satisfaction in the way that art enables expression. There are so many emotions and ideas that we keep bottled up within us that it is better to express them than to leave them unattended. This helps us to resolve any subconscious issues that might be influencing our behavior.
Art, History, and Culture
The burning question is still apparent: why is art important to culture? Art has made such a connection, even though the definition of what constitutes art has evolved so many times through the ages. But is that not very similar to how we, as humans, have evolved as a species? The two are the same, in that art is a form of communication, commemoration, celebration, and expression; ultimately, art is an expression of our story as a species.
We are very fortunate that artworks created in every human era have survived, even though some were made many millennia ago. We can step back in time by viewing all these different forms of art. It might not be the easiest thing to empathize with the early humans who left their mark in caves across the globe, like those at Lascaux, because that way of life is too remote for us, but seeing these very ancient masterpieces gives an extraordinary insight into the full breadth of human history, and the role of art within it.
Our Development and Art
There are parts of our brain that are used for deciphering puzzles and interpreting images. These parts can wither if they are not used and trained. So, you could say the greater your exposure to art, the more habituated you will be to engaging with complexity, and the greater your ability to appreciate the different interpretations your mind creates.
Art is also based on perspective, and usually very much so for the artist who created the piece. Art can improve empathy within people because it helps you see things from another’s perspective. In our opinion, this is one of the main reasons why art is important in life today. Learning to understand perspective through art is a valuable thing.
We can look back at our timelines, personal or general, and understand the parts of ourselves that we have grown from on the way to becoming something new and more developed. Amid all the change we go through, one constant in our lives is art. Artistic expression is ever-changing, as is our personal development, but it is always there, providing a touchstone.
Another constant in art is the various mediums and materials we use to create it. Some artforms, like photography and digital art, are entirely new. On the other hand, many traditional forms have evolved alongside the materials used in their production. The industrial revolution brought factory-made art materials, such as canvases, papers, paints, and pencils, which have been improved over time but essentially provide the same service. Therefore, there are as many constants in the world of art as there are innovations.
One area of the human experience that seems not to have parallels out in nature is the arts. It is difficult to conceive of a pigeon Picasso or a baboon Botticelli. Indeed, only a few animal species have even the faintest hints of the beginning of culture. Without culture, there really can’t be art, as we know it, because art cannot exist separate from culture. Art reflects culture, transmits culture, shapes culture, and comments on culture. There is just no way that animals can possibly experience art as we do.
And there is the rub. Of course, they don’t experience art as we do because they don’t experience anything as we do. And we don’t experience anything as another animal does either. But if we really think about what art is and how it first began in humans, we might indeed see budding artists among our animal friends.
Art is about beauty
If we divorce art from its cultural implications, we can agree that art is very often all to do with the expression of beauty. Throughout history, much artwork was made for no other explicit purpose than the production of beauty. Artwork is to be beheld and admired. It is breathtaking and can even make us emotional. It is through this beauty that I draw the first connection between nature and art. Nature and art are both beautiful, no matter how you define beauty. They both can dazzle us and hold us breathless. They can inspire us and make us feel connected to something. They both can strike an emotional nerve that leaves an impact on us that is not soon forgotten. Perhaps this connection between art and emotion reveals something about the origin of art.
First, let’s consider a specific subset of the beauty we find in the natural world: beautiful animals. From the unmatched colors of a tropical macaw to the flowing mane of an African lion, to the striking features of a mandarinfish, animals are beautiful. But never was the saying “beauty is in the eye of the beholder” more true than in the animal world. The bright and flashy colors that we see in many animals have evolved to be conspicuous, to help the animal “stick out.” The most spectacularly beautiful animals are so decorated in order to attract the attention, respect, or fear of other animals. In all cases, the utility of the beauty is found in the reaction it gets from the observer. Can’t the exact same thing be said of human art?
Although it’s not all about sex, much of animal beauty is indeed about impressing potential mates. To date, scientists have found no other reason that peacocks have those beautiful and intricate tails except that peahens seem to dig them. These tails are not a small decorative adornment, either. The peacock tail is more than 60% of its body length. When they try to walk, let alone fly, with those obnoxious monstrosities, it is a pitiful sight indeed.
Nevertheless, peahens are deeply attracted to this tail, and peafowl are not the only species with a strategy like this. Beginning in the simplest of invertebrates, colorful and striking ornamentation has been used to attract mates. I could give a long list of beautifully colored animals whose intricate visual patterns are designed for no other purpose than attraction of a mate, but I don’t think it’s necessary.
Art and beauty evoke an emotional response
This beauty-as-sexy phenomenon has a deep biological parallel with human art because it is the connection of a visual stimulus with an inner emotional state. In the case of an attractive animal, the external physical beauty is transformed into a desire, a behavioral impulse in the brain of the observer. When peahens behold a beautiful peacock, they are “moved” by it, in the sense that it affects their current mental state. We know this because it affects their behavior and we assume that behaviors spring from mental states. This is exactly what the fine arts are all about in humans: they employ a visual stimulus to affect the mental or emotional state of the viewers.
Art induces recall of past events or emotions
Admittedly, art and beauty in humans is more than just sex appeal. The effectiveness of art depends on some basic assumptions about the knowledge and experience that is common between the artist and the audience. Andy Warhol’s Campbell’s Soup Cans will probably have very little impact on the bushmen of eastern Africa. Art capitalizes on specific stored memories and associations in the brain of the observer. Here, we have the second clue to the origin of art in humans: visual recall of past events and emotions.
As human brains became more sophisticated over the last million years, we became capable of storing extensive details as memories, a skill that came in handy as our behaviors became more detailed. The hunting and gathering way of life common to all of the various hominid species required extensive visual memory. How else could they have accomplished organized group hunting, fashioning of simple tools, and the deciphering of the migratory patterns of big game on the African savannah? These complicated skills require the comparison of current visual cues with past experience in a computational and predictive way. Pattern recognition is what we are talking about here.
Further still, the ability to make and use tools, a skill that began in apes and exploded in hominids, requires a great deal of visual and tactile memory. As fully modern Homo sapiens began making tools that were more and more sophisticated, we suddenly found ourselves with the ability to depict our memories using primitive painting implements. With our newfound cognitive abilities, our impressive memory-recall, and eventually tools, it is not at all surprising that the first art produced by our ancestors depicted the very subject that probably spawned all of our cognitive abilities in the first place: the hunt.
Cave paintings are the earliest artifacts that anthropologists and art historians agree are truly art, but I find it hard to believe that they didn’t also serve functions in the communities in which they were created. We could speculate all day about what those functions could have been, but I think the point is that the benefits of being able to create a visual representation were immediately recognized. I also seriously doubt that the cave paintings were the first such attempts at visual representations. They were just the first that ended up surviving through the eons.
As language was developing in Homo sapiens during the great leap forward, humans began teaching each other about the tools they’d made, the food they found, and the skills they had perfected. This was the beginning of the concept of education. I can’t imagine that the education of the Paleolithic era didn’t also make use of visual aids, as education does today. Whether they were just crude drawings made by dragging sticks in the dirt, or more elaborate representations on stone “canvases,” I am sure that drawings accompanied spoken language (or gestures) right from the beginning. Once again, the key feature was the ability to use visual representations to induce memory recall or visual understanding. By drawing something, an early human could make another human remember something.
Various forms of drawing, painting, and other visual depictions almost certainly facilitated communication and education among early humans. That much seems rather obvious. In addition, it seems likely that early humans also used the new innovation of artistic depictions for various efforts of problem-solving and calculation. As cognition continued to develop, it began to grow into consciousness and introspection as we think of them today.
The visual arts were probably right there with us along the way, helping to provide a means to express the complex thoughts that were beginning to materialize in our massive brains. Indeed, appreciation and understanding of art seem to be among the highest-order functions of the human brain.
For these reasons, I tend to believe that artistic expression and reactions to art evolved hand-in-hand with higher cognitive functions in early humans. It is natural that the new richness of our inner experience would also manifest in creative outward expression. In turn, the communication of that inner experience through art would find receptive observers and the phenomenon of art then became ingrained culturally.
I also fully expect that artistic talent would have eventually been transferred from our culture to our genes. After all, during a million or two years of natural selection, artistic ability was likely to confer some advantage on those that had it. This advantage could have come in the form of increased social standing as a leader in the hunt, a prolific teacher of skills, and so on. Any special place in the social structure means greater odds of reproductive success. In this way, I suspect, humanity evolved into a species of artists and art enthusiasts.
Do other animals make art?
Once humanity developed a tendency to produce artistic renderings, there was plenty of biological space for it to flourish and reasons why it would be favored, but how might it have emerged in the first place? Surely the behavior of art production cannot be traced to a singular mutation. What precursors might have existed that allowed art to emerge? Are there any examples of animals making something we would consider artful?
There is an Indian artist named Siri, whose drawings currently go for hundreds or thousands of dollars. She has been written about extensively and featured in publications from Harper’s Weekly to the Los Angeles Times. Her work ranges from abstract to highly representational art, including landscapes and self-portraits. Of course, she trained for many years to develop this skill, but she can now complete a painting in mere minutes. Siri is a fifty-year-old Asian elephant.
Siri is not alone. Scores of elephants have been taught to paint. The art that these elephants produce by holding a brush in their trunks is truly impressive. It’s certainly better than anything I could produce. What these elephants are really doing, in terms of producing art or simply repeating a trained task, is the subject of heated debate. I don’t intend to wade into those murky waters. Rather, I mention painting elephants here only to indicate that the technical skill set necessary to produce art is certainly not unique to humans. These trained painting elephants can take a visual stimulus, even one that is new to them, and recreate it through artistic techniques that they have learned. They appreciate color, perspective, and proportion, at least in the basic sense. I’m not saying that these elephants are Cézanne, but they’re better than Nathan H. Lents, that’s for sure.
In species closer to us, orangutans, chimpanzees, and gorillas have all been taught to draw and paint. Like the elephants, some of them are quite good at it. Unlike with the elephants, there is little debate here: these primates seem to actually enjoy making the art and will do it spontaneously, without reward, and for its own sake. They sometimes make art that they become quite attached to and never actively show to anyone else.
Once again, I don’t want to get into a discussion of what this art creation really says about chimpanzee consciousness, although that is interesting to me. My point here is simply that these great apes all have the physical ability to create visual representations, they enjoy doing so, and the art that they produce seems to mean something to them. All of those abilities and features were obviously present in early humans and their ancestors as well.
So it was not really that big of a jump when early humans looked up at the cave wall, dimly lit by a dying fire… and saw a canvas.
Humans are rather clever animals. We’ve managed to teach ourselves how to express ideas through written words, as well as emotions through art and music. Think about that for a minute. Through our art and music, the emotion felt by an artist may be encoded through color, shadow and harmony to be decoded and experienced by an observer or listener. That is arguably one of humanity’s greatest accomplishments.
And yet, despite the apparent ease with which artists or musicians can create a work that unlocks emotions in the audience, we have no idea how it does such an amazing thing. We do know that emotions may be expressed in art — and that humans can readily recognize them. One study, conducted in Germany, found that indigenous Mafa tribespeople from Cameroon were able to reliably identify the emotions of happiness, sadness and fear in Western-style music [source: Fritz, et al]. How music and art evoke emotion is another matter entirely — one that we have yet to understand.
The renowned cognitive psychologist and linguist Steven Pinker famously called music “auditory cheesecake,” indicating that music is an example of humans learning to exploit our emotions just as cheesecake is the result of our exploitation of our taste buds [source: Carroll]. Both are comparatively hollow in contrast to the underlying, elementary foundations — language and taste — that each is based on. In other words, there’s no mystery to how art and music evoke emotions; they’re designed to do so by assaulting the audience’s basic communicative instincts.
Not everyone agrees with Pinker, however. Others in psychology and other fields have asserted that both music and art are separate from other innate forms of communication. This belief, however, doesn’t explain why music and art can evoke emotion in the audience if it’s not simply due to an exploitation of more basic senses.
Cognitive scientist Mark Changizi suggests that music creates emotion in the listener because we associate the sound with human movement. Since we’re able to infer emotion from the expressions and movements of others and to empathize with those people through our own emotions, then if we associate music with human movement, this explains how music can evoke emotion [source: Changizi]. It’s an intermediary, a stand-in for real human movement.
Our visual renderings of music being played indicate this unconscious association. If you carry out an image search online for the keywords “musical notes,” points out Changizi, you’ll find a lot of images of musical notes that depict them in motion. Humans associate music with movement [source: Changizi]. Music moves us because we envision movement in it. Even the terms we use to describe music — that it “moves us,” or that a piece of a larger composition is called a “movement” — attest to Changizi’s idea.
Our acquaintance with emotions is an ancient one. We’ve learned to use them as appraisals of how our surroundings and situations match our goals, such as staying alive and preserving our health. Emotions may alert us and grab our attention, as when we feel fear and should run away. They may also reassure us; we feel joy when we’re safe and nourished after a meal. Understanding and reacting to our emotions is how we survive; perhaps no other sense has played a larger role than sight in deriving emotional cues from our environment.
Our ability to infer emotion from visual cues is primal. Our interpretation of these cues takes place through our emotions. If we see someone who looks cross, we may feel fear; encountering a person who looks happy may elicit a sense of calm or happiness in the observer as well. Although unproven, cognitive science suggests that visual art can evoke emotions in the viewer by exploiting emotional cues we’ve learned to read over the millennia [source: Gratch, et al]. A portrait of a stern-looking person of authority should elicit some sense of fear or dread in the average viewer, while a child’s picture may generate happy feelings.
Certainly, studies using functional MRI bear this out. The same regions of the brain that are involved in experiencing emotion are activated when viewers are shown aesthetically pleasing art [source: Prinz]. The use of color and symbolism — although it can be very abstract in some works — may still be recognizable enough to the viewer that it may evoke an emotion. This can happen even if the conscious mind isn’t aware of the reason for the sudden onset of emotion. Red, for example, can indicate anger. A series of lines in disarray, like those found in the work of painter Franz Kline, can evoke uncomfortable feelings of disorder.
The color red, lines in disarray, smiling children and furrowed eyebrows are all called cognitive antecedents. By including them in visual art, the artist may encapsulate an emotional state to be decoded and experienced by the viewer later on. The emotions elicited by a work of art aren’t necessarily intentional. They’re often individually and culturally specific. Philosopher William Lyons points out that a Japanese viewer of the white turkeys depicted in Monet’s «Turkeys at Montgeron» may experience a sense of sadness or melancholy, since white is associated with death in that culture [source: Lyons].
Perhaps an understanding of exactly how to elicit a specific emotion in the viewer is best left unsolved. To some, the best works of art are open to interpretation.
Evidence from brain research is only one of many reasons education and engagement in the fine arts is beneficial to the educational process. As Eric Jensen, one of the leading translators of neuroscience into education, states in his book Arts with the Brain in Mind, “The systems they nourish, which include our integrated sensory, attentional, cognitive, emotional, and motor capacities, are, in fact, the driving forces behind all other learning.”
This notion of how the arts and the brain work together is supported by another study, conducted by Judith Burton, professor of Art Education and Research, Teachers College, Columbia University, which reveals that subjects such as mathematics, science, and language require complex cognitive and creative capacities that are “typical of arts learning.”
Art Changes Consciousness
In his article on the Natural Blaze website, Jacob Devaney wrote, “When you observe a profound piece of art you are potentially firing the same neurons as the artist did when they created it thus making new neural pathways and stimulating a state of inspiration. This sense of being drawn into a painting is called ‘embodied cognition’.”
Education in the Arts is An Integral Part
And, according to Bob Bryant, Executive Director of Fine Arts at Katy ISD (Katy Independent School District in Houston, Texas), “Education in the arts is an integral part of the development of each human being. Education and engagement in the fine arts are an essential part of the school curriculum and an important component in the educational program of every student in Katy ISD.”
Increased Blood Flow to the Brain
In May 2011, Robert Mendick, a reporter for The Telegraph, wrote an article about an experiment conducted by Professor Semir Zeki, chair in neuroaesthetics at University College London. Zeki explained, “We wanted to see what happens in the brain when you look at beautiful paintings.” The experiment concluded that when you look at art “whether it is a landscape, a still life, an abstract or a portrait – there is strong activity in that part of the brain related to pleasure.” The participants underwent brain scans while being shown a series of 30 paintings by major artists. When they viewed the art they considered most beautiful, blood flow increased in a certain part of the brain by as much as 10 percent, the equivalent of gazing at a loved one. Paintings by artists such as Constable, Ingres, and Monet produced the most powerful ‘pleasure’ response.
Awe, Wonder and Beauty Promote Healing
Psychologist Dr Dacher Keltner, of the University of California, Berkeley, said: “That awe, wonder and beauty promote healthier levels of cytokines suggests the things we do to experience these emotions – a walk in nature, losing oneself in music, beholding art – has a direct influence upon health and life expectancy.” Source: Art does heal: scientists say appreciating creative works can fight off disease (www.telegraph.co.uk)
The experience of viewing awe-inspiring art has a positive effect on the physical body and mental state. In an article on the University of Wisconsin Health website, psychologist Shilagh Mirgain, PhD was quoted as saying, “Awe has many important implications for our well-being.” Mirgain explained, “Experiencing awe can give us a sense of hope and provide a feeling of fulfillment.”
Many medical articles also provide evidence that points to the physical benefits derived from experiencing awe-inspiring moments on a routine basis. A recent study from the University of California, Berkeley found that “participants who experienced more awe-struck moments had the lowest levels of interleukin-6, a marker of inflammation.”
Art Reduces Stress
On the website of the U.S. National Institutes of Health (NIH), according to the extensive research publication “The Connection Between Art, Healing, and Public Health: A Review of Current Literature” by Heather L. Stuckey, DEd and Jeremy Nobel, MD, “Engagement with creative activities has the potential to contribute toward reducing stress and depression and can serve as a vehicle for alleviating the burden of chronic disease.” They also stated, “Through creativity and imagination, we find our identity and our reservoir of healing. The more we understand the relationship between creative expression and healing, the more we will discover the healing power of the arts.”
Art and Brain Functions
A new report from the National Endowment for the Arts (NEA) is titled “How Creativity Works in the Brain”. This report comes after previous NEA initiatives, including the NEA/Walter Reed Healing Arts Partnership. As Bill O’Brien, NEA senior advisor to the chairman for innovation, stated, “The time is ripe for bringing together artists, scientists, and educators to collaboratively confront the question of how creativity functions in the brain.” He went on to say, “Imagine the potential for our nation’s health, education, culture, and productivity if we were able to truly understand the anatomy of our ‘aha’ moments, and how they can be nurtured, optimized, and deployed.”
Brain research shows the arts promote mental health
During self-isolation due to coronavirus, many are turning to the arts. Perhaps they seek a creative outlet or an opportunity for expression; but it’s also possible that their attraction may be driven by an innate desire to use their brains in ways that make them feel good.
As a professor and arts educator for over 20 years, I have witnessed the mental benefits of an arts-rich life — but don’t take my word for it. There is a powerful and compelling case, supported by cutting-edge research, that the arts have positive effects on mental health.
The arts offer an evidence-based solution for promoting mental health. While practising the arts is not the panacea for all mental health challenges, there’s enough evidence to support prioritizing arts in our own lives at home as well as in our education systems.
In short: the arts create conditions for mindfulness by accessing and engaging different parts of the brain through conscious shifting of mental states. For those of us who practise regularly in the arts, we are aware of those states, able to shift in and out and reap the physiological benefits through a neurological system that delights in and rewards cognitive challenges. Neuroesthetic findings suggest this is not an experience exclusive to artists: it is simply untapped by those who do not practise in the arts.
Seven artists explain why they write, rap, take photos, draw, dance, and make movies.
Why do you make art? That’s the simple question Greater Good posed to seven artists. Their answers are surprising, and very diverse. They mention making art for fun and adventure; building bridges between themselves and the rest of humanity; reuniting and recording fragments of thought, feeling, and memory; and saying things that they can’t express in any other way.
Gina Gibney: Giving power to others
Gina Gibney is the artistic director of the New York-based Gina Gibney Dance Company, which was founded in 1991 to serve a dual mission: to create and perform contemporary choreography that draws upon the strength and insights of women and men, and to enrich and reshape lives through programs that give voice to communities in need, especially survivors of domestic abuse and individuals living with HIV/AIDS.
I make art for a few reasons. In life, we experience so much fragmentation of thought and feeling. For me, creating art brings things back together.
In my own work, that is true throughout the process. At the beginning, developing the basic raw materials for the work is deeply reflective and informative. Later comes bringing those materials together into a form—distilling and shaping movement, creating a context, working toward something that feels cohesive and complete. That’s incredibly powerful for me—something that really keeps me going.
Interestingly, the body of my work is like a catalog of the events and thoughts of my life. For me, making work is almost like keeping a journal. Giving that to someone else—as a kind of gift through live performance—is the most meaningful aspect of my work.
Dance is a powerful art form for the very reason that it doesn’t need to explain or comment on itself. One of the most amazing performances I have ever seen in my life was of a woman—a domestic violence survivor—dancing in a tiny conference room in a domestic violence shelter for other survivors. She was not a professional dancer. She was a woman who had faced unbelievable challenges and who was living with a great deal of sadness. She created and performed an amazing solo—but to have described her performance as “sad” would have been to diminish what we experienced.
That’s the power of dance. You can feel something and empathize with it on a very deep level, and you don’t have to put words to it.
Judy Dater: I like expressing emotions
Judy Dater has been making photographs for more than 40 years, and is considered one of America’s foremost photographers. The recipient of a Guggenheim and many other awards, her books include Imogen Cunningham: A Portrait, Women and Other Visions, Body and Soul and Cycles.
I like expressing emotions—to have others feel what it is I’m feeling when I’m photographing people.
Empathy is essential to portraiture. I’ve done landscapes, and I think they can be very poetic and emotional, but it’s different from the directness of photographing a person. I think photographing people is, for me, the best way to show somebody something about themselves—either the person I photograph or the person looking—that maybe they didn’t already know. Maybe it’s presumptuous, but that’s the desire. I feel like I’m attending to people when I’m photographing them, and I think I understand people better because I’ve been looking at them intensely for 40-some years.
Pete Docter: It’s fun making things
Pete Docter has been involved in some of Pixar Studio’s most popular and seminal animated features, including Toy Story, A Bug’s Life, Cars, and Wall-E, but he is best known as the director of the Academy Award-winning Monsters, Inc. Docter is currently directing Up, set for release in May 2009.
I make art primarily because I enjoy the process. It’s fun making things.
And I’m sure there is also that universal desire to connect with other people in some way, to tell them about myself or my experiences. What I really look for in a project is something that resonates with life as I see it, and speaks to our experiences as humans. That probably sounds pretty highfalutin’ coming from someone who makes cartoons, but I think all the directors at Pixar feel the same way. We want to entertain people, not only in the vacuous, escapist sense (though to be sure, there’s a lot of that in our movies too), but in a way that resonates with the audience as being truthful about life—some deeper emotional experience that they recognize in their own existence. On the surface, our films are about toys, monsters, fish, or robots; at a foundational level they’re about very universal things: our own struggles with mortality, loss, and defining who we are in the world.
As filmmakers, we’re pretty much cavemen sitting around the campfire telling stories, only we use millions of dollars of technology to do it. By telling stories, we connect with each other. We talk about ourselves, our feelings, and what it is to be human.
Or we just make cartoons. Either way we try to have a good time, and we hope the audience does too.
Harrell Fletcher: Anything anyone calls art is art
Harrell Fletcher teaches in the art department at Portland State University. He has exhibited at the San Francisco Museum of Modern Art, the Berkeley Art Museum, Socrates Sculpture Park in New York, and in numerous other museums and galleries around the world. In 2002, Fletcher started Learning To Love You More, a participatory website with Miranda July, which they turned into a book, published in 2007. Fletcher is the recipient of the 2005 Alpert Award in Visual Arts.
The question of why I make art needs to be broken down a bit before I can answer.
First of all, what is art? The definition for art that I have come up with, which seems to work best for me, is that anything anyone calls art is art. This comes from my belief that there is nothing intrinsic about art. We cannot do a chemical analysis to determine if something is art or not. Instead, I feel like calling something “art” is really just a subjective way of indicating value—which could be aesthetic, cultural, monetary, and so on.
If we look at other kinds of creative activity, we can see how various forms can all exist and be valid at the same time. I’ve made what I think of as art since I was a child, initially drawings, then photographs, paintings, videos, and so on. By the time I got to graduate school, I was not so interested in making more stuff, and instead started to move in another direction, which these days is sometimes called “Social Practice.”
This is sort of a confusing term since it is so new and undefined. In a broad way, I think of it as the opposite of Studio Practice—making objects in isolation, to be shown and hopefully sold in a gallery context. Most of the art world operates with this Studio Practice approach. In Social Practice, there is more of an emphasis on ideas and actions than on objects; it can take place outside of art contexts, and there is often a collaborative or participatory aspect to the work.
So back to the question why I make art. In my case, the projects that I do allow me to meet people I wouldn’t ordinarily meet, travel to places I wouldn’t normally go to, learn about subjects that I didn’t know I would be interested in, and sometimes even help people out in small ways that make me feel good. I like to say that what I’m after is to have an interesting life, and doing the work that I do as an artist helps me achieve that.
Kwame Dawes: An environment of empathy
Kwame Dawes, Ph.D., is Distinguished Poet in Residence at the University of South Carolina. He is the author of 13 books of verse, most recently Gomer’s Song, and a novel, She’s Gone, which won the 2008 Hurston/Wright Legacy Award for Best First Novel.
I write in what is probably a vain effort to somehow control the world in which I live, recreating it in a manner that satisfies my sense of what the world should look like and be like.
I’m trying to capture in language the things that I see and feel, as a way of recording their beauty and power and terror, so that I can return to those things and relive them. In that way, I try to have some sense of control in a chaotic world.
I want to somehow communicate my sense of the world—that way of understanding, engaging, experiencing the world—to somebody else. I want them to be transported into the world that I have created with language.
And so the ultimate aim of my writing is to create an environment of empathy, something that would allow the miracle of empathy to take place, where human beings can seem to rise out of themselves and extend themselves into others and live within others. That has a tremendous power for the human being. And I know this, because that is what other people’s writing does to me when I read it.
James Sturm: The reasons are unimportant
James Sturm is a cartoonist and co-founder of the Center for Cartoon Studies in White River Junction, Vermont. He is the author of the best-selling and award-winning graphic novel The Golem’s Mighty Swing, chosen as the Best Graphic Novel of 2000 by Time magazine. In 2007, his trilogy of historical graphic novels was collected in a volume entitled James Sturm’s America: God, Gold, and Golems.
I like the question “Why Do You Make Art?” because it assumes what I do is art. A flattering assumption. The question also takes me back to my freshman year of college, when questions like “What is nature?” and “Is reality a wave or a circle?” were earnestly debated (usually late at night and after smoking too much weed).
Twenty-five years later I’d like to think I am a little more clear-headed regarding this question. Perhaps the only insight I’ve gained is, first, that I have no idea and, second, that the reasons are unimportant. Depending on my mood, on any given day, I could attribute making art to a high-minded impulse to connect with others or to understand the world, or to a narcissistic coping mechanism, a desire to be famous, therapy, my religious discipline, a need for a sense of control, or a desire to surrender control, etc., etc., etc.
Whatever the reason, an inner compulsion exists and I continue to honor this internal imperative. If I didn’t, I would feel really horrible. I would be a broken man. So whether attempting to make art is noble or selfish, the fact remains that I will do it nevertheless. Anything past this statement is speculation. I would be afraid that proclaiming why I make art would amount to generating my own propaganda.
KRS-One: Hip hop is beyond time, beyond space
Lawrence Krisna Parker, better known by his stage name KRS-One, is widely considered by critics and other MCs to be one of hip hop’s most influential figures. At the 2008 Black Entertainment Television Awards, KRS-One was the recipient of the Lifetime Achievement Award for his rapping and activism.
I was born this way, born to make art, to make hip hop. And I think I’m just one of the people who had the courage to stay with my born identity. Hip hop keeps me true to myself, keeps me human.
Hip hop is the opposite of technology. Hip hop is what the human body does: Breaking, DJing, graffiti writing. The human body breakdances, you can’t take that away. DJing is not technology; it’s human intelligence over technology: cutting, mixing, scratching. It’s physical. The manipulation of technology is what humans do, that’s art.
Or take graffiti writing. Put a writing utensil in any kid’s hand at age two or three. They will not write on paper, as they’ll later be socialized to do; they will write on the walls. They’re just playing. That’s human. Graffiti reminds you of your humanity, when you scrawl your self-expression on the wall. Hip hop helps us to see the things in the world in new ways.
That’s why hip hop has kept me young. It doesn’t allow you to grow up too fast. Hip hop is beyond time, beyond space. That’s why I make hip hop.
Creativity is a human quality that exists in every single one of us
When you think about creativity, it might be highly creative people like Mozart, da Vinci or Einstein who spring to mind. They were all considered to be “geniuses” for their somewhat unique talents, which led to global innovation in their fields. Their type of creativity is what’s known as “Big C creativity” (or historical creativity) and is not very common in everyday life. Not all of us can create works of art or music or scientific theories that are new to the world.
But while we can’t all be Mozart, da Vinci or Einstein, many people do enjoy creative activity – through hobbies such as watercolour painting or playing the piano. And these types of pursuits are often what people think of when asked what being creative looks like. Our finished pieces may not be comparable with those of the great masters, but often the process is therapeutic and the end result can be aesthetically pleasing.
On top of hobbies and interests, we all possess creative attributes that help as we solve life’s problems and make decisions. It is this type of creativity that enables us to plan different routes to the same destination, or to work out how to fit in a trip to the supermarket when our schedule looks full.
It might not sound very creative, but this aspect of creativity relies on our ability to consider options and assess their suitability, and to make decisions based on prior personal experience or on what we have learnt formally or informally. These examples are known as “small c creativity” or “personal everyday creativity”.
Creative outcomes
While Big C creativity is valued and celebrated, it is often small c creativity that has allowed humans to flourish over thousands of years. It sets us apart from other animals and it is also the type of creativity which can be fostered through our education system and beyond into the workplace.
Research tells us that creativity has traditionally been largely associated with the arts. Our previous research has shown that teachers are often able to give examples of creative activity in arts subjects, but find it harder to do so when asked to describe creativity in subjects such as science.
But there is a growing realisation that opportunities to be creative are found across a broader range of subjects. For instance, engineering provides opportunities to be creative through problem solving, and history gives the opportunity to think creatively about why events happened, and what motivated those involved.
Research has shown that training teachers to ask particular types of questions can be one way to help support creativity across the curriculum. This is because generating solutions to problems and generating explanations are creative processes, and these are vital if children are to have a “complete education”.
Our research also shows how it can be more helpful to talk about “thinking creatively” rather than “creativity”. This is because people tend to see thinking creatively as independence of thought and a willingness to take risks and seek new perspectives. It is also seen as a way to perceive new relationships, make new connections, and generate new ideas.