Optimal Categorisation: How do we categorise the world around us?

People love to categorise! We do this on a daily basis, consciously and subconsciously. When we are confronted with something new, we try to figure out what it is by comparing it to something we already know. Say, for instance, I saw something flying through the air – I might think to myself that the object is a bird, or I might say it is a plane, based on my previous experiences of birds and planes. Of course, the object may turn out to be something completely new, perhaps even Superman!

Is it a bird? Is it a plane? No it’s Superman!

Our love of classification runs deep in scientific enquiry. Botanists and zoologists classify plants and animals into different taxonomies. Even the humble linguist loves to classify – is this new word a noun or a verb? What about the new word zoodle that was recently added to the Merriam-Webster dictionary? Is it a thing? Or an action? Can I zoodle something, or is it something I can pick up and touch? Well, apparently zoodle is a noun which means ‘a long, thin strip of zucchini that resembles a string or narrow ribbon of pasta’. To be honest, I love eating zoodles, though until now I never knew what they were called!

The ways people classify the entities around them have become encoded in the different languages we speak. The most obvious example springs to mind when we learn a new language like French or German and are confronted with a grammatical gender system. French has two genders – masculine and feminine. But German has three – masculine, feminine and neuter. Other languages can have many more gender distinctions. Fula, a language spoken in West and Central Africa, has twenty different gender categories!

So what exactly are grammatical gender systems and how are they realised in different languages? Gender systems categorise nouns into different groups and tend to appear not on the noun itself, but on other elements in the phrase. In German, nouns are split into three different gender categories – masculine, feminine and neuter. The gender of a noun is shown by using different articles (the word ‘the’ or ‘a’) and sometimes by changing the ending of an adjective, but never on the noun itself. Thus the word for ‘the’ in German is either der, die or das depending on whether the noun in the phrase is masculine, feminine or neuter.

(1)        der       Mann
              the       man

(2)        die        Frau
              the       woman

(3)        das       Haus
              the       house

This is called ‘agreement’ as the adjectives and articles must agree with the gender of the noun. In a language with gender, each noun typically can only occur in one gender category.
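If you like to think in programming terms, agreement works like a lookup: each noun carries a fixed gender, and the article form is selected to match it. Here is a minimal sketch in Python, restricted to the nominative articles from examples (1)–(3) (real German also varies the article by case and number):

```python
# Toy lexicon: each noun is stored with its fixed, arbitrary gender.
LEXICON = {"Mann": "masculine", "Frau": "feminine", "Haus": "neuter"}

# Nominative singular forms of the definite article 'the'.
ARTICLE = {"masculine": "der", "feminine": "die", "neuter": "das"}

def noun_phrase(noun):
    """Build 'the + noun'; the article agrees with the noun's gender."""
    gender = LEXICON[noun]             # the noun itself never changes
    return ARTICLE[gender] + " " + noun

for noun in ["Mann", "Frau", "Haus"]:
    print(noun_phrase(noun))           # der Mann, die Frau, das Haus
```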

Not every language has a grammatical gender system, but such systems are highly pervasive, found in around 40% of all languages. English is quite a poor example when it comes to gender: there is no real gender agreement in English, with the exception of pronouns. We have to say Bill walked into the grocer’s. He bought some apples, where the pronoun he must agree with the gender of the noun previously mentioned. He, she and it are the only markers of gender agreement English has.

Languages behave differently in how they allocate nouns to the different genders, which can be very baffling for language learners! Why is ‘chair’ feminine in French (la chaise) but masculine in German (der Stuhl)? How a language allocates nouns to its gender categories can seem somewhat arbitrary – the words for ‘woman’ and ‘man’, which fall into the feminine and masculine genders respectively, are among the few semantically obvious choices.

But wait! If you thought the English gender system was dull, think again! A couple of months ago my piano was being restored, and when it was being moved back into the lounge the piano movers kept saying: “pull her a little bit more” and “turn her this way”. The movers used feminine pronouns to describe the piano. In English, countries, pianos, ships and sometimes even cars can take feminine pronouns.

Grammatical gender isn’t the only way languages classify nouns. Some languages use words called classifiers to categorise nouns. Classifiers are similar to English measure terms, which categorise a noun in terms of its quantity, such as ‘sheet of paper’ vs. ‘pack of paper’ or ‘slice of bread’ vs. ‘loaf of bread’. Classifiers are found in languages all over the world and can categorise nouns according to the shape, size, quantity or use of the referent, e.g. ‘animal kangaroo’ (alive) vs. ‘meat kangaroo’ (not alive). Classifier systems are very different from gender systems, as nouns in a language with classifiers can appear with different classifiers depending on what property of the noun you wish to highlight. There are many different types of classifier systems, but to keep things short I am just going to talk about possessive classifiers, which are mainly found in the Oceanic languages, spoken in the South Pacific.

When an item is in your possession, English uses possessive pronouns to say who the item belongs to. For instance, if I say ‘my coconut’, the possessive pronoun is my. In many Oceanic languages a noun can occur with different forms of the word my depending on how the owner intends to use the item. For instance, the Paamese language, spoken in Vanuatu, has four possessive classifiers. I could use the ‘drinkable’ classifier if I was talking about my coconut that I was going to drink. I would use the ‘edible’ classifier if I was going to eat my coconut. I would use the classifier for ‘land’ if I was talking about the coconut growing in my garden. Finally, I could use the ‘manipulative’ classifier if I was going to use my coconut for some other purpose – perhaps to sit on!

(4)        ani                   mak
              coconut           my.drinkable
              ‘my coconut (that I will drink)’

(5)        ani                   ak
              coconut           my.edible
              ‘my coconut (that I will eat)’
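Compare this with the gender sketch above: there the category was a fixed property of the noun, whereas here it is chosen anew each time, according to the intended use. A minimal sketch, using only the two Paamese forms cited in (4) and (5):

```python
# Paamese possessive classifiers: the choice depends on intended use,
# not on the noun itself (only the two forms cited above are included).
MY = {"drinkable": "mak", "edible": "ak"}

def my_noun(noun, use):
    """Combine a noun with the form of 'my' that matches its intended use."""
    return noun + " " + MY[use]

print(my_noun("ani", "drinkable"))   # ani mak  'my coconut (that I will drink)'
print(my_noun("ani", "edible"))      # ani ak   'my coconut (that I will eat)'
```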

Why do languages have different ways of categorising nouns? How do these systems develop and change over time? Are gender systems easier to learn than classifier systems? Are gender and classifiers completely different systems, or is there more similarity to them than meets the eye? These are some of the big questions in linguistics and psychology. We are excited to start a new research project at the Surrey Morphology Group, called ‘Optimal Categorisation: the origin and nature of gender from a psycholinguistic perspective’, that seeks to answer these fundamental questions. Over the next three years we will talk more about these fascinating categorisation systems, explain our experimental research methods, introduce the languages and speakers under investigation, and share our findings via this blog. Just look out for the ‘Optimal Categorisation’ headings!

No we [kæn]

If something bad happened to someone you hold in contempt, would you give a fig, a shit or a flying f**k? While figs might be a luxury food item in Britain, their historical status as something that is valueless or contemptible puts them on the same level as crap, iotas and rats’ asses for the purposes of caring.

In English, we have a wide range of tools for expressing apathy. But we don’t always agree on how to express it, and we even use seemingly opposite affirmative and negative sentences to express very similar concepts. Consider the confusing distinction between ‘I couldn’t care less’ and ‘I could care less’, which are used in identical contexts by British and American speakers of English to mean pretty much the same thing. This mind-boggling pattern makes sense when we realise that those cold-hearted people who couldn’t care less have a care-factor of zero, while the others don’t care much, but could do so even less, if necessary.

Putting aside such oddities, negation is normally crucial to interpreting a sentence – words like ‘not’ determine whether the rest of the sentence is affirmative or negative (i.e. whether you’re claiming it is true or false). Accordingly, languages tend to mark negation clearly, sometimes in more than one place within a sentence. One of the world’s most robust languages in this respect is Bierebo, an Austronesian language spoken in Vanuatu, where no fewer than three words for expressing negation are required at once (Budd 2010: 518):

Mara   a-sa-yal              re         manu  dupwa  pwel.
NEG1   3PL.S-eat-find   NEG2  bird     ANA      NEG3
‘They didn’t get to eat the bird.’

While marking negation three times might seem a little inefficient, this pales in comparison to the problems that arise when you don’t clearly indicate it at all. We only have to turn to English to see this at work, where the distinction between Received Pronunciation can [kæn] and can’t [kɑ:nt] is frequently imperceptible in American varieties where final /t/ is not released, resulting in [kæn] or [kən] in both affirmative and negative contexts.

You might think that once every word, affix or sound that indicates negation has been worn away, there isn’t anywhere else to go. But some Dravidian languages spoken in India really push the boat out in this respect. Instead of adding some sort of negative word or affix to an affirmative sentence to signal negation, the tense affix (past -tt- or future -pp-) is taken away, as shown by the contrast between literary Tamil affirmatives and negatives:

pati-tt-ēn                    pati-pp-ēn                  patiy-ēn
‘I learned.’                  ‘I will learn.’               ‘I do/did/will not learn.’
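Seen as a template, the verb is stem + tense + person ending, and the negative is simply what you get when the tense slot is left empty. A toy sketch built from the three forms above (the stem appears as patiy- before a vowel, a detail hard-coded here):

```python
def learn(person_ending, tense=None):
    """Assemble stem + tense + person; negation = an empty tense slot."""
    if tense is None:
        # No tense affix: the result is negative, and unspecified
        # for past, present or future time.
        return "patiy-" + person_ending
    return "pati-" + tense + "-" + person_ending

print(learn("ēn", tense="tt"))   # pati-tt-ēn  'I learned'
print(learn("ēn", tense="pp"))   # pati-pp-ēn  'I will learn'
print(learn("ēn"))               # patiy-ēn    'I do/did/will not learn'
```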

This is highly unusual from a linguistic point of view, and it’s tempting to think that languages avoid this type of negation because it is difficult to learn or doesn’t make sense design-wise. But historical records show that similar patterns have been attested across Dravidian languages for centuries. This demonstrates that inflection patterns of this kind can be highly sustainable once they come about – so we might be stuck with the can/can’t collapse for a while to come.

Today’s vocabulary, tomorrow’s grammar

If an alien scientist were designing a communication system from scratch, they would probably decide on a single way of conveying grammatical information like whether an event happened in the past, present or future. But this is not the case in human languages, which is a major clue that they are the product of evolution, rather than design. Consider the way tense is expressed in English. To indicate that something happened in the past, we alter the form of the verb (it is cold today, but it was cold yesterday), but to express that something will happen in the future we add the word will. The same type of variation can also be seen across languages: French changes the form of the verb to express future tense (il fera froid demain, ‘it will be cold tomorrow’, vs il fait froid aujourd’hui, ‘it is cold today’).

The future construction using will is a relatively recent development. In the earliest English, there was no grammatical means of expressing future time: present and future sentences had identical verb forms, and any ambiguity was resolved by context. This is also how many modern languages operate. In Finnish huomenna on kylmää ‘it will be cold tomorrow’, the only clue that the sentence refers to a future state of affairs is the word huomenna ‘tomorrow’.

How, then, do languages acquire new grammatical categories like tense? Occasionally they get them from another language. Tok Pisin, a creole language spoken in Papua New Guinea, uses the word bin (from English been) to express past tense, and bai (from English by and by) to express future. More often, though, grammatical words evolve gradually out of native material. The Old English predecessor of will was the verb wyllan, ‘wish, want’, which could be followed by a noun as direct object (in sentences like I want money) as well as another verb (I want to sleep). While the original sense of the verb can still be seen in its German cousin (Ich will schwimmen means ‘I want to swim’, not ‘I will swim’), English will has lost it in all but a few set expressions like say what you will. From there it developed a somewhat altered sense of expressing that the subject intends to perform the action of the verb, or at least, that they do not object to doing so (giving us the modern sense of the adjective ‘willing’). And from there, it became a mere marker of future time: you can now say “I don’t want to do it, but I will anyway” without any contradiction.

This drift from lexical to grammatical meaning is known as grammaticalisation. As the meaning of a word gets reduced in this way, its form often gets reduced too. Words undergoing grammaticalisation tend to gradually get shorter and fuse with adjacent words, just as I will can be reduced to I’ll. A close parallel exists in the Greek verb thélō, which still survives in its original sense ‘want’, but has also developed into a reduced form, tha, which precedes the verb as a marker of future tense. Another future construction in English, going to, can be reduced to gonna only when it’s used as a future marker (you can say I’m gonna go to France, but not *I’m gonna France). This phonetic reduction and fusion can eventually lead to the kind of grammatical marking within words that we saw with French fera, which has arisen through the gradual fusion of earlier facere habet ‘it has to do’.

Words meaning ‘want’ or ‘wish’ are a common source of future tense markers cross-linguistically. This is no coincidence: if someone wants to perform an action, you can often be reasonably confident that the action will actually take place. For speakers of a language lacking an established convention for expressing future tense, using a word for ‘want’ is a clever way of exploiting this inference. Over the course of many repetitions, the construction eventually gets reinterpreted as a grammatical marker by children learning the language. For similar reasons, another common source of future tense markers is words expressing obligation on the part of the subject. We can see this in Basque, where behar ‘need’ has developed an additional use as a marker of the immediate future:

ikusi    behar   dut
see       need     aux
‘I need to see’ / ‘I am about to see’

This is also the origin of the English future with shall. This started life as Old English sceal, ‘owe (e.g. money)’. From there it developed a more general sense of obligation, best translated by should (itself originally the past tense of shall) or must, as in thou shalt not kill. Eventually, like will, it came to be used as a neutral way of indicating future time.

But how do we know whether to use will or shall, if both indicate future tense? According to a curious rule of prescriptive grammar, you should use shall in the first person (with ‘I’ or ‘we’), and will otherwise, unless you are being particularly emphatic, in which case the rule is reversed (which is why the fairy godmother tells Cinderella ‘you shall go to the ball!’). The dangers of deviating from this rule are illustrated by an old story in which a Frenchman, ignorant of the distinction between will and shall, proclaimed “I will drown; nobody shall save me!”. His English companions, misunderstanding his cry as a declaration of suicidal intent, offered no aid.

This rule was originally codified by John Wallis in 1653, and repeated with increasing consensus by grammarians throughout the 18th and early 19th centuries. However, it doesn’t appear to reflect the way the words were actually used at any point in time. For a long time shall and will competed on fairly equal terms – shall substantially outnumbers will in Shakespeare, for example – but now shall has given way almost entirely to will, especially in American English, with the exception of deliberative questions like shall we dance? You can see below how will has gradually displaced shall over the last few centuries, mitigated only slightly by the effect of the prescriptive rule, which is perhaps responsible for the slight resurgence of shall in the 1st person from approximately 1830-1920:

Until the eventual victory of will in the late 18th century, these charts (from this study) actually show the reverse of what Wallis’s rule would predict: will is preferred in the 1st person and shall in the 2nd, while the two are more or less equally popular in the 3rd person. Perhaps this can be explained by the different origins of the two futures. At the time when will still retained an echo of its earlier meaning ‘want’, we might expect it to be more frequent with ‘I’, because the speaker is in the best position to know what he or she wants to do. Likewise, when shall still carried a shade of its original meaning ‘ought’, we might expect it to be most frequent with ‘you’, because a word expressing obligation is particularly useful for trying to influence the actions of the person you are speaking to. Wallis’s rule may have been an attempt to be extra-polite: someone who is constantly giving orders and asserting their own will comes across as a bit strident at best. Hence the advice to use shall (which never had any connotations of ‘want’) in the first person, and will (without any implication of ‘ought’) in the second, to avoid any risk of being mistaken for such a character, unless you actually want to imply volition or obligation.

Words apart: when one word becomes two

As any person working with language knows, the list of words from which we build our sentences is not fixed, but rather in a state of constant flux. Words (or lexemes, in linguists’ terminology) are constantly being borrowed (such as ‘sauté’ from French), coined (such as ‘brexit’, a blend of ‘Britain’ and ‘exit’) or lost (such as ‘asunder’, a synonym for ‘apart’). This happens all the time. However, two further logical processes can alter the total number of entries in the dictionary of a language: lexemes may merge, when two or more become one, or split, when one becomes two. These more exotic cases constitute a window into the fascinating workings of the grammar. In this blog post I will present the story of one of these splitting events. It involves the Spanish verb saber, from Latin sapiō.

The verb’s original meaning must have been ‘taste’ in the sense of ‘having a certain flavour’, as in the sentence “Marmite tastes awful”. At some point it also began to be used figuratively to mean ‘come to know something’, not only by means of the sense of taste but also for knowledge arrived at through the other senses. It is interesting that in the Germanic languages it seems to have been sight, rather than taste, that was traditionally used in the same way. Consider, for instance, the common use in English of the verb ‘see’ in contexts like “I see what you mean”, where it is interchangeable with ‘know’. Whether the choice of source verb can be explained by the differences between traditional Mediterranean and Anglo-Saxon cuisines I’d rather not suggest, for fear of deportation.

In any case, what must have been once a figurative use of the verb ‘taste’ became at some point the default way of expressing ‘know’. These are the two main senses of saber in contemporary Spanish and of its equivalents in most other Romance languages. The question I ask here is: do speakers of Spanish today categorize this as one word with two meanings? Or do they feel they are two different words that just happen to sound the same? There may be a way to tell.

In Spanish, unlike in English, a verb can take dozens of different forms. The shape of a verb changes depending on who is doing the action, whether the action is a fact or a wish, etc. Thus, for example, speakers of Spanish say yo sé ‘I know’ but tú sabes ‘you know’. They also use one form (the so-called ‘indicative’) in sentences like yo veo que tú sabes inglés ‘I see that you know English’, but a different form (the so-called ‘subjunctive’) in yo espero que tú sepas inglés ‘I hope that you know English’. The Real Academia Española, the prescriptive authority on the Spanish language, has ruled that, because saber is a single verb, it should have the same forms (sé, sabes etc.) regardless of its particular sense. Speakers, however, have trouble abiding by this rule, which is probably the reason why the need for a rule was felt in the first place. My native-speaker intuition, and that of other speakers of Spanish, is that the verb may take a different form depending on its sense:

Forms of Spanish saber (forms starting with sab– in light gray, forms starting with sep– in dark gray)

The most obvious explanation for how this change could happen is that, when the two main senses of saber drifted sufficiently far away from each other, speakers ceased to make the generalisation that they were part of the same lexeme. When this happened, the necessity of having the same forms for the two meanings of saber disappeared. But why sepo?

Because cannibalism is on the wane (also in Spain), we hardly ever speak about how people taste. As a result, the first and second person forms of saber (e.g. irregular sé) are only ever encountered by speakers under the meaning ‘know’. Because of this, they do not count as evidence for language users’ deduction of the full array of forms of saber ‘taste’. This meant that the first and second person forms of saber₂ ‘taste’, when needed (imagine someone saying sepo salado ‘I taste salty’ after coming out of the sea), had to be formed on the fly, on evidence exclusive to the sense ‘taste’ (i.e. third person and impersonal forms).

Given the evidence available to speakers, at first sight it might seem strange that this ‘fill-in-the-gaps’ exercise did not result in the apparently more regular 1SG indicative form sabo, which would have produced a straightforward indicative vs subjunctive distinction in the stem. The chosen form, however, makes more sense when one observes the patterns of alternation present in other Spanish verbs:

Verbs that show a difference in stem between the third person indicative and subjunctive forms (cab- vs quep-, or ca- vs caig-) overwhelmingly use the subjunctive stem also in the formation of the first person singular indicative. This is a quirk of many Spanish verbs. It appears that, by sheer force of numbers, the pattern is spotted by native speakers and occasionally extended to other verbs which, like saber, look as if they could well belong to this class.
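The analogical step can be stated almost mechanically: for verbs of this class, the first person singular indicative is the subjunctive stem plus -o. A minimal sketch, using only the stems cited in this post:

```python
# Pattern of the caber/caer class: the 1SG indicative is built on the
# subjunctive stem, while the other indicative forms keep the plain stem
# (quepo / cabes / quepa; caigo / caes / caiga).
def first_singular(subjunctive_stem):
    """Fill the 1SG indicative gap by analogy with the caber/caer class."""
    return subjunctive_stem + "o"

print(first_singular("quep"))   # quepo (caber 'fit')
print(first_singular("caig"))   # caigo (caer 'fall')
# A speaker of saber 'taste' only ever hears third-person evidence
# (sabe / sepa); slotting the verb into this class yields:
print(first_singular("sep"))    # sepo, rather than the 'regular' sabo
```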

In this way, the tiny change from sé to sepo allows us linguists to see that patterns like those of caber and caer are part of the grammatical knowledge of speakers, and are not simply learnt by heart for each verb. In addition, it gives us crucial evidence to conclude that, today, there are in Spanish not one but two different verbs whose infinitive form is saber. Much like the T-Rex in Jurassic Park, we linguists can sometimes only see some things when they ‘move’.

A plurality of plurals

Of all the world’s languages, English is the most widely learnt by adults. Although Mandarin Chinese has the highest number of speakers overall, owing to the huge size of China’s population, second-language speakers of English outnumber those of Mandarin by more than three to one.

Considering that the majority of English speakers learn the language in adulthood, when our brains have lost much of their early plasticity, it’s just as well that some aspects of English grammar are pretty simple compared to other languages. Take for example the way we express the plural. With only a small number of exceptions, we make plurals by adding a suffix –s to the singular. The pronunciation differs depending on the last sound of the word it attaches to – compare the ‘z’ sound at the end of dogs to the ‘s’ sound at the end of cats, and the ‘iz’ at the end of horses – but it varies in a consistently predictable way, which makes it easy to guess the plural of an English noun, even if you’ve never heard it before.
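The rule is regular enough to fit in a few lines of code. Here is a rough sketch – it works from spelling rather than from actual sounds and ignores the handful of irregular nouns, so treat it as an approximation rather than a full analysis:

```python
def plural(noun):
    """Approximate the regular English plural rule from spelling."""
    # Nouns ending in a sibilant take the 'iz' variant, spelled -es,
    # or just -s if the spelling already ends in e: fox/foxes, horse/horses.
    if noun.endswith(("se", "ce", "ge", "ze")):
        return noun + "s"
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"
    # Everything else adds -s, pronounced 's' after voiceless sounds (cats)
    # and 'z' after voiced ones (dogs).
    return noun + "s"

for noun in ["cat", "dog", "horse", "fox", "church", "zoodle"]:
    print(noun, "->", plural(noun))
```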

That’s not the case in every language. Learners of Greek, for example, have to remember about seven common ways of making plurals. Sometimes knowing the final sounds of a noun and its gender makes it possible to predict the plural, but at other times learners simply have to memorise what kind of plural a noun has: for example, pateras ‘father’ and loukoumas ‘doughnut’ both have masculine gender and singulars ending in -as, but in Standard Greek their plurals are pateres and loukoumathes respectively.

This is similar to how English used to work. Old English had three very common plural suffixes, -as, -an and –a, as well as a number of less common types of plural (some of these survive marginally in a few high-frequency words, including vowel alternations like tooth~teeth and zero-plurals like deer). The modern –s plural descends from the suffix –as, which originally was used only for a certain group of masculine nouns like stān, ‘stone’ (English lost gender in nouns, too, but that’s a subject for another blog post).

How did the -s plural overtake these competitors to become so overwhelmingly predominant in English? Partly it was because of changes to the sounds of Old English as it evolved into Middle English. Unstressed vowels in the last syllables of words, which included most of the suffixes which expressed the gender, number and case of nouns, coalesced into a single indistinct vowel known as ‘schwa’ (written <ə>, and pronounced like the ‘uh’ sound at the beginning of annoying). Moreover, final –m came to be pronounced identically to –n. This caused confusion between singulars and plurals: for example, Old English guman ‘to a man’ and gumum ‘to men’ both came to be pronounced as gumən in Middle English. It also caused confusion between two of the most common noun classes, the Old English an-plurals and the a-plurals. As a result they merged into a single class, with -e in the singular and -en in the plural.
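You can mimic the effect of these two sound changes mechanically and watch the old endings collapse together. A toy simulation (the real Middle English developments were, of course, messier than two string rules):

```python
import re

def reduce_endings(word):
    """Toy Middle English reduction: final -m > -n, then the vowel of the
    final (unstressed) syllable > schwa (ə)."""
    if word.endswith("m"):
        word = word[:-1] + "n"
    # Replace the last vowel in the word with schwa.
    return re.sub(r"[aeiouy](?=[^aeiouy]*$)", "ə", word)

for old_english in ["guman", "gumum", "stanas"]:
    print(old_english, ">", reduce_endings(old_english))
# guman > gumən and gumum > gumən: 'to a man' and 'to men' now sound alike.
# stanas > stanəs: the old -as plural heads towards the modern -(e)s.
```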

This left Middle English with two main types of plural, one with –en and one with –(e)s. Although a couple of the former type remain to this day (oxen and children), the suffix –es was gradually generalised until it applied to almost all nouns, starting in the North of England and gradually moving South.

A similar kind of mass generalisation of a single strategy for expressing a grammatical distinction is often seen in the final stages of language death, as a community of speakers shifts from a minority to a majority language as their mother tongue. Nancy Dorian has spent almost 50 years documenting the dying East Sutherland dialect of Scots Gaelic as it is supplanted by English in three remote fishing villages in the Scottish Highlands. In one study the Gaelic speakers were divided into fluent speakers and ‘semi-speakers’, who used English as their first language and Gaelic as a second language. Dorian found that the semi-speakers tended to overgeneralise the plural suffix -an, applying it to words for which fluent speakers would have used one of ten other inherited strategies for expressing plural number, such as changing the final consonant of the word (e.g. phũ:nth ‘pound’, phũnčh ‘pounds’), or altering its vowel (e.g. makh ‘son’, mikh ‘sons’).

But why should the last throes of a dying language bear any resemblance to the evolution of a thriving language like English? A possible link lies in second-language acquisition by adults. At the same time as these changes were taking place, English was in intense contact with Scandinavian settlers who spoke Old Norse, and during this period it shows many signs of Old Norse influence. In addition to many very common words like take and skirt (which originally had a meaning identical to that of its native English cognate shirt), English borrowed several grammatical features of the Scandinavian languages, such as the suffix -s seen in third person singular present verbs like ‘she blogs’ (the inherited suffix ended in -th, as in ‘she bloggeth’), and the pronouns they, their and them, which replaced earlier hīe, heora and heom. Like the extension of the plural in -s, these innovations appeared earliest in Northern dialects of English, where settlements of Old Norse speakers were concentrated, and gradually percolated South during the 11th to 15th centuries.

It’s possible that English grammar was simplified in some respects as a consequence of what the linguist Peter Trudgill has memorably called “the lousy language-learning abilities of the human adult”. Research on second-language acquisition confirms what many of us might suspect from everyday experience, that adult learners struggle with inflection (the expression of grammatical categories like ‘plural’ within words) and prefer overgeneralising a few rules rather than learning many different ways of doing the same thing. In this respect, Old Norse speakers in Medieval England would have found themselves in a similar situation to semi-speakers of East Sutherland Gaelic – when confronted with a number of different ways of expressing plural number, it is hard to remember for each noun which kind of plural it has, but simple to apply a single rule for all nouns. After all, much of the complexity of languages is unnecessary for communication: we can still understand children when they make mistakes like foots or bringed.


What happened to whom (and why)?

Wh- words like which, whom and why get a lot of knickers in a twist, as attested by this Oatmeal comic on when to use who vs whom, or the age-old debate about the correct use of which vs that (on which see this blog post by Geoffrey Pullum). But in Old English the wh- words formed a complete and regular system which would have been easy to get the hang of. They were used strictly as interrogative pronouns – words that we use for asking questions like who ate all the pies? – rather than relative pronouns, which give extra information about an item in the sentence (Jane, who ate all the pies, is a prolific blogger) or narrow down the reference of a noun (women who eat pies are prolific bloggers). They developed their modern relative use in Middle English, via reinterpretation of indirect questions – in other words, sentences like she asked who ate all the pies, containing the question who ate all the pies?, served as the template for new sentences like she knew who ate all the pies, where who functions as a relative.

Who ate all the pies? They did.

Originally, the new relative pronoun whom (in its Middle English form hwām) functioned as the dative case form of who, used when the person in question is the indirect object of a verb or after prepositions like for. For direct objects, the accusative form hwone was used instead. So to early Middle English ears, the man for whom I baked a pie would be fine, while the man whom I baked in a pie would be objectionable (on grammatical as well as ethical grounds). Because nouns also had distinct nominative, dative and accusative forms, the wh- words would have posed no special difficulty for speakers. But as English lost distinct case forms for nouns, the pronoun system was also simplified, and the originally dative forms started to replace accusative forms, just as who is now replacing whom. This created a two-way opposition between subject and non-subject which is best preserved in our system of personal pronouns: we say he/she/they baked a pie, but I baked him/her/them (in) a pie.

Thus hwone went the way of hine, the old accusative form of he. Without the support of a fully-functioning case system in the nouns, other case forms of pronouns were reinterpreted. Genitive pronouns like my and his were transformed into possessive adjectives (his pie is equivalent to the pie of him, but you can no longer say things like I thought his to mean ‘I thought of him’). The wh- words also used to have an instrumental case form, hwȳ, meaning ‘by/through what?’, which became the autonomous word why.

Although him and them are still going strong, whom has been experiencing a steady decline. Defenders of ‘whom’ will tell you that the rule for deciding whether to use who or whom is exactly the same as that for he and him, but outside the most formal English, whom is now mainly confined to fixed phrases like ‘to whom it may concern’. For many speakers, though, it has swapped its syntactic function for a sociolinguistic one by becoming merely a ‘posh’ variant of who: in the words of James Harding, creator of the ‘Whom’ Appreciation Society, “those who abandon ‘whom’ too soon will regret it when they next find themselves in need of sounding like a butler.”

The death of the dual, or how to count sheep in Slovenian

‘How cool is that?’ in German, literally ‘how horny is that then?’

One reason why translation is so difficult – and why computer translations are sometimes unreliable – is that languages are more than just different lists of names for the same universal inventory of concepts. There is rarely a perfect one-to-one equivalence between expressions in different languages: the French word mouton corresponds sometimes to English sheep, and at other times to the animal’s meat, where English uses a separate word lamb or mutton.

This was one of the great insights of Ferdinand de Saussure, arguably the father of modern linguistics. It applies not only in the domain of lexical semantics (word meaning), but also in the grammatical categories around which languages organise their grammars. In English, we systematically use a different form of nouns and verbs depending on whether we are referring to a single entity or multiple entities. The way we express this distinction varies: sometimes we make the plural by adding a suffix to the singular (as with hands, oxen), sometimes we change the vowel (foot/feet), and occasionally we don’t mark the distinction on the noun at all, as with sheep (despite the best efforts of this change.org petition to change the singular to ‘shoop’). Still, we can often tell whether someone is talking about one or more sheep by the form of the agreeing verb: compare ‘the sheep are chasing a ball’ to ‘the sheep is chasing a ball’.

Some languages make more fine-grained number distinctions. The English word sheep could be translated as ovca, ovci or ovce in Slovenian, depending on whether you’re talking about one, two, or three or more animals, respectively. Linguists call this extra category between singular and plural the dual. The difference between dual and plural doesn’t show up just in nouns, but also in adjectives and verbs which agree with nouns. So to translate the sentence ‘the beautiful sheep are chasing a ball’, you need to ascertain whether there are two or more sheep, not just to translate sheep, but also beautiful and chase.

Lepi ovci lovita žogo.
beautiful sheep chase ball
‘The (two) beautiful sheep are chasing a ball.’

Lepe ovce lovijo žogo.
beautiful sheep chase ball
‘The (three or more) beautiful sheep are chasing a ball.’
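Where English grammar asks a two-way question (one, or more than one?), Slovenian asks a three-way one. A minimal sketch of the choice, using only the forms cited in this post and ignoring the further complications that Slovenian numerals introduce:

```python
def number_category(count):
    """Slovenian distinguishes three number categories, not two."""
    if count == 1:
        return "singular"
    if count == 2:
        return "dual"
    return "plural"

SHEEP = {"singular": "ovca", "dual": "ovci", "plural": "ovce"}

# Agreement: the adjective and the verb change along with the noun.
SENTENCE = {
    "dual":   "Lepi ovci lovita žogo.",   # exactly two sheep
    "plural": "Lepe ovce lovijo žogo.",   # three or more sheep
}

for n in [1, 2, 7]:
    print(n, "->", SHEEP[number_category(n)])
print(SENTENCE[number_category(2)])
print(SENTENCE[number_category(7)])
```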

According to some, having a dual number makes Slovenian especially suited for lovers (could this explain the Slovenian tourist board’s decision to title their latest campaign I feel sLOVEnia?). But putting such speculations aside, it’s hard to see what the point of a dual could be. We rarely need to specify whether we are talking about two or more than two entities, and on the rare occasions we do need to make this information explicit, we can easily do so by using the numeral two.

This might be part of the reason why many languages, including English, have lost the dual number. Both English and Slovenian ultimately inherited their dual from Proto-Indo-European, the ancestor of many of the languages of Europe and India. Proto-Indo-European made a distinction between dual and plural number in its nouns, adjectives, pronouns, and verbs, but most of the modern languages descended from it have abandoned this three-way system in favour of a simpler opposition between singular and plural. Today, the dual survives only in two Indo-European languages, Slovenian and Sorbian, both from the Slavic subfamily.

In English the loss of the dual was a slow process, taking place over thousands of years. By the time the predecessor of English had split off from the other Germanic languages, the plural had replaced the dual everywhere except in the first and second person pronouns we and you, and the verbs which agreed with them. By the time of the earliest written English texts, the dual forms of verbs had been lost altogether, though distinct pronouns for ‘we two’ and ‘you two’ remained. By the 15th century, these too were replaced by the plural forms, bringing about the dual’s final demise.

Grammatical categories do not always disappear without a trace – in some languages the dual has left clues to its earlier existence, even though no functional distinction between dual and plural remains. Like English, German lost its dual, but in some Southern German dialects the dual pronoun enk (cognate with Old English inc, ‘to you two’) has survived in place of the old plural form. In modern dialects of Arabic, plural forms of nouns have generally replaced duals, except in a few words mostly referring to things that usually exist in pairs, like idēn ‘hands’, where the old dual form has survived as the new plural instead. Other languages show vestiges of the dual only in certain syntactic environments. For example, Scottish Gaelic has preserved old dual forms of certain nouns only after the numeral ‘two’: compare aon chas ‘one foot’, dà chois ‘two feet’, trì casan ‘three feet’, casan ‘feet’.

Although duals seem to be on the way out in Indo-European languages, it isn’t hard to find healthy examples in other language families (despite what the Slovenian tourist board might say). Some languages have even more complicated number systems: Larike, one of the languages spoken in Indonesia, has a trial in addition to a dual, which is used for talking about exactly three items. And Lihir, one of the many languages of Papua New Guinea, has a paucal number in addition to both dual and trial, which refers to more than three but not many items. This system of five number categories (singular/dual/trial/paucal/plural) is one of the largest so far discovered. Meanwhile, on the other end of the spectrum are languages which don’t make any number distinction in nouns, like English sheep.