A “let’s circle back” guy

As everyone knows by now, for the foreseeable future we must all stay at home as much as possible to slow the spread of COVID-19 and reduce the burden on our health services – which has already been substantial, and will soon be enormous even in the best possible scenario.

This shift in the way we operate as a society will have a wide range of effects on our lives, which are already being noticed. Some of these were the kind of thing you might have thought of in advance – but others less so. For example, soon after the advice to work from home really started to bite in the US, a substantial thread developed on Twitter, all started off by the following tweet:

https://twitter.com/inLaurasWords/status/1240687424377720835

The thousands of responses that appeared within a few hours of this tweet show how deeply it resonated: many people must have been through their own version of the same surprising experience, some of them presumably in the last few days. But what happened here, and why was it so surprising? And why, as a linguist, am I sitting at home and writing a blog post about it now?

This single tweet, which people found so easy to identify with, in fact brings together a number of issues that linguists are interested in. For one thing, it works as a clear illustration of a point that people intuitively appreciate, but which has endless ramifications: the language you use is never just an instrument for communicating your thoughts, but is also taken to say something important about your identity, whether you intend it to or not. If a guy uses the expression “let’s circle back”, meaning to return to an issue later, that makes him a “let’s circle back” guy – that is, a particular kind of person. In a jokey way, the tweeter is implying that she already had a mental category of ‘the kind of person who would say things like that’, and she takes it for granted that we do too. In this case, the surprise for Laura Norkin was in suddenly discovering that her own husband belonged in that pre-existing category: the way she tells it, hearing him use a specific turn of phrase counted as finding out important new information about who he is as a person, which she was not necessarily best pleased about.

Making a linguistic choice: a bilingual road sign in Wales

Since the mid-twentieth century, the field of sociolinguistics has drawn attention to the fact that this kind of thing is going on everywhere in language. Consciously or unconsciously, people are making linguistic choices all the time – whether that means choosing between two totally different languages, between two different expressions with the same meaning (do you circle back to something or just return to it?), or between two very slightly different pronunciations of the same word. Any of these choices might turn out to ‘say something’ about how you see yourself – or how other people see you. And the social meanings and values assigned to the different choices are likely to change over time: so understanding what is going on with one person’s use of language really requires you to understand what is going on right across the community, which is like an ecosystem full of co-existing language diversity. How do linguistic developments, and the social responses to them, propagate and interact in this ecosystem? That’s something that researchers work hard to find out.

The tweet also picks up on the importance of the situational context for the way people use language. Laura Norkin had never heard her husband use the offending expression before because it belongs to a particular register – meaning a variety of language which is characteristic of a particular sphere of activity. Circling back is characteristic of ‘full work mode’, something which had never previously needed to surface in the domestic setting.

Why do registers exist? Partly it must be to do with the fact that different people know different things: for example, lawyers can expect to be able to use technical legal terminology with their colleagues, but not with their clients, even if they are talking about all the same issues – because behind the terminology there lies a wealth of specialist knowledge. Similarly, anyone would modify their language when talking to a five-year-old as opposed to a fifty-year-old.

But this cannot be the whole story: it doesn’t help you to explain the difference between returning and circling back. Should we think of the business/marketing/management world, where terms like circling back are stereotypically used, as a mini community within the community, with its own ideas of what counts as normal linguistic practice? Or is everyone involved giving a signal that they take on a new, businesslike identity when they turn up to the office – even if these days that doesn’t involve leaving the house? Again, working out the relationship between the language aspect and the social aspect here makes an interesting challenge for linguistics.

The medical profession is well known for having its own technical register

But this was not just an anecdote about how unusual it is to be at home and yet hear terms that usually turn up at work. We can tell that “let’s circle back”, just like other commonly mocked corporate expressions such as “blue-sky thinking” or “push the envelope”, is something we are expected to dislike – but why? The existence of different registers is not generally thought of as a bad thing in itself. You could give the answer that this expression is overused, a cliché, and thus sounds ugly. But really, things must be the other way round: English abounds in commonly used expressions, and only the ones that ‘sound ugly’ get labelled as overused clichés. And there is nothing inherently worse about circle back than about re-turn – in fact, when you think about it, they are just minor variations on the same metaphor.

So what is really going on here? The popular reaction to circle back, and other things of that kind, seems to involve lots of factors at once. The expression is new enough that people still notice it; but it is not unusual enough to sound novel or imaginative. It is currently restricted to a particular kind of professional setting that most people never find themselves in; but it does not refer to a complex or specific enough concept to ‘deserve’ to exist as a technical term. And we do not tend to worry too much about making fun of the linguistic habits of people who have a relatively privileged position in society: certainly, teasing your husband by outing him as a “let’s circle back” guy is not really going to do him any harm.

Spelling it out like this helps to suggest just how much information we are factoring in whenever we react to the linguistic behaviour of the people around us – and this is something we do all the time, mostly without even noticing. We are social beings, and cannot help looking for the social message in the things people say, as well as the literal message: establishing this fact, and working out how to investigate it scientifically, has been one of the great overarching projects of modern linguistics. Right now, for everyone’s benefit, we need to learn how to be less sociable than ever. But as the tweet above suggests, people’s inbuilt sensitivity to language as a social code is not going to change any time soon.

Cushty Kazakh

With thousands of miles between the East End of London and the land of Kazakhs, cushty was the last word one expected to hear one warm spring afternoon in the streets of Astana (the capital of Kazakhstan, since renamed Nur-Sultan). The word cushty (meaning ‘great, very good, pleasing’) is usually associated with the Cockney dialect of the English language which originated in the East End of London.

Del Boy from Only Fools and Horses

Check out Del Boy’s Cockney sayings (Cushty from 4:04 to 4:41).

Cockney is still spoken in London now, and the word is often used to refer to anyone from London, although a true Cockney would disagree with that, and would proudly declare her East End origins. More specifically, a true ‘Bow-bell’ Cockney comes from the area within hearing distance of the church bells of St. Mary-le-Bow, Cheapside, London.

Due to its strong association with modern-day London, the word ‘Cockney’ might be perceived as one with a fairly short history. This could not be further from the truth: its etymology goes back to the 14th-century late Middle English word cokenay, which literally means a “cock’s egg” – a useless, small, and defective egg laid by a rooster (which does not actually produce eggs). This pejorative term was later used to denote a spoiled or pampered child, a milksop, and eventually came to mean a town resident who was seen as affected or puny.

The pronunciation of the Cockney dialect is thought to have been influenced by Essex and other dialects from the east of England, while the vocabulary contains many borrowings from Yiddish and Romany (cushty being one of those borrowings – we’ll get back to that in a bit!). One of the most prominent features of Cockney pronunciation is the glottalisation of the sound [t], which means that [t] is pronounced as a glottal stop: [ʔ]. Another interesting feature of Cockney pronunciation is called th-fronting, which means that the sounds usually spelt th ([θ] as in ‘thanks’ and [ð] as in ‘there’) are replaced by the sounds [f] and [v]. These (and some other) phonological features characteristic of the Cockney dialect have now spread far and wide across London and other areas, partly thanks to the popularity of television shows like “Only Fools and Horses” and “EastEnders”.

As far as grammar is concerned, the Cockney dialect is distinguished by the use of me instead of my to indicate possession; heavy use of ain’t in place of am not, is not, are not, has not, have not; and the use of double negation which is ungrammatical in Standard British English: I ain’t saying nuffink to mean I am not saying anything.

Having borrowed words, Cockney also gave back generously, with derivatives from Cockney rhyming slang becoming a staple of the English vernacular. The rhyming slang tradition is believed to have started in the early to mid-19th century as a way for criminals and wheeler-dealers to code their speech beyond the understanding of police or ordinary folk. The code is constructed by way of rhyming a phrase with a common word, but only using the first word of that phrase to refer to the word. For example, the phrase apples and pears rhymes with the word stairs, so the first word of the phrase – apples – is then used to signify stairs: I’m going up the apples. Another popular and well-known example is dog and bone – telephone, so if a Cockney speaker asks to borrow your dog, do not rush to hand over your poodle!
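The derivation is mechanical enough to sketch in code. Purely as an illustration (the mini-lexicon below is just a handful of well-known pairs from this post, not a real slang dictionary), here it is in Python:

```python
# Rhyming slang: a full phrase rhymes with the target word, but only
# the FIRST word of the phrase is actually used in speech.
RHYMES = {
    "stairs": "apples and pears",
    "phone": "dog and bone",
    "street": "field of wheat",
}

def slang_for(word):
    """Return the clipped rhyming-slang form of a word, if we know one."""
    phrase = RHYMES.get(word)
    if phrase is None:
        return word  # no slang known: use the plain word
    return phrase.split()[0]  # keep only the first word of the phrase
```

So slang_for("stairs") comes back as apples – exactly the opacity that made the code useful: without knowing the full phrase, the rhyme (and hence the meaning) is unrecoverable.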


Test your knowledge of Cockney rhyming slang!

Right, so did I encounter a Cockney walking down the field of wheat (street!) in Astana saying how cushty it was? Perhaps it was a Kazakh student who had recently returned from his studies in London and couldn’t quite switch back to Kazakh? No and no. It was a native speaker of Kazakh reacting in Kazakh to her interlocutor’s remark on the new book she’d purchased by saying күшті [kyʃ.tɨˈ] which sounds incredibly close to cushty [kʊˈʃ.ti]. The meanings of the words and contexts in which they can be used are remarkably similar too. The Kazakh күшті literally means ‘strong’, however, colloquially it is used to mean ‘wonderful, great, excellent’ – it really would not be out of place in any of Del Boy’s remarks in the YouTube video above! Surely, the two kushtis have to be related, right? Well…

Recall that cushty is a borrowing from Romany (an Indo-European language) kushto/kushti, which, in turn, is thought to have been borrowed from Persian or Arabic. In the case of the Romany kushto/kushti, the source could have been the Persian khoši meaning ‘happiness’ or ‘pleasure’. It would have been very neat if this could be linked to the Kazakh күшті; however, there seems to be no connection there. Kazakh is a Turkic language, and the etymology of күшті can be traced back to the Old Turkic root küč meaning ‘power’, which does not seem to have been borrowed from or connected with Persian. Perhaps, had we been able to go back far enough, we might have found a common Indo-European-Turkic root in some Proto-Proto-Proto-Language. As things stand now, all we can do is admire what appears to be a wonderful coincidence, and enjoy the journeys on which a two-syllable word overheard in the street might take you.

Poolish

Courtesy of thefreshloaf.com

Those who, whether out of desire or dire necessity, have taken to baking their own bread may have encountered the term poolish. It refers to a semi-liquid pre-ferment used in bread-making: a mixture of half water and half white flour mixed with a teeny bit of yeast and allowed to ferment slowly for several hours, up to a day, before the final dough is mixed.

The word itself is an exceedingly odd one, and has been the source of much head-scratching and inconclusive speculation among bread-bakers across the world: it looks like the English word Polish, but is spelled funny, and anyway seems to be borrowed from French, where the spelling would be funnier still. Most discussions of the technique include the obligatory etymological digression, usually fantastical, involving journeymen Polish bakers fanning out over Europe. Linguists too have gotten on the trail: David Gold’s Studies in Etymology and Etiology (2009) devotes a whole page to the question, but does not get too far.

In its current form it is technical jargon from French commercial baking, and it probably made its way to a broader public through Raymond Calvel’s influential Le goût du pain (‘The taste of bread’) from 1990. In his account:

This method of breadmaking was first developed in Poland during the 1840s, from whence its name. It was then used in Vienna by Viennese bakers, and it was during this same period that it became known in France. (2001 edition translated by Ronald Wirtz)

This explanation has been widely accepted, and appears in one form or another in any number of bread-baking books. But how could it even be true? The first problem is the word itself. Poolish is not the French word for Polish, and doesn’t look much like a French word anyway. In earlier French texts it crops up as pouliche, which looks more French and is indeed the word for a young mare, whose connection to bread dough is tenuous at best. But earlier French texts also have the spelling poolisch or polisch, which looks rather more German than French and suggests we follow the Viennese trail instead.

This thread of inquiry has its own potential hiccoughs. The German word for Polish is polnisch, with an [n], so would this not just be fudging things? Actually not: polisch, poolisch, pohlisch or pollisch turn up often enough in older texts as alternative words for ‘Polish’, particularly in southern varieties of German, including those of Austria. And it is exactly in these forms that we find it being used to refer to this particular process, juxtaposed with Dampfl (or Dampfel or Dampel), the term in southern Germany and Austria for a rather stiffer pre-ferment which goes through a shorter rising period, as in these two examples from 1865, one from Leopold Wimmer’s self-published advertising screed for St. Marxer brand (of Vienna) pressed yeast, where it turns up as Pohlisch:

the other from Ignaz Reich’s (of Pest, as in Budapest) account of ancient Hebrew baking practices, where it’s rendered as pollisch.

The term polisch (in all its variants) in this sense seems to have died a natural death in German, only to reemerge during the current craft-baking revival in the guise of poolish.

But if poolish was originally the (or a) German word for Polish, we run up against the sticky question of what it was actually referring to. Calvel repeats the story that this technique was invented by Polish bakers (a story which also turns up in a 1972 article in The Atlantic Monthly – I think, anyway, because it is only coyly revealed by Google in snippet view), a supposition which lacks as much plausibility as it does historical attestation. Poland has traditionally been a land of sourdough rye bread. It seems unlikely that a novel technique involving the use both of white wheat flour and commercial pressed yeast (a relatively new product) would have been devised there and introduced into the imperial capital that was Vienna. So what on earth could it have meant?

Here I make my own foray into speculation; you read it here first. Poland is not just a land of sourdough rye bread, it is a land of a soup made from rye sourdough: żur or żurek (itself derived from sur, one variant of the German word for ‘sour’), still widely consumed and also sold in ready form for time-strapped gourmands. Since the Austro-Hungarian Empire included much of what had once been Poland, it isn’t too far-fetched to think that people in Vienna might have been familiar with this soup. And since the salient characteristic of poolish is that it is basically liquid, in opposition to more solid doughs, my guess is that the term poolish arose as a facetious allusion to żur: a soup-like fermenting dough mixture, like the thinned-out sourdough soup that Poles eat.

This theory has the minor drawback of lacking any positive evidence in its favor. So far the only 19th century reference to żur outside of its normal context that I have been able to find is as a cure for equine distemper, otherwise known as ‘strangles’. That leads us into the topic of pluralia tantum disease names…

Sense and polarity, or why meaning can drive language change

Generally a sentence can be negative or positive depending on what one actually wants to express. Thus if I’m asked whether I think that John’s new hobby – say climbing – is a good idea, I can say It’s not a good idea; conversely, if I do think it is a good idea, I can remove the negation not to make the sentence positive and say It’s a good idea. Both sentences are perfectly acceptable in this context.

From such an example, we might therefore conclude that any sentence can be made positive by removing the relevant negative word – most often not – from the sentence. But if that is the case, why is the non-negative response I like it one bit so odd, when its negative counterpart I don’t like it one bit is perfectly acceptable and natural?

This contrast has to do with the expression one bit: notice that if it is removed, then both negative and positive responses are perfectly fine: I could respond I don’t like it or, if I do like it, I (do) like it.

It seems that there is something special about the phrase one bit: it wants to be in a negative sentence. But why? It turns out that this question is a very big puzzle, not only for English grammar but for the grammar of most (all?) languages. For instance, in French the expression bouger/lever le petit doigt ‘lift a finger’ must appear in a negative sentence. Thus if I know that John wanted to help with your house move and I ask you how it went, you could say Il n’a pas levé le petit doigt (lit. ‘He didn’t lift the small finger’) if he didn’t help at all, but you could not say Il a levé le petit doigt (lit. ‘He lifted the small finger’) even if he did help to some extent.

Expressions like lever le petit doigt ‘lift a finger’, one bit, care/give a damn, and own a red cent are said to be polarity sensitive: they only really make sense if used in negative sentences. But this in itself is not their most interesting property.

What is much more interesting is why they have this property. There is a lot of research on this question in theoretical linguistics. The proposals are quite technical, but they all start from the observation that most expressions that need to be in a negative context to be acceptable are expressions of minimal degrees and measures. For instance, a finger or le petit doigt ‘the small finger’ is the smallest body part one can lift to do something, a drop (in the expression I didn’t drink a drop of vodka yesterday) is the smallest observable quantity of vodka, and so on.

Regine Eckardt, who has worked on this topic, formulates the following intuition: ‘speakers know that in the context of drinking, an event of drinking a drop can never occur on its own – even though a lot of drops usually will be consumed after a drinking of some larger quantity.’ (Eckardt 2006, p. 158). However the intuition is spelled out, the occurrence of this expression in a negative sentence is acceptable because it denies the existence of any event consisting of drinking just one drop.

What this means is that if Mary drank a small glass of vodka yesterday, although it is technically true to say She drank a drop of vodka (since the glass contains many drops) it would not be very informative, certainly not as informative as saying the equally true She drank a glass of vodka.

However, imagine now that Mary didn’t drink any alcohol at all yesterday. In this context, I would be telling the truth if I said either one of the following sentences: Mary didn’t drink a glass of vodka or Mary didn’t drink a drop of vodka. But now it is much more informative to say the latter. To see this, consider the following: saying Mary didn’t drink a glass of vodka could describe a situation in which Mary didn’t drink a glass of vodka yesterday but she still drank some vodka, maybe just a spoonful. If however I say Mary didn’t drink a drop of vodka then this can only describe a situation where Mary didn’t drink a glass or even a little bit of vodka. In other words, saying Mary didn’t drink a drop of vodka yesterday is more informative than saying Mary didn’t drink a glass of vodka yesterday because the former sentence describes a very precise situation whereas the latter is a lot less specific as to what it describes (i.e. it could be uttered in a situation in which Mary drank a spoonful of vodka, or maybe a cocktail that contains 2ml of vodka, etc.).
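The reasoning in the last two paragraphs can be made concrete with a toy model. Here situations are simply amounts of vodka drunk, and a statement is more informative the fewer situations it is compatible with; the specific quantities and the threshold-style truth conditions are simplifying assumptions of mine, not a serious semantics:

```python
# Situations: amounts of vodka drunk (ml). "A drop" is the minimal
# positive amount; "a glass" a much larger one.
SITUATIONS = [0, 5, 25, 100]   # nothing, a spoonful, a shot, a glass
DROP, GLASS = 1, 100

def compatible(truth_condition):
    """The set of situations a statement is compatible with."""
    return {s for s in SITUATIONS if truth_condition(s)}

didnt_drink_a_glass = compatible(lambda s: not s >= GLASS)  # {0, 5, 25}
didnt_drink_a_drop = compatible(lambda s: not s >= DROP)    # {0}

# Negating the minimal-quantity statement rules out more situations:
# it strictly entails (is a proper subset of) the negated glass-statement.
assert didnt_drink_a_drop < didnt_drink_a_glass
```

The negated drop-sentence pins down a single situation, while the negated glass-sentence leaves three open – which is just the informativity asymmetry described above.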

By using expressions of minimal degrees/measures in negative environments, the sentences become a lot more informative. This, it seems, is part of the reason why languages like English have changed such that these words are now only usable in negative sentences.

What’s the good of ‘would of’?

As schoolteachers the English-speaking world over know well, the use of of instead of have after modal verbs like would, should and must is a very common feature in the writing of children (and many adults). Some take this as an omen of the demise of the English language, and would perhaps agree with Fowler’s colourful assertion in A Dictionary of Modern English Usage (1926) that “of shares with another word of the same length, as, the evil glory of being accessory to more crimes against grammar than any other” (though admittedly this use of of has been hanging around for a while without doing any apparent harm: this study finds one example as early as 1773, and another almost half a century later in a letter of the poet Keats).

According to the usual explanation, this is nothing more than a spelling mistake. Following would, could etc., the verb have is usually pronounced in a reduced form as [əv], typically spelt would’ve, must’ve, and so on. It can even be reduced further to [ə], as in shoulda, woulda, coulda. This kind of phonetic reduction is a normal part of grammaticalisation, the process by which grammatical markers evolve out of full words. Given the famous unreliability of English spelling, and the fact that these reduced forms of have sound identical to reduced forms of the preposition of (as in a cuppa tea), writers can be forgiven for mistakenly inferring the following rule:

‘what you hear/say as [əv] or [ə], write as of’.

But if it’s just a spelling mistake, this use of ‘of’ is surprisingly common in respectable literature. The examples below (from this blog post documenting the phenomenon) are typical:

‘If I hadn’t of got my tubes tied, it could of been me, say I was ten years younger.’ (Margaret Atwood, The Handmaid’s Tale)

‘Couldn’t you of – oh, he was ignorant in his speech – couldn’t you of prevented it?’ (Hilary Mantel, Beyond Black)

Clearly neither these authors nor their editors make careless errors. They consciously use ‘of’ instead of ‘have’ in these examples for stylistic effect. This is typically found in dialogue to imply something about the speaker, be it positive (i.e. they’re authentic and unpretentious) or negative (they are illiterate or unsophisticated).

 

These examples look like ‘eye dialect’: the use of nonstandard spellings that correspond to a standard pronunciation, and so seem ‘dialecty’ to the eye but not the ear. This is often seen in news headlines, like the Sun newspaper’s famous proclamation “it’s the Sun wot won it!” announcing the Conservatives’ surprise victory in the 1992 general election. But what about sentences like the following from the British National Corpus?

“If we’d of accepted it would of meant we would have to of sold every stick of furniture because the rooms were not large enough”

The BNC is intended as a neutral record of the English language in the late 20th century, containing 100 million words of carefully transcribed and spellchecked text. As such, we expect it to have minimal errors, and there is certainly no reason it should contain eye dialect. As Geoffrey Sampson explains in this article:

“I had taken the of spelling to represent a simple orthographic confusion… I took this to imply that cases like could of should be corrected to could’ve; but two researchers with whom I discussed the issue on separate occasions felt that this was inappropriate – one, with a language-teaching background, protested vigorously that could of should be retained because, for the speakers, the word ‘really is’ of rather than have.”

In other words, some speakers have not just reinterpreted the rules of English spelling, but the rules of English grammar itself. As a result, they understand expressions like should’ve been and must’ve gone as instances of a construction containing the preposition of instead of the verb have:

Modal verb (e.g. must, would…) + of + past participle (e.g. had, been, driven…)
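One could even go hunting for this putative construction in raw text. The sketch below is a crude heuristic of my own, not a serious parser: the modal list and the past-participle test (words in -ed/-en, plus a few common irregular participles) are simplifying assumptions:

```python
import re

# A rough search for "modal + of + past participle" sequences.
MODALS = r"(?:would|could|should|must|might|may)"
IRREGULAR = r"(?:been|done|gone|had|got|seen|meant)"
PATTERN = re.compile(
    rf"\b({MODALS})\s+of\s+(\w+ed|\w+en|{IRREGULAR})\b",
    re.IGNORECASE,
)

def find_modal_of(text):
    """Return every 'modal of participle' sequence found in the text."""
    return [m.group(0) for m in PATTERN.finditer(text)]
```

Run over the Atwood example above, it picks out could of been; run over the dogs have been fed, it correctly finds nothing, since have there follows no modal.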

One way of testing this theory is to look at pronunciation. Of can receive a full pronunciation [ɒv] (with the same vowel as in hot) when it occurs at the end of a sentence, for example ‘what are you dreaming of?’. So if the word ‘really is’ of for some speakers, we ought to hear [ɒv] in utterances where of/have appears at the end, such as the sentence below. To my mind’s ear, this pronunciation sounds okay, and I think I even use it sometimes (although intuition isn’t always a reliable guide to your own speech).

I didn’t think I left the door open, but I must of.

The examples below from the Audio BNC, both from the same speaker, are transcribed as of but clearly pronounced as [ə] or [əv]. In the second example, of appears to be at the end of the utterance, where we might expect to hear [ɒv], although the amount of background noise makes it hard to tell for sure.

 “Should of done it last night when it was empty then” (audio) (pronounced [ə], i.e. shoulda)

(phone rings) “Should of.” (audio) (pronounced [əv], i.e. should’ve)

When carefully interpreted, writing can also be a source of clues on how speakers make sense of their language. If writing have as of is just a linguistically meaningless spelling mistake, why do we never see spellings like pint’ve beer or a man’ve his word? (Though we do, occasionally, see sort’ve or kind’ve). This otherwise puzzling asymmetry is explained if the spelling of in should of etc. is supported by a genuine linguistic change, at least for some speakers. Furthermore, have only gets spelt of when it follows a modal verb, but never in sentences like the dogs have been fed, although the pronunciation [əv] is just as acceptable here as in the dogs must have been fed (and in both cases have can be written ‘ve).

If this nonstandard spelling reflects a real linguistic variant (as this paper argues), this is quite a departure from the usual role of a preposition like of, which is typically followed by a noun rather than a verb. The preposition to is a partial exception, because while it is followed by a noun in sentences like we went to the party, it can also be followed by a verb in sentences like we like to party. But with to, the verb must appear in its basic infinitive form (party) rather than the past participle (we must’ve partied too hard), making it a bit different from modal of, if such a thing exists.

She must’ve partied too hard

Whether or not we’re convinced by the modal-of theory, it’s remarkable how often we make idiosyncratic analyses of the language we hear spoken around us. Sometimes these are corrected by exposure to the written language: I remember as a young child having my spelling corrected from storbry to strawberry, which led to a small epiphany for me, as that was the first time I realised the word had anything to do with either straw or berry. But many more examples slip under the radar. When these new analyses lead to permanent changes in spelling or pronunciation we sometimes call them folk etymology, as when the Spanish word cucaracha was misheard by English speakers as containing the words cock and roach, and became cockroach (you can read more about folk etymology in earlier posts by Briana and Matthew).

Meanwhile, if any readers can find clear evidence of modal of with the full pronunciation [ɒv], please comment below! I’m quite sure I’ve heard it, but solid evidence has proven surprisingly elusive…

No we [kæn]

If something bad happened to someone you hold in contempt, would you give a fig, a shit or a flying f**k? While figs might be a luxury food item in Britain, their historical status as something that is valueless or contemptible puts them on the same level as crap, iotas and rats’ asses for the purposes of caring.

In English, we have a wide range of tools for expressing apathy. But we don’t always agree on how to express it, and even use seemingly opposite affirmative and negative sentences to express very similar concepts. Consider the confusing distinction between ‘I couldn’t care less’ vs. ‘I could care less’, which are used in identical contexts by British and American speakers of English to mean pretty much the same thing. This mind-boggling pattern makes sense when we realise that those cold-hearted people who couldn’t care less have a care-factor of zero, while the others don’t care much, but could do so even less, if necessary.

Putting aside such oddities, negation is normally crucial to interpreting a sentence – words like ‘not’ determine whether the rest of the sentence is affirmative or negative (i.e. whether you’re claiming it is true or false). Accordingly, languages tend to mark negation clearly, sometimes in more than one place within a sentence. One of the world’s most robust languages in this respect is Bierebo, an Austronesian language spoken in Vanuatu, where no fewer than three words for expressing negation are required at once (Budd 2010: 518):

Mara   a-sa-yal              re         manu  dupwa  pwel.
NEG1   3PL.S-eat-find   NEG2  bird     ANA      NEG3
‘They didn’t get to eat the bird.’

While marking negation three times might seem a little inefficient, this pales in comparison to the problems that arise when you don’t clearly indicate it at all. We only have to turn to English to see this at work: the distinction between can [kæn] and can’t [kɑ:nt] in Received Pronunciation is frequently imperceptible in American varieties, where final /t/ is not released, resulting in [kæn] or [kən] in both affirmative and negative contexts.

You might think that once the word, affix or sound that indicates negation has been removed, there isn’t anywhere further to go. But some Dravidian languages spoken in India really push the boat out in this respect. Instead of adding some sort of negative word or affix to an affirmative sentence to signal negation, the tense affix (past -tt or future -pp) is taken away, as shown by the contrast between literary Tamil affirmatives and negatives:

pati-tt-ēn         pati-pp-ēn          patiy-ēn
‘I learned.’       ‘I will learn.’     ‘I do/did/will not learn.’
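The affix arithmetic above can be sketched in a few lines of code. This is a toy illustration of the pattern described in the post, not a real morphological analyzer; it uses the post’s transliterations, with the long vowel of -ēn written plainly as -en.

```python
# Toy sketch of the literary Tamil pattern: affirmatives carry a tense
# affix (past -tt, future -pp), while the negative is marked by the
# very absence of that affix.
TENSE = {"past": "tt", "future": "pp"}

def conjugate(stem, tense=None):
    if tense is not None:
        return f"{stem}-{TENSE[tense]}-en"   # stem + tense + 1SG agreement
    return f"{stem}y-en"                     # no tense slot = negative
                                             # (y is a glide between vowels)

print(conjugate("pati", "past"))    # pati-tt-en  'I learned'
print(conjugate("pati", "future"))  # pati-pp-en  'I will learn'
print(conjugate("pati"))            # patiy-en    'I do/did/will not learn'
```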

This is highly unusual from a linguistic point of view, and it’s tempting to think that languages avoid this type of negation because it is difficult to learn or doesn’t make sense design-wise. But historical records show similar patterns have been attested across Dravidian languages for centuries. This demonstrates that inflection patterns of this kind can be highly sustainable when they come about – so we might be stuck with the can/can’t collapse for a while to come.

On prodigal loanwords


Most people at some point in their life will have heard someone remark on how their language X (where X is any language) is getting corrupted by other languages and generally “losing its X-ness”. Today I would like to focus on one aspect of the so-called corruption of languages by other languages, namely lexical borrowings, and show that it’s perhaps not that bad.

European French (at least the French advertised by the Académie Française) is certainly a language about which its speakers worry, so much so that there is even an institution in charge of deciding what is French and what is not (see Helen’s earlier post). A number of English-looking/sounding words now commonly used in spoken French have indeed been taken from English, but English first took them from French!

For instance, the word flirter ‘to court someone’ is obviously adapted from English to flirt, and it has the same meaning in both languages. But the English word is itself an adaptation of the French word fleurette in the expression conter fleurette! The expression conter fleurette is no longer in casual use in spoken French.

“How could the universe live without your beauty?” “I wonder how sincere he is…”

Other examples of English words borrowed from (parts of) French expressions, which then got adapted back into French, follow.

Thus un rosbif is an adaptation into French of roast beef, which is itself an adaptation into English of the passive participle of the verb rostir “roast”, which later became rôtir in Modern French, and of buef “ox/beef”, which later became boeuf in Modern French.

The word un toast comes from English toast with the meaning “piece of toasted bread”. The English word itself was borrowed from tostée, an Old French noun derived from the verb toster which is not used in Modern French. The word pédigré comes from English pedigree but this word is itself adapted from French pied de grue “crane foot”, describing the shape of junctions in genealogical trees.

Pied de grue ‘Crane foot’

Finally, the verb distancer is transitive in Modern French, which means that it requires a direct object: thus the sentence in (a) is good because the verb distancer “distance” has a direct object, the phrase la voiture blanche “the white car”. By contrast, the construction in (b) is not acceptable (signified by the * symbol) because it lacks an object.

a. La voiture rouge a distancé la voiture blanche.
‘The red car distanced the white car.’
b. *La voiture rouge a distancé.

The (transitive) Modern French verb distancer comes from English to distance, which is itself a borrowing from the no-longer-used Old French verb distancer, which was exclusively intransitive with the meaning “be far” (that is, in Old French, distancer could only be used in a construction with no direct object).

Another instance: the word tonnelle ‘bower, arbor’ was borrowed into English and became tunnel under the influence of the local pronunciation. The word tunnel was then borrowed back by French to refer exclusively to… wait for it… tunnels. Both words now coexist in French with different meanings.

Une tonnelle ‘a bower’, Un tunnel ‘a tunnel’

Other examples of words that were borrowed into English and ‘came back’ into French with a different meaning follow.

The ancestor of tennis is the jeu de paume, during which players would say tenez “there you go” as they were about to serve (at that time the final “z” was pronounced [z]; it is not in Modern French). This word was adapted into English and became tennis, which was then borrowed back into French to refer to the sport jeu de paume evolved into.

Jeu de paume vs. tennis

The Middle French word magasin used to refer to a warehouse, a collection of things. This word was borrowed into English and came to refer to a collection of things on paper. The word magazine was then borrowed back into French with this new meaning.

The history of the word budget is also interesting. The word bouge used to mean “bag”, and a small bag was therefore a bougette (the -ette suffix is used as a diminutive, e.g. fourche “pitchfork” – fourchette “fork”). The word was borrowed into English, where its pronunciation was “nativized” and it came to refer to a small bag of money. It was then borrowed back into French with the new meaning of “allocated sum of money”. Finally, ticket was borrowed from English, which borrowed it from French estiquet, which referred to a piece of paper on which someone’s name was written.

This happens in other languages, of course. For instance, Turkish took the word pistakion ‘pistachio’ from (Ancient) Greek; in Turkish it became fistik. (Modern) Greek then borrowed the word back from Turkish, spelling it phistiki, still with the meaning ‘pistachio’.

The main lesson I draw from the existence of ‘prodigal loanwords’ is that impressions of language corruption often lack the historical perspective needed to ground them in reality. A French speaker looking at flirter ‘flirt’ may think that this is another sign of the influence of English, and they would be right, without being aware that this is, after all, the French word fleurette just coming back home.

Do you know other examples of prodigal loanwords? Please, share by commenting on this post!

Sources:
Henriette Walter, L’aventure des langues en Occident
Henriette Walter, Honni soit qui mal y pense
Jérôme Serme. 1998. Un exemple de résistance à l’innovation lexicale : les « archaïsmes » du français régional. Thèse, Lyon II.
Javier Herráez Pindado. 2009. Les emprunts aller-retour entre le français et l’anglais dans le sport. Universidad Politécnica de Madrid.

Reindeer = rein + deer?


In linguists’ jargon, a ‘folk etymology’ refers to a change that brings a word’s form closer to some easily analyzable meaning. A textbook example is the transformation of the word asparagus into sparrowgrass in certain dialects of English.

Although clear in theory, it is not easy to decide whether ‘folk etymology’ is called for in other cases. One which has incited heated coffee-time discussion in our department is the word reindeer. The word comes ultimately from Old Norse hreindyri, composed of hreinn ‘reindeer’ and dyri ‘animal’. In present-day English, some native speakers conceive of the word reindeer as composed of two meaningful parts: rein + deer. This is something which, in the Christian tradition at least, does make a lot of sense. Given that the most prominent role of reindeer in the West is to serve as Santa’s means of transport, an allusion to ‘reins’ is unsurprising. This makes the hypothesis of folk etymology plausible.

When one explores the issue further, however, things are not that clear. The equivalent words in other Germanic languages are often built the same way (e.g. German Rentier, Dutch rendier, Danish rensdyr, etc.) even though the element ren does not refer to the same thing as English rein. However, unlike English, some of these languages also have a way of referring to Rudolf that omits the ‘deer’ element altogether: German Ren, Swedish ren, Icelandic hreinn, etc.

Another thing that may be relevant is the fact that the word ‘deer’ has narrowed its meaning in English to refer just to a member of the Cervidae family and not to any living creature. Other Germanic languages have preserved the original meaning ‘animal’ for this word (e.g. German Tier, Swedish djur).

Since reindeer straightforwardly descends from hreindyri, it may seem that, despite the change in the meaning of the component words, we have no reason to believe that the word was altered by folk etymology at any point. However, the story is not that simple. Words that contained the diphthong /ei/ in Old Norse do not always appear with the same vowel in English. Contrast, for example, ‘bait’ (from Norse beita) and ‘hail’ (from heill) with ‘bleak’ (from bleikr) and ‘weak’ (from veikr). An orthographic reflection of the same fluctuation can be seen in the different pronunciations of the digraph ‘ei’ in words like ‘receive’ and ‘Keith’ vs. ‘vein’ and ‘weight’. It is, thus, not impossible that the preexistence of the word rein in (Middle) English tipped the balance towards the current pronunciation of reindeer over an alternative one like “reendeer”. Also, had the word not been analyzed by native speakers as a compound of rein+deer, it is not unthinkable that the vowels might have become shorter in current English (consider the case of breakfast, etymologically descending from break + fast).

So, is folk etymology applicable to reindeer? The dispute rages on. Some of us don’t think that folk etymology is necessary to explain the fate of reindeer. That is, the easiest explanation (in William of Occam’s sense) may be to say that the word was borrowed and merely continued its overall meaning and pronunciation in an unrevolutionary way.

Others are not so sure. The availability of “fake” etymologies like rein+deer (or even rain+deer before widespread literacy) seems “too obvious” for native speakers to ignore. The suspicion of ‘folk etymology’ might be aroused by the presence of a few mild coincidences such as the “right” vowel /ei/ instead of /i:/, the fact that the term was borrowed as reindeer rather than just rein as in some other languages (e.g. Spanish reno), or the semantic drift of deer exactly towards the kind of animal that a reindeer actually is. These factors all seem to conspire towards the analyzability of the word in present-day English, but would otherwise have to be put down to coincidence. Even if no actual change had been implemented in the pronunciation of reindeer, the morphological-semantic analysis of the word has definitely changed from its source language. Under a laxer definition of what folk etymology actually is, that could on its own suffice to label this a case of folk etymology.

There seems to be, as far as we can see, no easy way out of this murky etymological and philological quagmire that allows us to conclude whether a change in the pronunciation of reindeer happened at some point due to its analyzability. To avoid endless and unproductive discussion one sometimes has to know when to stop arguing, shrug and write a post about the whole thing.

Tongue twisters


Today I offer links to three international recipes: from Germany we have Kabeljau mit gebratener Blutwurst, Rosenkohl und Lakritzsauce (‘cod with pan-fried blood sausage, brussels sprouts and licorice sauce’), from France Cabillaud à la nage de réglisse (‘cod in licorice sauce’), and from Spain we have Lomo de bacalao en salsa de regaliz con juliana de judias verdes (‘filet of cod in licorice sauce with julienned green beans’).

We will report later on the Morph cook-off challenge, once we scare up some participants and tasters. In the meanwhile, take note of what all these recipes have in common: cod and licorice. While I can’t for the life of me fathom why anyone would think to combine them on a plate, they do share something, not culinarily, but linguistically. Let’s look at the words for these two ingredients as written in the recipes. They’re each vaguely similar across all three languages, but in a way which is hard to put your finger on. The word for ‘cod’ in all three languages has a [k] (spelled k or c, pronounced the same) and a [b], but their order switches between German and French on the one hand, and Spanish on the other. Similarly with ‘licorice’, where [l] and [r] switch places between German on the one hand and French and Spanish on the other:

         ‘cod’       ‘licorice’
German   Kabeljau    Lakritz
French   cabillaud   réglisse
Spanish  bacalao     regaliz

All neatly lined up here for comparison:

         ‘cod’   ‘licorice’
German   k b     l r
French   c b     r l
Spanish  b c     r l

This looks like an example of metathesis, where two sounds in a word swap places, as in English comfort versus comfortable, where the [t] and [r] switch places in pronunciation if not spelling (for those of us who pronounce the [r] at all, that is).
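The swaps in the comparison table above are easy to verify mechanically. Below is a toy sketch, assuming ASCII-simplified, lowercase spellings (réglisse written as reglisse), that pulls out just the two consonant letters of interest from each word:

```python
def order(word, sounds):
    """Return the target letters in the order they occur in the word."""
    return "".join(ch for ch in word if ch in sounds)

# 'cod': the velar stop (spelled k or c) and b swap places between
# German/French on the one hand and Spanish on the other
print(order("kabeljau", "kcb"))   # kb
print(order("cabillaud", "kcb"))  # cb
print(order("bacalao", "kcb"))    # bc

# 'licorice': l and r swap places between German and French/Spanish
print(order("lakritz", "lr"))     # lr
print(order("reglisse", "lr"))    # rl
print(order("regaliz", "lr"))     # rl
```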

Metathesis as a gastronomic selling point may need a bit of refinement, but it does make for some curious word histories. The case of ‘licorice’ is fairly clear. It started out as Greek glykyrrhīza ‘sweet root’ and was borrowed into Latin as liquiritia, where it is believed that the first part got slightly mangled because people thought it had something to do with liquor (an example of folk etymology). The Latin word was borrowed into Old High German as lakerize or lekerize, which is where the Modern German word comes from. Meanwhile, in Old French, Latin’s daughter language, the word ended up as licorece, which then made its way into English. It was after this that French made the switch to ricolece, swapping [l] and [r], whose first part again got mangled to réglisse through another bout of folk etymology, because people thought it had something to do with règle ‘ruler’ (since licorice will have been sold in the form of ruler-like bars).

The word ‘cod’ remains something of a mystery. The German and French words were both borrowed from Dutch, first attested (in Latin sources) as cabellauwus, represented in contemporary Dutch as kabeljauw. Spanish bacalao is not attested before 1500, and it is generally agreed that the spread of this word was due to Basque fishermen. But whether kabeljauw morphed into bacalao or vice versa, nobody knows. Equally, it could all be coincidence, and the resemblance between the two words is just chance, a point of view that gains some mild support from the fact that bacalao and its ilk refer to a salted fish, whereas kabeljauw and its cousins refer to the fresh fish. This is how Dutch ends up with two words, kabeljauw and bakkeljauw: the first being its native word, the second borrowed from Portuguese bacalhau in the former Dutch colony of Suriname and transported to the Netherlands with Surinamese immigrants, used to refer to a salted and dried fish (not necessarily cod). I have yet to see both on a menu, let alone combined in a single dish, but the search has only started.

(Sources: Etymologisch Woordenboek van het Nederlands, Etymologisches Wörterbuch des Deutschen, Dictionnaire électronique de l’Académie Française.)

Today’s vocabulary, tomorrow’s grammar


If an alien scientist were designing a communication system from scratch, they would probably decide on a single way of conveying grammatical information like whether an event happened in the past, present or future. But this is not the case in human languages, which is a major clue that they are the product of evolution, rather than design. Consider the way tense is expressed in English. To indicate that something happened in the past, we alter the form of the verb (it is cold today, but it was cold yesterday), but to express that something will happen in the future we add the word will. The same type of variation can also be seen across languages: French changes the form of the verb to express future tense (il fera froid demain, ‘it will be cold tomorrow’, vs il fait froid aujourd’hui, ‘it is cold today’).

The future construction using will is a relatively recent development. In the earliest English, there was no grammatical means of expressing future time: present and future sentences had identical verb forms, and any ambiguity was resolved by context. This is also how many modern languages operate. In Finnish huomenna on kylmää ‘it will be cold tomorrow’, the only clue that the sentence refers to a future state of affairs is the word huomenna ‘tomorrow’.

How, then, do languages acquire new grammatical categories like tense? Occasionally they get them from another language. Tok Pisin, a creole language spoken in Papua New Guinea, uses the word bin (from English been) to express past tense, and bai (from English by and by) to express future. More often, though, grammatical words evolve gradually out of native material. The Old English predecessor of will was the verb wyllan, ‘wish, want’, which could be followed by a noun as direct object (in sentences like I want money) as well as another verb (I want to sleep). While the original sense of the verb can still be seen in its German cousin (Ich will schwimmen means ‘I want to swim’, not ‘I will swim’), English will has lost it in all but a few set expressions like say what you will. From there it developed a somewhat altered sense of expressing that the subject intends to perform the action of the verb, or at least, that they do not object to doing so (giving us the modern sense of the adjective ‘willing’). And from there, it became a mere marker of future time: you can now say “I don’t want to do it, but I will anyway” without any contradiction.

This drift from lexical to grammatical meaning is known as grammaticalisation. As the meaning of a word gets reduced in this way, its form often gets reduced too. Words undergoing grammaticalisation tend to gradually get shorter and fuse with adjacent words, just as I will can be reduced to I’ll. A close parallel exists in the Greek verb thélō, which still survives in its original sense ‘want’, but has also developed into a reduced form, tha, which precedes the verb as a marker of future tense. Another future construction in English, going to, can be reduced to gonna only when it’s used as a future marker (you can say I’m gonna go to France, but not *I’m gonna France). This phonetic reduction and fusion can eventually lead to the kind of grammatical marking within words that we saw with French fera, which has arisen through the gradual fusion of earlier facere habet ‘it has to do’.

Words meaning ‘want’ or ‘wish’ are a common source of future tense markers cross-linguistically. This is no coincidence: if someone wants to perform an action, you can often be reasonably confident that the action will actually take place. For speakers of a language lacking an established convention for expressing future tense, using a word for ‘want’ is a clever way of exploiting this inference. Over the course of many repetitions, the construction eventually gets reinterpreted as a grammatical marker by children learning the language. For similar reasons, another common source of future tense markers is words expressing obligation on the part of the subject. We can see this in Basque, where behar ‘need’ has developed an additional use as a marker of the immediate future:

ikusi   behar   dut
see     need    AUX
‘I need to see’ / ‘I am about to see’

This is also the origin of the English future with shall. This started life as Old English sceal, ‘owe (e.g. money)’. From there it developed a more general sense of obligation, best translated by should (itself originally the past tense of shall) or must, as in thou shalt not kill. Eventually, like will, it came to be used as a neutral way of indicating future time.

But how do we know whether to use will or shall, if both indicate future tense? According to a curious rule of prescriptive grammar, you should use shall in the first person (with ‘I’ or ‘we’), and will otherwise, unless you are being particularly emphatic, in which case the rule is reversed (which is why the fairy godmother tells Cinderella ‘you shall go to the ball!’). The dangers of deviating from this rule are illustrated by an old story in which a Frenchman, ignorant of the distinction between will and shall, proclaimed “I will drown; nobody shall save me!”. His English companions, misunderstanding his cry as a declaration of suicidal intent, offered no aid.
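The prescriptive rule, as stated, amounts to an exclusive-or of person and emphasis, which can be captured in a two-line sketch (the function name is my own):

```python
def wallis_auxiliary(person, emphatic=False):
    """Prescriptive shall/will rule: shall in the first person and will
    otherwise, with the choice reversed for emphasis."""
    first_person = person == 1
    # != acts as XOR: emphasis flips the default choice
    return "shall" if first_person != emphatic else "will"

print(wallis_auxiliary(1))                 # shall  ('I shall go')
print(wallis_auxiliary(2))                 # will   ('you will go')
print(wallis_auxiliary(2, emphatic=True))  # shall  ('you shall go to the ball!')
print(wallis_auxiliary(1, emphatic=True))  # will   ('I will drown...')
```

On this reading, the unfortunate Frenchman’s “I will drown; nobody shall save me!” selects exactly the emphatic forms, hence his companions’ fatal misunderstanding.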

This rule was originally codified by John Wallis in 1653, and repeated with increasing consensus by grammarians throughout the 18th and early 19th centuries. However, it doesn’t appear to reflect the way the words were actually used at any point in time. For a long time shall and will competed on fairly equal terms – shall substantially outnumbers will in Shakespeare, for example – but now shall has given way almost entirely to will, especially in American English, with the exception of deliberative questions like shall we dance? You can see below how will has gradually displaced shall over the last few centuries, mitigated only slightly by the effect of the prescriptive rule, which is perhaps responsible for the slight resurgence of shall in the 1st person from approximately 1830-1920:

Until the eventual victory of will in the late 18th century, these charts (from this study) actually show the reverse of what Wallis’s rule would predict: will is preferred in the 1st person and shall in the 2nd, while the two are more or less equally popular in the 3rd person. Perhaps this can be explained by the different origins of the two futures. At the time when will still retained an echo of its earlier meaning ‘want’, we might expect it to be more frequent with ‘I’, because the speaker is in the best position to know what he or she wants to do. Likewise, when shall still carried a shade of its original meaning ‘ought’, we might expect it to be most frequent with ‘you’, because a word expressing obligation is particularly useful for trying to influence the action of the person you are speaking to. Wallis’s rule may have been an attempt to be extra-polite: someone who is constantly giving orders and asserting their own will comes across as a bit strident at best. Hence the advice to use shall (which never had any connotations of ‘want’) in the first person, and will (without any implication of ‘ought’) in the second, to avoid any risk of being mistaken for such a character, unless you actually want to imply volition or obligation.