Remember, remember

A lot of the work that linguists do involves taking a language as it is spoken at a particular time, finding generalizations about how it operates, and coming up with abstractions to make sense of them. In English, for example, we identify a category of ‘number’ (with possible values ‘singular’ and ‘plural’); and we do that because in many ways the relationship between cat and cats is the same as that between mouse and mice, man and men, and so on, meaning that it would be useful to treat all of these pairings as specific examples of a more general phenomenon. We can then make the further generalization that whatever this linguistic concept of ‘number’ really is, it is not only relevant to nouns but also to verbs, and to some other items too – because English speakers all know that this cat scratches whereas these cats scratch, and you can’t have any other combination like *these cat scratch.

A black cat wearing bat wings for Halloween
This bat scratches

Once you start looking, you discover layer upon layer of generalizations like these, and you need more and more abstractions in order to take care of them all. This all gives rise to a view of language as a kind of machine built out of abstract principles, all coexisting at the same time inside a speaker’s head. On that basis, we can ask questions like: are there any principles that all languages use? Does having pattern X always go along with having pattern Y? Are there any generalizations that you can easily come up with, but that turn out not to be found anywhere? What does all this tell us about human psychology?

But that is not the only approach to language we could take. While we can point to a general principle of English to explain what is wrong with these cat, there is no similar principle explaining why we refer to the meowing, purring, scratching creature as a cat in the first place. The word cat has nothing feline about it, and the fact that we use that sequence of sounds – rather than e.g. tac – is not based on some higher-level truth that applies for all English speakers right now: instead, the ‘explanation’ is rooted in the fact that this is the word we happened to inherit from earlier generations of speakers.

Portrait photo of General Burnside, featuring his famous sideburns
General Ambrose Burnside (1824-1881)

So studying the etymology of individual words serves as a good reminder that as well as an abstract, principled system residing in human minds, every language is also a contingent historical artefact, shaped by the peoples and cultures of the past.1 Nothing makes this more obvious than the continued existence of ordinary vocabulary items that commemorate individuals from centuries gone by – often without modern-day speakers even knowing it. In English, sandwiches are named after the Earl of Sandwich, wellingtons are named after the Duke of Wellington, and cardigans are named after the Earl of Cardigan; and the parallelism here says something about the locus of cultural influence in Georgian and Victorian Britain. More cryptically, sideburns owe their name to a General Burnside of the US Army, justly famed for his facial hair; algorithms celebrate the Persian mathematician al-Khwarizmi; and Duns Scotus, although a towering figure of medieval philosophy, now lives on in the word dunce popularized by his academic opponents.2

But which historical figure has had the greatest success of all in getting his name woven into the fabric of modern English? I reckon that, against all the odds, it could well be this Guy.

A close up of the face of Guy Fawkes, labelled Guido Fawkes, from a depiction of several conspirators together

While all English speakers are familiar with the word guy as an informal word corresponding to man, probably not that many know that it can be traced back to a historical figure from 400 years ago who, in a modern context, would be called a religious terrorist. Guy Fawkes was one of the conspirators in the ‘Gunpowder Plot’ of November 1605: with the aim of installing a Catholic monarchy, they planned to assassinate England’s Protestant king, James I, by blowing up Parliament with him inside. Fawkes was not one of the leaders of the conspiracy, but he was the one caught red-handed with the gunpowder; as a result, one cultural legacy of the plot’s failure is the celebration every 5th November (principally in the UK) of Guy Fawkes Night, which commonly involves letting off fireworks and setting a bonfire on which a crude effigy of Fawkes was traditionally burnt.

But how did the name of one specific Guy, for a while the most detested man in the English-speaking world, end up becoming a ubiquitous informal term applying to any man? The crucial factor is the effigy. It is unsurprising that this came to be called a Guy, ‘in honour’ of the man himself; but by the 19th century, the word was also being used to refer to actual men who dressed badly enough to earn the same label, in the way one might jokingly liken someone to a scarecrow (one British woman writing home from Madras in 1836 commented: ‘The gentlemen are all ‘rigged Tropical’,… grisly Guys some of them turn out!’). It is not a big step from there to using guy as a humorous and, eventually, just a colloquial word for men in general.3

Procession of a Guy (1864)

And of course the story does not stop there. While a guy is still almost always a man, for many speakers the plural guys can now refer to people in general, especially as a term of address. The idea that a word with such unambiguously masculine origins could ever be treated as gender-neutral has been something of a talking point in recent years, as in this article from The Atlantic about the rights and wrongs of greeting women with a friendly ‘hey guys’; but the fact that it is debated at all shows that it is happening. In fact, there is good reason to think that in some varieties of English, you-guys is being adopted as a plural form of the personal pronoun you: one piece of evidence is the existence of special possessive forms like your-guys’s, a distinctively plural version of your.

It is interesting to notice that the rise of non-standard you-guys, not unlike y’all and youse, goes some way towards ‘fixing’ an anomaly within modern English as a system: almost all nouns, and all other personal pronouns, have distinct singular and plural forms, whereas the standard language currently has the same form you doing double duty as both singular and plural. Any one of these plural versions of you might eventually win out, further strengthening the (already pretty reliable) generalization that English singulars and plurals are formally distinct. This just goes to show that the two ways of looking at language – as a synchronic system, and as a historical object – need to complement each other if we really want to understand what is going on. At the same time, it is fun to think of linguists of the distant future researching the poorly attested Ancient English language of the twenty-second century, and wondering where the mysterious personal pronoun yugaiz came from. Would anyone who didn’t know the facts dare to suggest that the second syllable of this gender-neutral plural pronoun came from the given name of a singular male criminal, executed many centuries before?

  1. For example, cat itself seems to be traceable back to an ancient language of North Africa, reflecting the fact that cats were household animals among the Egyptians for millennia before they became popular mousers in Europe.
  2. Of course, it is no accident that all of these examples feature men. Relatively few women in history have had the opportunity to turn into items of English vocabulary; in fact, fictional female characters – largely from classical mythology – have had much greater success, giving us e.g. calypso, rhea and Europe.
  3. A similar thing also happened to the word joker in the 19th century, though it didn’t get as far as guy: that suggests that sentences containing guy would once have had the same ring to them as Who’s this joker?; and then some joker turns up and says…
The Story of Aubergine

As the University of Surrey’s foremost (and indeed only) blog about languages and how they change, MORPH is enjoyed by literally dozens of avid readers from all over the world. But so far these multitudes have not received an answer to the one big linguistic question besetting modern society. Namely, what on earth is going on with the name of the plant that British English calls the aubergine, but that in other times and places has been called eggplant, melongene, brown-jolly, mad-apple, and so much more? Where do all these weird names come from?

I think the time has finally come to put everyone’s mind at rest. Aubergines may not seem particularly eggy, melonish, jolly or mad, but lots of the apparently diverse and whimsical terms for them used in English and other languages are actually connected – and in trying to understand how, we can get some insight into how vocabulary spreads and develops over time. It turns out that one powerful impulse behind language change is the fact that speakers like to ‘make sense’ of things that do not inherently make sense. What do I mean by that? Stay tuned to find out.

Long purple aubergine

To get one not-so-linguistic point out of the way first, there is no real mystery about eggplant (the word generally used in the US and some other English-speaking countries, dating back to the 18th century), which is not linked to anything else I am talking about here. It is hard to imagine mistaking the large, purple fruit in the photo above for any kind of egg, but that is not the only kind of aubergine in existence. There are cultivars with a much more oval shape, and even ones with white rather than purple skin: pictures like this, showing an imposter alongside some real eggs, make it obvious how the word eggplant was able to catch on.

Small white egg-shaped aubergine in an egg box between two real eggs

Meanwhile, aubergine, which is borrowed from French as you might expect, has a much more complex history, and can be traced back over many centuries, hopping from language to language with minor adjustments along the way. The plant is not native to the US, Britain or France, but to southern or eastern Asia, and investigating the history of the word will eventually take us back in the right geographical direction. Aubergine got into French from the Catalan albergínia, whose first syllable gives us a clue as to where we should look next: as in many al- words in the Iberian peninsula (e.g. Spanish algodón ‘cotton’), it reflects the Arabic definite article. So, along with medieval Spanish alberengena, the Catalan item is from Arabic al-bādhinjān ‘the aubergine’, where only the bādhinjān bit will be relevant from here on. This connection makes sense, because the Arab conquest had such an impact on the history of Iberia. And more generally, we have the Arabs to thank for the spread of aubergine cultivation into the West, and also – indirectly – for this charming illustration in a 14th-century Latin translation of an Arabic health manual:

Illustration featuring three people in front of a stand of aubergine plants
Page from the 14th c. Tacuinum Sanitatis (Vienna), SN2644

But bādhinjān is not Arabic in origin either: it was borrowed into Arabic from its neighbour, Persian. In turn, Persian bādenjān is a borrowing from Sanskrit vātiṅgaṇa… and Sanskrit itself got this from some other language of India, probably belonging to the unrelated Dravidian family. The word for aubergine in Tamil, vaṟutuṇai, is an example of how the word developed inside Dravidian itself.

That is as far back as we are able to trace the word. But the journey has already been quite convoluted. To recap, a Dravidian item was borrowed into Sanskrit, from there into Persian, from there into Arabic, from there into Catalan, from there into French, and from there into English – and in the course of that process, it managed to go from something along the lines of vaṟutuṇai to the very different aubergine, although the individual changes were not drastic at any stage. The whole thing illustrates how developments in language can go with cultural change, in that words sometimes spread together with the things they refer to. In the same way, tea reached Europe via two routes originating in different Chinese dialect zones, and that is what gave rise to the split between ‘tea’-type and ‘chai’-type words in European languages:

[Map created by Wikimedia user Poulpy, licensed CC BY-SA 3.0, cropped for use here]
This still leaves a lot of aubergine words unaccounted for. But now that we have played the tape backwards all the way from aubergine back to something-like-vaṟutuṇai, we can run it forwards again, and see what different historical paths we could follow instead. For example, Arabic had an influence all over the Mediterranean, and so it is no surprise to see that about a thousand years ago, versions of bādhinjān start appearing in Greece as well as Iberia. Greek words could not begin with b- at the time, so what we see instead are things like matizanion and melintzana, and melitzana is the Greek for aubergine to this day. There is no good pronunciation-based reason for the Greek word to have ended up beginning with mel-, but what must have happened is that faced with this foreign string of sounds, speakers thought it would be sensible for it to sound more like melanos ‘dark, black’, to match its appearance. That is, they injected a bit of meaning into what was originally just an arbitrary label.

Meanwhile the word turns up in medieval Latin as melongena (giving the antiquated English melongene) and in Italian as melanzana, and a similar thing happened: here mel- has nothing to do with the dark colour of the fruit, but it did remind speakers of the word for ‘apple’, mela. We know this because melanzana was subsequently reinterpreted as the expression mela insana, ‘insane apple’. To produce this interpretation, it must have helped that the aubergine (like the equally suspicious tomato) belongs to the ‘deadly’ nightshade family, whose traditional European representatives are famously toxic. So, again, something that was originally just a word, with no deeper meaning inside, was reimagined so that it ‘made sense’. As a direct translation, English started calling the aubergine a mad-apple in the 1500s.

Parody of the "Keep Calm and Carry On" posters, reading "You don't have to be mad to work here but it helps"
Poster from a 16th c. aubergine factory

There are many more developments we could trace. For example, I have not talked at all about the branch of this aubergine ‘tree’ that entered the Ottoman Empire and from there spread widely across Europe and Asia. But instead I will return now to the Arab conquest of Iberia. This brought bādhinjān into Portuguese in the form beringela, and then when the Portuguese started making conquests of their own, versions of beringela appeared around the world. Notably, briñjal was borrowed into Gujarati and brinjal into Indian English, meaning that something-like-vaṟutuṇai ultimately came full circle, returning in this heavy disguise to its ancestral home of India. And to end on a particularly happy note, when the same form brinjal reached the Caribbean, English speakers there saw their own opportunity to ‘make sense’ of it – this time by adapting it into brown-jolly.

Brown-jolly is pretty close to the mark in terms of colour, and it is much better marketing than mela insana. But from the linguist’s point of view, they both reinforce a point which has often been made: speakers are always alive to the possibility that the expressions they use are not just arbitrary, but can be analysed, even if that means coming up with new meanings which were not originally there. To illustrate the power of ‘folk etymology’ of this kind, linguists traditionally turn to the word asparagus, reinterpreted in some varieties of English as sparrow-grass. But perhaps it is time for us to give the brown-jolly its moment in the sun.

The linguistic archaeology of feet

There’s been excitement recently about evidence that humans had set foot in the Americas as much as 22,500 years ago, pushing back the previous best estimate by almost ten thousand years. And by ‘set foot’, I mean literally. The tell-tale new evidence comes to us in the form of imprints left by human feet in a particularly well-preserved mudflat in New Mexico. So far, the humans themselves have not been uncovered by archaeologists, but their characteristic mark upon the mud has endured.

When linguists peer into the past, we too will occasionally use the imprints left by something that has otherwise been lost to infer its presence long ago. All of which brings us to the topic of feet: not the kind you’d use to walk across a mudflat, but the English word ‘feet’ itself, which contains a wonderful imprint of a long-lost vowel.

Our story begins with the fact that in English, the word ‘feet’ is a little odd. It’s a plural that doesn’t end in ‘s’. As any child will tell you, you can’t get away with saying ‘foots’ for the plural of ‘foot’ for very long before someone bigger than you corrects it to ‘feet’. However, given that most English nouns do use an ‘s’ plural, it’s entirely sensible to ask why ‘feet’ is different. (Of course, ‘feet’ isn’t absolutely unique: English contains a select club of other, similar plurals like ‘geese’ and ‘teeth’, to which we’ll return in a minute.)

The tale of ‘feet’ begins around two millennia ago, when it was in fact a regular plural word. In proto-Germanic, the singular form would have been ‘fōt-s’ (pronounced approximately as fohts, where ‘ō’ is a long ‘o’ sound) and its corresponding plural ‘fōt-iz’, constructed with a simple plural suffix ‘-iz’. Over the following centuries, the sounds at the end of the plural form were worn away and eventually lost, as often happens during language change. However, before the suffix disappeared entirely, the ‘i’ vowel in it left its imprint on the ‘ō’ vowel, changing it to ‘ȫ’, which is to say ‘fōtiz’ became ‘fōti’ then ‘fȫti’ then ‘fȫt’ which by Old English had become ‘fēt’ and is now ‘feet’. In the meantime, the singular form ‘fōts’, which contained no ‘i’ vowel, changed very little indeed: it lost its suffix ‘-s’, becoming ‘fōt’ and then modern English ‘foot’. A similar story lies behind the plurals ‘geese’ and ‘teeth’: an original suffixal vowel ‘i’ changed ‘ō’ into ‘ȫ’, before disappearing, then ‘ȫ’ became ‘ē’.
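
The ordered changes described above can be sketched as a chain of simple string rewrites. This is a toy illustration, not a serious phonological model: the rules and the five stages are taken directly from the paragraph above, and the rule ordering is hard-coded for just these forms.

```python
# A sketch of the sound changes behind 'foot'/'feet', applied as
# ordered string rewrites. Stages follow the post:
# fōtiz > fōti > fȫti > fȫt > fēt ('feet'), while fōts > fōt ('foot').

def derive(form):
    """Apply the post's sound changes in order, recording each stage."""
    stages = [form]

    # 1. Loss of final -z (the plural suffix starts to erode)
    if form.endswith("z"):
        form = form[:-1]
        stages.append(form)

    # 2. i-umlaut: a following 'i' fronts 'ō' to 'ȫ'
    if "i" in form and "ō" in form:
        form = form.replace("ō", "ȫ")
        stages.append(form)

    # 3. Loss of the final 'i' that triggered the umlaut
    if form.endswith("i"):
        form = form[:-1]
        stages.append(form)

    # 4. Loss of final -s in the singular
    if form.endswith("s"):
        form = form[:-1]
        stages.append(form)

    # 5. Unrounding: 'ȫ' becomes 'ē' (Old English fēt)
    if "ȫ" in form:
        form = form.replace("ȫ", "ē")
        stages.append(form)

    return stages

print(derive("fōtiz"))  # ['fōtiz', 'fōti', 'fȫti', 'fȫt', 'fēt']
print(derive("fōts"))   # ['fōts', 'fōt']
```

The point the toy makes concrete is that the singular never meets rule 2: with no ‘i’ in the word, no umlaut happens, so ‘fōts’ sails through almost unchanged while its plural is reshaped step by step.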

You might say that the ‘i’ vowel left its imprint upon original ‘ō’ in the form of the altered vowel ‘ȫ’. One tool that linguistic archaeologists put to good use is our knowledge of the characteristic imprints that one sound can leave upon another. In the case of the long-lost ‘i’ vowel, the imprint even has a name: umlaut. Historical umlaut is also what lies behind plurals like ‘mice’ and ‘men’.

Armed with the background knowledge that lost ‘i’ vowels changed ‘ō’ into ‘ȫ’, and in doing so gave rise to modern English alternations between ‘oo’ and ‘ee’, we can now go fossicking through the vocabulary for more lost ‘i’ vowels. Another suffix that was lost over the centuries was a causative suffix, which related nouns to verbs, such as ‘blood’ to ‘bleed’, or ‘food’ to ‘feed’: as you’ll have guessed, the verbs once contained a now-lost ‘i’. In some cases, pairs of sibling words such as these have grown apart over time. For instance, if you were to decide someone’s fate (or their ‘doom’) then you’d be judging them (or ‘deeming’ them), though as you can see, I had to produce a fairly contrived context to highlight the relatedness of ‘doom’ and ‘deem’.

Umlaut caused by a now-lost ‘i’ also crops up in several nouns ending in ‘-th’: compare not only ‘strong’ with ‘strength’, ‘long’ with ‘length’, or ‘broad’ with ‘breadth’, but also ‘hale’ with ‘health’ and ‘foul’ with ‘filth’.

feet made filthy by umlaut!

Over decades of meticulous work, linguists have uncovered much about how languages around the world change over time, though much more still remains to be accounted for. One of the many lingering questions is what conditions favour the continued survival of idiosyncratic word forms like ‘feet’ long after they have lost their regularity. We know that many irregular words, such as the Old English plural ‘bēc’ for ‘books’ (corresponding to singular ‘bōc’), get levelled away over time, yet others persist for millennia. It’s an ongoing task for linguists to understand why some footprints remain while others get washed away.

How to break an impasse

Have Brexit negotiations met an impasse (where the first vowel sounds like the vowel in ‘him’), or an impasse where the vowel is like the initial sound in the French word bain /bɛ̃/? Or is it something in between?

If it is the former, congratulations! This borrowing from French has been successfully integrated into your native phonology, whilst simultaneously making a nod to its orthography.

If you opt to French-it-up then you have recognised that this word is not an Anglo-Saxon one, and that it should be flagged as such by keeping the pronunciation classic. Or you are French.

If you are somewhere between these two extremes, you are in good company. This highly topical word has no less than 12 British variants listed in the OED, reflecting various solutions to integrating the nasalized French vowel /ɛ̃/ and stress pattern into English:

Choosing which pronunciation to use for impasse is both a linguistic and social minefield, with every utterance revealing something about your education and social networks. No pressure then.

Recent news reports are providing a very rich corpus of data on the pronunciation of this specific word, with many variants being used within the same news report by different speakers, and perhaps even the same speaker.

For those yet to commit, choosing which to pick may be bewildering. So how do we avoid this impasse? Perhaps unsurprisingly, one tactic speakers use is to avoid using a word they aren’t confident pronouncing altogether. It might be safer to stick to deadlock.

Watch BBC Political Editor Laura Kuenssberg translate deadlock into German, Spanish and French.

Ultimately, our cousins across the pond may have some influence in resolving this issue in the long term. The OED lists only two variants for U.S. English, with variation based on stress, not vowel quality, and U.S. variants of words (e.g. schedule, U.S. /ˈskɛdʒuːl/ vs U.K. /ˈʃɛdjuːl/) are widely adopted in the speech of the UK public. But adoption is not guaranteed, and the multiple UK variants may well continue for some time.

This impasse goes to show that languages tend to tolerate a whole lot of diversity, even when the world of politics doesn’t.

Sense and polarity, or why meaning can drive language change

Generally a sentence can be negative or positive depending on what one actually wants to express. Thus if I’m asked whether I think that John’s new hobby – say climbing – is a good idea, I can say It’s not a good idea; conversely, if I do think it is a good idea, I can remove the negation not to make the sentence positive and say It’s a good idea. Both sentences are perfectly acceptable in this context.

From such an example, we might therefore conclude that any sentence can be made positive by removing the relevant negative word – most often not – from the sentence. But if that is the case, why does the positive response I like it one bit sound so odd, when its negative counterpart I don’t like it one bit is perfectly acceptable and natural?

This contrast has to do with the expression one bit: notice that if it is removed, then both negative and positive responses are perfectly fine: I could respond I don’t like it or, if I do like it, I (do) like it.

It seems that there is something special about the phrase one bit: it wants to be in a negative sentence. But why? It turns out that this question is a very big puzzle, not only for English grammar but for the grammar of most (all?) languages. For instance, in French the expression bouger/lever le petit doigt ‘lift a finger’ must appear in a negative sentence. Thus if I know that John wanted to help with your house move and I ask you how it went, you could say Il n’a pas levé le petit doigt (lit. ‘He didn’t lift the small finger’) if he didn’t help at all, but you could not say Il a levé le petit doigt (lit. ‘He lifted the small finger’) even if he did help to some extent.

Expressions like lever le petit doigt ‘lift a finger’, one bit, care/give a damn, own a red cent are said to be polarity sensitive: they only really make sense if used in negative sentences. But this in itself is not the most interesting property.

What is much more interesting is why they have this property. There is a lot of research on this question in theoretical linguistics. The proposals are quite technical, but they all start from the observation that most expressions that need to be in a negative context to be acceptable are expressions of minimal degrees and measures. For instance, a finger or le petit doigt ‘the small finger’ is the smallest body part one can lift to do something, a drop (in the expression I didn’t drink a drop of vodka yesterday) is the smallest observable quantity of vodka, etc.

Regine Eckardt, who has worked on this topic, formulates the following intuition: ‘speakers know that in the context of drinking, an event of drinking a drop can never occur on its own – even though a lot of drops usually will be consumed after a drinking of some larger quantity’ (Eckardt 2006, p. 158). However it is phrased, the intuition is that the occurrence of this expression in a negative sentence is acceptable because it denies the existence of events that consist of drinking just one drop.

What this means is that if Mary drank a small glass of vodka yesterday, although it is technically true to say She drank a drop of vodka (since the glass contains many drops) it would not be very informative, certainly not as informative as saying the equally true She drank a glass of vodka.

However, imagine now that Mary didn’t drink any alcohol at all yesterday. In this context, I would be telling the truth if I said either one of the following sentences: Mary didn’t drink a glass of vodka or Mary didn’t drink a drop of vodka. But now it is much more informative to say the latter. To see this, consider the following: saying Mary didn’t drink a glass of vodka could describe a situation in which Mary didn’t drink a glass of vodka yesterday but she still drank some vodka, maybe just a spoonful. If however I say Mary didn’t drink a drop of vodka, then this can only describe a situation where Mary didn’t drink a glass or even a little bit of vodka. In other words, saying Mary didn’t drink a drop of vodka yesterday is more informative than saying Mary didn’t drink a glass of vodka yesterday, because the former sentence describes a very precise situation whereas the latter is a lot less specific as to what it describes (i.e. it could be uttered in a situation in which Mary drank a spoonful of vodka, or maybe a cocktail containing 2ml of vodka, etc.).
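
The reasoning above can be made concrete with a toy model in which a ‘situation’ is just the amount Mary drank, measured in drops. The specific quantities (a glass = 100 drops) and the function name are illustrative assumptions, not anything from the semantics literature; the point is only that the minimal-degree negative is compatible with far fewer situations.

```python
# Toy model: a situation = how many drops of vodka Mary drank.
situations = range(0, 201)  # possible amounts, in drops

def compatible(amount):
    """Situations compatible with 'Mary didn't drink <amount> drops',
    i.e. situations where she drank less than that amount."""
    return {s for s in situations if s < amount}

glass, drop = 100, 1  # illustrative: a glass holds 100 drops

not_a_glass = compatible(glass)  # anything under a glass, incl. a spoonful
not_a_drop = compatible(drop)    # only the situation where she drank nothing

# The minimal-degree sentence rules out strictly more situations,
# so it is the more informative of the two negatives:
assert not_a_drop < not_a_glass           # strict subset
print(len(not_a_glass), len(not_a_drop))  # prints: 100 1
```

Informativeness here is simply inverse compatibility: ‘didn’t drink a drop’ pins down exactly one situation, while ‘didn’t drink a glass’ leaves a hundred open.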

By using expressions of minimal degrees/measures in negative environments, the sentences become a lot more informative. This, it seems, is part of the reason why languages like English have changed such that these words are now only usable in negative sentences.

Adventures in Historical Linguistics

While linguistics does not cut the same kind of glamorous profile in fiction as, say, international espionage or organized crime, it does pop up now and again. Even historical linguistics. Having stumbled across a couple of older examples recently (thus, historical fictional historical linguistics), I commend them to our readers as an alternative to the cheap thrills that might otherwise tempt them.

Léon Groc’s Deux mille ans sous la mer (‘2000 years under the sea’), from 1924, starts out with our heroes supervising the construction of a tunnel under the English Channel. They discover a mysterious inscription on a rock face. Fortunately, one of the party is a philologist, and identifies it as Chaldean (i.e. a form of Aramaic)! And a particularly archaic variety at that. This impresses the rest of the party at least as much as the content of the inscription itself: Impious invaders, you shall not go any further. However, a subsequent mining accident forces them to break through the rock, where they discover a cavern inhabited by a race of pale blind people, descendants of Chaldeans (or, to be more precise, speakers of Chaldean) who had sought refuge in that cavern from some long-forgotten disaster, only to discover they couldn’t find a way out. The learned philologist applies his practical knowledge of Chaldean in communicating with them. I won’t spoil the fun for those of you planning to read it; but it does not go well.

James De Mille’s A Strange Manuscript Found in a Copper Cylinder, from 1888, features members of a British expedition surveying the South Pacific who become stranded in an unknown country with – once again – some cave dwellers, who call themselves Kosekin and speak a Semitic language. In the usual fashion of such stories in this period, there is a narrative within a narrative: in this case, the manuscript directly relating the adventure, and the commentary of the members of the yacht party who discovered it. While the core narrator (named More) merely recognizes some affinity to Arabic, one of the members of the yacht party just so happens – once again – to have a philological background, which, after a lengthy digression on the comparative method and Grimm’s law, leads him to conclude that the underground race speaks a language descended from Hebrew:

I can give you word after word that More has mentioned which corresponds to a kindred Hebrew word in accordance with ‘Grimm’s Law.’ For instance, Kosekin ‘Op,’ Hebrew ‘Oph;’ Kosekin ‘Athon,’ Hebrew ‘Adon;’ Kosekin ‘Salon,’ Hebrew ‘Shalom.’ They are more like Hebrew than Arabic, just as Anglo-Saxon words are more like Latin or Greek than Sanscrit.

Further proof of the power of historical linguistics in a tight situation comes from E. Charles Vivian’s City of Wonder (1923). Again in the South Pacific, a group of adventurers is attacked by a strange woman (speaking, of course, a strange language) in charge of a monkey army. Taking stock after having slaughtered the attackers, the narrator asks one of his companions:

“What is the language she used?” I asked.

“The nearest I can tell you, so far, is that it’s a sort of bastard Persian,” he answered. “It’s a dialect built on a Sanskrit foundation—in my youth I studied Sanskrit, for it’s the key to every Aryan language or dialect in the East, and I always meant to come East. I must stuff you two.”

“Stuff us?” Bent asked.

“Fill you up with words that will be useful—it’s astonishing what you can do in a language if you know three or four hundred words in common use. If you hear it and have to make yourself understood in it, the construction of sentences very soon comes to you. That is, if the language is built on an Aryan foundation, as this is.”

It’s that easy! You just need to learn the method.

Back underground, Howard De Vere’s A Trip to the Center of the Earth, first published in New York Boys’ Weekly in 1878, is a story I haven’t been able to track down yet, but from the description in E.F. Bleiler’s Science Fiction: The Early Years, it promises to be one of the high points in early dime novel treatments of historical linguistics. A pair of boys exploring Kentucky’s Mammoth Cave come across an underground world where

pallid underground people speak English of a sort, in which inflections have disappeared and certain alterations have taken place.

What could those certain alterations be? As an added bonus, the story is of culinary interest, as the next sentence of Bleiler’s description goes:

Geophagists, they live on a nourishing clay, access to which is sometimes barred by gigantic spiders of extraordinary venomosity.

Alongside lost race fantasies, futuristic science fiction is another obvious vehicle for literary forays into historical linguistics. Régis Messac’s Quinzinzinzili from 1935 is a particularly interesting variant, being – as far as I know – the only serious fictional treatment of contact linguistics. (Admittedly I haven’t looked elsewhere.) Set in the period after a fictional World War II (which everybody in this interwar period seemed to be expecting anyway), its narrator is trapped in a post-apocalyptic world alone with a particularly annoying handful of pre-teens. (And thus it is probably the most gruesome post-apocalyptic story ever written.) They are largely French speakers, but there are Portuguese speakers and English speakers among them as well. They develop a sort of pidginized French, colored by spontaneous sound changes such as the nasalization of all vowels, along with curious semantic shifts. The title Quinzinzinzili reflects all this, being their rendition of the second clause of the Lord’s Prayer in Latin (qui es in cœlis ‘who art in Heaven’), used as a name for their inchoate deity. I won’t say any more, because I think everybody should read it. Way better than Lord of the Flies, which it preceded and superficially resembles. (And which has no noteworthy linguistic content.)

And if anybody knows a good source for back issues of New York Boys’ Weekly, our lines are open.