Category: Prescriptivism

Is twote the past of tweet?

Have you ever encountered the form twote as a past tense of the verb to tweet? It is something of a meme on Twitter, and a live example of analogy (and its mysteries). However surprising the form may sound if you have never encountered it, it has been the prescribed one for a long time:

https://twitter.com/Twitter/status/47851852070522880?s=20

Ten years later, the question popped up among a linguisty Twitter crowd, where a poll again elected twote as the correct form:

It is clear that this unusual form replacing tweeted is some sort of joke, but why specifically twote? Here and there I saw references to the verb to yeet, a slang verb very popular on the internet and meaning more or less “to throw”. Rather than a regular form yeeted, the past of to yeet is often taken to be yote. The choice of an irregular form is probably meant to produce a comedic effect.

This, precisely, is analogical production: creating a new form (twote) by extending a contrast seen in other words (yeet/yote). Analogy is a central topic in my research. I have been trying to answer questions such as: How do we decide what form to use? How difficult is it to guess? How does this contribute to language change?

But first, have you answered the poll?

What is the past tense of “to tweet”?

To investigate further why we would say twote rather than tweeted, I took out my PhD software (Qumin). Based on 6064 examples of English verbs1, I asked Qumin to produce and rank possible past forms of tweet2. To do so, it read through examples to construct analogical rules (I call them patterns), then evaluated the probability of each rule among the words which sound like tweet.
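To give a flavour of what this involves, here is a minimal, purely illustrative sketch in Python. It is not Qumin: the real software works on sounds rather than spelling, uses the full set of 6064 verbs, and estimates probabilities more carefully; the handful of verb pairs and the simple “reliability” score below are invented for the example.

```python
# Toy sketch of analogy-based ranking of past-tense candidates.
# NOT the real Qumin: it works on spelling instead of sounds, and on a
# handful of invented (present, past) pairs instead of 6064 real verbs.
from collections import Counter

LEXICON = [
    ("greet", "greeted"), ("treat", "treated"), ("heat", "heated"),
    ("meet", "met"), ("beat", "beat"), ("yeet", "yote"),
]

def pattern(present, past):
    """Describe present -> past as 'strip suffix a, add suffix b'."""
    i = 0
    while i < min(len(present), len(past)) and present[i] == past[i]:
        i += 1
    return present[i:], past[i:]        # e.g. meet/met -> ("et", "t")

def rank_past_forms(target, lexicon):
    """Apply every attested pattern to `target` and score it by a crude
    'reliability': how many of the verbs the pattern could apply to actually use it."""
    attested = Counter(pattern(pres, past) for pres, past in lexicon)
    scores = {}
    for (strip, add), n_used in attested.items():
        if not target.endswith(strip):
            continue                                   # pattern does not apply
        applicable = [p for p, _ in lexicon if p.endswith(strip)]
        candidate = target[: len(target) - len(strip)] + add
        reliability = n_used / len(applicable)
        scores[candidate] = max(scores.get(candidate, 0.0), reliability)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_past_forms("tweet", LEXICON))
# On this toy data, tweeted comes out on top; twet, twote and tweet trail behind.
```

Even this toy version captures the core idea: a pattern like yeet/yote can in principle apply to tweet, but it is supported by far fewer similar verbs than the regular -ed pattern.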

https://twitter.com/cavaticat/status/1212056421082251265

Qumin found four options3: tweeted (/twiːtɪd/), by analogy with 32 similar words, such as greet/greeted; twet (/twɛt/), by analogy with words like meet/met; tweet (/twiːt/), by analogy with words like beat/beat; and finally twote (/twəˑʊt/), by analogy with yeet. Figure 1 provides their ranking (in ascending order) according to Qumin, with the associated probabilities.

Figure 1. Qumin’s ranking of the probabilities of potential past forms of to tweet: twote 0.028 < tweet 0.056 < twet 0.056 < tweeted 0.86.

As we can see, Qumin finds twote to be the least likely solution. This is a reasonable position overall (indeed, tweeted is the regular form), so why would both the official Twitter account and many Twitter users (including several linguists) prefer twote to tweeted?

For one thing, Qumin has no idea what is cool, a factor which makes yeet/yote (already a slang word, popular on the internet) a particularly appealing choice. Moreover, Qumin has no access to semantic similarity, which could also play a role: verbs with similar meanings can be preferred as support for the analogy. In the current case, both speak/spoke and write/wrote have pasts similar to twote, which might help make it sound acceptable. Some speakers seem to be aware of these factors, as seen in the tweet above.

What about usage?

Are most speakers aware of the variant twote, and do they use it? Before concluding that the model is mistaken, we need to observe what speakers actually use. Indeed, only usage truly determines “what is the past of tweet”. For this, I turn to (automatically) sifting through Twitter data.

Speakers must choose between tweeted and twote: what a dilemma!

A few problems: first, the form “tweet” is also a noun, and identical to the present tense of the verb. Second, “twet” is attested (sometimes as “twett”), but mostly as a synonym for the noun “tweet” (often in a playful “lolcat” style) or as a present verbal form, with a few exceptions, usually of a meta nature (see tweets below). I couldn’t find a way to automatically distinguish these from past forms while staying within the Twitter API limits, so I left both out of the search entirely. This leaves only our two main contestants.
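For the curious, here is a rough and entirely hypothetical sketch (in Python, over plain tweet text) of the kind of exclusion heuristics described in footnote 4. It is not the code actually used for this post: the quote characters, the “-ly” test for adverbs and the example tweets are all made up for illustration, and the real collection also had to work within the Twitter API limits.

```python
import re
from collections import Counter

FORMS = ("tweeted", "twote")
QUOTES = "'\"“”‘’"
# auxiliary + optional adverb, signalling a likely past participle
AUX = r"\b(?:has|have|had|is|are|was|were)(?:\s+\w+ly)?\s+"

def is_mention(text, form):
    """Heuristic version of the exclusions in footnote 4: drop tweets that
    talk about the form rather than simply using it."""
    t = text.lower()
    if any(q + form in t for q in QUOTES):        # the form is being quoted
        return True
    if all(f in t for f in FORMS):                # both variants in one tweet
        return True
    if "past tense" in t or "#" in t:             # meta discussion or hashtag
        return True
    if re.search(AUX + form + r"\b", t):          # probably a past participle
        return True
    return False

def count_past_forms(tweets):
    """Count tweets that genuinely use each past form."""
    counts = Counter()
    for text in tweets:
        for form in FORMS:
            if re.search(rf"\b{form}\b", text.lower()) and not is_mention(text, form):
                counts[form] += 1
    return counts

print(count_past_forms([
    "I twote about this yesterday",           # counted for twote
    "she tweeted it this morning",            # counted for tweeted
    "the past tense of tweet is 'twote'",     # excluded: meta discussion
    "he has definitely tweeted worse things", # excluded: past participle
]))
# Counter({'twote': 1, 'tweeted': 1})
```

The point is simply that “counting twote” means counting uses, not mentions.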


I extracted as many recent tweets containing tweeted or twote as Twitter would let me — around 300,000 tweets twotten between the 26th of August and the 3rd of September. 186,777 tweets remained after refining the search4. Of these, less than 5,000 contain twote:

Counts of tweets containing either of the two possible pasts of the verb “to tweet” in the past few days on Twitter (mentions excluded): more than 180,000 occurrences of tweeted and less than 5,000 of twote.

As you can see, the tweeted bar completely dwarfs the other one. However amusing and fitting twote may be, and despite @Twitter’s prescription (but conforming with Qumin’s prediction), the regular past form is by far the most used, even on the platform itself, which lends itself to playful and impactful statements. This easily closes this particular English Past Tense Debate. If only it were always this simple!

  1. The English verb data I used includes only the present and past tenses. It is derived from the CELEX2 dataset, as used in my PhD dissertation, and manually supplemented by the forms for “yeet”. The CELEX2 dataset is commercial, and I cannot distribute it.
  2. The code I used for this blog post is available here, but not the dataset itself. Note that, for scientific reasons I won’t discuss here, this software works on sounds, not orthography.
  3. One last possibility has been ignored by this polite software: a form which follows the pattern of sit/sat. I see it used from time to time for its comic effect, but it does not seem nearly frequent enough to be a real contestant (and I do not recommend searching for this keyword on Twitter).
  4. Since there has been a lot of discussion of the correct form, I exclude all clear cases of mentions. I count as mentions any occurrences wrapped in quotation marks, co-occurring with alternate forms, mentioning the past tense, or accompanied by a hashtag. Moreover, for the forms in -ed, the past participle is likely to be identical, whereas for twote the past participle could well be twotten. To reduce the bias due to the presence of past participles among the uses of tweeted, I also exclude all contexts where the word is preceded by one of the auxiliary forms has, have, had, is, are, was, were, possibly separated by an adverb.
Christmas Gifts

Recently, a friend of mine received an email saying that because of their hard work in difficult circumstances this year, he and his colleagues would all be “gifted” a few extra days off over Christmas. And the other day I saw someone else wondering on Facebook: ‘when did the word “given” cease to exist, and why is everything “gifted” now?’ So with the festive season fast approaching, it seems like a good time to ask: is there really something funny going on with the word gift?

Once you gift it a bit of thought, I don’t think I am gifting anything away by pointing out that the verb to give is still very much with us. But the rise of a rival verb to gift, in some contexts where you’d expect to give, has been receiving attention for a while now: in recent years it has been discussed on National Public Radio in the US (The Season of Gifting) and in The Atlantic magazine (‘Gift’ is Not a Verb). Whether or not it bothers you personally, you may well have noticed the trend. The existence of gift as a noun is just a mundane fact of life, but apparently the corresponding verb gets people talking.

Gifted children

Now, nobody would be surprised to learn that English changes over time, or even that it has pairs of words that mean more or less the same thing… how much difference is there between liberty and freedom, or between little and small? And in fact, synonyms have an important role to play in language change. If we look back and notice that one expression has been replaced by another – a historical change in the vocabulary, as when the Shakespearian anon gave way to at once – then there must have been an intervening period when they were both around with pretty much the same meaning, and people had a choice of which one to use.

Does that mean that we do now find ourselves in the very early stages of a long historical process which will eventually result in to gift replacing to give altogether? If that’s the case, in a few generations’ time people will be saying things like ‘Never gift up!’ or ‘Could you gift me a hand?’.

Frankly, my dear, I don’t gift a damn

But whatever happens in the future, that clearly isn’t the situation now. So if English often provides multiple ways of saying the same thing, why have people taken the coexistence of to give and to gift as something to get worked up about – and can linguistics shed any light on what is going on here?

One thing that makes this specific pairing stand out is that the two words are just so similar. Gift is obviously connected with give in the first place: that makes it easy to wonder why anyone would bother to avoid the obvious word, only to pick an almost identical one. Another factor (as the title of The Atlantic article makes clear) is the idea that gift is really a noun, and so people shouldn’t go around using it as a verb.

But if we take a broader view, it turns out that what is happening with to gift is not out of the ordinary. Instead, it fits neatly with some things that linguists have already noticed about English and about language change more generally. For one thing, English is very good at ‘using nouns as verbs’ – which is why we can hammer (verb) with a hammer (noun), fish (verb) for fish (noun), and so on. So a verb gift, meaning ‘give as a gift’, goes well with what the language already does. What often happens is that when a new verb of this kind starts to take off, not all speakers are happy about it, but after a while it gains acceptance. For example, the twentieth century saw complaints about verbs-from-nouns such as to host, to access or to showcase, but they grate less on people nowadays.

You could even try hammering with a fish!

Ultimately, the ability to create words like this is just an ‘accidental’ fact about English, which also has various other ways of making verbs from nouns – for example, turning X into ‘X-ify’ (person-ify, object-ify) or ‘be-X’ (be-friend, be-witch). The bigger question may be: as we already have the verb give, why would anyone bother to make a verb gift in the first place, and why would it ever catch on? It might seem that by definition, a gift is something you give, so inventing a term meaning ‘give as a gift’ is pointless.

But that is not how things really are. Gifts are given, but that doesn’t mean that everything that can be given counts as a gift: a traffic warden might give you a parking ticket and in return you might give him a piece of your mind, but the noun gift doesn’t cover either of those things. Among other restrictions on its use, it is generally associated with positive feelings: if you give something as a gift, it is usually something tangible that you expect to be warmly received, and that carries over into the verb to gift itself.

This subtle difference between to give and to gift explains why for the moment it is impossible to gift someone a sidelong glance, or lots of extra work to do. But apparently it is becoming possible to gift an employee some time off, even though that is not a physical present that can be handed over and unwrapped. Evidently, the writer just felt like using a verb that sounded a bit more interesting and positive than to give, and the ‘warmly received’ part of the meaning was enough to outweigh the lack of any tangible object involved.

This is an example of something that happens all the time in language change. Naturally, while a word is still restricted in its use, it is more noticeable and interesting than a word you hear regularly. As a result, sometimes people decide to go for the less common word even where it doesn’t quite belong, to achieve some kind of extra effect… but over time, this process makes the word sound less and less special, until it eventually becomes the new normal. We don’t even need to look far to find this happening precisely to the word ‘gift’ in other languages: French donner ‘give’ is based on don ‘gift’, and it has totally wiped out the normal verb for give that ‘should’ have been inherited from Latin.

So if speakers and writers of English continue to chip away at the restrictions on gift as a verb, maybe one day it really will replace give altogether. Of course, that idea sounds totally outlandish at the moment – but then, I’m sure the ancient Romans would have thought much the same thing. You never know what will happen next: language change truly is the gift that keeps on giving!

What’s the good of ‘would of’?

As schoolteachers the English-speaking world over know well, the use of of instead of have after modal verbs like would, should and must is a very common feature in the writing of children (and many adults). Some take this as an omen of the demise of the English language, and would perhaps agree with Fowler’s colourful assertion in A Dictionary of Modern English Usage (1926) that “of shares with another word of the same length, as, the evil glory of being accessory to more crimes against grammar than any other” (though admittedly this use of of has been hanging around for a while without doing any apparent harm: this study finds one example as early as 1773, and another almost half a century later in a letter of the poet Keats).

According to the usual explanation, this is nothing more than a spelling mistake. Following ‘would’, ‘could’ etc., the verb have is usually pronounced in a reduced form as [əv], spelt would’ve, must’ve, and so on. It can be reduced even further to [ə], as in shoulda, woulda, coulda. This kind of phonetic reduction is a normal part of grammaticalisation, the process by which grammatical markers evolve out of full words. Given the famous unreliability of English spelling, and the fact that these reduced forms of have sound identical to reduced forms of the preposition of (as in a cuppa tea), writers can be forgiven for mistakenly inferring the following rule:

‘what you hear/say as [əv] or [ə], write as of’.

But if it’s just a spelling mistake, this use of ‘of’ is surprisingly common in respectable literature. The examples below (from this blog post documenting the phenomenon) are typical:

‘If I hadn’t of got my tubes tied, it could of been me, say I was ten years younger.’ (Margaret Atwood, The Handmaid’s Tale)

Couldn’t you of – oh, he was ignorant in his speech – couldn’t you of prevented it?’ (Hilary Mantel, Beyond Black)

Clearly neither these authors nor their editors make careless errors. They consciously use ‘of’ instead of ‘have’ in these examples for stylistic effect. This is typically found in dialogue to imply something about the speaker, be it positive (i.e. they’re authentic and unpretentious) or negative (they are illiterate or unsophisticated).


These examples look like ‘eye dialect’: the use of nonstandard spellings that correspond to a standard pronunciation, and so seem ‘dialecty’ to the eye but not the ear. This is often seen in news headlines, like the Sun newspaper’s famous proclamation “it’s the Sun wot won it!” announcing the surprise victory of the Conservatives in the 1992 general election. But what about sentences like the following from the British National Corpus?

“If we’d of accepted it would of meant we would have to of sold every stick of furniture because the rooms were not large enough”

The BNC is intended as a neutral record of the English language in the late 20th century, containing 100 million words of carefully transcribed and spellchecked text. As such, we expect it to have minimal errors, and there is certainly no reason it should contain eye dialect. As Geoffrey Sampson explains in this article:

“I had taken the of spelling to represent a simple orthographic confusion… I took this to imply that cases like could of should be corrected to could’ve; but two researchers with whom I discussed the issue on separate occasions felt that this was inappropriate – one, with a language-teaching background, protested vigorously that could of should be retained because, for the speakers, the word ‘really is’ of rather than have.”

In other words, some speakers have not just reinterpreted the rules of English spelling, but the rules of English grammar itself. As a result, they understand expressions like should’ve been and must’ve gone as instances of a construction containing the preposition of instead of the verb have:

Modal verb (e.g. must, would…) + of + past participle (e.g. had, been, driven…)

One way of testing this theory is to look at pronunciation. Of can receive a full pronunciation [ɒv] (with the same vowel as in hot) when it occurs at the end of a sentence, for example ‘what are you dreaming of?’. So if the word ‘really is’ of for some speakers, we ought to hear [ɒv] in utterances where of/have appears at the end, such as the sentence below. To my mind’s ear, this pronunciation sounds okay, and I think I even use it sometimes (although intuition isn’t always a reliable guide to your own speech).

I didn’t think I left the door open, but I must of.

The examples below from the Audio BNC, both from the same speaker, are transcribed as of but clearly pronounced as [ə] or [əv]. In the second example, of appears to be at the end of the utterance, where we might expect to hear [ɒv], although the amount of background noise makes it hard to tell for sure.

 “Should of done it last night when it was empty then” (audio) (pronounced [ə], i.e. shoulda)

(phone rings) “Should of.” (audio) (pronounced [əv], i.e. should’ve)

When carefully interpreted, writing can also be a source of clues on how speakers make sense of their language. If writing have as of is just a linguistically meaningless spelling mistake, why do we never see spellings like pint’ve beer or a man’ve his word? (Though we do, occasionally, see sort’ve or kind’ve). This otherwise puzzling asymmetry is explained if the spelling of in should of etc. is supported by a genuine linguistic change, at least for some speakers. Furthermore, have only gets spelt of when it follows a modal verb, but never in sentences like the dogs have been fed, although the pronunciation [əv] is just as acceptable here as in the dogs must have been fed (and in both cases have can be written ‘ve).

If this nonstandard spelling reflects a real linguistic variant (as this paper argues), this is quite a departure from the usual role of a preposition like of, which is typically followed by a noun rather than a verb. The preposition to is a partial exception, because while it is followed by a noun in sentences like we went to the party, it can also be followed by a verb in sentences like we like to party. But with to, the verb must appear in its basic infinitive form (party) rather than the past participle (we must’ve partied too hard), making it a bit different from modal of, if such a thing exists.

She must’ve partied too hard

Whether or not we’re convinced by the modal-of theory, it’s remarkable how often we make idiosyncratic analyses of the language we hear spoken around us. Sometimes these are corrected by exposure to the written language: I remember as a young child having my spelling corrected from storbry to strawberry, which led to a small epiphany for me, as that was the first time I realised the word had anything to do with either straw or berry. But many more examples slip under the radar. When these new analyses lead to permanent changes in spelling or pronunciation we sometimes call them folk etymology, as when the Spanish word cucaracha was misheard by English speakers as containing the words cock and roach, and became cockroach (you can read more about folk etymology in earlier posts by Briana and Matthew).

Meanwhile, if any readers can find clear evidence of modal of with the full pronunciation as  [ɒv], please comment below! I’m quite sure I’ve heard it, but solid evidence has proven surprisingly elusive…

Today’s vocabulary, tomorrow’s grammar

If an alien scientist were designing a communication system from scratch, they would probably decide on a single way of conveying grammatical information like whether an event happened in the past, present or future. But this is not the case in human languages, which is a major clue that they are the product of evolution, rather than design. Consider the way tense is expressed in English. To indicate that something happened in the past, we alter the form of the verb (it is cold today, but it was cold yesterday), but to express that something will happen in the future we add the word will. The same type of variation can also be seen across languages: French changes the form of the verb to express future tense (il fera froid demain, ‘it will be cold tomorrow’, vs il fait froid aujourd’hui, ‘it is cold today’).

The future construction using will is a relatively recent development. In the earliest English, there was no grammatical means of expressing future time: present and future sentences had identical verb forms, and any ambiguity was resolved by context. This is also how many modern languages operate. In Finnish huomenna on kylmää ‘it will be cold tomorrow’, the only clue that the sentence refers to a future state of affairs is the word huomenna ‘tomorrow’.

How, then, do languages acquire new grammatical categories like tense? Occasionally they get them from another language. Tok Pisin, a creole language spoken in Papua New Guinea, uses the word bin (from English been) to express past tense, and bai (from English by and by) to express future. More often, though, grammatical words evolve gradually out of native material. The Old English predecessor of will was the verb wyllan, ‘wish, want’, which could be followed by a noun as direct object (in sentences like I want money) as well as another verb (I want to sleep). While the original sense of the verb can still be seen in its German cousin (Ich will schwimmen means ‘I want to swim’, not ‘I will swim’), English will has lost it in all but a few set expressions like say what you will. From there it developed a somewhat altered sense of expressing that the subject intends to perform the action of the verb, or at least, that they do not object to doing so (giving us the modern sense of the adjective ‘willing’). And from there, it became a mere marker of future time: you can now say “I don’t want to do it, but I will anyway” without any contradiction.

This drift from lexical to grammatical meaning is known as grammaticalisation. As the meaning of a word gets reduced in this way, its form often gets reduced too. Words undergoing grammaticalisation tend to gradually get shorter and fuse with adjacent words, just as I will can be reduced to I’ll. A close parallel exists in the Greek verb thélō, which still survives in its original sense ‘want’, but has also developed into a reduced form, tha, which precedes the verb as a marker of future tense. Another future construction in English, going to, can be reduced to gonna only when it’s used as a future marker (you can say I’m gonna go to France, but not *I’m gonna France). This phonetic reduction and fusion can eventually lead to the kind of grammatical marking within words that we saw with French fera, which has arisen through the gradual fusion of earlier facere habet ‘it has to do’.

Words meaning ‘want’ or ‘wish’ are a common source of future tense markers cross-linguistically. This is no coincidence: if someone wants to perform an action, you can often be reasonably confident that the action will actually take place. For speakers of a language lacking an established convention for expressing future tense, using a word for ‘want’ is a clever way of exploiting this inference. Over the course of many repetitions, the construction eventually gets reinterpreted as a grammatical marker by children learning the language. For similar reasons, another common source of future tense markers is words expressing obligation on the part of the subject. We can see this in Basque, where behar ‘need’ has developed an additional use as a marker of the immediate future:

ikusi    behar   dut

see       need     aux

‘I need to see’/ ‘I am about to see’

This is also the origin of the English future with shall. This started life as Old English sceal, ‘owe (e.g. money)’. From there it developed a more general sense of obligation, best translated by should (itself originally the past tense of shall) or must, as in thou shalt not kill. Eventually, like will, it came to be used as a neutral way of indicating future time.

But how do we know whether to use will or shall, if both indicate future tense? According to a curious rule of prescriptive grammar, you should use shall in the first person (with ‘I’ or ‘we’) and will otherwise, unless you are being particularly emphatic, in which case the rule is reversed (which is why the fairy godmother tells Cinderella ‘you shall go to the ball!’). The dangers of deviating from this rule are illustrated by an old story in which a Frenchman, ignorant of the distinction between will and shall, proclaimed “I will drown; nobody shall save me!”. His English companions, misunderstanding his cry as a declaration of suicidal intent, offered no aid.

This rule was originally codified by John Wallis in 1653, and repeated with increasing consensus by grammarians throughout the 18th and early 19th centuries. However, it doesn’t appear to reflect the way the words were actually used at any point in time. For a long time shall and will competed on fairly equal terms – shall substantially outnumbers will in Shakespeare, for example – but now shall has given way almost entirely to will, especially in American English, with the exception of deliberative questions like shall we dance? You can see below how will has gradually displaced shall over the last few centuries, mitigated only slightly by the effect of the prescriptive rule, which is perhaps responsible for the slight resurgence of shall in the 1st person from approximately 1830 to 1920:

Until the eventual victory of will in the late 18th century, these charts (from this study) actually show the reverse of what Wallis’s rule would predict: will is preferred in the 1st person and shall in the 2nd, while the two are more or less equally popular in the 3rd person. Perhaps this can be explained by the different origins of the two futures. At the time when will still retained an echo of its earlier meaning ‘want’, we might expect it to be more frequent with ‘I’, because the speaker is in the best position to know what he or she wants to do. Likewise, when shall still carried a shade of its original meaning ‘ought’, we might expect it to be most frequent with ‘you’, because a word expressing obligation is particularly useful for trying to influence the action of the person you are speaking to. Wallis’s rule may have been an attempt to be extra-polite: someone who is constantly giving orders and asserting their own will comes across as a bit strident at best. Hence the advice to use shall (which never had any connotations of ‘want’) in the first person, and will (without any implication of ‘ought’) in the second, to avoid any risk of being mistaken for such a character, unless you actually want to imply volition or obligation.

Words apart: when one word becomes two

As any person working with language knows, the list of words from which we build our sentences is not fixed but in a state of constant flux. Words (or lexemes, in linguists’ terminology) are constantly being borrowed (such as ‘sauté’ from French), coined (such as ‘brexit’, a blend of ‘Britain’ and ‘exit’) or lost (such as ‘asunder’, a synonym for ‘apart’). This happens all the time. However, two further processes can alter the total number of entries in the dictionary of our language: occasionally, lexemes may merge, if two or more become one, or split, if one becomes two. These more exotic cases constitute a window into the fascinating workings of the grammar. In this post I will present the story of one of these splitting events. It involves the Spanish verb saber, from Latin sapiō.

The verb’s original meaning must have been ‘taste’ in the sense of ‘having a certain flavour’, as in the sentence “Marmite tastes awful”. At some point it also began to be used figuratively to mean ‘come to know something’, not only by means of the sense of taste but also for knowledge arrived at through the other senses. It is interesting that in the Germanic languages it seems to have been sight rather than taste that was traditionally used in the same way. Consider, for instance, the common use in English of the verb ‘see’ in contexts like “I see what you mean”, where it is interchangeable with ‘know’. Whether the choice of source verb can be explained by the differences between traditional Mediterranean and Anglo-Saxon cuisines I’d rather not suggest, for fear of deportation.

In any case, what must have been once a figurative use of the verb ‘taste’ became at some point the default way of expressing ‘know’. These are the two main senses of saber in contemporary Spanish and of its equivalents in most other Romance languages. The question I ask here is: do speakers of Spanish today categorize this as one word with two meanings? Or do they feel they are two different words that just happen to sound the same? There may be a way to tell.

In Spanish, unlike in English, a verb can take dozens of different forms. The shape of a verb changes depending on who is doing the action, whether the action is a fact or a wish, etc. Thus, for example, speakers of Spanish say yo sé ‘I know’ but tú sabes ‘you know’. They also use one form (the so-called ‘indicative’) in sentences like yo veo que tú sabes inglés ‘I see that you know English’ but a different form (the so-called ‘subjunctive’) in yo espero que tú sepas inglés ‘I hope that you know English’. The Real Academia Española, the prescriptive authority on the Spanish language, has ruled that, because saber is a single verb, it should have the same forms (sé, sabes etc.) regardless of its particular sense. Speakers, however, have trouble abiding by this rule, which is probably the reason why the need for a rule was felt in the first place. My native-speaker intuition, and that of other speakers of Spanish, is that the verb may have a different form depending on its sense:

Forms of Spanish saber (forms starting with sab– in light gray, forms starting with sep– in dark gray)

The most obvious explanation for how this change could happen is that, when the two main senses of saber drifted sufficiently far away from each other, speakers ceased to make the generalization that they were part of the same lexeme. When this happened, the necessity of having the same forms for the two meanings of saber disappeared. But why sepo?

Because cannibalism is on the wane (also in Spain), we hardly ever speak about how people taste. As a result, the first and second person forms of saber (e.g. the irregular sé) are only ever encountered by speakers with the meaning ‘know’. Because of this, they do not count as evidence for language users’ deduction of the full array of forms of saber. This meant that the first and second person forms of saber₂ ‘taste’, when needed (imagine someone saying sepo salado ‘I taste salty’ after coming out of the sea), had to be formed on the fly, on evidence exclusive to its sense ‘taste’ (i.e. the third person and impersonal forms):

Because of the evidence available to speakers, at first sight it might seem strange that this ‘fill-in-the-gaps’ exercise did not result in the apparently more regular 1SG indicative form sabo. This would have resulted in a straightforward indicative vs subjunctive distinction in the stem. The chosen form, however, makes more sense when one observes the patterns of alternation present in other Spanish verbs:

Verbs that have a difference in the stem between indicative and subjunctive in the third person forms (cab- vs quep-, or ca- vs caig-) overwhelmingly use the form of the subjunctive also in the formation of the first person singular indicative. This is a quirk of many Spanish verbs. It appears that, by sheer force of numbers, the pattern is spotted by native speakers and occasionally extended to other verbs which, like saber, look like they could well belong in this class.

In this way, the tiny change from sé to sepo allows us linguists to see that patterns like those of caber and caer are part of the grammatical knowledge of speakers, and are not simply learnt by heart for each verb. In addition, it gives us crucial evidence to conclude that, today, there are in Spanish not one but two different verbs whose infinitive form is saber. Much like the T-Rex in Jurassic Park, we linguists can sometimes only see some things when they ‘move’.

What happened to whom (and why)?

Wh- words like which, whom and why get a lot of knickers in a twist, as attested by this oatmeal comic on when to use who vs whom, or the age-old debate about the correct use of which vs that (on which see this blog post by Geoffrey Pullum). But in Old English the wh- words formed a complete and regular system which would have been easy to get the hang of. They were used strictly as interrogative pronouns – words that we use for asking questions like who ate all the pies? – rather than relative pronouns, which give extra information about an item in the sentence (Jane, who ate all the pies, is a prolific blogger) or narrow down the reference of a noun (women who eat pies are prolific bloggers). They developed their modern relative use in Middle English, via reinterpretation of indirect questions – in other words, sentences like she asked who ate all the pies, containing the question who ate all the pies?, served as the template for new sentences like she knew who ate all the pies, where who functions as a relative.

Who ate all the pies? They did.

Originally, the new relative pronoun whom (in its Middle English form hwām) functioned as the dative case form of who, used when the person in question is the indirect object of a verb or after prepositions like for. For direct objects, the accusative form hwone was used instead. So to early Middle English ears, the man for whom I baked a pie would be fine, while the man whom I baked in a pie would be objectionable (on grammatical as well as ethical grounds). Because nouns also had distinct nominative, dative and accusative forms, the wh- words would have posed no special difficulty for speakers. But as English lost distinct case forms for nouns, the pronoun system was also simplified, and the originally dative forms started to replace accusative forms, just as who is now replacing whom. This created a two-way opposition between subject and non-subject which is best preserved in our system of personal pronouns: we say he/she/they baked a pie, but I baked him/her/them (in) a pie.

Thus hwone went the way of hine, the old accusative form of he. Without the support of a fully-functioning case system in the nouns, other case forms of pronouns were reinterpreted. Genitive pronouns like my and his were transformed into possessive adjectives (his pie is equivalent to the pie of him, but you can no longer say things like I thought his to mean ‘I thought of him’). The wh- words also used to have an instrumental case form, hwȳ, meaning ‘by/through what?’, which became the autonomous word why.

Although him and them are still going strong, whom has been experiencing a steady decline. Defenders of ‘whom’ will tell you that the rule for deciding whether to use who or whom is exactly the same as that for he and him, but outside the most formal English, whom is now mainly confined to fixed phrases like ‘to whom it may concern’. For many speakers, though, it has swapped its syntactic function for a sociolinguistic one by becoming merely a ‘posh’ variant of who: in the words of James Harding, creator of the ‘Whom’ Appreciation Society, “those who abandon ‘whom’ too soon will regret it when they next find themselves in need of sounding like a butler.”