Sufficient conditions for a morphological relationship

The aim of this blog post is to propose sufficiency conditions for the subconscious establishment of a morphological relationship in a speaker’s lexicon.

Ford & Singh (2003) state that “morphology is the study of formal relationships between words”. The word is defined as possessing a phonological structure, a “category”, and a meaning. It is then stated that “any morphological relationship between a non-unique pair of words in a language can be described by a rule”. With this seemingly common-sense definition of morphology, it is argued that the notion of a morpheme is not useful. In my view, this definition of morphology also makes morphological relations arbitrary. A language speaker may relate any two words on the basis of similarity in phonological form, syntactic category, or semantic meaning, or in some combination of two or all three of these components. A language speaker may also NOT do this. The act of relating two words may be linguistic, but the factors which propel a speaker to relate two words are, apparently, not linguistic.

According to Ford & Singh (2003:25), the only relevant component to the study of morphological relations is the physical form of a word. Yet, their definition of morphology specifies “formal relationships”. If this definition of morphology is applied to words in a Parallel Architecture framework, the very interesting consequence is that formal relationships between words may occur in one of three components (Phonology, Semantics, Syntax), only one of which is physical.

What exactly is a morphological relation? Ford & Singh (2003) do not provide a definition even for a “formal” relation. Implicit in their analysis is the idea that two words may have a morphological relation if they share some phonological and semantic information, and if the information they do not share follows a general pattern visible across pairs of words in the lexicon.

According to Jackendoff (1975), the basic condition for two words to have a morphological relation is that “knowing one of them makes it easier to learn the other”. Jackendoff’s (1975) evaluation procedure may be summed up in the most basic possible terms: take two words, take their lexical entries, and work out which information the entries share. Jackendoff uses the term “redundancy” to describe shared information, but states that it is “obvious” that shared semantic information is not redundancy. This is because if two words share the same meaning, knowing one does not make it easier to learn the other, since there is no schema that would allow one to predict the phonology of a word from the semantics of another word (at least, not without the help of a third-party morphological relation – see Ramscar (2001)).

Yet, if we accept Ford & Singh’s (2003) definition of morphology, any shared information between two words which can be formalized could be considered redundancy. Jackendoff’s Parallel Architecture specifies three linguistic components of a word which can be formalized: Phonology, Syntax and Semantics. Hence, shared semantic information is redundancy.

Is the mere existence of redundancy a sufficient basis for a morphological relation? Should we propose a limit beneath which redundancy is just too small for a speaker to form a morphological relation subconsciously?¹

Since this blog has taken the Parallel Architecture viewpoint, let’s propose that redundancy in fewer than two linguistic components is insufficient for a morphological relation. Hence, there is no morphological relation in English between the verb trip and the noun trip, since the only redundant information is located in a single linguistic component, the Phonology. This leaves us with four combinations of redundancy which provide a sufficient condition for a morphological relation. Each of these will now be discussed.

1. Redundancy in Phonology and Semantics

An English example would be the words marry and marriage, which share the redundant phonological information /mæri/ and the redundant semantic information [MARRY]. The following morphological schema may be inferred from this morphological relation.

PHON: [ X ] <-> [ X [ dʒ ] ]

SEM: [ A ] <-> [ A ]

SYN: [ verb, trans ] <-> [ noun ]

The pair of words carry/carriage seems to follow this schema in the PHON and SYN components, but not SEM. In fact, the only redundant information in the pair carry/carriage is in the PHON component. The noun carriage does not refer to an event of the verb carry, but instead to an object of transport. Although one may superficially claim that a carriage “carries” its passengers, this is not an inherent feature in the SEM component of the word. (When a carriage is sitting in a garage, it carries nothing.) The conditions for a morphological relation are therefore not met, and so there is no (subconscious) morphological relation between the words carry and carriage. Note that, for the moment, this does not invalidate the morphological relation between marry and marriage.
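To make the condition concrete, here is a minimal computational sketch of the redundancy check. None of this comes from Ford & Singh or Jackendoff: the toy lexical entries, the semantic feature labels, and the overlap heuristic for each component (in particular the minimum length of a shared phonological substring) are all invented for illustration.

```python
def shared_phon(a, b, min_len=3):
    """Longest common substring of two phonological forms.
    min_len is an arbitrary threshold for 'enough' shared material."""
    best = ""
    for i in range(len(a)):
        for j in range(i + 1, len(a) + 1):
            if a[i:j] in b and len(a[i:j]) > len(best):
                best = a[i:j]
    return best if len(best) >= min_len else ""

def redundant_components(w1, w2):
    """Return the set of components in which two entries share information."""
    comps = set()
    if shared_phon(w1["phon"], w2["phon"]):
        comps.add("PHON")
    if w1["sem"] & w2["sem"]:
        comps.add("SEM")
    if w1["syn"] & w2["syn"]:
        comps.add("SYN")
    return comps

def morphologically_related(w1, w2):
    # Proposed sufficiency condition: redundancy in at least two
    # of the three components PHON, SEM, SYN.
    return len(redundant_components(w1, w2)) >= 2

trip_v   = {"phon": "trɪp",   "sem": {"STUMBLE"}, "syn": {"verb"}}
trip_n   = {"phon": "trɪp",   "sem": {"JOURNEY"}, "syn": {"noun"}}
marry    = {"phon": "mæri",   "sem": {"MARRY"},   "syn": {"verb", "trans"}}
marriage = {"phon": "mærɪdʒ", "sem": {"MARRY"},   "syn": {"noun"}}
carry    = {"phon": "kæri",   "sem": {"CARRY"},   "syn": {"verb", "trans"}}
carriage = {"phon": "kærɪdʒ", "sem": {"VEHICLE"}, "syn": {"noun"}}

print(morphologically_related(trip_v, trip_n))   # False: PHON only
print(morphologically_related(marry, marriage))  # True: PHON and SEM
print(morphologically_related(carry, carriage))  # False: PHON only
```

The trip/trip and carry/carriage pairs each overlap in a single component and so fall below the proposed threshold, while marry/marriage clears it.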

2. Redundancy in Phonology and Syntax

(1) was a relatively familiar account of morphological relations, given prior experience with morphological descriptions and frameworks. Beyond (1), the sufficiency conditions given earlier lead to the proposal of some startling morphological relations.

Let’s begin with the familiar: English words ending in -ing all share a morphological relation – playing, swimming, beginning, etc – since they have redundant information in the component SYN, [verb], and redundant information in the component PHON, /ɪŋ/. This provides a tremendously useful generalization for an English speaker, allowing one to form participles or gerunds from any novel verb root.

Now the unfamiliar: there is a morphological relation between the words carriage and garage. They are both nouns, and they both share the phonological information /ærɪdʒ/. This morphological relation is tremendously unhelpful to an English speaker, as the generalization it yields cannot assist in word-formation, nor in memorization, nor in lexical access. Not every noun ends in /ærɪdʒ/, and not every /ærɪdʒ/ word is a noun (e.g. disparage is a verb). The only use this morphological relation serves is in linguistic creativity – poetry or comedy – which is considered irrelevant to this study (see footnote 1).

3. Redundancy in Semantics and Syntax

Shared phonological information is at the centre of morphological studies by Jackendoff (1975), Booij (2010), and Ford & Singh (2003). These authors would say that a lexical relation without shared phonological information does not constitute a morphological relation. However, according to our sufficiency conditions, it does. In other words, the phenomenon of ontological relations in the lexicon is morphological, and may be generalized using the same sort of schema as the one given in (1).

For instance, there is a morphological relation between the English nouns bird and parrot. They share the redundant SYN information [ noun ], and the redundant SEM information [ BIRD ]. This relation is generalized in the following schema:

PHON: / X / <-> / Y /

SEM: [ A ] <-> [ A [ B ] ]

SYN: [ noun ]

This is essentially a schema describing hyperonymy. The X form is a category with the semantic content A. Since the Y form is a subcategory of X, the semantics of Y are simply the semantics of X (summed up as A) plus the specialized features of Y (summed up as B).
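The hyperonymy schema can likewise be sketched as a check on toy lexical entries. The IPA strings and semantic feature sets below are invented for illustration; the point is only that the check needs no phonological overlap at all.

```python
def hyperonym_related(x, y):
    # Under the schema, X and Y are related when Y's semantic
    # features properly include X's (A plus the extra features B)
    # and the two share a syntactic category; no PHON overlap
    # is required.
    return x["sem"] < y["sem"] and bool(x["syn"] & y["syn"])

bird   = {"phon": "bɜːd",  "sem": {"BIRD"},           "syn": {"noun"}}
parrot = {"phon": "pærət", "sem": {"BIRD", "PARROT"}, "syn": {"noun"}}

print(hyperonym_related(bird, parrot))  # True: parrot is a kind of bird
print(hyperonym_related(parrot, bird))  # False: the relation is asymmetric
```

Using a proper-subset test (`<`) rather than plain intersection captures the asymmetry of the schema: Y must contain everything in X plus something more.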

4. Redundancy in Phonology, Semantics and Syntax

When there is redundancy in all three components of language, you have a paradigm. Redundant information is shared in all three components in the English words walk, walked, walking, walks, etc. These redundancies are traditionally summed up as a paradigm, and the generalizations inferred may be referred to as inflectional morphology.


It is useful for a language speaker to infer generalizations from morphological relations established by redundancy in conditions (1), (3) and (4). (2) is not useful for subconscious linguistic activity. Therefore, assuming that the establishment of morphological relations and the inference of generalizations is a biologically real process in the acquisition and production of language, the sufficiency conditions should be revised as follows:

Sufficiency conditions for establishment of morphological relations:

1. Redundancy must occur in at least two of the three components of language: Phonology, Syntax and Semantics.

2. Redundancy must occur in the Semantics component.

The implication of this is that a speaker can only infer the -ing ending from its regular occurrence in semantically related words in the lexicon. If a speaker is only given the words hunting, losing and yawning, they will not derive the suffix -ing. However, if a speaker is given the words hunting, chasing and pursuing, then they will derive the suffix -ing. I am uncertain about this conclusion. However, Ramscar (2001) may be interpreted as supporting it, since that study found that speakers tended to infer different English past tense forms for novel words by analogy from semantic priming (e.g. when the word frink appears in a context where one would expect the word drink, English speakers will by analogy produce the past tense form frank instead of frinked).
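The revised conditions can be illustrated with a toy sketch in which a shared ending is only extracted when the word set also overlaps semantically. The IPA strings and semantic feature labels are invented; hunting/chasing/pursuing are given a shared hypothetical feature [PURSUE] purely for illustration.

```python
def common_suffix(forms):
    """Longest phonological ending shared by every form."""
    s = forms[0]
    while s and not all(f.endswith(s) for f in forms):
        s = s[1:]
    return s

def infer_suffix(entries):
    # Revised condition: redundancy must include the Semantics
    # component, so a shared ending is only extracted when all
    # the words overlap semantically.
    shared_sem = set.intersection(*(e["sem"] for e in entries))
    if not shared_sem:
        return None
    return common_suffix([e["phon"] for e in entries]) or None

semantically_unrelated = [
    {"phon": "hʌntɪŋ", "sem": {"HUNT"}},
    {"phon": "luːzɪŋ", "sem": {"LOSE"}},
    {"phon": "jɔːnɪŋ", "sem": {"YAWN"}},
]
semantically_related = [
    {"phon": "hʌntɪŋ",    "sem": {"HUNT", "PURSUE"}},
    {"phon": "tʃeɪsɪŋ",   "sem": {"CHASE", "PURSUE"}},
    {"phon": "pɜːsjuːɪŋ", "sem": {"PURSUE"}},
]

print(infer_suffix(semantically_unrelated))  # None: no shared semantics
print(infer_suffix(semantically_related))    # ɪŋ
```

Both word sets share the phonological ending /ɪŋ/, but under the revised conditions only the semantically overlapping set licenses the generalization.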


  1. The assumption here is that a line should be drawn between subconsciously derived morphological relations, which aid speech production, parsing and acquisition, and consciously derived morphological relations, which are employed for artistic purposes. A deeper assumption is that a line should be drawn between the psychology of communication and the psychology of creativity.


Booij, Geert. 2010. Construction morphology. New York: Oxford University Press.

Ford, Alan & Rajendra Singh. 2003. Prolegomena to a theory of non-Paninian morphology. In Singh & Starosta (eds.), Explorations in seamless morphology. New Delhi: Sage Publications. pp. 18-42.

Jackendoff, Ray. 1975. Morphological and semantic regularities in the lexicon. Language 51(3):639-671.

Ramscar, Michael. 2001. The role of meaning in inflection: why the past tense does not require a rule. Cognitive Psychology 45:45-94.


Codes of spoken communication

Nearly all studies in linguistics break down human speech into thousands of individual “languages”. This blog post investigates this statement, adopting the style of a stream of consciousness, while making little to no attempt to justify its claims or conclusions.

First of all, individual people can deliberately adapt their code to match the codes of the people around them. Individual people can also deliberately obscure their code from the people that surround them. Much of the processing involved is usually based on exposure to, and experience with, different codes in one’s lifetime; however, an individual person is also capable of pure invention.

In a setting like New Delhi, what was historically two codes, referable to as “English” and “Hindi”, has developed into one code, referable to as “a mixture of English and Hindi”. When in the presence of foreigners, an individual person from New Delhi may wish to match their code with the code of the foreigner in order to ease communication. If the foreigner wishes to use “English”, the New Delhi speaker must do one of two things: (1) they must learn the code “English” from scratch; (2) they must somehow extrapolate the code “English” from the code “a mixture of English and Hindi”. What seems to happen is a mixture of both. The lexicon and syntax of “English” are a fundamental topic in the education system of New Delhi – with this knowledge, a person from New Delhi can subtract the elements of “a mixture of English and Hindi” which are not “English”-like, and proceed accordingly.

The question follows: what are the elements of a code of linguistic communication? Clearly the most important elements are words: those elements that combine some recognizable phonetic structure with some recognizable meaning, such as a concept or the property of a concept. Two people engaged in some co-operative manual labour can get by using only words which both of them are familiar with, or words which one of them can easily learn, whose meaning refers to an object or action (or property of an object, or property of an action) that can be pointed to. Words can also express emotional reactions. A combination of words can express opinions.

When individual people get together and form a community, a consensus on the arrangement of words often arises. For instance, native people of South Holland generally agree that, when speaking to other natives, the second word that comes out of a speaker’s mouth should be a verb. However, breaking this “rule” does not hamper understanding. The arrangement of words in a speech code is merely a social construct of the community which uses that code. Thus, if an individual person wants to integrate into a foreign community, he/she must be familiar not only with the words preferred by the speech community, but also the preferred method of arranging these words. In northern and insular Europe, speech communities tend to have strict rules concerning the arrangement of words in their codes; whereas in most of the indigenous communities of Australia, speech communities prefer the arrangement of words to be free, and instead are strict about the selection of words which one can use.

The use of the word “code” is confusing, and should probably be replaced with a term such as “language”. One would be hard-pressed to find a linguist who is willing to apply the label “language” to the code “a mixture of English and Hindi”, which contains both words and rules of arrangement from the traditional codes “English” and “Hindi”. Yet, “English” contains words and rules of arrangement from “Old French”, and “Hindi” contains words and rules of arrangement from “Proto-Dravidian”. The term “language” has become an empty vessel for inserting one’s political ideals. Yet, the pervasive use of the term has confused linguists into coining the terms “monolingual” and “bilingual”. All human beings, if they can speak at all, can speak more than one speech code, since each community has a separate code, and each community has sub-communities, and each sub-community has separate codes, and so on. Furthermore, speech codes constantly change over time – speakers internalize the changes, but they also have a memory of what the speech code used to be. The number of speech codes known by a single person may be infinite.

It is a circular fallacy for a linguist to observe a community that speaks “a mixture of English and Hindi”, and then attempt to divide this code into two different codes, “English” and “Hindi”. (Even worse, if this linguist claims that the speech community is speaking two “languages”.) Speakers of “a mixture of English and Hindi” only make the distinction between “English” and “Hindi” when they are speaking to outsiders – that is, they adopt different codes, which have only a superficial link to the code they use with their neighbours and family members.

The view given earlier was that two codes, such as “English” and “Hindi”, may be combined, but that this combination is uni-directional. That is, if a person in New Delhi wants to speak “English” or “Hindi”, he/she must learn the two codes from scratch, albeit with some analogous input from his/her native code “a mixture of English and Hindi”. Much is often made in the linguistic world about the mass language extinction which is predicted, under the current empty and false definition of a language. In fact, new speech codes arise all the time, as codes are constantly being combined in the minds of individual speakers. An important thing to note, from the above example, is that combining codes does not result in the death of the two original codes. Rather, their words and arrangement features are copied into a new code; the original codes remain in the memories of the creators, and may live on as the creators speak to outsiders or teach “language” to their children. Eventually, if one of the original codes falls out of use, then aspects of its programming will remain in the new codes. Gradually, as the new codes are copied again and again into new combinations, those aspects of the original code will be copied less and less, until finally its traces are barely detectable in contemporary speech.

Comparative linguistics may uncover traces of ancient lost codes. However, this model implies that family trees follow an upward order of two, where every code has two parents, four grandparents, eight great-grandparents, sixteen great-great-grandparents, and so on. Current models of historical linguistics show branching trees of languages leading up to a single ancestor. Under the code-combination model, this is clearly false. In light of reality, the upwards-order-of-two family tree is also false, since metaphorical “inbreeding” is bound to occur. Nevertheless, Proto-Indo-European is clearly only one of potentially hundreds of ancestors of any one of the speech codes found in Europe, Persia and northern India. The fact that traces of Proto-Indo-European are the most prominent of any of the ancient ancestors speaks only for social and cultural factors in the development of Europe.

Much more may be written about this code-combination model of language, which, despite being in its primeval stages in this blog post, is probably not the first proposal of its kind in general linguistics. Nevertheless, this blog post concludes by asserting these opinions, which have guided the model, without cementing them as empirical facts:

1) Linguistics is the study of coded forms of verbal communication. These are called speech codes.

2) Whatever the same speakers use regularly for communication in the same situations for the same purposes, this is one single speech code.

3) Speech codes are not constrained by human cognition, but rather by cultural norms and expectations.

4) Two people in the same speech community cannot speak the same speech codes. Speech codes are inferred, transformed, combined and lost in the minds of individual speakers. Speech communities merely reflect an attempt between people to converge their codes. This implies that a speaker does not learn a language – he/she attempts to copy the code which appears to be shared by the speech community into which he/she is attempting to integrate. These principles apply to infants as much as to adults.
