Bibliographical
Pre-conference version of a plenary session at Depling2, Prague, August 2013.
Abstract
Suppose we assume that language is a part of cognition, and therefore has similar properties to the rest of cognition – memory for people and events, categorization, inference, attention and so on. In that case, we may also assume that we use the full range of our general cognitive abilities in learning and processing language. These abilities include at least the following:
- Complex networks – for example, we recognise extremely complex networks of social relations, with classified asymmetrical relations (e.g. ‘father’), multiple roots (e.g. x is child of x’s mother as well as of x’s father), and even mutual relations (e.g. x and y are each other’s brothers).
- Default inheritance – we can generalise while also allowing exceptions; e.g. we generalise across all birds while allowing for exceptions such as penguins. Default inheritance applies across the ‘is-a’ relation (e.g. ‘penguin is-a bird’).
- Definition by network – we define concepts by their network of relations to other concepts; e.g. ‘bird’ is defined in relation to ‘wing’, ‘feather’, ‘fly’ and so on. Concepts have no other ‘content’. (A toy sketch of these three abilities is given after this list.)
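To make these three abilities concrete, here is a minimal Python sketch of a toy concept network. It assumes a single ‘is-a’ link per concept, and the class, node and relation names are invented for illustration; it is not Word Grammar's own formalism.

```python
# A toy concept network: concepts are defined only by their labelled relations,
# and properties are inherited by default along the 'is-a' link, with local
# values acting as exceptions. (Names here are illustrative, not Word Grammar's.)

class Concept:
    def __init__(self, name, isa=None):
        self.name = name
        self.isa = isa            # the 'is-a' link (one parent, for simplicity)
        self.relations = {}       # a concept's only 'content': labelled relations

    def set(self, relation, value):
        self.relations[relation] = value

    def get(self, relation):
        """Default inheritance: use a locally stored value if there is one,
        otherwise inherit the default up the 'is-a' chain."""
        if relation in self.relations:
            return self.relations[relation]
        return self.isa.get(relation) if self.isa else None

bird = Concept("bird")
bird.set("covering", "feathers")
bird.set("can fly", True)           # the default for birds

penguin = Concept("penguin", isa=bird)
penguin.set("can fly", False)       # the exception overrides the default

print(penguin.get("covering"))      # feathers (inherited by default)
print(penguin.get("can fly"))       # False (the exception wins)
```

The exceptional value stored on ‘penguin’ pre-empts the default inherited from ‘bird’, while everything else is inherited unchanged; neither concept has any content beyond its relations to other concepts.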
The aim of my lecture will be to show how these assumptions affect our view of syntactic structure. The conclusion will on the whole support the Word Grammar version of dependency grammar, but even Word Grammar needs some changes.
- Notation: If all of cognition, including language, is a network, then we need network notation – arrows rather than stemmas.
- Wh-words: The network notation is supported by the mutual dependencies found with wh-words; e.g. in Who came, each word depends on the other (see the first sketch after this list).
- Dependency structure or phrase structure? If cognition allows complex networks, then it must be wrong to reject word-word relations, as phrase-structure theory does. On the other hand, for the same reason it would also be wrong to rule out phrases, as ‘pure’ dependency theory does; phrases seem to be needed for some ‘edge’ phenomena such as Welsh mutation.
- Word types and word tokens: Word tokens must be distinct concepts from the types with which they’re associated, because they have different properties. But for the same reason, each dependent must create a distinct token of the parent word, which combines distinct syntax with a distinct meaning. For instance, in Cats purr, we must recognise three separate concepts for the word purr: the word type, the first token recognised, and then the sub-token which is modified by cats to mean ‘cats purr’ (see the second sketch after this list). The result is an analysis which is remarkably similar to a phrase structure, and it is fully supported by examples such as typical French house, which imply some kind of ‘phrasing’.
- Raising and extraction: The ideas about word tokens and sub-tokens are further supported by the complex patterns found in raising (e.g. It was raining) and extraction (e.g. Who did you invite?). They explain why in each case the ‘higher’ dependency wins: it creates a sub-token of the ‘normal’ dependent token, and by default inheritance the sub-token wins.
- The raising principle: The general principle behind these patterns (the ‘Raising principle’) is just a learned generalisation, so it too allows exceptions, such as German Partial VP Fronting (e.g. Eine Concorde gelandet ist hier nie. ‘A Concorde landed has here never.’ = ‘A Concorde has never landed here.’).
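As a first sketch, relating to the points about notation and wh-words, dependencies can be recorded as labelled arrows between word nodes. The relation labels below are only illustrative assumptions, but the structure shows why a network, unlike a stemma, has no difficulty with the mutual dependency in Who came.

```python
# Dependencies as labelled arrows between word nodes (illustrative labels only).
# A network, unlike a tree-shaped stemma, can contain a cycle, so the two words
# of "Who came" can each depend on the other.

dependencies = [
    # (parent, relation, dependent)
    ("came", "subject",    "who"),   # 'who' depends on 'came' as its subject
    ("who",  "complement", "came"),  # and 'came' in turn depends on 'who'
]

def dependents(word, arcs):
    """All labelled arrows pointing out of a given word."""
    return [(relation, dep) for parent, relation, dep in arcs if parent == word]

print(dependents("came", dependencies))  # [('subject', 'who')]
print(dependents("who", dependencies))   # [('complement', 'came')]
```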
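As a second sketch, relating to the point about word types and tokens, the type/token/sub-token chain for Cats purr can be modelled with ‘is-a’ links and default inheritance of meaning. Again the class and attribute names are invented for illustration.

```python
# Type, token and sub-token for "Cats purr" (names invented for illustration).
# Each token 'is-a' instance of the concept above it, and attaching a dependent
# creates a further sub-token combining distinct syntax with a distinct meaning.

class WordConcept:
    def __init__(self, form, isa=None, meaning=None):
        self.form = form
        self.isa = isa               # the concept this one 'is-a'
        self.meaning = meaning       # a locally stored meaning overrides inheritance

    def get_meaning(self):
        # default inheritance of meaning up the 'is-a' chain
        if self.meaning is not None:
            return self.meaning
        return self.isa.get_meaning() if self.isa else None

# The stored word type:
purr_type = WordConcept("purr", meaning="'purr'")

# The token recognised when the sentence is heard:
purr_token = WordConcept("purr", isa=purr_type)

# The sub-token created when 'cats' is attached as subject,
# with its own, more specific meaning:
purr_with_cats = WordConcept("purr", isa=purr_token, meaning="'cats purr'")

print(purr_token.get_meaning())      # 'purr'      (inherited from the type)
print(purr_with_cats.get_meaning())  # 'cats purr' (the sub-token's own meaning)
```

This chain of ever more specific tokens is what makes the resulting analysis look remarkably like a phrase structure.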