4. Generative linguistics
Problem. The problem evolved from research done within the American School of Descriptive Linguistics. Having described the phonology and morphology of languages, the descriptivists attempted to extend the same methodologies to the description of syntax. The failure of this attempt demonstrated the need for other formal methods of syntactic analysis. Such methods, introduced by Zellig Harris, aimed to provide an account of grammatically correct utterances. This account, however, turned out to be endless. Moreover, it covered only ‘old’, actual utterances that already existed in the language, and it did not explain how new utterances emerge. Therefore, Harris’s student Noam Chomsky suggested in the 1950s that a grammar should be more than a description of ‘old’ utterances; it should also encompass possible future utterances. Chomsky thus shifted attention away from the detailed description of actual utterances and began asking questions about the nature of the system that produces this output.
Hypothesis. According to Chomsky’s hypothesis, language has a special mechanism for generating an unlimited number of utterances out of a limited number of initial units. Anyone who knows a language must have internalized (retained in the mind) a set of rules which specify the sequences permitted in that language, i.e. prescribe its syntactic, or combinatory, structures. A mental grammar consisting of a set of statements, or rules, that specify which sequences of a language are possible and which are impossible is a generative grammar. Such a grammar includes a finite number of rules, yet it is capable of generating an infinite number of sentences, because the rules can refer back to one another repeatedly (a property known as recursion). Under a generative approach, human beings are assumed to be prewired for language, beginning life not with a blank slate but rather with a linguistic template or blueprint that they flesh out upon exposure to specific linguistic data.
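To make the idea of a finite rule set generating an unbounded output concrete, the sketch below implements a toy generative grammar in Python. The categories, rules and vocabulary are invented for illustration and are not drawn from Chomsky’s own formalism; the point is only that one recursive rule (here, the optional relative clause inside NP) lets a handful of rules produce sentences without any fixed upper bound.

```python
import random

# A toy generative grammar: a finite set of rewrite rules.
# The NP rule can reintroduce a VP (and hence another NP),
# so the grammar is recursive and its output is unbounded.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],   # recursive option
    "VP": [["V", "NP"], ["V"]],
    "N":  [["student"], ["book"], ["teacher"]],
    "V":  [["read"], ["saw"], ["knows"]],
}

def generate(symbol="S"):
    """Expand a symbol by randomly chosen rules until only words remain."""
    if symbol not in RULES:                  # terminal word
        return [symbol]
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))
# e.g. "the student that knows the teacher read the book"
```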
Objective. The task of linguistics was to find out what rules underlie the creativity of language, and to describe language as an ‘internalized’ (mental) phenomenon.
Data. Since, for Chomsky, the inborn ‘language faculty’ is one and the same for all humans and all languages share a common foundation, he maintained that the study of the syntax of a single language could be enough to form an idea of the nature of language in general. Thus, the data analyzed were the syntactic structures of English.
Method. The syntactic structures were modeled with tree diagrams that demonstrated the nature of the initial ‘deep (internalized) structures’ and their ‘transformations’, or possible changes, which were then reflected in the ‘surface (externalized) structures’ of language. It should be noted that the initial methodologies developed by the generative paradigm in the 1950s were later reconsidered by Chomsky himself, and not all of them are applied at present.
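By way of illustration only, a phrase-structure tree of the kind shown in such diagrams can be encoded as a nested structure. The bracketing below for ‘the student read the book’ is a standard constituency analysis; the Python representation itself is an assumed convenience, not part of the original method.

```python
# A hypothetical encoding of a tree diagram as nested tuples:
# (category, children...); leaves are plain words.
tree = ("S",
        ("NP", ("Det", "the"), ("N", "student")),
        ("VP", ("V", "read"),
               ("NP", ("Det", "the"), ("N", "book"))))

def leaves(node):
    """Read the surface string back off the tree (left-to-right leaves)."""
    if isinstance(node, str):
        return [node]
    category, *children = node
    return [word for child in children for word in leaves(child)]

print(" ".join(leaves(tree)))   # -> "the student read the book"
```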
Result. The formal models used in syntactic analysis helped make linguistics a precise discipline bordering on formal logic. Such formal models were applicable in computer technologies, thus bringing together linguistics and computer science. The question about the “internalized”, or mental, forms of language linked linguistics to psychology, and the integration of the two disciplines resulted in the emergence of psycholinguistics (1951). In general, linguistics has become a major social science of direct relevance to psychologists, neurologists, anthropologists, philosophers and others.
Generative linguistics has undergone considerable changes since its birth in 1957 when Chomsky’s book Syntactic Structures appeared.
In the 1960s, the emphasis was on the distinction between a deep structure, which conveyed the semantic properties of a sentence, and a surface structure, which supplied its pronunciation. In the 1970s, the emphasis shifted to finding the set of transformations used to derive the various syntactic patterns of each language. The list included transformations for passives, yes–no questions, and wh-questions (‘Which book did the student read?’). While the number of sentences in a language is potentially infinite, it was assumed that the number of transformations could be reduced to a finite set. However, it soon became clear that there were many more transformations than could reasonably be enumerated. Emphasis then shifted to narrowing down the transformations by type. Two general types were established: noun phrase (NP) movement, which moved a phrase within a sentence, as in passives, and wh-movement, which moved a phrase outside a sentence to a presentential landing site, as in wh-questions. Eventually, these two transformations were collapsed into one general transformation, ‘move alpha’, which allowed movement of any constituent anywhere, although subject to certain constraints.
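As a rough illustration of what a movement transformation does, the sketch below fronts a wh-phrase from its underlying position to a sentence-initial landing site. It is a deliberate simplification: auxiliary insertion and inversion (‘did the student read’) are ignored, and the function and its parameters are invented for the example, not taken from the theory itself.

```python
# A toy illustration of wh-movement: the wh-phrase moves from its
# underlying position to a sentence-initial landing site.
# Do-support and subject-auxiliary inversion are deliberately ignored.
def wh_move(words, wh_phrase_len=2):
    """Front a wh-phrase assumed to sit at the end of the clause."""
    wh_phrase = words[-wh_phrase_len:]    # e.g. ["which", "book"]
    remainder = words[:-wh_phrase_len]    # e.g. ["the", "student", "read"]
    return wh_phrase + remainder

print(" ".join(wh_move(["the", "student", "read", "which", "book"])))
# -> "which book the student read"
```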
In the 1980s, the model was flipped on its head. Rather than looking for the possible structures in a language, the emphasis shifted to determining the impossible structures. This represented a significant evolution in the conceptual model: whereas the number of patterns possible in a language is potentially infinite, the number of constraints is thought to be very small. It also made it possible to shift the emphasis to universal aspects of language, rather than simply to the properties that an individual language possessed. The grammar was now taken to consist not of a set of rules, but rather of a set of autonomous modules that interacted with each other. One conception of such a model became known as government-and-binding theory (GB), named after two of its central subtheories: government, an abstract syntactic relation applicable, among other things, to the assignment of case; and binding, which deals chiefly with the relationships between pronouns and the expressions with which they are co-referential. GB was the first theory to be based on the principles-and-parameters model of language, which also underlies the later developments of the minimalist program.
The overall conceptual model became known as the principles-and-parameters model, since it considered language to consist of a set of wired-in principles that all languages shared, along with a set of parameters that they also shared, but whose values varied cross-linguistically and needed to be set upon exposure to language-particular data.

The principles-and-parameters approach suggests that children are born with a Universal Grammar (UG), which means they are (unconsciously) anticipating those features that are common to all languages (the principles), as well as a limited range of options for those things that differ among languages (the parameters). Upon exposure to actual input from a given language, children are able to ‘decide’ which sort of language they have encountered. So, for example, some languages have basic subject–verb–object organization in which complements are attached to the right of the heads of phrases (thus objects follow verbs, relative clauses follow noun heads, and noun phrases follow prepositions), while other languages are subject–object–verb, where the reverse order of complements is found. A child exposed to a language of the first type need only process a simple structure (say, one with a verb followed by an object) and it will trigger the expectation that all the other head-complement structures will be in the same order. When all the open parameters have been set, the child possesses the ‘core’ grammar of the specific language he or she is exposed to. At the same time, however, the child has also been acquiring those aspects of the language that are not anticipated by UG, using experience-dependent learning. These aspects are often referred to collectively as the ‘peripheral’ grammar. Some researchers in this paradigm have assumed that all principles and parameters are operational, or sensitive to the input, from the beginning of life; others have suggested that at least some emerge only with maturation.
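The logic of parameter setting can be pictured with a small sketch: one head-complement datum fixes a head-direction parameter, which then generalizes to other phrase types. The function name, the categories and the example sentence are hypothetical simplifications of the account given above.

```python
# A toy sketch of parameter setting: a single head-complement example
# is enough to fix the head-direction parameter, which then predicts
# the order for every other head-complement pair.
def set_head_direction(head, complement, utterance):
    """Return 'head-initial' or 'head-final' from one observed phrase."""
    if utterance.index(head) < utterance.index(complement):
        return "head-initial"   # verb before object (English-like)
    return "head-final"         # object before verb (Japanese-like)

# One English-like datum: the verb precedes its object.
parameter = set_head_direction(
    "read", "book", ["the", "child", "read", "the", "book"])
print(parameter)   # -> "head-initial"

# The single setting then generalizes: prepositions before noun phrases,
# noun heads before relative clauses, and so on.
```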
In the 1990s, the emphasis turned to making the model even simpler conceptually. The new approach, called minimalism, assumed a much more limited role for the syntactic component. It was now seen as a computational device that simply checked that sentences were formed correctly. All morphology was assumed to be attached directly in the lexicon; the syntactic component then checked that the features on the words matched. If so, the derivation was said to converge; otherwise, it crashed. Thus, the sentence ‘The student enjoys the book’ would be acceptable, since ‘student’ and ‘enjoys’ are both third-person singular, whereas ‘The student enjoy the book’ would be ungrammatical, as the number features on the noun and the verb do not match. The formation of sentences was now assumed to occur by means of a few basic operations, such as merge, used to generate basic declarative sentences, and move, used to derive patterns such as passives and questions.
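A minimal sketch of this checking idea, using the example above, might look as follows; the feature representation and the ‘converges’/‘crashes’ labels are simplifications for illustration, not the actual minimalist machinery.

```python
# A hypothetical sketch of feature checking: words carry person/number
# features from the lexicon, and the syntactic component merely checks
# that the subject's features match the verb's agreement features.
def derivation(subject_features, verb_features):
    """Return 'converges' if agreement features match, else 'crashes'."""
    keys = ("person", "number")
    if all(subject_features[k] == verb_features[k] for k in keys):
        return "converges"
    return "crashes"

student = {"person": 3, "number": "sg"}
enjoys  = {"person": 3, "number": "sg"}   # third-person singular agreement
enjoy   = {"person": 3, "number": "pl"}   # plural agreement (toy assumption)

print(derivation(student, enjoys))  # converges: "The student enjoys the book"
print(derivation(student, enjoy))   # crashes:   "*The student enjoy the book"
```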
Another conceptual model that developed during this decade was optimality theory (OT), which suggests that the observed forms of a language arise from the interaction between conflicting constraints, and that variation among languages depends on different rankings of the same universal constraints. OT thus differs from other approaches, which typically use rules rather than ranked constraints.
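The evaluation idea behind OT can be sketched as follows: all candidates are scored against the same constraints, and the ranking alone decides the winner, so re-ranking models cross-linguistic variation. The constraint names below are modeled on common OT constraints, but the input form, candidates and scoring functions are toy assumptions.

```python
# A minimal, hypothetical sketch of OT-style evaluation: candidates are
# compared by their violation profiles, read off in ranking order
# (lexicographic comparison), so the highest-ranked constraint decides
# first and lower-ranked constraints only break ties.
INPUT = "pat"

def no_coda(cand):   # syllables should not end in a consonant
    return sum(1 for syll in cand.split(".") if syll[-1] not in "aeiou")

def dep_io(cand):    # do not insert segments
    return max(0, len(cand.replace(".", "")) - len(INPUT))

def max_io(cand):    # do not delete segments
    return max(0, len(INPUT) - len(cand.replace(".", "")))

def winner(candidates, ranking):
    """Pick the candidate with the best profile on the highest-ranked
    constraint, ties resolved by lower-ranked constraints."""
    return min(candidates, key=lambda c: tuple(con(c) for con in ranking))

candidates = ["pat", "pa", "pa.ta"]   # faithful, deletion, epenthesis
print(winner(candidates, [max_io, no_coda, dep_io]))   # -> "pa.ta"
print(winner(candidates, [max_io, dep_io, no_coda]))   # -> "pat"
```

The two calls use the same constraints in different rankings and pick different winners, which is how re-ranking is meant to model variation among languages.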
While the conceptual details of the generative grammar model have changed greatly over half a century, the basic underlying tenet, that language is a species-specific property for which human beings come prewired, has remained constant. Research currently being conducted intensively throughout the world is expected to yield new insights into the specific shape of the grammar, while remaining true to the model’s belief in a wired-in blueprint.
(+ Aitchison, pp. 25-27)
