3rd Conference Abstracts
University of Washington
fjn@u.washington.edu
Abstract
Most linguistic research in this century has been guided by a key assumption, namely, that of ‘uniformitarianism’. In a nutshell, uniformitarianism embodies the idea that all languages are, and always have been, cut from the same mold. The assumption has both political and methodological implications (for discussion, see ), though in this paper I will focus only on the latter. If uniformitarianism is right, then the grammatical theorist and the researcher into the evolution of grammar are free to ignore a host of factors that might (a priori) be thought to be relevant to their tasks. Among these are sociocultural facts about the speakers of the languages under investigation and the historical periods in which the languages are or were spoken.
There are, however, two ways that uniformitarianism might be mistaken (for full discussion, see : ch. 6). Let's call the first 'weak-non-u'. In this scenario, the functional forces responsible for the observed properties of language (and the correlations among them) have remained constant throughout human history, but they are, so to speak, 'lopsided'. That is, they are propelling language in a particular overall direction as far as its distribution of typological features is concerned. The second I will call 'strong-non-u'. In this scenario, the functional forces themselves have changed indeterminately throughout human history. Such could be the case, for example, if there is, contrary to the mainstream view, a non-accidental correlation between 'purely' grammatical features and aspects of culture, climate, and so on.
In this paper I will first review the evidence for both weak-non-u and strong-non-u and then discuss the implications for studies of language evolution of the possibility that both versions of uniformitarianism are mistaken.
The idea of 'lopsided functional forces' and weak-non-u is explicit in much of the work that posits a general, mostly unidirectional 'drift' from OV order to VO order (; ). In a rather different way, weak-non-u is implicit in ), where it is suggested that typologically rare features are concentrated in languages with small numbers of speakers. The most extensive marshalling of evidence for strong-non-u is found in ), a book that argues that less complex cultures tend to have more complex deixis systems.
The farther back we go in historical time (and the closer we get to the 'event' that created true human language), the more plausible become both versions of non-uniformitarianism and the more dramatic their probable effects. Take, for example, parsing-dictated grammatical consequences such as the principle of subjacency or the statistical correlation between the order of grammatical relations and adpositionality. Would they have been manifest in 'early human language'? It is not obvious that they would have been. One can easily imagine that in the historical infancy of human language the influence of parsing would have been submerged by more pressing functional needs, and that subordination would have been so rare that principles such as subjacency could not have emerged (note that it is claimed that today's preliterate societies use fewer subordinate clauses than do literate ones: ; ).
I go on to demonstrate that the great majority of published work in language origins and evolution presupposes both versions of uniformitarianism. Uniformitarianism is implicit, I would say, in the debate over the degree to which grammar is innate, and therefore over what a theory of the biological evolution of language has to explain. For example, as I understand them, the computational simulations in work such as ) take uniformitarianism for granted, as do more ‘catastrophic’ scenarios for the emergence of grammar such as ) and ). The last part of the paper is a general discussion of the extent to which the central conclusions of such work might be maintained in the light of the probable incorrectness of a central assumption underlying it.
Conference site: http://www.infres.enst.fr/confs/evolang/