3rd Conference
The Evolution of Language
April 3rd-6th, 2000

Abstracts


Modeling discourse complexity

Ichiro Igari & Takashi Ikegami

Institute of Physics, The Graduate School of Arts and Sciences
University of Tokyo
3-8-1 Komaba, Meguro-ku, Tokyo 153-8902
{igari, ikeg}@sacral.c.u-tokyo.ac.jp

Abstract

Here we attempt to unify the study of discourse analysis with that of syntax analysis. There is a tradition, begun by Chomsky, of taking syntax and semantics to be independent of each other and of analyzing syntax as a closed formal system. Discourse analysis, on the other hand, has focused more on the dynamic and open-ended aspects of a language system, such as entrainment and the bifurcation of context flow during conversation. The unification of the two subjects is certainly required now (Steels, 1998).

Our approach towards a unified theory is based on dynamical systems theory and simulation. We propose coupled dynamical recognizers as a candidate model for studying discourse complexity theoretically. A dynamical recognizer is a class of recurrent neural network capable of mimicking some formal language systems (Pollack, 1995). At the same time, the dynamical recognizer has a rich structure as a novel dynamical system. It is now widely used as a basis for robot navigation (Tani and Fukumura, 1994) and natural language processing (Elman, 1995). The characteristic feature of the dynamical recognizer is not its computational ability but rather the dynamic way in which it perceives and manipulates a given data set. Owing to the chaotic attractors that can arise in dynamical recognizers, we can formalize a language system not as a rigid formal system but as an autonomous, evolving system. Coupling such dynamical recognizers adds a new level of complexity (Ikegami and Taiji, 1998, 1999; Taiji and Ikegami, 1999). Our perspective has a close connection with recent developments in cognitive linguistics by Langacker (1987, 1991) and Lakoff (1987). In particular, Langacker's maximalism and his way of taking a growing structure of abstraction/extension networks as "syntax" have stimulated our approach.
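To make the notion concrete, the core of a dynamical recognizer can be sketched as a recurrent state map: each input symbol pushes a hidden state through a nonlinear update, and the recognizer's "perception" of a sequence is the trajectory this state traces. The sketch below is illustrative only, not the authors' implementation; the class name, state size, and random (untrained) weights are our assumptions.

```python
import math
import random

random.seed(0)

class DynamicalRecognizer:
    """Minimal sketch of a dynamical recognizer in the sense of
    Pollack (1995): each input symbol drives the hidden state through
    a nonlinear map h_{t+1} = tanh(W h_t + V[symbol]). Weights here
    are random, i.e. untrained, purely to show the dynamics."""

    def __init__(self, n_symbols, n_hidden=4):
        self.n_hidden = n_hidden
        # recurrent weights W (hidden -> hidden) and per-symbol input
        # weights V (one weight row per vocabulary symbol)
        self.W = [[random.uniform(-1, 1) for _ in range(n_hidden)]
                  for _ in range(n_hidden)]
        self.V = [[random.uniform(-1, 1) for _ in range(n_hidden)]
                  for _ in range(n_symbols)]

    def step(self, state, symbol):
        # one state update: h' = tanh(W h + V[symbol])
        return [math.tanh(sum(self.W[i][j] * state[j]
                              for j in range(self.n_hidden))
                          + self.V[symbol][i])
                for i in range(self.n_hidden)]

    def trajectory(self, symbols):
        # run a symbol sequence from the zero state and record the
        # whole state trajectory (this is what "perception" means here)
        state = [0.0] * self.n_hidden
        states = [state]
        for s in symbols:
            state = self.step(state, s)
            states.append(state)
        return states
```

Because the update is a bounded nonlinear map, iterating it can produce the attractor structure (including chaotic attractors) that the text relies on; acceptance of a string would be read off from a region of the final state, which a rigid automaton cannot deform the way this map can.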

In the present model, as an initial setup, each dynamical recognizer is trained separately on a given set of words to learn the syntax behind them. A single dynamical recognizer is known to learn a given set of words not syntactically but "semantically". To put it another way, words are not categorized alphabetically or by criteria such as living/non-living; rather, they are learned as a set of elements that constitute the given context. For example, words such as cats, cheese, and mouse come to have similar internal state patterns.

We then study a conversation situation in which two agents speak to each other, each predicting, from the given context, what the other speaker expects to hear in the next turn. The prediction is based on the agent's own dynamical recognizer. Namely, in each conversation step, agents update their dynamical recognizer structures to mimic the previous behavior of the other agent. Since each agent's expectations perturb the other agent's dynamical recognizer mutually and indirectly, the two recognizers tend to change their structures over time. We argue that the full complexity of the discourse pattern is generated by this mutually predicting dynamics.
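The turn-taking loop described above can be sketched as follows. This is a toy stand-in for the authors' model: the agent structure, the delta-rule readout update (in place of full recurrent-network training), and all sizes and learning rates are our assumptions, chosen only to show the shape of the mutual-prediction dynamics.

```python
import math
import random

random.seed(1)

N_SYM, N_HID = 3, 4  # toy vocabulary and hidden-state sizes

def make_agent():
    # each agent carries recurrent weights W, per-symbol input
    # weights V, a linear readout U, and a hidden state h
    rnd = lambda r, c: [[random.uniform(-0.5, 0.5) for _ in range(c)]
                        for _ in range(r)]
    return {"W": rnd(N_HID, N_HID), "V": rnd(N_SYM, N_HID),
            "U": rnd(N_HID, N_SYM), "h": [0.0] * N_HID}

def step(agent, symbol):
    # both agents hear every utterance: drive the hidden state
    h = agent["h"]
    agent["h"] = [math.tanh(sum(agent["W"][i][j] * h[j]
                                for j in range(N_HID))
                            + agent["V"][symbol][i])
                  for i in range(N_HID)]

def predict(agent):
    # the agent's prediction of the next symbol: argmax of a
    # linear readout of its current state
    scores = [sum(agent["U"][j][k] * agent["h"][j] for j in range(N_HID))
              for k in range(N_SYM)]
    return max(range(N_SYM), key=lambda k: scores[k])

def adapt(agent, heard, lr=0.1):
    # nudge the readout toward the symbol actually heard; a crude
    # stand-in for retraining the recognizer to mimic the other agent
    for k in range(N_SYM):
        target = 1.0 if k == heard else 0.0
        score = sum(agent["U"][j][k] * agent["h"][j] for j in range(N_HID))
        for j in range(N_HID):
            agent["U"][j][k] += lr * (target - score) * agent["h"][j]

# conversation: agents alternate turns; the speaker utters its own
# prediction, the listener adapts toward it, so each agent's
# expectations perturb the other's dynamics indirectly
a, b = make_agent(), make_agent()
utterances = []
for turn in range(20):
    speaker, listener = (a, b) if turn % 2 == 0 else (b, a)
    sym = predict(speaker)
    utterances.append(sym)
    for agent in (a, b):
        step(agent, sym)
    adapt(listener, sym)
```

Even in this stripped-down form, the two agents' weights are updated against a moving target (the other agent), which is the ingredient the text credits with generating discourse complexity rather than simple convergence.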

Our preliminary results show that the two dynamical recognizers develop syntactic structures that differ from each other. In this sense, the mechanism by which one agent "learns" dynamically from the other is not mere entrainment. We call this new learning dynamics interactive learning. In some cases the dynamical recognizers do not converge on static structures. This is consistent with our picture of language, in which a language is an evolving system in its own right.

References

[1] Elman, J., Language as a Dynamical System. In R.F. Port and T. van Gelder (Eds.), Mind as Motion. MIT Press, 1995.

[2] Ikegami, T. and Taiji, M., Structures of Possible Worlds in a Game of Players with Internal Models. Acta Poly. Scan. Ma. 91, 283-292, 1998.

[3] Ikegami, T. and Taiji, M., Imitation and Cooperation in Coupled Dynamical Recognizers. In Advances in Artificial Life (D. Floreano et al., Eds.), Springer, 1999, pp. 545-554.

[4] Lakoff, G., Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. The University of Chicago Press, 1987.

[5] Langacker, R.W., Foundations of Cognitive Grammar, Vol. I: Theoretical Prerequisites. Stanford University Press, 1987.

[6] Langacker, R.W., Foundations of Cognitive Grammar, Vol. II. Stanford University Press, 1991.

[7] Pollack, J.B., The Induction of Dynamical Recognizers. In R.F. Port and T. van Gelder (Eds.), Mind as Motion. MIT Press, 1995.

[8] Steels, L., The origin of linguistic categories. In "The Evolution of Language" (selected papers from the 2nd International Conference on the Evolution of Language, London), 1998.

[9] Tani, J. and Fukumura, N., Learning Goal-Directed Sensory-Based Navigation of a Mobile Robot. Neural Networks, vol. 7, no. 3, pp. 553-563, 1994.

[10] Taiji, M. and Ikegami, T., Dynamics of Internal Models in Game Players. Physica D 143 (1999), pp. 253-266.


 Conference site: http://www.infres.enst.fr/confs/evolang/