Research Biography of Jeffrey L. Elman

Jeffrey L. Elman has made several major contributions to the theoretical foundations of human cognition, most notably in the areas of language and development. His work has had an immense impact across fields as diverse as cognitive science, psycholinguistics, developmental psychology, evolutionary theory, computer science, and linguistics. Elman’s 1990 paper Finding Structure in Time [1] introduced a new way of thinking about language knowledge, language processing, and language learning based on distributed representations in connectionist networks. The paper is listed as one of the 10 most-cited papers in the field of psychology between 1990 and 1994, and the most often cited paper in psycholinguistics in that period. This work, together with Elman’s earlier work on speech perception and his subsequent work on learnability, representation, innateness, and development, continues to shape the research agendas of researchers in cognitive science, psycholinguistics, and many other fields.

Elman received his Bachelor’s degree from Harvard in 1969 and his Ph.D. in Linguistics from the University of Texas in 1977. That same year he joined the faculty at UCSD, where he has remained ever since, first in the Department of Linguistics and now in the Department of Cognitive Science. He is now Distinguished Professor in the Department of Cognitive Science, as well as Acting Dean of the Division of Social Sciences and Co-Director of the Kavli Institute for Mind and Brain.

In the early 1980s, Jeff was among the first to apply the principles of graded constraint satisfaction, interactive processing, distributed representation, and connection-based learning that arose in the connectionist framework to fundamental problems in language processing and learning. His early work concentrated on speech perception and word recognition, leading to the co-development (with Jay McClelland) of TRACE [2,3], an interactive-activation model that addressed a wide range of findings on the role of context in the perception of speech. Elman and McClelland conducted their simulations in conjunction with experimental studies of speech recognition, predicting novel results that provided strong empirical support for the central tenet of the model [4]. The key finding — that contextual and lexical influences can reach down into and retune the perceptual mechanisms that assign an initial perceptual representation to spoken words — has been the focus of intense ongoing investigation. More generally, there is a large body of ongoing computational and experimental research addressing the principles embodied in the TRACE model, and a dedicated website with a complete implementation of nearly all published TRACE simulations.
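The flavor of interactive activation can be conveyed in a few lines of code. The sketch below is schematic and in the spirit of TRACE, not the published model: a handful of phoneme units and word units exchange activation bottom-up and top-down, with competing words inhibiting one another. The two-word lexicon, weights, and update constants are all invented for illustration; the real model adds feature-level units, time-aligned copies of each unit, and carefully tuned dynamics.

```python
import numpy as np

# Schematic interactive-activation loop in the spirit of TRACE (not the
# published model). The lexicon, weights, and constants are invented.
phonemes = ["b", "p", "a", "t", "d"]
words = ["bat", "pad"]

# W[i, j] = 1 if word i contains phoneme j.
W = np.array([[1, 0, 1, 1, 0],   # "bat"
              [0, 1, 1, 0, 1]])  # "pad"

# Bottom-up evidence: the first segment is ambiguous between /b/ and /p/,
# but the following context ("...at" vs. "...ad") favors "bat".
phon_act = np.array([0.5, 0.5, 0.9, 0.9, 0.1])
word_act = np.zeros(len(words))

for _ in range(15):
    # Bottom-up excitation from consistent phonemes, plus lateral
    # inhibition from competing words.
    word_in = W @ phon_act - 1.0 * (word_act.sum() - word_act)
    word_act = np.clip(word_act + 0.1 * (word_in - word_act), 0, 1)
    # Top-down feedback: the dominant word re-excites its own phonemes,
    # sharpening the ambiguous initial segment toward /b/.
    phon_act = np.clip(phon_act + 0.03 * (W.T @ word_act), 0, 1)

print(dict(zip(words, word_act.round(2))))     # "bat" dominates
print(dict(zip(phonemes, phon_act.round(2))))  # /b/ now exceeds /p/
```

Even in this toy setting, feedback from the lexical layer pushes the ambiguous segment toward the lexically consistent reading, the kind of top-down retuning that the experiments in [4] put to the test.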

For Elman, TRACE was only the beginning of an important new way of construing the nature of spoken language knowledge, language learning, and language processing. Elman’s subsequent work on language learning in simple recurrent networks has been revolutionary. In this work, Elman set aside the commitments previous researchers had made about language. First, instead of treating time explicitly as a variable that must be encoded in the information fed to a neural network, the “Elman net” (as it is often called) actually lives in time, combining its memory of past events with the current stimulus and generating a prediction about “what will come next.” Second, instead of treating language knowledge as a system of rules operating over abstract categories, the Elman net acquires representational and structure-processing capabilities as a result of exposure to a corpus of sentences embodying the language’s regularities. These ideas were first developed in Finding Structure in Time [1] and elaborated in a subsequent paper on distributed representations and grammatical structure in simple recurrent networks [5]; many subsequent investigations have been spawned by these two papers.
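The architecture itself is simple enough to sketch. The following is a minimal NumPy illustration of an Elman-style simple recurrent network on a next-symbol prediction task, loosely modeled on the letter-prediction simulation in [1]; the syllable inventory, layer sizes, and learning rate are illustrative choices, not the original settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: the network sees one letter at a time and must predict the
# next one. The syllable set echoes the ba/dii/guuu example of [1], but
# all sizes and parameters here are illustrative.
syllables = ["ba", "dii", "guuu"]
alphabet = sorted(set("".join(syllables)))
idx = {c: i for i, c in enumerate(alphabet)}
stream = "".join(rng.choice(syllables) for _ in range(2000))

V, H = len(alphabet), 12
Wxh = rng.normal(0, 0.3, (H, V))   # input -> hidden
Whh = rng.normal(0, 0.3, (H, H))   # context (previous hidden) -> hidden
Why = rng.normal(0, 0.3, (V, H))   # hidden -> output
lr = 0.1

def one_hot(c):
    v = np.zeros(V); v[idx[c]] = 1.0
    return v

context = np.zeros(H)
for t in range(len(stream) - 1):
    x, target = one_hot(stream[t]), idx[stream[t + 1]]
    h = np.tanh(Wxh @ x + Whh @ context)   # memory of the past + current input
    p = np.exp(Why @ h); p /= p.sum()      # softmax guess at the next letter
    # Standard backprop, with the context treated as a frozen extra input
    # (as in Elman's training regime: no backpropagation through time).
    dy = p.copy(); dy[target] -= 1.0
    dh = (Why.T @ dy) * (1 - h ** 2)
    Why -= lr * np.outer(dy, h)
    Wxh -= lr * np.outer(dh, x)
    Whh -= lr * np.outer(dh, context)
    context = h                            # copy hidden -> context
```

After training, prediction error should be high at syllable onsets (which are unpredictable) and low within syllables, the signature Elman used to show that such a network can discover word-like units in an unsegmented stream.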

The next major development in Elman’s work appeared in a subsequent paper, “The importance of starting small” [6]. This paper has had as much impact in developmental psychology as in linguistics. In the 1993 paper, Elman showed that successful learning of grammatical structure can depend not on innate knowledge of grammar, but on starting with an architecture that is at first quite restricted in its capacity and then expands its resources gradually as it learns. The demonstration in Starting Small is of central importance because it stands in stark contrast to earlier claims that the acquisition of grammar requires innate, language-specific endowments. It is also crucial for developmental psychology, because it illustrates the adaptive value of starting, as human infants do, with a simpler initial state, and then building on that to develop more and more sophisticated representations of structure. Starting simply may be a very good thing, making it possible to learn, in the absence of detailed innate linguistic knowledge, what might otherwise prove unlearnable.
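One of the training regimes explored in [6] limited the network’s memory rather than its input, by periodically wiping the recurrent context and letting the effective window widen over training. The sketch below shows that schedule in schematic form; the network, the random data stream, and the stage lengths are invented for illustration and are not the grammar-learning simulations of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic "starting small" regime in the spirit of [6]: the network is
# unchanged, but its effective memory is limited early on by wiping the
# context layer every few symbols, with the window widening in stages.
V, H = 6, 12
Wxh, Whh, Why = (rng.normal(0, 0.3, s) for s in [(H, V), (H, H), (V, H)])
data = rng.integers(0, V, size=5000)      # stand-in symbol stream

def train_pass(window, lr=0.1):
    global Wxh, Whh, Why
    context = np.zeros(H)
    for t in range(len(data) - 1):
        if window is not None and t % window == 0:
            context = np.zeros(H)         # restricted memory early on
        x = np.zeros(V); x[data[t]] = 1.0
        h = np.tanh(Wxh @ x + Whh @ context)
        p = np.exp(Why @ h); p /= p.sum()
        dy = p.copy(); dy[data[t + 1]] -= 1.0
        dh = (Why.T @ dy) * (1 - h ** 2)
        Why -= lr * np.outer(dy, h); Wxh -= lr * np.outer(dh, x)
        Whh -= lr * np.outer(dh, context)
        context = h

for window in [3, 4, 5, None]:            # memory grows in stages
    train_pass(window)
```

With random data there is of course nothing real to learn; the point is only the shape of the regime, in which memory is restricted at first and then relaxed in stages.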

More recent applications of these ideas include a paper capturing historical change in English grammar across generations of neural network simulations [7], and fundamental studies of the computational properties of recurrent networks, showing how these systems are able to solve problems of recursive embedding [8]. This latter work lays the groundwork for a formal theory of neural computation in recurrent networks that might ultimately do for neural networks what the Chomsky hierarchy has done for discrete automata. The results from these studies (with Paul Rodriguez and Janet Wiles) suggest that there are indeed important differences in the way recursive structure is encoded in recurrent networks and in discrete automata. Furthermore, these differences, which revolve around the context- and content-sensitivity of recurrent networks’ encoding of constituency, seem highly relevant for explaining natural language phenomena.
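The flavor of the counting result in [8] can be conveyed with a hand-wired analogue: a recurrent state that moves one way on “a” and the other way on “b” suffices to accept the context-free language a^n b^n, something no finite-state device can do for unbounded n. The trained networks in [8] discovered smooth, nonlinear versions of this dynamic; the literal counter below illustrates the principle and is not the learned solution itself.

```python
# Hand-wired analogue of the counting dynamic analyzed in [8]: accept
# strings of the form a^n b^n by pushing the state up on 'a' and down
# on 'b'. Trained recurrent networks implement a graded version of this.
def accepts(s: str) -> bool:
    count, seen_b = 0, False
    for ch in s:
        if ch == "a":
            if seen_b:          # an 'a' after any 'b' breaks a^n b^n
                return False
            count += 1          # "push": state moves up
        elif ch == "b":
            seen_b = True
            count -= 1          # "pop": state moves down
            if count < 0:       # more b's than a's so far
                return False
        else:
            return False
    return count == 0           # matched: state returns to the start

assert accepts("aaabbb")
assert not accepts("aabbb") and not accepts("abab")
```

The analogy to a one-symbol stack is what makes the comparison with the Chomsky hierarchy natural: where a pushdown automaton counts with discrete stack symbols, a recurrent network can count with graded positions along a continuous dimension.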

Many of Elman’s ideas about ontogeny were worked out in detail with several colleagues in the 1996 book Rethinking Innateness: A Connectionist Perspective on Development [9], where the nature-nurture controversy is recast in new terms. The volume lays the theoretical foundations for what may prove to be a new framework for the study of behavior and development, synthesizing insights from developmental neurobiology and connectionist modeling. It acknowledges that evolution may have provided biases that guide the developmental process, while eschewing the notion that it does so by building in specific substantive constraints, and while still leaving experience as the engine that drives the emergence of competence in language, perception, and other aspects of human cognition.

Elman’s most recent research extends the themes of this earlier work in several ways, focusing on new ways of thinking about the nature of the mental lexicon and the role of lexical constraints in sentence processing [10, 11]. This work involves a three-pronged effort, using corpus analyses, simulations, and psycholinguistic experiments to understand the temporal dynamics of language processing at the sentence level.

In addition to his research, Elman is also an exemplary teacher and scientific citizen. Before he went to graduate school himself, Jeff spent several years as a high school teacher, teaching history, French, and social studies (in Spanish) in a Boston immigrant community. There he learned how to teach: how to make his material maximally accessible without distorting or betraying the content. This is a lesson that has stayed with him all his life, and his colleagues and students are all the richer for it. In fact, Rethinking Innateness and its companion handbook grew out of a teaching initiative, a five-year experimental training program funded by the MacArthur Foundation, designed to introduce developmental psychologists (from graduate students to senior scientists) to the ideas and techniques of connectionist modeling. Jeff has been especially concerned with graduate student and postdoc mentoring, and in 1995-96 he developed a course on Ethics and Survival Skills in Academe for graduate students in his department at UCSD.

Elman has also been a leading contributor as a scientific citizen, working continually to build bridges between the disciplines that contribute to the field of Cognitive Science. For many years, Jeff directed the UCSD Center for Research in Language, where he turned a local resource into an internationally renowned research unit. At the international level, Jeff has been an active member of the Governing Board of the Cognitive Science Society. He has served as President of the Society, serves as a consultant and advisory board member for many departments and institutions, and sits on the editorial boards of numerous journals. He is in great demand throughout the world as a keynote speaker and gives generously of his time with little recompense, including generous commitments to the international Cognitive Science Summer School at the New Bulgarian University, which awarded him an honorary doctorate in 2002. In the same year, Elman was also chosen as one of five inaugural Fellows of the Cognitive Science Society.

In short, Jeff exemplifies the kind of model that David Rumelhart set for our field, not only in the quality and depth of his science, but in the degree of compassion, leadership and generosity that he provides to his colleagues around the world.

Selected Publications

[1] Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14, 179-211.
[2] Elman, J. L., & McClelland, J. L. (1986). Exploiting the lawful variability in the speech wave. In J. S. Perkell & D. H. Klatt (Eds.), Invariance and variability in speech processes. Hillsdale, NJ: Lawrence Erlbaum Associates.
[3] McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1-86.
[4] Elman, J. L., & McClelland, J. L. (1988). Cognitive penetration of the mechanisms of perception: Compensation for coarticulation of lexically restored phonemes. Journal of Memory and Language, 27, 143-165.
[5] Elman, J. L. (1991). Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7, 195-224.
[6] Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48, 71-99.
[7] Hare, M., & Elman, J. L. (1995). Learning and morphological change. Cognition, 56, 61-98.
[8] Rodriguez, P., Wiles, J., & Elman, J. L. (1999). A recurrent neural network that learns to count. Connection Science, 11, 5-40.
[9] Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MA: MIT Press.
[10] Elman, J. L. (2004). An alternative view of the mental lexicon. Trends in Cognitive Sciences, 8, 301-306.
[11] McRae, K., Hare, M., Elman, J. L., & Ferretti, T. R. (2005). A basis for generating expectancies for verbs from nouns. Memory & Cognition, 33, 1174-1184.