Explorations in Language Learnability Using Probabilistic Grammars and Child-Directed Speech
Joshua Tenenbaum, PhD '99, Paul E. Newton Career Development Professor, Department of Brain and Cognitive Sciences, MIT
Description: How do kids manage to figure out that the word "dog" applies to a whole category of animals, not just one creature? Joshua Tenenbaum wants to understand how children and adults manage to solve such classic problems of induction. Throughout cognition, wherever you look, he says "we see places where we know more than we have a reasonable right to know about the world, places where we come to abstractions, generalizations, models of the world that go beyond our sparse, noisy, limited experience." Tenenbaum's goal is to come up with "general purpose computational tools for understanding how people solve these problems so successfully."
He's creating a set of hierarchical, probabilistic models that help explain how humans make inductive leaps: how abstract knowledge that "guides and constrains our inferences" helps us acquire language from our earliest days. While his models can apply to many areas of cognition, Tenenbaum focuses here on recent work in syntax. From very simple data, children manage to turn a complex declarative like "The girl who is sleeping is happy" into a complex interrogative: "Is the girl who is sleeping happy?" They don't say, "Is the girl who sleeping is happy?" Tenenbaum suggests that humans somehow identify the hierarchical phrase structure of language and use it as an "inductive constraint to guide acquisition of a particular piece of syntax."
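The contrast between the two hypotheses for question formation can be sketched in a few lines of toy code. This is purely illustrative (the rule names and representations are invented here, not part of Tenenbaum's model): a linear rule that fronts the first auxiliary produces exactly the error children never make, while a structural rule that treats the subject phrase as a unit produces the correct question.

```python
# Toy illustration of auxiliary fronting. The linear rule ignores phrase
# structure; the structural rule keeps the subject NP (including its
# relative clause) intact.

def linear_rule(words):
    """Front the first 'is' in the word string, ignoring structure."""
    i = words.index("is")
    return [words[i]] + words[:i] + words[i + 1:]

def structural_rule(subject, aux, predicate):
    """Front the main-clause auxiliary, treating the subject as one unit."""
    return [aux] + subject + predicate

declarative = "the girl who is sleeping is happy".split()

# Linear hypothesis: the ungrammatical output children do not produce.
print(" ".join(linear_rule(declarative)))
# -> is the girl who sleeping is happy

# Structural hypothesis: the correct interrogative.
subject = "the girl who is sleeping".split()
print(" ".join(structural_rule(subject, "is", ["happy"])))
# -> is the girl who is sleeping happy
```

The point of the sketch is that both rules agree on simple sentences like "The girl is happy"; only sentences with an embedded clause distinguish them, and such sentences are rare in child-directed speech.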
Tenenbaum and his colleagues have built representative grammars using data from child-directed speech: 2,300 sentences corresponding to more than 20,000 utterances. He deconstructs these sentences so that each word is replaced by a syntactic category: "The baby bear discovers Goldilocks in his bed" becomes "det adj n v prop pre adj n." He has evaluated these grammars on their ability to balance complexity against fit to the data and to generalize appropriately. His results indicate that "by having the right kind of inductive bias, the idea of hierarchical phrase structure, you can make generalizations which you have no evidence for."
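The preprocessing step described above can be sketched as a simple lexicon lookup. The lexicon fragment below is a hypothetical stand-in for whatever tag assignments the study actually used; only the example sentence and its tag sequence come from the description.

```python
# Toy version of the corpus preprocessing: replace each word with its
# syntactic category. This lexicon fragment is invented for illustration.
LEXICON = {
    "the": "det", "baby": "adj", "bear": "n", "discovers": "v",
    "goldilocks": "prop", "in": "pre", "his": "adj", "bed": "n",
}

def to_categories(sentence):
    """Map a sentence to its sequence of syntactic category labels."""
    return " ".join(LEXICON[word] for word in sentence.lower().split())

print(to_categories("The baby bear discovers Goldilocks in his bed"))
# -> det adj n v prop pre adj n
```

Working over category sequences rather than raw words lets a candidate grammar be scored on how well it fits, and generalizes beyond, the tagged corpus.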
By probing what seem to be "innate domain general capacities," says Tenenbaum, "we're trying to formalize these arguments and use them as a tool to diagnose what has to be innate, or what is more or less plausibly part of Universal Grammar." Tenenbaum sees his use of statistical inference methods as bolstering classical linguistics in its attempt to map out how humans learn from real data, and helping devise machine systems that might approach the capacities of human learners.
About the Speaker(s): Joshua Tenenbaum is also a member of the Computer Science and Artificial Intelligence Laboratory. He received his Ph.D. from MIT in 1999 and after a brief postdoc with the MIT AI Lab, he joined the Stanford University faculty as Assistant Professor of Psychology and (by courtesy) Computer Science. He returned to MIT as a faculty member in 2002.
He currently serves as Associate Editor of the journal Cognitive Science, and he has been active on the program committees of the Neural Information Processing Systems (NIPS) and Cognitive Science (CogSci) conferences. He held a Howard Hughes Medical Institute (HHMI) Predoctoral Fellowship from 1993-1998, and won the Outstanding Paper Award, IEEE Conference on Computer Vision and Pattern Recognition, 1997, for "Learning bilinear models for two-factor problems in vision", with William T. Freeman.
Host(s): School of Engineering, Laboratory for Information and Decision Systems