IEEE, SPS, ACL, ISCA

Organizing Chairs:

Dilek Hakkani-Tür, Microsoft

Mari Ostendorf, U. Washington

Finance Chair:

Gokhan Tur, Microsoft

Advisory Board:

Mazin Gilbert, AT&T Labs - Research

Srinivas Bangalore, AT&T Labs - Research

Giuseppe Riccardi, U. Trento

Technical Chairs:

Isabel Trancoso, INESC-ID, Portugal

Tim Paek, Microsoft Research

Area Chairs:

Julia Hirschberg, Columbia Univ.

Hermann Ney, RWTH Aachen

Andreas Stolcke, SRI International/ICSI

Ye-Yi Wang, Microsoft Research

Demo Chairs:

Alex Potamianos, Tech. U. of Crete

Mikko Kurimo, Helsinki U. of Tech.

Publicity Chairs:

Bhuvana Ramabhadran, IBM

Benoit Favre, U. Le Mans

Panel Chairs:

Sadaoki Furui, Tokyo Inst. of Tech.

Eric Fosler-Lussier, Ohio State U.

Publication Chair:

Yang Liu, U. Texas, Dallas

Local Organizers:

Dimitra Vergyri, SRI International

Murat Akbacak, SRI International

Arindam Mandal, SRI International

Sibel Yaman, IBM Research

Europe Liaisons:

Frederic Bechet, U. Marseille

Philipp Koehn, U. Edinburgh

Asia Liaisons:

Helen Meng, C. U. Hong Kong

Gary Geunbae Lee, POSTECH

Keynote Speakers

Michael Jordan, University of California, Berkeley

Tentative Title: Bayesian Nonparametrics for Speech Diarization and Related Problems

Abstract

Bayesian nonparametric statistics involves replacing the “prior distributions” of classical Bayesian analysis with “prior stochastic processes.” Of particular value is the class of “combinatorial stochastic processes,” which make it possible to express uncertainty (and perform inference) over structural aspects of models, including cardinalities, couplings, and segmentations. This has allowed upgrades to many classical probabilistic models, including hidden Markov models, mixture models, and tree-based models, and it has allowed the design of entirely new models. I will give an overview of the basics of combinatorial stochastic processes and present applications to several problems in and around speech and language technology, including speaker diarization and multiple time series segmentation.
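
As a concrete illustration of the combinatorial stochastic processes mentioned above (a toy sketch of ours, not material from the talk), the Chinese restaurant process defines a prior over partitions without fixing the number of clusters in advance; this is the property that lets a nonparametric diarization model infer how many speakers occur in a recording.

```python
import random

def crp_partition(n_items, alpha=1.0, seed=0):
    """Sample a partition of n_items from a Chinese restaurant process.

    Item i joins existing cluster k with probability n_k / (i + alpha)
    and opens a new cluster with probability alpha / (i + alpha), so
    the number of clusters is itself a random quantity to be inferred.
    """
    rng = random.Random(seed)
    sizes = []        # sizes[k] = number of items in cluster k so far
    assignments = []  # cluster label assigned to each item
    for i in range(n_items):
        weights = sizes + [alpha]  # existing clusters, plus a new one
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(sizes):
            sizes.append(0)  # this item opened a new cluster
        sizes[k] += 1
        assignments.append(k)
    return assignments

# Ten items fall into a random number of clusters, e.g. [0, 0, 1, 0, ...]
print(crp_partition(10))
```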

Speaker Biography

Michael Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his Master's degree from Arizona State University and earned his PhD in 1985 from the University of California, San Diego. He was a professor at MIT from 1988 to 1998. He has published over 300 articles in statistics, electrical engineering, computer science, statistical genetics, computational biology, and cognitive science. His research in recent years has focused on nonparametric Bayesian analysis, probabilistic graphical models, spectral methods, kernel machines, and applications to problems in computational biology, information retrieval, signal processing, and speech processing. Prof. Jordan was elected to the National Academy of Engineering in 2010 and was named a Fellow of the American Association for the Advancement of Science in 2006. He was named a Medallion Lecturer of the Institute of Mathematical Statistics (IMS) in 2004. He is a Fellow of the IMS, the IEEE, the AAAI, and the ASA.

James W. Pennebaker, University of Texas at Austin

Title: How Junk Words Reveal Personality and Social Behavior

Abstract

Most computer applications of language focus on content-heavy words such as nouns, regular verbs, and some adjectives. Many other word categories, such as pronouns, articles, and prepositions, are treated as junk and ignored. In fact, the ways people use these junk, or function, words can provide insight into personality, mental health, leadership, honesty, and basic social processes. Psychologically, junk words reflect how people are relating to their topic and their audience. Implications of junk-word analyses for advertising, author identification, social network analysis, and other applications will be discussed.
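
To make the idea concrete, here is a toy sketch (ours, not Pennebaker's actual instrument; real analyses such as LIWC rely on large, validated dictionaries) of the counting that underlies such work: compute each function-word category's share of all tokens in a text.

```python
import re
from collections import Counter

# Hypothetical, deliberately tiny word lists for illustration only.
FUNCTION_WORDS = {
    "pronouns": {"i", "you", "we", "he", "she", "they", "it", "me", "us"},
    "articles": {"a", "an", "the"},
    "prepositions": {"of", "in", "to", "for", "with", "on", "at", "by"},
}

def function_word_rates(text):
    """Return each category's proportion of all tokens in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1  # guard against empty input
    return {category: sum(counts[w] for w in words) / total
            for category, words in FUNCTION_WORDS.items()}

print(function_word_rates("I think we should go to the store with them."))
# {'pronouns': 0.2, 'articles': 0.1, 'prepositions': 0.2}
```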

Speaker Biography

James W. Pennebaker is Professor and Departmental Chair of Psychology at the University of Texas at Austin, where he received his PhD in 1977. He and his students have authored multiple books and 250 research articles on the links among psychological state, social behavior, and natural language use. His current language research is funded by the National Science Foundation, the Army Research Institute, the National Institutes of Health, and various agencies within the Department of Defense and the Department of Homeland Security.

Chris Manning, Stanford University

Title: Holistic models of linguistic structure: Discriminative joint models and domain adaptation

Abstract

The major mass-market natural language applications are high-level, semantically oriented ones that have to just work in diverse contexts: question answering, machine translation, machine reading, speech dialog interfaces for robots and machines, etc. Humans are very good at these types of tasks, in part because they naturally employ holistic language processing. They effortlessly keep track of many layers of low-level information while simultaneously adapting across topics and accents and integrating long-distance information from elsewhere in the conversation or document. In contrast, much NLP research not only focuses on lower-level tasks, such as parsing, named entity recognition, and part-of-speech tagging, but, for the sake of efficiency, the models of these phenomena make extremely strong independence assumptions that completely decouple the tasks. Models commonly look only at local context when making decisions and have no model, or only a very crude model, of domain adaptation.

I will present some of our recent work that begins to address these problems. I will first present a discriminative parsing model (a CRF-CFG) that can incorporate many sources of evidence in decision making, and then show how it can be extended into a joint model of parsing and named entity recognition. This joint model improves the performance of both independent NER and parsing models, showing that information flows effectively between the tasks, and it also supports the recognition of hierarchical named entities. This basic model does not address domain adaptation, and a practical complication of training such joint models is that they require jointly annotated data, of which there is much less available than data annotated with just parse trees or just named entities. In the last part of the talk I will describe the use of hierarchical models that allow us to address both problems: I will show that we can substantially improve the performance of our joint model by using domain-specific and non-jointly annotated data. This talk presents joint work with Jenny Finkel.
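
One common way to realize such hierarchical models, sketched below in our own notation (the talk's exact formulation may differ), is to give each domain its own weight vector tied to a shared vector by a Gaussian prior, so that domains with little annotated data back off toward the shared solution.

```python
import numpy as np

def hierarchical_penalty(domain_weights, shared, sigma_domain=1.0, sigma_top=1.0):
    """Negative log of a hierarchical Gaussian prior (up to a constant).

    Each domain's weights are pulled toward the shared weights, and the
    shared weights are pulled toward zero; subtracting this penalty from
    the summed per-domain log-likelihoods gives a MAP training objective.
    """
    penalty = sum(np.sum((w - shared) ** 2)
                  for w in domain_weights.values()) / (2 * sigma_domain ** 2)
    penalty += np.sum(shared ** 2) / (2 * sigma_top ** 2)
    return penalty

# Hypothetical two-feature example with two domains.
domains = {"news": np.array([0.9, -0.2]), "biomed": np.array([0.4, 0.1])}
shared = np.array([0.6, 0.0])
print(hierarchical_penalty(domains, shared))  # 0.09 + 0.18 = 0.27
```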

Speaker Biography

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. He has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schuetze, 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work concentrates on probabilistic approaches to NLP problems and computational semantics, including such topics as statistical parsing, robust textual inference, machine translation, grammar induction, and large-scale joint inference for NLP. He has won several best paper awards; most recently, his paper with Bill MacCartney on natural language inference won the Coling 2008 Best Paper Award. He received his Ph.D. from Stanford in 1995 and held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.
