22 March 2017

Invited Speakers

Dec 12: 14.30 – 15.30

Marco Baroni – Spectacular successes and failures of recurrent neural networks applied to language

Abstract. Recurrent neural networks (RNNs) are attractive computational models for language, due to their generality and ability to track sequential information in raw data. In this talk, I will first report the results of an experiment suggesting that RNNs are extracting surprisingly abstract grammatical generalizations from corpora (they correctly predict that “the colorless green ideas you slept yesterday” should continue with a verb in the plural form). I will then report a second experiment suggesting that RNNs are not “systematic” in Fodor’s sense: when explicitly trained to execute the commands “run”, “run twice” and “dax”, at test time they fail to correctly execute the new composed command “dax twice”. If time allows, I will conclude with some ideas about how RNNs could be extended to handle systematic compositionality.
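
For readers who want to see the mechanics behind the first experiment, the sketch below shows the kind of probe one can run on an LSTM language model: feed it a prefix and compare the probability it assigns to a plural versus a singular verb as the next word. This is only an illustrative reconstruction, not the speaker’s actual setup; the PyTorch framing, the toy vocabulary, and the untrained weights are all assumptions.

```python
# Illustrative only: an untrained toy model, not the experiment reported in the talk.
import torch
import torch.nn as nn

vocab = ["<unk>", "the", "colorless", "green", "ideas", "you", "slept",
         "yesterday", "is", "are"]                      # assumed toy vocabulary
word2id = {w: i for i, w in enumerate(vocab)}

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden)          # next-word logits at every position

model = LSTMLanguageModel(len(vocab))    # untrained here, so the numbers are arbitrary
prefix = torch.tensor([[word2id[w] for w in
                        "the colorless green ideas you slept yesterday".split()]])
with torch.no_grad():
    probs = torch.softmax(model(prefix)[0, -1], dim=-1)

# A model that has learned the grammatical generalization should prefer the plural verb.
print("P(are) =", probs[word2id["are"]].item(),
      "P(is)  =", probs[word2id["is"]].item())
```

In the trained models discussed in the talk, the interesting question is whether the probability of the plural verb exceeds that of the singular one even for nonce prefixes like this.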

Short Bio. Marco Baroni received a PhD in Linguistics from the University of California, Los Angeles, in 2000. After several research and industry positions, he joined the Center for Mind/Brain Sciences of the University of Trento, where he has been an associate professor since 2013. In 2016, Marco joined the Facebook Artificial Intelligence Research team. Marco’s work in the areas of multimodal and compositional distributed semantics has received widespread recognition, including a Google Research Award, an ERC Starting Grant, and the IJCAI-JAIR Best Paper Prize. Marco’s current research focuses on developing computational systems that can flexibly adapt to new situations just like living beings do.


Dec 11: 17.30 – 18.30

Rada Mihalcea – Computational Sociolinguistics – An Emerging Partnership

Abstract. Computational linguistics has come a long way, with many exciting achievements along several research directions, ranging from morphology and syntax to semantics and pragmatics. At the same time, there has been tremendous growth in the amount of social media data available on websites such as Blogger, Twitter, and Facebook. These data streams are rich in explicit demographic information, such as the age, gender, industry, or location of the writer, as well as in implicit personal dimensions such as personality and values. In this talk, I will describe recent research undertaken in the Language and Information Technologies group at the University of Michigan, under the broad umbrella of computational sociolinguistics, where language processing is used to gain new insights into people’s values, behaviors, and world views. I will share the lessons learned along the way and take a look into the future of this exciting new research area.
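
As a purely illustrative aside, much of this line of work can be framed as text classification over posts paired with author attributes. The sketch below shows the basic shape of such a pipeline; the scikit-learn stack, the hand-made four-post dataset, and the coarse “industry” label are all assumptions for the example, not the Michigan group’s actual data or system.

```python
# Illustrative sketch only: predicting an author attribute from post text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy posts labeled with a coarse author attribute (here, industry).
posts = [
    "shipped a new build tonight, unit tests finally green",
    "our sprint retro ran long but the refactor is done",
    "grading midterms all weekend, office hours moved to friday",
    "my students presented their history projects today",
]
labels = ["tech", "tech", "education", "education"]

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["debugging the deployment pipeline before the demo"]))
```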

Short Bio. Rada Mihalcea is a Professor in the Computer Science and Engineering Department at the University of Michigan. Her research interests are in computational linguistics, with a focus on lexical semantics, multilingual natural language processing, and computational social sciences. She serves or has served on the editorial boards of the journals Computational Linguistics, Language Resources and Evaluation, Natural Language Engineering, Research on Language and Computation, IEEE Transactions on Affective Computing, and Transactions of the Association for Computational Linguistics. She was a program co-chair for the Conference of the Association for Computational Linguistics (2011) and the Conference on Empirical Methods in Natural Language Processing (2009), and a general chair for the Conference of the North American Chapter of the Association for Computational Linguistics (2015). She is the recipient of a National Science Foundation CAREER award (2008) and a Presidential Early Career Award for Scientists and Engineers (2009). In 2013, she was made an honorary citizen of her hometown of Cluj-Napoca, Romania.


Dec 13: 09.00 – 10.00

Yoav Goldberg – Doing Stuff with Long Short-Term Memory networks

Abstract. While deep learning methods in Natural Language Processing are arguably overhyped, recurrent neural networks (RNNs), and in particular LSTM networks, emerge as very capable learners for sequential data. Thus, my group started using them everywhere. After briefly explaining what they are and why they are cool, I will describe some recent work in which we use LSTMs as a building block.
Depending on my mood (and considering audience requests via email before the talk), I will discuss some of the following: learning a shared representation in a multi-task setting; learning to disambiguate English prepositions using multilingual data; learning feature representations for syntactic parsing; representing trees as vectors; learning to disambiguate coordinating conjunctions; learning morphological inflections; and learning to detect hypernyms in a large corpus. All of these achieve state-of-the-art results. Other potential topics include work in which we try to shed some light on what’s being captured by LSTM-based sentence representations, as well as the ability of LSTMs to learn hierarchical structures.
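
To make the “LSTMs as a building block” idea concrete, here is a minimal sketch of the common pattern underlying several of the items above: a bidirectional LSTM that turns a sentence into one contextualized feature vector per token, which a downstream parser, tagger, or classifier can then consume. The PyTorch framing, the class name BiLSTMEncoder, and all dimensions are assumptions made for illustration, not the group’s actual code.

```python
# Illustrative sketch: a BiLSTM encoder producing per-token features.
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):
        # One feature vector per token, contextualized in both directions.
        features, _ = self.bilstm(self.embed(token_ids))
        return features                  # shape: (batch, seq_len, 2 * hidden_dim)

encoder = BiLSTMEncoder(vocab_size=10_000)    # assumed vocabulary size
sentence = torch.randint(0, 10_000, (1, 7))   # a batch with one 7-token sentence
print(encoder(sentence).shape)                # torch.Size([1, 7, 128])
```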

Slides

Short Bio. Yoav Goldberg has been working in natural language processing for over a decade. He is a Senior Lecturer in the Computer Science Department at Bar-Ilan University, Israel. Prior to that, he was a researcher at Google Research, New York. He received his PhD in Computer Science and Natural Language Processing from Ben-Gurion University. He regularly reviews for NLP and machine learning venues and serves on the editorial board of Computational Linguistics. He has published over 50 research papers and has received best paper and outstanding paper awards at major natural language processing conferences. His research interests include machine learning for natural language, structured prediction, syntactic parsing, processing of morphologically rich languages, and, in the past two years, neural network models with a focus on recurrent neural networks.