{"id":210,"date":"2017-03-22T19:39:08","date_gmt":"2017-03-22T18:39:08","guid":{"rendered":"http:\/\/sag.art.uniroma2.it\/clic2017\/?page_id=210"},"modified":"2018-01-04T16:31:36","modified_gmt":"2018-01-04T15:31:36","slug":"invited-speakers","status":"publish","type":"page","link":"http:\/\/sag.art.uniroma2.it\/clic2017\/en\/invited-speakers\/","title":{"rendered":"Invited Speakers"},"content":{"rendered":"<p style=\"text-align: right;\"><span style=\"color: #808080;\">Dec, 12: 14.30 &#8211; 15.30<\/span><\/p>\n<p><strong><a class=\"wpsal-anchor\" name=\"keynote_marco\" id=\"keynote_marco\"><\/a><\/strong><\/p>\n<p><strong>Marco Baroni &#8211;\u00a0Spectacular successes and failures of recurrent neural networks applied to language<br \/>\n<\/strong><\/p>\n<p><img loading=\"lazy\" class=\"size-thumbnail wp-image-303 alignleft\" src=\"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/03\/marco_baroni-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/03\/marco_baroni-150x150.jpg 150w, http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/03\/marco_baroni-250x250.jpg 250w, http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/03\/marco_baroni-174x174.jpg 174w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/><\/p>\n<p><strong>Abstract. <\/strong>Recurrent neural networks (RNNs) are attractive computational models for language, due to their generality and ability to track sequential information in raw data. In this talk, I will first report the results of an experiment suggesting that RNNs are extracting surprisingly abstract grammatical generalizations from corpora (they correctly predict that &#8220;the colorless green ideas you slept yesterday&#8221; should continue with a verb in the plural form). 
I will then report a second experiment suggesting that RNNs are not &#8220;systematic&#8221; in Fodor&#8217;s sense: when explicitly trained to execute the commands &#8220;run&#8221;, &#8220;run twice&#8221; and &#8220;dax&#8221;, at test time they fail to correctly execute the new composed command &#8220;dax twice&#8221;. If time allows, I will conclude with some ideas about how RNNs could be extended to handle systematic compositionality.<\/p>\n<p><strong>Short Bio.<\/strong> Marco Baroni received a PhD in Linguistics from the University of California, Los Angeles, in 2000. After several positions in research and industry, he joined the Center for Mind\/Brain Sciences of the University of Trento, where he has been an associate professor since 2013. In 2016, Marco joined the Facebook Artificial Intelligence Research team. Marco&#8217;s work in the areas of multimodal and compositional distributed semantics has received widespread recognition, including a Google Research Award, an ERC Starting Grant, and the IJCAI-JAIR Best Paper Prize. 
Marco&#8217;s current research focuses on developing computational systems that can flexibly adapt to new situations, just as living beings do.<\/p>\n<hr \/>\n<p style=\"text-align: right;\"><span style=\"color: #808080;\">Dec, 11: 17.30 &#8211; 18.30<\/span><\/p>\n<p><strong><a class=\"wpsal-anchor\" name=\"keynote_rada\" id=\"keynote_rada\"><\/a><\/strong><\/p>\n<p><strong>Rada Mihalcea &#8211;\u00a0Computational Sociolinguistics &#8211; An Emerging Partnership<br \/>\n<\/strong><\/p>\n<p><img loading=\"lazy\" class=\"size-thumbnail wp-image-387 alignleft\" src=\"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/05\/rada_mihalcea-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/05\/rada_mihalcea-150x150.jpg 150w, http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/05\/rada_mihalcea-174x174.jpg 174w, http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/05\/rada_mihalcea.jpg 199w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/><\/p>\n<p><strong>Abstract.\u00a0<\/strong>Computational linguistics has come a long way, with many exciting achievements along several research directions, ranging from morphology and syntax to semantics and pragmatics. Simultaneously, there has been tremendous growth in the amount of social media data available on websites such as Blogger, Twitter, or Facebook, with all of these data streams being rich in explicit demographic information, such as the age, gender, industry, or location of the writer, as well as implicit personal dimensions such as personality and values. In this talk, I will describe recent research undertaken in the Language and Information Technologies group at the University of Michigan, under the broad umbrella of computational sociolinguistics, where language processing is used to gain new insights into people\u2019s values, behaviors, and world views. 
I will share the lessons learned along the way, and take a look into the future of this exciting new research area.<\/p>\n<p><strong>Short Bio.\u00a0<\/strong>Rada Mihalcea is a Professor in the Computer Science and Engineering Department at the University of Michigan. Her research interests are in computational linguistics, with a focus on lexical semantics, multilingual natural language processing, and computational social sciences. She serves or has served on the editorial boards of the journals Computational Linguistics, Language Resources and Evaluation, Natural Language Engineering, Research on Language and Computation, IEEE Transactions on Affective Computing, and Transactions of the Association for Computational Linguistics. She was a program co-chair for the Conference of the Association for Computational Linguistics (2011) and the Conference on Empirical Methods in Natural Language Processing (2009), and a general chair for the Conference of the North American Chapter of the Association for Computational Linguistics (2015). She is the recipient of a National Science Foundation CAREER award (2008) and a Presidential Early Career Award for Scientists and Engineers (2009). 
In 2013, she was made an honorary citizen of her hometown of Cluj-Napoca, Romania.<\/p>\n<hr \/>\n<p style=\"text-align: right;\"><span style=\"color: #808080;\">Dec, 13: 09.00 &#8211; 10.00<\/span><\/p>\n<p><strong><a class=\"wpsal-anchor\" name=\"keynote_yoav\" id=\"keynote_yoav\"><\/a><\/strong><\/p>\n<p><strong>Yoav Goldberg &#8211; Doing Stuff with Long Short Term Memory networks<\/strong><\/p>\n<p><img loading=\"lazy\" class=\"wp-image-498 alignleft\" src=\"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/07\/yoav_goldberg.jpg\" alt=\"\" width=\"150\" height=\"153\" srcset=\"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/07\/yoav_goldberg.jpg 426w, http:\/\/sag.art.uniroma2.it\/clic2017\/wp-content\/uploads\/2017\/07\/yoav_goldberg-294x300.jpg 294w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/><\/p>\n<p><strong>Abstract.<\/strong>\u00a0While deep learning methods in Natural Language Processing are arguably overhyped, recurrent neural networks (RNNs), and in particular LSTM networks, have emerged as very capable learners for sequential data. Thus, my group started using them everywhere. After briefly explaining what they are and why they are cool, I will describe some recent work in which we use LSTMs as a building block.<br \/>\nDepending on my mood (and considering audience requests via email before the talk), I will discuss some of the following: learning a shared representation in a multi-task setting; learning to disambiguate English prepositions using multi-lingual data; learning feature representations for syntactic parsing; representing trees as vectors; learning to disambiguate coordinating conjunctions; learning morphological inflections; and learning to detect hypernyms in a large corpus. All of these achieve state-of-the-art results. 
Other potential topics include work in which we try to shed some light on what&#8217;s being captured by LSTM-based sentence representations, as well as the ability of LSTMs to learn hierarchical structures.<\/p>\n<p><a href=\"http:\/\/sag.art.uniroma2.it\/clic2017\/clic-2017_goldberg_keynote.pdf\"><strong>Slides<\/strong><\/a><\/p>\n<p><strong>Short Bio.<\/strong>\u00a0Yoav Goldberg has been working in natural language processing for over a decade. He is a Senior Lecturer in the Computer Science Department at Bar-Ilan University, Israel. Prior to that, he was a researcher at Google Research, New York. He received his PhD in Computer Science and Natural Language Processing from Ben Gurion University. He regularly reviews for NLP and machine learning venues, and serves on the editorial board of Computational Linguistics. He has published over 50 research papers and has received best paper and outstanding paper awards at major natural language processing conferences. His research interests include machine learning for natural language, structured prediction, syntactic parsing, processing of morphologically rich languages, and, in the past two years, neural network models with a focus on recurrent neural networks.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Dec, 12: 14.30 &#8211; 15.30 Marco Baroni &#8211;\u00a0Spectacular successes and failures of recurrent neural networks applied to language Abstract. Recurrent neural networks (RNNs) are attractive computational models for language, due to their generality and ability to track sequential information in raw data. 
In this talk, I will first report the results of an experiment suggesting <a href=\"http:\/\/sag.art.uniroma2.it\/clic2017\/en\/invited-speakers\/\" rel=\"nofollow\"><span class=\"sr-only\">Read moreInvited Speakers<\/span>[&hellip;]<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-json\/wp\/v2\/pages\/210"}],"collection":[{"href":"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-json\/wp\/v2\/comments?post=210"}],"version-history":[{"count":21,"href":"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-json\/wp\/v2\/pages\/210\/revisions"}],"predecessor-version":[{"id":1089,"href":"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-json\/wp\/v2\/pages\/210\/revisions\/1089"}],"wp:attachment":[{"href":"http:\/\/sag.art.uniroma2.it\/clic2017\/wp-json\/wp\/v2\/media?parent=210"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}