22 March 2017

Tutorial

Dec 11: 9.30 – 12.30

Elisabetta Jezek – Stretching the Meaning of Words: Inputs for Lexical Resources and Lexical Semantic Models

Abstract. The lexicon is today at the core of several challenges in computational linguistics and in theoretical investigations of the structure of language. Nevertheless, few researchers have an overarching view of lexical structure, lexical semantics and the representational problems which lie at the core of the observed semantic flexibility and context-sensitivity of language, as revealed by, for example, distributional analysis. The tutorial provides an overview of the main properties of words and of how we use them to create meaning. It offers a description of the structure of the lexicon in terms of word types, word classes and word relations, and introduces the categories that are needed to classify the types of meaning variation that words display in composition; it also examines the interconnection between these variations and syntax, cognition and pragmatics. We use empirical evidence from corpora and human judgements to evaluate formalisms and methodologies developed in the fields of linguistics, cognitive science, and natural language processing – particularly distributional semantics – to account for lexical phenomena. The tutorial merges evidence-based theoretical accounts with computational perspectives and proposes linguistic principles for the construction of lexical resources.
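
To illustrate the kind of distributional analysis the abstract refers to, here is a minimal sketch (not material from the tutorial): words are represented by co-occurrence counts over a toy corpus and compared with cosine similarity. The corpus, window size and word choices are illustrative assumptions.

from collections import Counter, defaultdict
from math import sqrt

# A toy corpus; real distributional models are estimated from large corpora.
corpus = [
    "the glass broke on the floor",
    "she drank a glass of wine",
    "he drank a cup of tea",
    "the cup broke on the floor",
]

# Count, for each word, the words that occur within a window of 2 positions.
WINDOW = 2
vectors = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)
        for j in range(lo, hi):
            if j != i:
                vectors[word][tokens[j]] += 1

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(count * v[context] for context, count in u.items())
    norm_u = sqrt(sum(c * c for c in u.values()))
    norm_v = sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Words appearing in similar contexts ("glass" / "cup") come out more similar
# than words that merely co-occur ("glass" / "wine").
print(cosine(vectors["glass"], vectors["cup"]))
print(cosine(vectors["glass"], vectors["wine"]))

On this toy data, "glass" and "cup" receive a high similarity because they share contexts such as "drank", "broke" and "of", while "glass" and "wine" do not; this is the context-sensitivity of word meaning that the tutorial evaluates against corpus evidence and human judgements.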

Tutorial Slides

Short Bio. Elisabetta Jezek is Associate Professor of Linguistics in the Department of Humanities, University of Pavia, where she has taught Syntax and Semantics and Applied Linguistics since 2001. Her research interests and areas of expertise include lexical semantics, verb classification, theory of argument structure, event structure in syntax and semantics, lexicon/cognition interplay, and language technology. She has edited a number of major works in lexicography and published contributions focusing on the interplay between data analysis, research methodology, and linguistic theory. Her publications include: Classi di Verbi tra Semantica e Sintassi, ETS, 2003; Lessico: Classi di Parole, Strutture, Combinazioni, Il Mulino, 2005 (2nd ed. 2011); The Lexicon: An Introduction, OUP, 2016.


Dec 13: 15.30 – 18.30

Yoav Goldberg – Implementing dynamic neural networks for language with DyNet

Abstract. Neural networks work very well for many learning-based applications, and I assume you are already familiar with them. Programming neural network models is rather easy, thanks to software libraries such as Theano, TensorFlow and Keras that let you define and train complex network structures. However, these libraries assume a fixed (static) graph structure, and are tailored for the GPU. I will introduce a radically different approach, in which the graphs are dynamic and constructed from scratch for every training example. This makes it very easy to program complex networks whose structure depends on the input. I will introduce the DyNet neural network package, which supports this dynamic graph creation and which also works very well on the CPU. The tutorial assumes basic familiarity with neural network models, and will focus on how to implement them with DyNet. We will explore several common NLP models and their implementation using the DyNet package.
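
As a flavour of the dynamic-graph style described above, the following is a minimal sketch using DyNet's Python API: an LSTM sentence classifier whose computation graph is rebuilt for every training example. The layer sizes, toy data and function names are illustrative assumptions, not the tutorial's own code.

import random
import dynet as dy

# Illustrative sizes for a toy classification task.
VOCAB_SIZE, EMB_DIM, HID_DIM, N_CLASSES = 1000, 64, 32, 2

pc = dy.ParameterCollection()
trainer = dy.SimpleSGDTrainer(pc)

embeds = pc.add_lookup_parameters((VOCAB_SIZE, EMB_DIM))
lstm = dy.LSTMBuilder(1, EMB_DIM, HID_DIM, pc)   # one-layer LSTM
W = pc.add_parameters((N_CLASSES, HID_DIM))
b = pc.add_parameters((N_CLASSES,))

def scores_for(word_ids):
    # A fresh computation graph per example: sentences of any length,
    # with no padding or bucketing.
    dy.renew_cg()
    state = lstm.initial_state()
    for w in word_ids:
        state = state.add_input(embeds[w])
    return dy.parameter(W) * state.output() + dy.parameter(b)

# Toy data: variable-length sequences of word ids with a class label.
data = [([random.randrange(VOCAB_SIZE) for _ in range(random.randint(3, 12))],
         random.randrange(N_CLASSES)) for _ in range(100)]

for epoch in range(3):
    random.shuffle(data)
    for words, label in data:
        loss = dy.pickneglogsoftmax(scores_for(words), label)
        loss.value()      # run the forward pass on this example's graph
        loss.backward()   # backpropagate through the same graph
        trainer.update()

Because the graph is built from scratch inside the loop, input-dependent structure (variable sentence length here, but equally trees or stacks) is expressed as ordinary Python control flow, which is the point of the dynamic approach the tutorial develops.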

Tutorial Slides – Tutorial Additional Material

Short Bio. Yoav Goldberg has been working in natural language processing for over a decade. He is a Senior Lecturer in the Computer Science Department at Bar-Ilan University, Israel. Prior to that he was a researcher at Google Research, New York. He received his PhD in Computer Science and Natural Language Processing from Ben Gurion University. He regularly reviews for NLP and Machine Learning venues, and serves on the editorial board of Computational Linguistics. He has published over 50 research papers and received best paper and outstanding paper awards at major natural language processing conferences. His research interests include machine learning for natural language, structured prediction, syntactic parsing, processing of morphologically rich languages, and, in the past two years, neural network models with a focus on recurrent neural networks.