Raffaella Bernardi – Associate Professor at University of Trento, Italy

Monday, November 6th, 2023 – 14:15-15:00 CET

Raffaella Bernardi is Associate Professor at CIMeC (Center for Mind/Brain Science) and DISI (Department of Information Engineering and Computer Science), University of Trento. Throughout her career, she has worked with both symbolic and connectionist AI approaches. She studied at the Universities of Utrecht and Amsterdam, specialising in Logic and Language; in 1999 she joined the international PhD Programme at the University of Utrecht and wrote a dissertation on categorial type logic (defended in June 2002). Since then she has continued to contribute extensively to this field by organising international workshops and summer schools and by serving on Organising Committees, Programme Committees, and Management Boards of international scientific events. She has also been quite active in disseminating the topic through teaching: she was for many years the local coordinator of the Erasmus Mundus European Masters Programme in LCT and of the Language and Multimodal Interaction track of the MSc in Cognitive Science offered by the University of Trento, and she is now the CIMeC Teaching Delegate. While at the Free University of Bozen-Bolzano (2002-2011), she worked on Natural Language Interfaces to Structured Data. In 2011, she started working on Distributional Semantics, investigating its compositional properties and its integration with Computer Vision models. Since then she has mostly worked on Multimodal Models in interactive settings (e.g., visual dialogues). She has recently served as the EU representative on the ACL Sponsorship Board, and she is a member of the ELLIS Trento unit.

Title
The interplay between language generation and reasoning: Information Seeking games

Abstract
Large Language Models, and ChatGPT in particular, have recently grabbed the attention of the research community and the media. Now that these models have reached high language proficiency, attention has been shifting toward their reasoning capabilities. It has been shown that ChatGPT can carry out some simple deductive reasoning steps when provided with a series of facts from which it is tasked to draw inferences. In this talk, I am going to argue for the need for models whose language generation is driven by an implicit reasoning process. To support my claim, I will present our evaluation of ChatGPT on the 20-Questions game, traditionally used within the Cognitive Science community to study the development of information-seeking strategies. The task requires a series of interconnected skills: asking informative questions, updating the hypothesis space step by step through simple deductive reasoning, and stopping once enough information has been collected. It is thus a perfect testbed for monitoring the interplay between language and reasoning in LLMs, shedding light on their strengths and weaknesses, and laying the groundwork for models that think while speaking.


Christos Christodoulopoulos – Senior Applied Scientist at Amazon, UK

Tuesday, November 7th, 2023 – 11:15-12:00 CET

Dr Christos Christodoulopoulos is a Senior Applied Scientist at Amazon, currently working on Responsible AI for Alexa and LLMs. He was previously part of the Alexa AI Knowledge team, working on entity linking and relation extraction for Knowledge Graph-based question answering. He received his PhD from the University of Edinburgh, where he studied the underlying structure of syntactic categories across languages. Before joining Amazon, he was a postdoctoral researcher at the University of Illinois, working on constraint-based inference for semantic role labeling and on psycholinguistic models of language acquisition. He is an editor of the Northern European Journal of Language Technology and an area chair for a number of *CL conferences, and he was the general chair of the 2021 Truth and Trust Online conference.

Title
Responsible AI in the era of Large Language Models

Abstract
Large Language Models are now ubiquitous and, since the release of ChatGPT last November, are no longer just an academic curiosity. As LLMs become part of products used daily by millions of people, there is increased urgency to ensure that these models are developed and operated responsibly. In this talk, I am going to discuss what Responsible AI (RAI) looks like in this new era, how RAI is practiced in an industry setting, and how it is influenced by, and in turn inspires, foundational research into RAI topics. I will talk about two recently published projects from my team that cover two of the many topics associated with RAI. Looking at Fairness, I will present TANGO, a new dataset that measures Transgender and Nonbinary biases in open language generation. In the area of Privacy, I will present a method for controlling the memorisation of potentially sensitive training data through prompt tuning. I will conclude with a look at the use of such RAI research in practice and with examples of RAI mitigation strategies for production-ready LLMs.