CEUR Proceedings are Online!

DAY 1: Monday, November 6th, 2023 – Room DPT

14:00-14:15 CET

14:00-14:15 – Opening DAY 1: Welcome to NL4AI 2023!

14:15-15:00 CET

14:15-15:00 – Invited talk: Raffaella Bernardi (University of Trento, Italy) – “The interplay between language generation and reasoning: Information Seeking games”

Abstract: Large Language Models, and ChatGPT in particular, have recently grabbed the attention of the community and the media. Now that they have reached high language proficiency, attention has been shifting toward their reasoning capabilities. It has been shown that ChatGPT can carry out some simple deductive reasoning steps when provided with a series of facts from which it is tasked to draw inferences. In this talk, I am going to argue for the need for models whose language generation is driven by an implicit reasoning process. To support my claim, I will present our evaluation of ChatGPT on the 20-Questions game, traditionally used within the Cognitive Science community to inspect the development of information-seeking strategies. This task requires a series of interconnected skills: asking informative questions, stepwise updating the hypothesis space by computing some simple deductive reasoning steps, and ceasing to ask questions once enough information has been collected. Thus, it is a perfect testbed to monitor the interplay of language and reasoning in LLMs, shed light on their strengths and weaknesses, and lay the groundwork for models that think while speaking.

15:00-16:00 CET

SESSION 1: Novel directions of NLP
Chair: Dominique Brunato

15:00-15:15 – Jia Cheng Hu, Roberto Cavicchioli, Giulia Berardinelli and Alessandro Capotondi. Heterogeneous Encoders Scaling In The Transformer For Neural Machine Translation [paper]

15:15-15:30 – Stefano Scotta and Alberto Messina. Experimenting Task-Specific LLMs [paper]

15:30-15:45 – Luca Bacco, Felice Dell’Orletta and Mario Merone. Natural Language Processing in Healthcare: a Bird’s Eye View [paper]

15:45-16:00 – Simona Mazzarino, Andrea Minieri and Luca Gilli. NERPII: a Python library to perform Named Entity Recognition and generate Personal Identifiable Information [paper]

16:00-16:30 CET

16:00-16:30 – Coffee Break

16:30-18:00 CET

SESSION 2: Authorship, sentiment, and ideology in NLP
Chair: Alan Ramponi

16:30-16:45 – Helene F. L. Eriksen, Christopher M. J. André, Emil J. Jakobsen, Luca C. B. Mingolla and Nicolai B. Thomsen. Detecting AI Authorship: Analyzing Descriptive Features for AI detection [paper]

16:45-17:00 – Soumick Chatterjee, Sowmya Prakash and Andreas Nürnberger. Flavours of Convolution for Unsupervised Aspect Extraction and Aspect-based Sentiment Analysis [paper]

17:00-17:15 – Loris Di Quilio and Fabio Fioravanti. Evaluating the Aspect-Category-Opinion-Sentiment analysis task on a custom dataset [paper]

17:15-17:30 – Franco Demarco, Juan Manuel Ortiz de Zarate and Esteban Feuerstein. Measuring ideological spectrum through NLP [paper]

17:30-17:45 – Shivatmica Murgai. From Looks to Essence: A Shift in Perspective with Physical Appearance Debiasing [paper]

17:45-18:00 – Mostafa Rahgouy, Hamed Babaei Giglou, Dongji Feng, Taher Rahgooy, Gerry Dozier and Cheryl D. Seals. Navigating the Fermi Multiverse: Assessing LLMs for Complex Multi-hop Queries [paper]

DAY 2: Tuesday, November 7th, 2023 – Room DPT

11:00-11:15 CET

11:00-11:15 – Opening DAY 2: Welcome back to NL4AI 2023!

11:15-12:00 CET

11:15-12:00 – Invited talk: Christos Christodoulopoulos (Amazon, UK) – “Responsible AI in the era of Large Language Models”

Abstract: Large Language Models are now ubiquitous and, since the release of ChatGPT last November, are no longer an academic curiosity. As LLMs become part of products used daily by millions of people, there is an increased urgency to ensure that these models are developed and operated responsibly. In this talk, I am going to discuss what Responsible AI (RAI) looks like in this new era, how RAI is practiced in an industry setting, and how it is influenced by and inspires foundational research into RAI topics. I will talk about two recently published projects from my team that cover two of the many topics associated with RAI. Looking at Fairness, I will present TANGO, a new dataset that measures Transgender and Nonbinary biases in open language generation. In the area of Privacy, I will present a method for controlling the memorisation of potentially sensitive training data through prompt tuning. I will conclude with a look at the use of such RAI research in practice and examples of RAI mitigation strategies for production-ready LLMs.

12:00-13:00 CET

SESSION 3: Applied NLP
Chair: Elisa Bassignana

12:00-12:15 – Kabir Manandhar Shrestha, Katie Wood, David Goodman and Meladel Mistica. Do we need Subject Matter Experts? A Case Study of Measuring Up GPT-4 Against Scholars in Topic Evaluation [paper]

12:15-12:30 – Nicola Arici, Luca Putelli, Alfonso Emilio Gerevini, Luca Sigalini and Ivan Serina. LLM-based Approaches for Automatic Ticket Assignment: A Real-World Italian Application [paper]

12:30-12:45 – Andrea Gatti, Viviana Mascardi and Domenico Pellegrini. Mining Information from Legal Sentences in KlonDikE [paper]

12:45-13:00 – Monica Consolandi, Simone Magnolini and Mauro Dragoni. Misunderstanding and Risk Communication in Healthcare [paper]

13:00-14:00 CET

13:00-14:00 – Lunch Break

14:00-15:40 CET

SESSION 4: Selected papers
Chair: Marco Polignano

14:00-14:20 – Claudiu Daniel Hromei, Daniele Margiotta, Danilo Croce and Roberto Basili. An End-to-end Transformer-based Model for Interactive Grounded Language Understanding [paper]

14:20-14:40 – Irene Siragusa and Roberto Pirrone. Conditioning ChatGPT for information retrieval: the Unipa-GPT case study [paper]

14:40-15:00 – Amaury Fierens and Sébastien Jodogne. BERTinchamps: Cost-Effective Training of Large Language Models for Medical Tasks in French [paper]

15:00-15:20 – Roberto Zamparelli. One picture and a thousand words. Generative language+images models and how to train them [paper]

15:20-15:40 – Panel discussion among selected papers’ authors

15:40-15:55 CET

15:40-15:55 – Closing remarks