5 March |
|
9:30 – 10:00 | Registration |
10:00 – 10:30 | Welcome |
10:30 – 12:00 | Overview lecture: Conversational AI (Prof. Günter Neumann, DFKI) Download slides |
12:00 – 13:00 | Lunch break |
13:00 – 14:30 | Keynote: Prof. Milica Gašić (University of Düsseldorf)
Continually learning Conversational AI (see abstract) Large language models have achieved impressive performance across the NLP task spectrum and even appear to have superhuman conversational capabilities. However, they suffer from hallucinations, lack of transparency, and a limited ability to improve over time without needing full and rather expensive retraining. Modular dialogue systems, on the other hand, offer an interpretable underlying state and action space and a set-up for long-term reward optimisation via reinforcement learning. This talk will explain the steps taken towards building a continually learning task-oriented dialogue system consisting of a dynamic policy model, a data-driven user simulator, and a challenging environment in which to study the ability of the system to learn in a world that is continuously changing. |
14:30 – 15:00 | Coffee break |
15:00 – 15:30 | Lightning presentations |
15:30 – 17:00 | Keynote: Prof. Mikel L. Forcada (University of Alicante)
Large, neural probabilistic language models (see abstract) Many call them “artificial intelligence”, but large language models (LLMs) such as ChatGPT or Google Bard are basically very large probabilistic language models that have first been trained to generate a continuation for a text prompt and then fine-tuned to behave as conversational models providing acceptable, useful responses. Starting from traditional n-gram models, this talk will move on to neural LLMs, discussing their neural architecture, how text is represented in them, and how they are trained and fine-tuned. Finally, it will briefly present existing language models and discuss their availability and usage rights, as well as some of the ethical and environmental issues associated with training and using them. (A toy n-gram sketch follows below, after this day's schedule.) |
17:00 – 19:00 | Posters and Reception |
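
Illustration for the Forcada keynote above — a minimal sketch of the "traditional n-gram" starting point mentioned in the abstract: a bigram model that estimates P(next word | previous word) from raw counts and uses it to continue a prompt. The toy corpus and function names are ours for illustration, not material from the talk.

    # Toy bigram language model: estimate P(w2 | w1) by relative frequency
    # and sample a continuation for a one-word prompt.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # counts[w1][w2] = how often w2 follows w1 in the corpus
    counts = defaultdict(Counter)
    for w1, w2 in zip(corpus, corpus[1:]):
        counts[w1][w2] += 1

    def next_word(prev):
        """Sample the next word from the estimated conditional distribution."""
        options = counts.get(prev)
        if not options:
            return random.choice(corpus)  # crude back-off for unseen contexts
        words, freqs = zip(*options.items())
        return random.choices(words, weights=freqs)[0]

    prompt = ["the"]
    for _ in range(6):
        prompt.append(next_word(prompt[-1]))
    print(" ".join(prompt))

Neural LLMs replace these count-based conditional probabilities with a parameterised network that predicts the next token from a learned representation of the whole preceding context.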
6 March |
|
10:00 – 10:30 | Takeaways from poster session |
10:30 – 12:00 | Overview lecture: Low-Resource NLP (Dr. Simon Ostermann, DFKI) Download slides |
12:00 – 13:00 | Lunch break |
13:00 – 14:30 | Overview lecture: Machine Translation (Prof. Josef van Genabith, DFKI) |
14:30 – 15:00 | Coffee break |
15:00 – 17:30 | Hands-on Lab: Low-Resource NLP with Adapters and Prompts
(Dr. Simon Ostermann, Tatiana Anikina, DFKI) Link to the tutorial material |
18:30 | Social Dinner |
7 March |
|
10:00 – 11:00 | Tanja Bäumel (DFKI)
A short introduction to Explainable AI (see abstract) The rise of deep learning in AI has dramatically increased the performance of models across many sub-fields such as natural language processing or computer vision. In the last 5 years, large pretrained language models (LLMs) and their variants (BERT, ChatGPT, etc.) have changed the NLP landscape drastically. Such models have grown larger and larger over the last years, reaching increasingly impressive performance peaks, sometimes even surpassing humans. A central issue with deep learning models with millions or billions of parameters is that they are essentially black boxes: from the model’s parameters, it is not inherently clear why a model exhibits a certain behavior or makes a certain classification decision. Understanding the inner workings of such large models is, however, extremely important, especially when AI takes on critical tasks, e.g. in the medical or financial domain. Trustworthiness and fairness are important dimensions that such large models should adhere to, yet they are often not taken into account. In this talk, we will try to shine a spotlight on the rapidly growing field of interpretable and explainable AI (XAI), which develops methods to peek into the black box that LLMs are. I will focus on general methods used in XAI and touch on insights about BERT and its variants gained from applying these methods. (An illustrative perturbation-based sketch follows below, after this day's schedule.) |
11:00 – 12:00 | Prof. Josef van Genabith (DFKI)
Measuring Spurious Correlation in Classification: “Clever Hans” in Translationese – joint work with Angana Borah, Daria Pylypenko, and Cristina España i Bonet (see abstract) Recent work has shown evidence of “Clever Hans” behavior in high-performance neural translationese classifiers, where BERT-based classifiers capitalize on spurious correlations, in particular topic information, between data and target classification labels, rather than on genuine translationese signals. Translationese signals are subtle (especially for professional translation) and compete with many other signals in the data such as genre, style, author and, in particular, topic. This raises the general question of how much of the performance of a classifier is really due to spurious correlations in the data versus the signals actually targeted by the classifier, especially for subtle target signals and in challenging (low-resource) data settings. We focus on topic-based spurious correlation and approach the question from two directions: (i) where we have no knowledge about spurious topic information and its distribution in the data, and (ii) where we have some indication about the nature of spurious topic correlations. For (i) we present a novel measure capturing the alignment of unsupervised topics with target classification labels as an indication of spurious topic information in the data and, based on this, propose a “topic floor” (as in a “noise floor”) for classification. For (ii) we investigate masking of known spurious topic carriers in classification. Both (i) and (ii) contribute to quantifying spurious correlations, and (ii) to mitigating them. (A generic topic–label alignment sketch follows below, after this day's schedule.) |
12:00 – 13:00 | Lunch break |
13:00 – 14:30 | Keynote: Mikel Artetxe (Reka)
Revisiting Cross-Lingual Transfer Learning (see abstract) Given downstream training data in one language (typically English), the goal of cross-lingual transfer learning is to perform the task in another language. Existing approaches have been broadly classified into three categories: zero-shot (fine-tune a multilingual language model on English and transfer zero-shot to the target language), translate-train (translate the training data into the target language through machine translation and fine-tune a multilingual language model), and translate-test (translate the evaluation data into English through machine translation and use an English model). In this talk, we will critically revisit some of the fundamentals of this problem, with a special focus on the interaction between cross-lingual transfer learning and machine translation. (A structural sketch of the three set-ups follows below, after this day's schedule.) |
14:30 – 18:00 | Excursion to the World Heritage Site Völklinger Hütte |
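
Illustration for Tanja Bäumel's XAI talk above — a toy example of one family of XAI methods (perturbation/occlusion-based attribution): remove each input token in turn and record how much the model's score changes. The "model" here is a hand-made bag-of-words sentiment scorer, chosen only so the example is self-contained; it is not one of the methods or models discussed in the talk.

    # Occlusion-based attribution on a toy bag-of-words "classifier":
    # the attribution of a token is the drop in score when it is removed.
    weights = {"great": 2.0, "good": 1.0, "boring": -1.5, "terrible": -2.5}

    def score(tokens):
        """Stand-in model: sum of per-word sentiment weights (0 for unknown words)."""
        return sum(weights.get(t, 0.0) for t in tokens)

    def occlusion_attributions(tokens):
        base = score(tokens)
        return [base - score(tokens[:i] + tokens[i + 1:]) for i in range(len(tokens))]

    sentence = "the plot was boring but the acting was great".split()
    for token, contrib in zip(sentence, occlusion_attributions(sentence)):
        print(f"{token:>8}: {contrib:+.1f}")

The same perturb-and-observe recipe underlies many black-box explanation methods; for real LLMs the perturbations and the scoring are of course far more involved.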
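
Illustration for the van Genabith talk above — a generic way to ask "how well do unsupervised topics align with the classification labels?", here via normalized mutual information between topic ids and labels. This is not the measure proposed in the talk; it only shows the kind of alignment statistic that can signal a potential "Clever Hans" shortcut. The data and variable names are invented.

    # If unsupervised topics alone largely predict the labels, a classifier can
    # score well by exploiting topic rather than the target (translationese) signal.
    from sklearn.metrics import normalized_mutual_info_score

    topic_ids = [0, 0, 1, 1, 2, 2, 0, 1]   # e.g. LDA / clustering output per document
    labels    = [0, 0, 1, 1, 1, 0, 0, 1]   # 0 = original, 1 = translated

    # Values near 1 indicate strong topic-label alignment (likely spurious correlation);
    # values near 0 indicate topics carry little information about the labels.
    print(normalized_mutual_info_score(labels, topic_ids))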
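
Illustration for the Artetxe keynote above — a structural sketch of the three transfer set-ups named in the abstract. The helpers fine_tune, translate and evaluate are hypothetical placeholders standing in for a real training, machine-translation and evaluation pipeline; only the control flow of each strategy reflects the abstract.

    # Three cross-lingual transfer strategies, shown only as control flow.
    def fine_tune(model, data):       # placeholder for real fine-tuning
        return f"{model} fine-tuned on {len(data)} examples"

    def translate(data, src, tgt):    # placeholder for machine translation
        return [f"[{src}->{tgt}] {x}" for x in data]

    def evaluate(model, data):        # placeholder for task evaluation
        return f"eval({model!r} on {len(data)} examples)"

    en_train = ["en sentence 1", "en sentence 2"]
    de_test  = ["de satz 1", "de satz 2"]

    # 1) Zero-shot: fine-tune a multilingual model on English, evaluate directly on German.
    print(evaluate(fine_tune("multilingual-LM", en_train), de_test))
    # 2) Translate-train: machine-translate the English training data, then fine-tune.
    print(evaluate(fine_tune("multilingual-LM", translate(en_train, "en", "de")), de_test))
    # 3) Translate-test: keep an English model, machine-translate the test data into English.
    print(evaluate(fine_tune("English-LM", en_train), translate(de_test, "de", "en")))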
8 March |
|
10:00 – 12:00 | Hands-on Lab: Machine Translation pt. 1 (Dr. Cristina España i Bonet, DFKI) Tutorial documentation |
12:00 – 13:00 | Lunch break |
13:00 – 15:00 | Hands-on Lab: Machine Translation pt. 2 (Dr. Cristina España i Bonet, DFKI) |
15:00 – 15:30 | Closing remarks |