On Trustworthiness of Large Language Models

The past few years have witnessed significant leaps in the capabilities of Large Language Models (LLMs). Today's LLMs can perform a variety of tasks, such as summarization, information retrieval, and even mathematical reasoning, with impressive accuracy. What is even more impressive is their ability to follow natural language instructions without needing dedicated training datasets. However, issues such as bias, hallucinations, and a lack of transparency remain a major impediment to the wide adoption of these models. In this talk, I will review how we got from “traditional NLP” to today’s LLMs and discuss some of the reasons behind the trustworthiness issues surrounding them. I will then focus on a single issue, hallucinations in factual question answering, and show how artifacts associated with model generations can provide hints that a generation contains a hallucination.

At the colloquium, our fellows, working group shadows, and invited speakers regularly present their research projects. Guests are welcome. To help us plan, please register by sending an email to kolleg@cais-research.de.
For directions, please see Contact.