Science Workshop

Standards for Belief Representations in AIs

Featured Event
Date & Time
Friday, March 13, 2026
1:00 PM - 2:00 PM
Location
Dickinson 232
Speaker
Daniel Herrmann, PhD

As AI systems based on large language models (LLMs) continue to demonstrate remarkable abilities across various domains, computer scientists are developing methods to understand their cognitive processes, particularly how they represent beliefs. However, the field currently lacks a unified theoretical foundation to underpin the study of belief in LLMs.


I will present work that begins to fill this gap by proposing adequacy conditions for a representation in an LLM to count as belief-like. Drawing on insights from philosophy and from contemporary machine learning practice, I will establish criteria informed by both theoretical considerations and practical constraints. These conditions help lay the groundwork for a comprehensive understanding of belief representation in LLMs.

About the Speaker

Daniel Herrmann, PhD

Daniel Herrmann is a decision theorist, formal epistemologist, and philosopher of AI. He develops mathematical and computational models of optimal reasoning and learning, with an eye towards understanding artificial agents, as well as agents who reason about themselves and how they are embedded in their world. Some of Daniel’s recent work investigates belief-like representations in large language models, as well as how one should use evidence in decision making. He also uses evolutionary models to explain how conventions and meaningful linguistic systems emerge in populations. Daniel completed his doctoral degree in Logic and Philosophy of Science at the University of California, Irvine, and did his postdoctoral research at the University of Groningen.