Upcoming Events
No upcoming events at this time.
Past Events
Faculty Workshop: Teaching in the Age of AI
A two-day faculty workshop was held February 10–11, 2026. Designed for Bennington faculty across all disciplines, it offered hands-on exploration of how generative AI is reshaping teaching, learning, and academic integrity. No prior expertise was required.
The workshop moved from understanding to application. Day 1 focused on experimenting with generative AI tools, crafting effective prompts, and early thinking about how AI might fit into course offerings. Day 2 shifted to academic integrity, policy drafting, course redesign, and ethics, giving faculty concrete tools to bring back to their classrooms.
First Year Forum Workshop: Artificial Intelligence and the Liberal Arts
This is a First Year Forum workshop, led by computer science faculty member Darcy Otto. We begin by examining how artificial intelligence is already embedded in everyday life, reflecting on our own recent interactions and considering how much of AI’s presence is visible versus invisible.
We then turn to what AI actually is, describing neural networks as layered systems of simple computational units trained by adjusting their weights. By connecting this basic structure to the scale of modern systems, we consider how contemporary AI operates without appealing to notions of human-like understanding.
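To make that description concrete, here is a minimal sketch in Python (an illustration for this page, not part of the workshop materials): a network of simple units arranged in two layers, each computing a weighted sum of its inputs passed through a nonlinearity, trained by repeatedly nudging its weights to reduce error on a toy task.

```python
import numpy as np

# A tiny network in the spirit described above: layers of simple units,
# each passing a weighted sum of its inputs through a nonlinearity.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # weights: 2 inputs -> 4 hidden units
W2 = rng.normal(size=(4, 1))   # weights: 4 hidden units -> 1 output unit

# XOR is a classic toy task a single layer cannot solve,
# which is exactly why the layering matters.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: activity flows layer by layer.
    h = np.tanh(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: measure how the error changes with each weight...
    delta = (out - y) * out * (1 - out)
    grad_W2 = h.T @ delta
    grad_W1 = X.T @ ((delta @ W2.T) * (1 - h ** 2))

    # ...and "train" by nudging every weight a small step downhill.
    W1 -= 0.5 * grad_W1
    W2 -= 0.5 * grad_W2

print(out.round(2))  # typically close to [[0], [1], [1], [0]]
```

Scaling this same recipe from a handful of weights to billions is, in broad strokes, how contemporary systems are built.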
Can We Attribute Beliefs and Desires to AIs? A Radical Approach
Free and open to the public!
When we interact with other people, we constantly attribute beliefs and desires to them to explain and predict what they do. Can we do the same with AI systems like ChatGPT and Claude?
Philosophers have long studied how we interpret the minds of others, and their tools turn out to be surprisingly relevant to understanding modern AI. But AI systems also present an unusual case: unlike with other people, we can crack open an AI system and look at its internal workings, yet we often struggle to make sense of what we find.
Standards for Belief Representations in AIs
As AIs in the form of large language models (LLMs) continue to demonstrate remarkable abilities across various domains, computer scientists are developing methods to understand their cognitive processes, particularly concerning belief representation. However, the field currently lacks a unified theoretical foundation to underpin the study of belief in LLMs.
I will present work that begins to fill this gap by proposing adequacy conditions for a representation in an LLM to count as belief-like. Drawing from insights in philosophy and contemporary practices of machine learning, I’ll establish criteria informed by theoretical considerations and practical constraints. These conditions help lay the groundwork for a comprehensive understanding of belief representation in LLMs.
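For a rough sense of what studying belief representation can look like in practice, here is a hedged Python sketch of one common technique, a linear probe (an illustration, not necessarily the speaker’s approach): a simple classifier trained to read a candidate belief signal out of a model’s hidden activations. The activations below are synthetic stand-ins so the example is self-contained.

```python
import numpy as np

# Illustrative only: a linear "probe" trained to recover a candidate belief
# signal from hidden activations. These activations are synthetic stand-ins;
# in real work they would come from an LLM's internal layers.
rng = np.random.default_rng(1)
n, d = 400, 64
belief_axis = rng.normal(size=d)        # pretend direction encoding "treated as true"
labels = rng.integers(0, 2, size=n)     # 1 = statement the model treats as true
acts = rng.normal(size=(n, d)) + np.outer(labels - 0.5, belief_axis)

# Logistic-regression probe fit with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(acts @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.1 * acts.T @ (p - labels) / n
    b -= 0.1 * float(np.mean(p - labels))

accuracy = np.mean(((acts @ w + b) > 0) == labels)
print(f"probe accuracy: {accuracy:.2f}")  # well above chance if a belief-like direction exists
```

A probe succeeding is evidence that *some* belief-like information is linearly readable, but whether that counts as the model *believing* anything is exactly the kind of question the adequacy conditions are meant to settle.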
Fireside Chat: A conversation about AI with Bennington Alumni Beth Kanter and Adnan Iftekhar
Join Darcy Otto, faculty in computer science and Director of BCAI, and Bennington alumni Beth Kanter ’79 and Adnan Iftekhar ’97 for a community conversation on how to approach artificial intelligence in ways that are human.
Drawing on Beth’s leadership in human-centered technology adoption and Adnan’s experience with AI across higher education and the corporate sector, the discussion will explore both the opportunities and the tensions AI introduces into educational and creative practice. Topics will include cognitive benefits and risks, ethical and environmental considerations, and the crucial distinction between using AI with our “brains on” versus “brains off.”
Bias, Values, and Verification in AI
Free and open to the public!
Is this company’s AI model biased? Are its predictions reliable? Are they using my data responsibly? As AI is deployed in sensitive applications, it is increasingly important to audit models to ensure they uphold societal values. However, AI service providers almost never release their models or data for auditing due to intellectual property and data privacy issues.
My work aims to address this tension through privacy-preserving cryptographic “contracts” that bind service providers’ models. These contracts use zero-knowledge proofs and other cryptographic tools to guarantee that (i) the model satisfies an important property such as group fairness, robustness, or differential privacy; and (ii) outside parties can view the contract to verify whether the model has the property, yet learn no information about the model parameters or data by doing so.
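To give a feel for the interface such a contract exposes, here is a toy Python sketch. Only the hash commitment is real cryptography; the zero-knowledge proof is a labeled placeholder, since actual constructions require specialized proof systems, and nothing here reflects the speaker’s specific implementation.

```python
import hashlib
import json

def commit(model_params: bytes) -> str:
    """Provider publishes a binding hash commitment; the parameters
    themselves never leave the provider."""
    return hashlib.sha256(model_params).hexdigest()

def prove_property(model_params: bytes, prop: str) -> dict:
    """Stand-in for a zero-knowledge proof that the committed model
    satisfies `prop` (e.g. group fairness) while revealing nothing else."""
    return {"property": prop, "proof": "<zk proof bytes would go here>"}

def verify(commitment: str, contract: dict) -> bool:
    """Auditor checks the proof against the public commitment alone. Here we
    merely accept the placeholder; a real verifier runs the proof system's
    checking algorithm and learns only whether the property holds."""
    return bool(contract["proof"]) and len(commitment) == 64

params = b"model weights: 0.12, -0.98, ..."   # stays private
c = commit(params)
contract = prove_property(params, "group fairness (demographic parity)")
print(json.dumps({"commitment": c, "verified": verify(c, contract)}, indent=2))
```

The design point is the separation of roles: the provider commits once, proves properties about the committed model, and auditors verify using only public material.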
From Epistemics to Practice: AI Self-Reports and the Limits of Testimony
Free and open to the public!
When Claude says it finds a problem interesting, or ChatGPT expresses uncertainty about an answer, we typically assume nothing is really going on behind those words. But what justifies that confidence? And what do we do if we can’t actually settle the question? This talk explores the epistemic challenges of AI self-reports through an unexpected lens: documented human conditions where the link between experience and testimony breaks down. Depersonalization, alexithymia, and related clinical phenomena reveal that even in humans, having experiences and being able to report on them can come apart.
AI systems, shaped by training processes we designed, present a particularly acute version of this puzzle. Rather than waiting for metaphysical certainty, I’ll argue that what we need most is practical wisdom: frameworks for ethical engagement that don’t depend on first resolving what’s happening inside these systems.