Bennington Center for Artificial Intelligence
How do we build a future worth wanting?
AI is poised to affect the most human of activities: learning, creating, playing, and working. How should we respond? The Bennington Center for Artificial Intelligence (BCAI) exists to bring reasoned consideration to questions the tech industry ignores: who benefits, what gets lost, and whether we even want the future on offer.
Our approach
BCAI is not here to sell you on AI or train you to use whatever tools Silicon Valley wants to push next. Instead, we draw on Bennington’s Liberal Arts tradition to ask hard questions:
- Who benefits from a particular AI system, and who pays the costs?
- What values get embedded in how these systems are built?
- When should we refuse to use AI because the tradeoffs aren’t justified?
- How might we build alternatives that actually serve human goals?
- Under what conditions should we embrace what AI offers?
We examine AI through three related lenses
Core: Technical Understanding
Building technical literacy, not to make everyone a programmer, but so that we understand how AI actually works. You can’t evaluate what you don’t understand.
Exploration: Critical Inquiry
Examining AI through philosophy, history, law, and social science. What does responsible AI look like when training data gets scraped without consent? How do these systems reproduce inequalities? What legal frameworks might protect public interests?
Application: Creative and Responsible Use
Developing frameworks for when and how to use AI, always with human judgment at the center. There is a difference between delegating busywork and abdicating thinking. We should explore the cases where AI can serve as an effective collaborator, and recognize when not to use it at all.
What concerns us
AI threatens to replay social media’s arrival: a large-scale social experiment conducted with little regard for the harms it causes individuals and societies. AI promises far greater benefits than social media ever could, but the current mode of delivery is for tech companies to build a new model and simply throw it over the fence.
- Data gets scraped to train AI models without consent from content creators. Writers, artists, musicians, and researchers have their work fed into models they never agreed to support. The gains flow upward to a handful of corporations, while the costs get borne by everyone else.
- AI systems that are becoming embedded in our lives encode specific values: generally, whatever sustains hype cycles and shareholder returns takes priority over any consideration of human flourishing. As a few tech companies accrue enormous power, they make decisions with little accountability.
- Students are using AI to short-circuit the learning process: drafting essays they didn’t write, solving problems they didn’t work through. Intellectual development requires friction, getting stuck and struggling through, and AI is eliminating that friction by offering an easy way out.
- Meanwhile, even though nobody wants to live next to a giant datacenter, municipalities are falling all over themselves to attract the billions of dollars up for grabs in investment. AI is consuming massive amounts of energy and water and displacing jobs, and nobody has a plan for supporting those who will suffer.
These aren’t hypothetical problems. They’re happening now. And they point to why we need alternatives that actually serve human goals, and frameworks for thinking through the benefits and harms that AI brings.
What we’re building
We’re creating space to grapple with these problems directly. That means:
- Building technical literacy so you can understand how AI systems actually work: what they can and can’t do, whose values get encoded, where the real costs lie.
- Examining power and accountability through history, philosophy, and law.
- Developing frameworks that help you think through when AI helps and when it harms, when to use it and when to refuse.
This isn’t about reaching consensus or finding easy answers.
Members of our community have conflicting views. Some see creative potential, others see exploitation; some want to experiment, others want nothing to do with it. All these perspectives belong in the conversation.
BCAI will host open forums where you can ask hard questions, workshops that make space for skepticism alongside experimentation, and a speaker series that brings together technologists, artists, researchers, and critics.
We’re also supporting faculty who want to think seriously about AI in their teaching and research. That includes developing courses that can’t be short-circuited by AI, exploring how these technologies affect different disciplines, and figuring out which tradeoffs are worth it, and when. Our goal isn’t to manufacture consensus but to spark informed debate about which future we should choose.
Programming in Year One
February 2026
Workshops to support faculty who want to explore AI and to integrate aspects of AI literacy into select Fall 2026 classes.
March 2026
Launch of the monthly BCAI Speaker Series, which brings a diverse range of speakers to campus to examine how we are using AI.
Summer 2026
Workshops for those who want to develop AI literacy, with a view to understanding how, when, and when not to use AI.
September 2026
Initial offering of a set of Bennington courses, each dedicated to exploring some aspect of AI. The slate currently includes AI: Pixels, Prompts, and Power, an introduction to AI from a technological and philosophical perspective.
October 2026
Regional symposium bringing together scholars, artists, policymakers, and technologists to evaluate AI critically and to think about where we are headed.
Our commitments
BCAI is committed to taking human agency and creativity seriously, even in a world increasingly suffused with AI. We intend to question narratives that treat the adoption of AI as inevitable, and to provide a space for dissent alongside experimentation. The Liberal Arts are at their best when they grapple with thorny problems that have no easy answers. We are here to support that work.
Get involved
This work belongs to all of us.
Students
Come to the open forums. Sit on the Advisory Board. Participate in workshops.
Faculty
Explore how AI affects your discipline. Develop courses that engage critically. Collaborate on research.
Community
Attend the speaker series. Share your expertise. Learn about how this technology is changing the world.
About BCAI
BCAI is supported by an advisory board drawn from across Bennington College’s discipline groups, with both faculty and staff representatives. Our funding does not come from the college budget.