To what extent can an artificial system be rational?
A new MIT course, 6.S044/24.S00 (AI and Rationality), does not try to answer this question. Instead, it challenges students to explore this and other philosophical problems through the lens of AI research. For the next generation of scholars, the concepts of rationality and agency may prove integral to AI decision making, especially when influenced by how humans understand their cognitive limitations and their constrained, subjective ideas about what is or is not rational.
This inquiry is rooted in a deep relationship between computer science and philosophy, two disciplines that have long collaborated on the study of forming rational beliefs, learning from experience, and making rational decisions in pursuit of one's goals.
“You would imagine that computer science and philosophy are very far apart, but they’ve always intersected. The technical parts of philosophy actually overlap with AI, especially with early AI,” says course instructor Leslie Kaelbling, Panasonic Professor of Computer Science and Engineering at MIT, who points to Alan Turing, both a computer scientist and a philosopher. Kaelbling herself has a bachelor’s degree in philosophy from Stanford University; computer science, she notes, was not available as a major at the time.
Brian Hayden, a professor in the Department of Linguistics and Philosophy who holds a shared position with the Department of Electrical Engineering and Computer Science (EECS) in the MIT Schwarzman College of Computing, co-teaches the course with Kaelbling. He agrees that the two disciplines are more aligned than people imagine, adding that “the differences are in emphasis and perspective.”
Tools for further theoretical thinking
Kaelbling and Hayden created AI and Rationality, first offered in fall 2025, as part of Common Ground for Computing Education, a cross-cutting initiative in the MIT Schwarzman College of Computing that brings together multiple departments to develop and teach new courses and launch new programs that blend computing with other disciplines.
With over two dozen students registered, AI and Rationality is one of two Common Ground classes based on philosophy, the other being 6.C40/24.C40 (Ethics of Computing).
While Ethics of Computing explores concerns about the social impacts of rapidly advancing technology, AI and Rationality examines the contested definition of rationality through several components: the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the imposition of beliefs and desires on artificial systems.
Because AI is extremely broad in its applications and each use case raises different issues, Kaelbling and Hayden brainstormed topics that could spark useful discussion and draw connections between the approaches of computer science and philosophy.
“When I work with students studying machine learning or robotics, it’s important that they step back a bit and examine the assumptions they are making,” says Kaelbling. “Thinking about things from a philosophical perspective helps people better understand their work and how to present it in a real context.”
Both instructors emphasize that the course does not provide concrete answers to the question of what it means to engineer a rational agent.
Hayden says, “I see the curriculum as building their foundation. We’re not giving them a set of theories to learn and memorize and then apply. We’re equipping them with the tools to think critically about things as they go into their chosen careers, whether they’re in research or industry or government.”
The rapid progress of AI also presents a new set of challenges in the education sector. Kaelbling sees predicting what students might need to know five years from now as an impossible task. She says, “What we need to do is give them tools at a higher level – habits of mind, ways of thinking – that will help them access what we can’t really predict right now.”
Blending disciplines and questioning assumptions
So far, the class has attracted students from a wide variety of disciplines – from those firmly rooted in computing to others interested in learning how AI intersects with their fields of study.
Throughout the semester’s readings and discussions, students grappled with different definitions of rationality and the ways those definitions run counter to assumptions in their own fields.
On what surprised her about the course, Amanda Paredes Riobu, a senior in EECS, says, “We are taught that mathematics and logic are the gold standard of truth. This class showed us many examples of humans acting inconsistently with these mathematical and logical frameworks. It opened up this whole question: Are these humans irrational? Are the machine learning systems we design irrational? Or is it just the math and logic?”
Junior Okoroafor, a PhD student in the Department of Brain and Cognitive Sciences, appreciated the challenges of the class and the ways the definition of a rational agent can change depending on the discipline. “Representing what each field means by rationality in a formal framework made it clear which assumptions were shared, and which differed across fields.”
Like all Common Ground efforts, the co-teaching, collaborative structure of the course gave students and instructors the opportunity to hear diverse perspectives in real time.
For Paredes Riobu, this is her third Common Ground course. She says, “I really like the interdisciplinary aspect. There’s always a good mix of the theoretical and the practical, because the courses need to cut across different areas.”
According to Okoroafor, Kaelbling and Hayden demonstrated a clear synergy between the fields; he says it felt as if the instructors were connecting and learning right along with the class. Seeing how computer science and philosophy can inform each other helped students appreciate the disciplines’ similarities and gain invaluable perspectives on interconnected issues.
He adds, “Philosophy also has a way of surprising you.”