Term: Fall 2025
Offered Under: 6.S044 and 24.S00
Instructors: Professors Brian Hedden (Philosophy and EECS) and Leslie Kaelbling (EECS)

How can we design artificial systems to be rational agents, capable of learning about the world and pursuing goals in sensible ways? And what can AI research, where computational and memory limitations are front and center, tell us about human rationality? This course presents theories of “ideal” rationality while examining how those theories demand capacities (e.g., instantaneous probability updating and logical omniscience) that are unattainable for humans and intractable for computational systems. Topics include Bayesian probability, the relation between belief and probability, expected utility theory, sequential decision-making under uncertainty, belief and goal inference, and multi-agent settings.
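
To give a flavor of the formal machinery involved, here is a minimal sketch of two of the listed topics: a single Bayesian update over two hypotheses about a coin, followed by an expected-utility choice between two bets under the resulting posterior. This is not course material; every number, hypothesis name, and bet is invented for illustration.

```python
# Illustrative only: a Bayes update over two made-up hypotheses about a coin,
# then an expected-utility choice between two hypothetical bets on the next flip.

def bayes_update(prior: dict[str, float], likelihood: dict[str, float]) -> dict[str, float]:
    """Return the posterior P(h | e) given a prior P(h) and likelihoods P(e | h)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior: the coin is either fair or biased toward heads (illustrative numbers).
prior = {"fair": 0.5, "biased": 0.5}
# Likelihood of observing heads under each hypothesis.
likelihood_heads = {"fair": 0.5, "biased": 0.9}

# Posterior after observing a single heads.
posterior = bayes_update(prior, likelihood_heads)

# Posterior predictive probability of heads on the next flip.
p_heads = sum(posterior[h] * likelihood_heads[h] for h in posterior)

# Expected utility of each bet: win 1 if right, lose 1 if wrong.
utilities = {
    "bet_heads": p_heads * 1.0 + (1 - p_heads) * -1.0,
    "bet_tails": p_heads * -1.0 + (1 - p_heads) * 1.0,
}
best_action = max(utilities, key=utilities.get)
print(posterior, utilities, best_action)
```

Even this toy update enumerates every hypothesis explicitly; the “instantaneous probability updating” the description flags as an idealization becomes intractable as the hypothesis space grows.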
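
Sequential decision-making under uncertainty is standardly formalized as a Markov decision process. The sketch below runs value iteration on a two-state MDP whose states, actions, rewards, and discount factor are all illustrative assumptions rather than course content.

```python
# Illustrative only: value iteration on a tiny, made-up two-state MDP.
# States: "low", "high". Actions: "wait", "work".
# transitions[s][a] = list of (next_state, probability); rewards[s][a] = immediate reward.
transitions = {
    "low":  {"wait": [("low", 1.0)],                "work": [("high", 0.7), ("low", 0.3)]},
    "high": {"wait": [("high", 0.9), ("low", 0.1)], "work": [("high", 1.0)]},
}
rewards = {
    "low":  {"wait": 0.0, "work": -1.0},
    "high": {"wait": 2.0, "work": 1.0},
}
gamma = 0.9  # discount factor

# Repeated Bellman backups until the value function stops changing appreciably.
V = {s: 0.0 for s in transitions}
for _ in range(1000):
    V_new = {
        s: max(
            rewards[s][a] + gamma * sum(p * V[s2] for s2, p in transitions[s][a])
            for a in transitions[s]
        )
        for s in transitions
    }
    converged = max(abs(V_new[s] - V[s]) for s in V) < 1e-9
    V = V_new
    if converged:
        break

print(V)  # optimal state values under the discounted criterion
```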