Group Leader | Department | Project Description | Meeting Times
Dr. Xinlan Emily Hu (Institute for Data, Systems, and Society)
Synergy: How does a team become more than the sum of its parts? Does using AI to complete a task enhance a person’s working capabilities, or does it replace human ingenuity with a merely passable output? Can computational tools help detect negative dynamics in a group before they happen—and prompt its members to adjust course? This SERC group focuses on the notion of “synergy”: when a collection of agents (whether human or AI) are able to achieve more together than they do alone. We’ll begin by surveying the literature on group decision-making and teamwork, including the longstanding debate about whether “two heads are better than one.” We will also cover experimental and computational methods for studying groups: comparing qualitative and quantitative analyses; discussing the pitfalls, challenges, and limitations associated with different methods; and introducing specific research tools, such as the use of natural language processing for analyzing group communication. By the end of the year, participants will have developed an understanding of the conditions that facilitate synergy and will propose their own experiments, analyses, or interventions exploring synergy in a domain of their interest.
Tuesdays, 10:00 – 11:00 AM
Dr. Anna Papp (Economics)
AI for Environmental Data and Policy: This reading group will explore the intersection of artificial intelligence/machine learning and the environment/climate, with a focus on applications (and limitations) relevant to the social sciences. High-resolution environmental data and advances in AI have opened new frontiers in modeling, monitoring, and quantifying environmental change. AI-based climate models are starting to simulate Earth systems at unprecedented speed and scale, while AI/ML models are used to detect and monitor environmental risks that are otherwise difficult or costly to observe. We will examine the growing role of AI/ML in turning environmental data into actionable information: What kinds of environmental and social science questions can these methods help us understand? How can they improve measurement, targeting, or prediction in research and policy? And crucially, what are their limitations and risks? Possible projects include exploring whether AI-based climate modeling can accelerate climate adaptation insights, leveraging AI-based satellite embedding datasets for social science questions, and more.
Tuesdays, 2:00 – 3:00 PM
Dr. Elif Birced (Sloan)
The Age of AI and Platforms: Today, questions surrounding the relationship between work and technology remain as vital as ever. Software companies are developing new methods for connecting buyers and sellers of labor online. The rise of big data and machine learning underlies new forms of artificial intelligence and robotics that could affect nearly every aspect of work and employment. How do software algorithms affect how employers hire and monitor their workforces? How are platform companies like Uber changing what it means to be an employee? How are social media and influencer culture transforming creative industries and beyond? Or will these questions soon be irrelevant when intelligent machines take over our jobs? And what, if anything, can be done about it? In this group, we will discuss these questions and explore how technology reshapes the future of work.

In the fall, participants will master key theories and recent research on the impact of platforms and AI on work. The spring semester will cover methods for researching the effects of technology on work, as well as different project formats. Topics include why people want to engage in platform work, ranging from ride-hailing to content creation; how individuals integrate technological developments into their work processes; and the challenges and opportunities that platforms and AI will bring. Participants will develop individual projects exploring the relationship between technology and their chosen area of work, while also addressing significant questions about the consequences of platform work, companies’ implementation of AI in work processes, and the new opportunities technology brings for workers.
Tuesdays, 2:00 – 3:00 PM
Dr. Erik Sandelin (Department of Urban Studies and Planning)
Design, Technology, Grace: We usually think of design and innovation in terms of making new things; we create through addition and intervention. But, in a world where data-intensive computational technologies increasingly poison our natural and cultural commons, how can designers and developers also cultivate creative acts of withdrawal, foreclosure, and leaving be? How can we decouple action from force and passivity from resignation in the design and development of computational artifacts?

In this group we will employ grace, defined as actively not doing what you are able to do, to explore and populate an emerging landscape of “design and negation”. Together we will collect, analyze and critique moments of grace from fields such as human-computer interaction, product design and architecture. We will discuss when and how it becomes possible to create by not using all the force at your disposal and what opportunities and troubles such design moves can lead to.

Through seminars, hands-on exercises and individual projects, this group strives to provide participants with intuitions, exemplars and tools for carefully crafting vital, effective and beautiful nos and nots in the realm of digital technologies.
Tuesdays, 2:00 – 3:00 PM
Dr. Virgile Rennard (Political Science)
AI for Democracy: This working group explores how generative AI can be used to reshape democratic culture, both by offering new tools to support participation and government responsiveness and by raising awareness of social challenges. Over the course of the year, students will study the ways in which AI delivers on its promises for democratic purposes.

The first two weeks of the group will touch on key concepts from democratic theory and the foundations of large language models (LLMs). The reading group will then examine a range of political use cases, from AI-generated political messaging and the political applications of generative AI on social media platforms to the detection of bias in AI systems. Students will be trained to read and interpret academic research and will take turns leading weekly discussions to develop their critical and analytical skills.

In the second semester, students will work in small teams to design and carry out a project exploring how generative AI might be used to strengthen democracy in socially and ethically responsible ways. Projects will be defined collaboratively and may include partnerships with organizations such as MIT GOV/LAB to experiment with applications like online deliberation or public feedback tools.
Tuesdays, 3:00 – 4:00 PM
Dr. Michal Masny (Philosophy)
Philosophy and the Future of Work: This group will examine the future of work through the lens of moral and political philosophy. In particular, we will learn about philosophical perspectives on work, unemployment, free time, inequality, discrimination, and autonomy. We will then connect them to issues such as technological unemployment, labour market polarisation, working time destandardisation, the use of and collaboration with AI in the workplace, algorithmic governance, the promise of Universal Basic Income, and the possibility of a world without work.
Tuesdays, 4:30 – 5:30 PM
Dr. Ziv Epstein (Sloan)
The Medium is the Mess: Today, algorithmic systems such as social media feeds and generative AI exert influence on us and our societies by directing our attention toward a small subset of the possible content and enabling the production of new and directed forms of media. These systems are supposed to direct our attention in ways that align with our own interests, but because they lack sufficient information about what we want in the long run, they rely on crude proxies (e.g., engagement, chatbot feedback data). Over-indexing on these myopic signals results in anti-social behavior, such as amplification of problematic content, algorithmic overreliance, and monoculture. One understudied explanation for these dynamics is that people are cognitive misers, using systems in a “lazy” way without active, critical engagement. How can we redesign systems to empower users towards more creative, caring, and agentic interactions with them? This group will explore this question along two axes. The first is the domain of social media, and how we can reinvent the engagement-based “attention machine” of existing newsfeed algorithms to align systems with users’ values. The second is the domain of generative AI for creative applications, and how we can foster active and divergent interactions with generative models to foster “serendipity.”
Wednesdays, 2:00 – 3:00 PM
Dr. Robert Johansson (Brain and Cognitive Sciences)
The Computational Roots of Human Suffering: Why do human minds suffer, and how can understanding computational cognitive science help us find relief? In this SERC Scholar group, we explore how structured generative models developed within computational cognitive science illuminate the sources of persistent emotional distress. Each weekly meeting will blend theoretical discussions with direct experiential practice, guided by contemplative techniques. By integrating theory with meditation practices—including mindfulness, embodied grounding, and cultivating joy—students will experientially examine how rigid expectations, craving, aversion, and self-related processes generate emotional suffering.

Participants will collaboratively investigate the ethical, psychological, and social implications of these cognitive insights, developing projects that span computational modeling, philosophical inquiry, therapeutic interventions, and practical applications aimed at reducing suffering in individuals and communities.
Wednesdays, 3:00 – 4:00 PM
Dr. Patrick McKee (Philosophy)
Chatbot Friends: Some people have “interpersonal” relationships with chatbots. They see chatbots as friends, lovers, servants, or even stand-ins for real people, such as dead loved ones. In this working group, we will examine the purpose, presuppositions, and ethics of these relationships. Some questions we will address include: Can we genuinely be friends with chatbots? What sort of things are our chatbot friends (or “friends”): concrete entities, imaginary or fictional beings, or abstractions? If they are concrete entities, can they think about us? Are there any moral restrictions on how we ought to treat them? Should we want to have digital stand-ins after we die? We will have flexibility to pursue the questions that most interest the group.

In the fall semester, we will meet weekly to discuss readings on these topics. We will read mostly philosophy, but also some psychology and case studies. In the spring semester, we will work on projects individually or in groups. A project might be, for example, a research paper, a podcast, a set of open-access instructional materials, or a proposal for the regulation of digital companions.
Wednesdays, 4:00 – 5:00 PM
Dr. Elliott Thornley (Philosophy)
Training Risk-Averse AIs: Future artificial agents may turn out misaligned. If they do, these agents might do various bad things, up to and including trying to take over the world. How can we prevent this? One possibility: make sure such agents are risk-averse and hence too timid to attempt world takeover. This idea has been explored in theoretical work, but the empirical aspect is lacking. We’ll fill this gap. We’ll finetune LLMs to be risk-averse and we’ll test how far this disposition generalizes. We’ll write up our results in a paper and submit it to machine learning conferences like NeurIPS.
Wednesdays, 5:00 – 6:00 PM
Dr. Jakob Stenseke (CSAIL)
Aligning AI with Human What?: Most AI alignment research addresses technical challenges of how to make AI systems conform to human values, goals, and preferences. Less attention is paid to a prior, more foundational question: what are these values, goals, and preferences? Human values are diverse, often conflicting, and shaped by a complex mix of biological, psychological, cultural, and ideological factors. Some are tangible (e.g., wealth, health, avoiding harm), and others abstract (e.g., justice, liberty, democracy). Individuals and collectives frequently disagree – and are sometimes unsure – about what they prefer, what they value, and what their goals are. We will investigate the conceptual foundations of value alignment: what human values are, how they arise, and how they might (or might not) be manifested in AI systems. We will begin by examining insights from the natural, human, and social sciences to understand the structure and origins of human values; then assess how state-of-the-art alignment techniques attempt to model and manifest these values; and finally identify conceptual and practical gaps between the two in order to highlight underexplored directions for advancing AI alignment research.
Thursdays, 2:00 – 3:00 PM