The MIT Case Studies in Social and Ethical Responsibilities of Computing (SERC) series aims to advance new efforts within and beyond the Schwarzman College of Computing. The specially commissioned and peer-reviewed cases are brief and intended to be effective for undergraduate instruction across a range of classes and fields of study; they may also be of interest to computing professionals, policy specialists, and general readers.

The series editors interpret “social and ethical responsibilities of computing” broadly. Some cases focus closely on particular technologies; others examine trends across technological platforms. Still others examine social, historical, philosophical, legal, and cultural facets that are essential for thinking critically about present-day computing efforts. Special efforts are made to solicit cases on topics beyond the United States and cases that highlight the perspectives of people affected by various technologies, in addition to those of designers and engineers.

New sets of case studies, produced with support from the MIT Press’s Open Publishing Services program, will be published twice a year and made available via the Knowledge Futures Group’s PubPub platform. The SERC case studies are available for free on an open-access basis, under Creative Commons licensing terms. Authors retain copyright, enabling them to reuse and republish their work in more specialized scholarly publications.

If you have suggestions for a new case study or comments on a published case, the series editors would like to hear from you! Please reach out to

Summer 2021

Hacking Technology, Hacking Communities: Codes of Conduct and Community Standards in Open Source

This case study uses the 2015 introduction of a code of conduct, and the discussion surrounding it, to surface some of the tacit patterns in FLOSS (free/libre and open source software) communities. (Christina Dunbar-Hester)

Understanding Potential Sources of Harm throughout the Machine Learning Life Cycle

In this case study, the authors provide a framework that identifies seven distinct potential sources of downstream harm in machine learning, spanning data collection, development, and deployment. (Harini Suresh, John Guttag)

Identity, Advertising, and Algorithmic Targeting: Or How (Not) to Target Your “Ideal User”

Exploring commercial algorithmic profiling, targeting, and advertising systems, this case study considers the extent to which such systems can be ethical. (Tanya Kant)

Wrestling with Killer Robots: The Benefits and Challenges of Artificial Intelligence for National Security

This case study provides background on the use of AI for national security, introduces key debates surrounding the use of these technologies, and presents a scenario-based exercise. (Erik Lin-Greenberg)

Public Debate on Facial Recognition Technologies in China

What dynamics inform popular debates about the use and applications of AI and facial recognition technologies in China, and how do they fit into a more global picture? (Tristan G. Brown, Alexander Statman, Celine Sui)

Winter 2021

The Case of the Nosy Neighbors

This case study asks students to assume the role of a high-ranking ethics-focused employee at a (fictional) neighborhood-focused company. (Johanna Gunawan, Woodrow Hartzog)

Who Collects the Data? A Tale of Three Maps

This case study introduces the idea that data may be useful, but they are not neutral. (Catherine D’Ignazio, Lauren Klein)

The Bias in the Machine: Facial Recognition Technology and Racial Disparities

Facial recognition technology is used for purposes ranging from providing secure access to smartphones to identifying criminal suspects from surveillance images as a tool of the justice system. (Sidney Perkowitz)

The Dangers of Risk Prediction in the Criminal Justice System

Courts across the United States are using computer software to predict whether a person will commit a crime, the results of which are incorporated into bail and sentencing decisions. (Julia Dressel, Hany Farid)