Bringing meaning into technology deployment

The MIT Ethics of Computing Research Symposium showcases projects at the intersection of technology, ethics, and social responsibility.

Danna Lorch | MIT Schwarzman College of Computing
May 21, 2025

Watch Videos: MIT Ethics of Computing Research Symposium

In 15 TED Talk-style presentations, MIT faculty discussed their pioneering research incorporating social, ethical, and technical considerations and expertise, each project supported by a seed grant established by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. Last summer’s call for proposals drew nearly 70 applications, and a committee with representatives from every MIT school and the college convened to select the winning projects, which received up to $100,000 in funding.

“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it important to not just showcase the breadth and depth of the research that’s shaping the future of ethical computing, but to invite the community to be part of the conversation as well.”

“What you’re seeing here is kind of a collective community judgment about the most exciting work when it comes to research in the social and ethical responsibilities of computing being done at MIT,” shared Caspar Hare, co-associate dean of SERC and professor of philosophy.

The full-day symposium on May 1 was organized around four key themes: responsible healthcare technology, AI governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. Speakers delivered thought-provoking presentations on a broad range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session, where student researchers showcased projects they worked on throughout the year as SERC Scholars.

Highlights from the MIT Ethics of Computing Research Symposium in each of the theme areas, many of which are available to watch on YouTube, included:

Making the kidney transplant system fairer

Policies regulating the organ transplant system in the U.S. are made by a national committee, and a single policy often takes more than six months to create and then years to implement, a timeline that many on the waiting list simply can’t survive.

Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm examines criteria such as geographic location, mortality, and age in just 14 seconds, down from the usual six hours.

Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. During his presentation, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered this poignant summary of the impact the new algorithm has:

“This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple months to look at a handful of different policy scenarios, and now it takes a matter of minutes to look at thousands and thousands of scenarios. We are able to make these changes much more rapidly, which ultimately means that we can improve the system for transplant candidates much more rapidly.”
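
Bertsimas’ actual model isn’t detailed in this article, but a minimal sketch of the underlying idea, framing a single allocation round as an assignment problem over donor-candidate scores, might look like the following Python. The criteria, weights, and data below are illustrative assumptions, not the real allocation policy:

    # Minimal sketch: one allocation round as an assignment problem.
    # All criteria, weights, and data here are illustrative assumptions.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    n_donors, n_candidates = 5, 8

    # Hypothetical candidate attributes and donor-candidate distances.
    wait_years = rng.uniform(0, 7, n_candidates)    # time on the waiting list
    urgency = rng.uniform(0, 1, n_candidates)       # clinical urgency score
    distance_km = rng.uniform(10, 2000, (n_donors, n_candidates))

    # Score every donor-candidate pair; the weights act as policy knobs.
    score = (1.0 * wait_years + 2.0 * urgency)[None, :] - 0.001 * distance_km

    # Find the score-maximizing one-to-one matching of donors to candidates.
    donor_idx, cand_idx = linear_sum_assignment(score, maximize=True)
    for d, c in zip(donor_idx, cand_idx):
        print(f"donor {d} -> candidate {c} (score {score[d, c]:.2f})")

Roughly speaking, the speedup Alcorn describes is what makes it practical to re-run evaluations like this across thousands of simulated policy scenarios, each with different weights and criteria.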

The ethics of AI-generated social media content

As AI-generated content becomes more prevalent across social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, PhD student in the Department of Political Science, explored this question in a session that examined recent studies on the impact of various labels on AI-generated content.

In a series of surveys and experiments that affixed labels to AI-generated posts, the researchers looked at how specific words and descriptions affected users’ perception of deception, their intent to engage with the post, and their belief in whether the post was true or false.

“The big takeaway from our initial set of findings is that one size doesn’t fit all,” shared Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts. This is quite problematic as labeling intends to reduce people’s belief in false information, not necessarily true information. This suggests that labels combining both process and veracity might be better at countering AI-generated misinformation.”
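
The study’s data and models aren’t reproduced here, but a minimal sketch of the kind of comparison Péloquin-Skulski describes, using entirely synthetic data and assumed effect sizes, might look like this in Python:

    # Minimal sketch with synthetic data (not the study's dataset):
    # compare mean belief across label conditions for true vs. false posts.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 3000

    # Hypothetical conditions mirroring the design described above.
    label = rng.choice(["none", "process", "process+veracity"], size=n)
    is_false = rng.choice([True, False], size=n)

    # Assumed effects: a process-only label lowers belief in ALL posts,
    # while a process+veracity label mainly lowers belief in false ones.
    base = np.where(is_false, 0.55, 0.75)   # baseline belief, 0-1 scale
    effect = np.select(
        [label == "process", (label == "process+veracity") & is_false],
        [-0.15, -0.25],
        default=0.0,
    )
    belief = np.clip(base + effect + rng.normal(0, 0.1, n), 0, 1)

    df = pd.DataFrame({"label": label, "post_is_false": is_false, "belief": belief})
    # Under these assumptions, the process-only label drags down belief in
    # true posts too, illustrating the "one size doesn't fit all" finding.
    print(df.groupby(["label", "post_is_false"])["belief"].mean().round(2))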

Using AI to increase civil discourse online

“Our research aims to address how people increasingly want to have a say in the organizations and communities they belong to,” Lily Tsai explained in a session on experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, and a larger team.

Online deliberative platforms have recently been rising in popularity across the U.S. in both public- and private-sector settings. Tsai explained that technology now makes it possible for everyone to have a say, but doing so can be overwhelming or even feel unsafe: first, too much information is available; second, online discourse has become increasingly “uncivil.”

The group focuses on “how we can build on existing technologies and improve them with rigorous, interdisciplinary research, and how we can innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They have developed their own AI-integrated platform for deliberative democracy, DELiberation.io, and rolled out four initial modules. All studies have been conducted in the lab so far, but the team is also working on a set of forthcoming field studies, the first of which will be in partnership with the Government of the District of Columbia.

Tsai said to the audience, “If you take nothing else from this presentation, I hope that you’ll take away this — that we should all be demanding that technologies that are being developed are assessed to see if they have positive downstream outcomes, rather than just focusing on maximizing the number of users.”

A public think tank that considers all aspects of AI

When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, postdoctoral researcher at the Data + Feminism Lab at MIT, initially submitted their funding proposal, they weren’t intending to develop a think tank but rather a framework, one that articulated how work in artificial intelligence and machine learning could integrate community methods and participatory design.

In the end, they created Liberatory AI, which they describe as a “rolling public think tank about all aspects of AI.” D’Ignazio and Stevens gathered 25 researchers from a diverse array of institutions and disciplines who authored more than 20 position papers examining the most current academic literature on AI systems and engagement. They grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward.

“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in this system in hopes of a larger societal transformation,” shared D’Ignazio.