Podcast: Is Business Broken? The High Stakes of the AI Economy

Asu Ozdaglar (middle) discusses the need for balance between AI innovation and regulatory framework.
WBUR

Asu Ozdaglar joins a panel of experts at the WBUR Festival to discuss AI regulation and its public impact.

MIT Schwarzman College of Computing
July 16, 2025
Categories: Press & Media

Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of the MIT Department of Electrical Engineering and Computer Science, speaks with Is Business Broken? podcast host Curt Nickisch to explore AI’s opportunities and risks — and whether it can be regulated without stifling progress.
 
“AI is a very promising and transformative technology,” says Ozdaglar. “But regulation should be designed very carefully so that it does not block or impede the development of the technology.” Given AI’s potential harms or misuses, she added that it’s important to think about the correct regulatory framework. “For it to be successful, it should focus on where harms can come from.”
 
Alongside Ozdaglar, the episode features Nickisch in conversation with:

  • Divya Sridhar of BBB National Programs
  • Boston University professor Andrei Hagiu
  • Massachusetts State Senator Barry Finegold

Recorded live at the WBUR Festival in May 2025, the conversation features each panelist’s unique perspective on regulation, competition, and the broader public impact of AI.

Is Business Broken? is a production of the Ravi K. Mehrotra Institute for Business, Markets & Society at Boston University Questrom School of Business.

Is Business Broken?: The High Stakes of the AI Economy
Live from the WBUR Festival, July 3, 2025


CURT NICKISH: You’re listening to Is Business Broken? a podcast from the Mehrotra Institute for Business, Markets & Society at Boston University, Questrom School of Business. I’m Curt Nickisch.

Artificial intelligence promises massive gains in productivity and a flourishing of goods and services. AI also has consequences for workers and jobs, for corporate power, society, and the environment. Who’s winning in this new economy?

Who’s being left behind? Can we strike a balance between innovation and safety, between competition and policy? Basically, can we regulate AI without choking progress? Or are we already too late?

These are questions we posed at a live event at the WBUR Festival. On stage to speak with me were Barry Finegold, Massachusetts State Senator for Essex and Middlesex Counties; Divya Sridhar, Vice President of the Global Privacy Division and Privacy Operations at Better Business Bureau National Programs; Asu Ozdaglar, Head of the Department of Electrical Engineering and Computer Science at MIT and Deputy Dean of the MIT Schwarzman College of Computing; and Andrei Hagiu, Professor of Information Systems at Boston University’s Questrom School of Business.

Here’s our conversation with some questions from the audience.

Let’s just start with a basic question. I’m just curious from each of you briefly, why should we even think about regulating AI? Should we?

Should we regulate AI? And let’s start with the policy maker over there, Senator.

BARRY FINEGOLD: Thank you. And I appreciate everyone coming here today. When I think about AI, I talk to my son and I tell him his superpower is his imagination, and I think we have to imagine things differently under AI. But at the same time, we have to think about real-life practical applications. And I think because of that, we do need to regulate AI. And what’s a specific example of that?

Well, 10, 20 years ago, there’s a big concert coming to town. You see everybody in a long line trying to get tickets when they go on sale. But now what happens if Taylor Swift is coming to town? We had all these bots that would be buying up all the tickets. And a lot of us who want to go to that show can’t go. So, what we did last year in the economic development bill is we banned bots. That’s artificial intelligence. That is one small example of why we need to regulate artificial intelligence.

But if the federal government gets what they want, we’re not going to be able to do anything on the state level. Because at the end of the day, we don’t know what we don’t know. And that’s why I think state level people like myself should have the ability to regulate AI.

CURT NICKISH: Although the House of Representatives this week, right, passed a bill saying states should not be able to regulate AI for 10 years.

BARRY FINEGOLD: Which we will get into why that’s not such a good idea.

CURT NICKISH: Yes, exactly. Sridhar.

DIVYA SRIDHAR: Yeah, I think there are two reasons. The first is trust. There was a great KPMG study that came out. It covered 48,000 people in 47 countries, and it found that 66% of respondents use AI regularly, but only 46% trust AI systems. So, there’s this massive lack of trust. There’s a gap there.

So first off, we want to make consumers and users feel comfortable. The second important point is accountability. I actually want to push us to think further about whether or not AI truly is living in the wild, wild west.

Is it truly not regulated? We set the baseline, we set the practices, and then we go after companies that aren’t fulfilling the obligations they’re supposed to meet under those baselines. So, there is this medium, this in-between zone, between having hard laws that might pass and then not be useful tomorrow, and not doing anything at all.

ASU OZDAGLAR: AI is a very promising and transformative technology with wide-ranging implications, and the regulation should be designed very carefully so that it does not block or impede the development of the technology. But at the same time, I think, given its potential harms or misuses, it’s important to think about the correct regulatory framework. For it to be successful, it should focus on where harms can come from.

And in that context, I’d like to maybe add a couple of examples to what was discussed before. The first one is around, of course, bias and misinformation. And that by itself may not be enough of a justification for a formal regulation, because no government should be in the business of defining what is misinformation and what is truth.

But at the same time, I think the AI ecosystem should be regulated in such a way that there are incentives for companies to focus on increasing the reliability of information and not amplify or boost unreliable content. One important aspect in this context is digital ads, which have been used on many online platforms to monetize services. And the profitability of digital ads really depends on attention and user engagement, and attention is triggered by sensational and sometimes extreme content.

So, this immediately sort of, I think, biases many of the platforms. So there needs to be something in there. The second is the labor disruption that we’ll probably talk about more.
And of course, new technologies always come with disruptions. But AI has a great promise in terms of really creating new capabilities or jobs for workers. So, focusing on this aspect would be important.

And finally, I just want to point out health care. Ironically, I think it’s the place where AI has the greatest promise to improve things, and also where there needs to be more regulation. If you think about health care with doctors and hospitals, they’re heavily regulated.

You can’t really give advice or diagnosis without proper certification. So, there’s a big problem there, because when you look at chatbots, they give any kind of advice. That’s my mom’s favorite use of a chatbot, and that’s not okay.

So, I think this was also the main point in some of the policy work we did at MIT. A key idea would be that AI should be regulated where humans performing the same tasks without AI are already regulated. So that could be a good starting point.

CURT NICKISH: Got it. I love it when we have experts who answer briefly, and that was briefly for the kind of content that we have to talk about today. Andrei, where do you stand?

ANDREI HAGIU: So, I think I’m pretty close to Asu. My first instinct is to be very cautious with regulation in order to preserve innovation and progress in AI. The way I would approach this is to ask, what are the market failures that justify regulation?

So, the overarching principle being, if there’s no clear identifiable market failure, I don’t see why we should intervene. Now, there are some market failures that we can identify. They’re due to AI. And I think the question we want to get to later is, what are the specific market failures that AI creates that we haven’t seen before?

I’ll just pick an example to make this clear. So, there are some concerns that have existed with other technologies. So, for instance, Divya mentioned trust, right? So, I think if you think about trust generally, so you can say, well, there’s all kinds of service providers. We have regulations that ensure that companies don’t deceive customers in terms of what they’re providing. We already have legal frameworks for this for other services. I don’t necessarily think there’s something specific to AI in that.

Now, if you think about Asu’s example, which is, well, what about the issue that with AI, people tend to trust more because it comes from a chatbot relative to, say, from a Google search. There you might say, okay, maybe we have a market failure. Maybe we need to, like, revise a little bit the regulatory framework.

Generative AI for music can be trained on, like, millions of songs. Sometimes it’s unclear how much the copyright holders should be compensated. I’m not necessarily saying it should be regulated, but there should be some regulatory framework for this.

CURT NICKISH: Gotcha. All right. We’ve got the central question of how do we regulate AI to protect and serve the public without freezing progress? And you, like, raised a bunch of concerns here. Surveillance, market power, job disruption, misinformation. What keeps you up at night?
Divya, you talk to companies every day. What are they nervous about? Regulators coming in and stopping? And what do they actually think there should be something done because they see a common problem?

DIVYA SRIDHAR: Yeah, and I think one of the challenges has been that AI has bubbled up over the last three or four years. AI has existed all along. It’s just in the last few years where data privacy regulations have also ramped up that AI has become more of a cornerstone and folks are focusing on it much more heavily.

So, we work with companies directly. I mentioned I would speak to some of the co-regulatory programs we offer at both the global level, the federal level, the state level. Basically, we provide the ability to police and enforce against existing federal frameworks like the FTC Section 5 Authority and Safe Harbor Programs for COPPA, which is the Children’s Online Privacy Protection Act. So, I’ll give you an example from that space.

We recently settled a case with Buddy AI. This is a Gen AI app that provides language translation services through an EdTech medium for children under the age of 13. It was found not to be giving users and parents choices and verifiable consent. And all of these were flags that we found in our own monitoring efforts.

So before it got to the FTC, we were able to find this case, open up the inquiry, work with the company to bring them into compliance with the federal guidelines as well as our own guidelines that we’ve written in conjunction with the government, and close out the case quickly, neatly, and not have to worry about the government’s capacity, their resources and time going after this company.

CURT NICKISH: So, it sounds like a lot of this is sort of existing regulatory frameworks that the FTC could apply. Is there a need for new laws, or... the senator is raising his hand.

BARRY FINEGOLD: I just think it’s important.

CURT NICKISH: I guess I should say not the senator, but the lawmaker.

BARRY FINEGOLD: Yeah, so, I just want to add some context. I chair economic development in the state. All I think about all day long is how to keep Massachusetts competitive. The last thing I want to do is have us not be competitive in AI. Per capita, we have the most venture capital funding anywhere in the world. We have a golden goose here, so I don’t want to blow it up.

However, about a mile from here, 20 years ago, there was a young kid with this great idea called Facebook. And no one understood how powerful it could be. But now, Meta will admit that one out of three girls will say they have body image issues because of what Meta’s algorithms put in front of them. We should have stepped in. We should have put guardrails in. So, I do believe we can have it both ways.

I do believe that we can set up guardrails, what you should and shouldn’t be doing, and we can have a thriving economy that embraces AI. I don’t think it’s a zero-sum game where it’s either one or the other. And what we’ve been doing here, and let me take one step further, is I think the point was made of, well, how can you have 50 different AI policies?

I completely agree with that. The federal government should be doing it. So, what we’re doing now is, I’m working with state senators in California and New York, and we’re hopeful to have a universal AI bill that everybody can work with.

CURT NICKISH: Andrei, let me ask about market power, because you talked about it a little bit. It sounded like you didn’t feel like there was a big threat there, at least at this point. It seems like, to the uninitiated, that you’ve got a couple of big players there, and it could sort of play out where you only have a Meta. Is there a risk of there just being too much market power for this industry and we end up with a TikTokification of AI that none of us would like to imagine, maybe?

ANDREI HAGIU: This is a topic that I feel fairly strongly about. My co-author and I made this observation. It’s interesting that if you ask industry people this question, most industry people say it’s very competitive. You ask regulators or academics, they tend to think it’s concentrated. I think on this one, just having looked at it, I’m much closer to the industry people’s view. So I don’t think it’s very concentrated.

So you mentioned OpenAI and Gemini, but actually if we go down the list, just think about all the major GenAI players, there are quite a few. And some of them are not large companies. There’s Anthropic, Mistral, there are a few open-source ones in China, all the large tech companies, there’s xAI. So I don’t think it fits any definition of a very concentrated industry.

ASU OZDAGLAR: I somewhat agree and maybe somewhat disagree with Andrei. So right now, there are reasons why the current AI ecosystem may lead to concentration of market power and, related to that, concentration of social power, because developing foundation models is very data- and compute-heavy and there are various first-mover advantages. When you put a lot of time into developing a foundation model, you have more data, you have access to more data, and we’ve seen this.

There are a lot of models out there. They vary in their capabilities, and they are used for various different reasons. We know which ones are the best and right now there’s a concern that they are in the hands of a few players.

It still requires tens of millions of dollars to be able to develop the foundation model. It still builds on a lot of knowledge that has already been created so it’s not like everybody will be able to do that but now I think our views are changing that different players may have different models with different characteristics. I think that’s more likely. We don’t know. It will also depend very highly on open source.

CURT NICKISH: Yeah, it’s still taking shape. Divya, let’s go to you. I just want to ask about algorithmic transparency, maybe with Buddy AI. Can you get into that concern? Because the company owning the data, that’s one thing. But now what’s baked into that data? Where did it come from? Who’s liable?

DIVYA SRIDHAR: Yeah, sure. I’ll share a little bit about three cases that the FTC has come out with. The first one was with Rite Aid. This was a few years ago, focused on their use of facial recognition technology that disproportionately impacted certain communities, certain demographics, and was basically misidentifying people coming into the store as shoplifters.

So, a really great example of where AI has gone wrong with this type of biometrics. In other parts of the world, we see the EU finding ways to regulate the biometric space. I think they have a provision in the EU AI Act that focuses on biometrics and why there shouldn’t be negative impacts on consumers, especially with regard to this disproportionate harm that could happen.

The other two cases I’ll mention very briefly. FTC versus Do Not Pay. This is the world’s first robot lawyer. And actually, the legal space is super interesting. I want to bring up a couple of places where it’s been found to be a major sector for issues with AI: sources being used in court cases incorrectly, miscitations, and even made-up citations. I think that’s a fascinating use case where AI has gone wrong. So, the Do Not Pay case was busted for deception issues.

And then FTC versus Workado, which is about AI detection claims. They were claiming they were 98% accurate in identifying AI-generated text when truly they were about 53% accurate. So, accuracy claims are a big one that we go after; in particular, we find it to be a challenge when companies claim to be something that they’re actually not. And we think that consumers should get the real numbers. They should get the real data and not be deceived into using a service that doesn’t work.

I think the legal space is one that’s super interesting to me. There was a really good piece in Business Insider that found 120 public cases over the last year with false citations. I think some attorneys general have actually used AI incorrectly in different forms. There’s just a number of different data points to suggest that the legal profession in particular faces challenges with the accuracy of AI-generated citations and sources. We’re all grappling with where to use AI and where to add regulation to AI where it doesn’t already exist. So those are a couple of points.

CURT NICKISH: Great. Asu, did you want to say something?

ASU OZDAGLAR: This is a topic, I think, very critical to the development of AI: algorithmic transparency. Right now, AI models are black boxes. They do not have interpretability and legibility. I mean, even these terms are still being debated. I think this makes it very challenging, because one place I mentioned briefly before where AI has great promise is to complement and augment humans. For this human-AI collaboration, we need to understand why AI is making certain recommendations, so we can take that, combine it with our expertise, and improve upon what we could do on our own.

So that’s the great promise of AI. Right now, this lack of transparency or legibility of these models impedes that. And the other aspect is, I think, close to what Divya is talking about: for auditing purposes or any kind of proper regulation, this transparency is also very important. So, this is a place, I think, where industry and all AI research and development should put greater emphasis. It may require changes in the models and architectures, and really thinking about how to achieve this property.

CURT NICKISH: I want to follow up on that. I’m just curious, as a computer scientist, these problems you’re identifying, like how technically hard is that? Is there a sort of a computer science information technology way to address those issues? Or is it like a really big cost and economic and technical challenge to make things more transparent, to share that?

ASU OZDAGLAR: That’s a very good question. Right now, I think maybe the biggest issue related to the use of AI is hallucinations, which a lot of people are talking about. AI just comes up with beautiful explanations, perfectly articulated, that are wrong. And because they are excessively authoritative in the way they present this information, many of us just take it for granted. And the examples that Divya gave, I think, are very illustrative of this aspect.

In terms of how to fix this technically, I don’t think this is the main issue. For hallucinations, I think there are technical approaches that complement what is currently being done; you may be hearing terms like RAG, retrieval-augmented generation. There are tools that can be added to current models that actually improve on the hallucination problem. But that’s also almost like the tip of the iceberg. It’s really about this reliable information.

Giving everyone the incentives to actually focus on that problem is the key. And I think for that, there needs to be a change in vision, mindset, what is prioritized in development of these models and how they’re deployed.
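To make the retrieval-augmented generation idea mentioned above concrete, here is a minimal illustrative sketch in Python; the toy document store, the keyword retriever, and the prompt wording are all assumptions, and the actual model call is deliberately left out. The point is that the prompt forces answers to be grounded in retrieved sources rather than in the model’s own recall.

# Minimal RAG sketch: a toy keyword retriever plus a grounding prompt.
# The document store and prompt wording are illustrative assumptions;
# the model call itself is omitted on purpose.
from collections import Counter

DOCUMENTS = {
    "doc1": "The EU AI Act includes provisions on biometric systems.",
    "doc2": "COPPA requires verifiable parental consent for children under 13.",
    "doc3": "Retrieval-augmented generation grounds model answers in source text.",
}

def retrieve(query: str, k: int = 2) -> list:
    """Rank documents by word overlap with the query (stand-in for a real retriever)."""
    q_words = Counter(query.lower().split())
    ranked = sorted(
        DOCUMENTS.values(),
        key=lambda text: sum(q_words[w] for w in text.lower().split()),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that tells the model to answer only from retrieved context."""
    context = "\n".join(f"- {passage}" for passage in retrieve(query))
    return (
        "Answer using ONLY the sources below; say 'I don't know' otherwise.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    # This prompt would then be sent to whatever model is in use.
    print(build_prompt("What does COPPA require for children?"))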

CURT NICKISH: Okay. Andrei, you were making eye contact like you wanted to add something.

ANDREI HAGIU: So very briefly, as regards algorithmic transparency, I noticed that we shifted from should we regulate to what should the providers do. So, for example, I am not convinced that just because, okay, AI hallucinates, we should regulate. The robo lawyers, there are other more ridiculous examples of hallucinations. I mean, there’s going to be pressure on those companies from their customers or from the markets to correct those. If we go back to the question, should we regulate for that? I’m not convinced at all.

I’m not convinced that, for example, the robo lawyers should be regulated just because they make mistakes. I mean, the mistakes are so ridiculous that if they don’t correct them, people will not come back to them. So, I could be convinced, I think this could be an interesting conversation. You can say there are certain domains, maybe healthcare, maybe say financial, like mortgages or loans, where regulators may say, we require the providers to have algorithmic transparency. But I don’t think we should say, okay, just because they hallucinate, we need to require algorithmic transparency from all AI providers.

Requiring algorithmic transparency is actually very costly and it’s very annoying and may stifle innovation like we talked about. So, I think we would only do that if we really believe there’s a serious problem where they’re not going to do it on their own. And I think in most domains, it’s in their best interest to work very hard to do it.

CURT NICKISH: You’re raising an interesting point because in a case like that with robo lawyers, like it’s clear who’s liable, right? Where the mistake was made. In a lot of other cases, it’s sort of like unclear where the responsibility is.

I want to bring in the senator because you represent an area north of Boston that has some corporate headquarters, also has a lot of people who work at companies in greater Boston. Are you worried about the pace of change for these emerging patterns in jobs and economic changes?

BARRY FINEGOLD: Well, if you’re asking if I’m worried, yes, I’m worried. When the CEO of Anthropic says that 50 percent of white-collar jobs are going to be eliminated and we can have 10 to 20 percent unemployment, I take it seriously. I think that guy is super smart. But I also say to myself, okay, a hundred-plus years ago, half the people in this room would have been farmers, and we’ve been able to kind of adjust.

But what I’m actually concerned about now is that in the last three months, I’ve never gotten so many calls about, hey, my son or daughter is graduating from college, or my son or daughter needs an internship. I’ve never had so many calls. Now, maybe we’re in the beginning of a recession, but maybe this AI thing is starting to rear its ugly head a little bit. Because what do entry-level people do? They do the entry-level stuff, which ChatGPT or Gemini can do so quickly.

So, is this the beginning? We have to think about our society differently. We went through this with the industrial revolution, with the internet revolution. Like 25 years ago, if you were a taxi driver or travel agent and I said to you, your job’s in trouble, you’re like, what? Basically eliminated. And there are going to be a lot of jobs out there that are going to get eliminated. And I think as policymakers, we have to think, how do we retrain people? How do we have them think differently? And how do we adjust this society?

It’s going to be hard. I’m not in the camp that in six months or a year, it’s going to be over. I think it’s gradual, but it’s a lot quicker than what we saw with the internet. The internet took 25 years to really change our workforce. This could be five years.

CURT NICKISH: We have some questions that have come in, and I want to bring them into the conversation. One is just, shouldn’t there be distinctions between regulations for AI that could benefit scientific discovery and the limits designed for the average consumer? Any reactions there? Andrei?

ANDREI HAGIU: My first reaction is, if it’s for consumers, I mean, of course, the usual liability frameworks apply. So, I think my instinct is, yes, there’s probably more scope for regulation if it’s consumer or customer facing in general. For scientific discovery, my instinct would be like, yeah, you probably want to leave more room for experimentation, let’s call it.

DIVYA SRIDHAR: I actually want to challenge that a little bit because I think the consumer deception, those kinds of claims are already being regulated by the FTC, by self-regulatory bodies. So, I think, rather than look at those, which are more broad use cases, broad sweep, we look at the high-risk sectors and high-risk use cases, the health care sector, right, the financial services sector, where we think there are great benefits and gains, but also could be majorly problematic if they go wrong. That’s how the EU is doing it. I’m not saying we follow in the EU’s footsteps. The EU is actually scaling back GDPR. They’re scaling back the EU AI Act just a year after it was passed.
So, I don’t say that we need to overregulate. I think Japan just passed a bill in the House this week. It talks about international cooperation. How can that support guardrails and frameworks for loose regulation of AI? I think we look to other countries to see what’s been done, what might be going wrong, and then think through it before we jump to conclusions about what that looks like.
 
CURT NICKISH: Let’s stay in the international space just for a minute. Are we heading towards a world of, kind of, fractured blocs of AI technology and power around the world? Because China is treating it differently than Europe, than the United States so far. What does that future look like? Are you excited by that regulatory future globally or not? Or do you even think about it?

ANDREI HAGIU: I don’t know who gets excited about regulatory futures. I’m excited about AI in general.

Oh yes, fair enough.

It depends on the line of work you’re in.

This is way too broad of a topic. But for instance, I think what’s interesting internationally is competition between different countries. I mean, presumably, different countries will want their own AI companies to succeed. I mean, one implication of this might be a race to deregulate.
I’ll give you an example of what I think is a terrible regulation that was discussed in the US. Senator, correct me if I’m wrong. But there was some discussion, not here, some lawmakers saying we should regulate the number of parameters that AI models are supposed to have. I mean, I think that’s absolutely silly.

And we can do it, and we shoot ourselves in the foot. And of course, for example, China is not going to do that, and they just have more powerful models. It’s not sustainable on a global level because other countries will say, well, we’re not going to limit it, and therefore, our companies will perform better.

DIVYA SRIDHAR: I also think, within the US, it’s not just about laws and regulations and self-regulation. There’s also NIST and other government bodies. They are supporting the national standards. NIST is the National Institute of Standards and Technology. And so, what it does is it provides general parameters for what those foundational AI models should be built on and provides engineers with the guardrails and with the specs to build on. So, I think there’s also a role for those certifications and those standard-setting bodies that can really help augment where maybe we don’t have a broad-brush-stroke AI bill or law.

ASU OZDAGLAR: May I come back, maybe, to the scientific discovery question? It’s important, I think, because one of the biggest hopes, or one of the biggest promises, of AI is supercharging the scientific inquiry process. We saw examples of this.

There was big excitement around AlphaFold, the promise of AI in solving really complex problems. And there is now a lot of research in materials discovery, being able to discover new materials with properties we’ve never seen before, or drug discovery, or cancer research. There have not yet been any cures or any really big advances there, but it may be a matter of time.

So, there’s a lot of progress there, or promise there. But I’d like to also point out that, just like in other cases, this is not just automating this entire process. You know, we have these new scientist AIs that will discover things on their own. I strongly believe it should still be coupled with human scientists, because it’s really important to bring domain knowledge into the development of these models. Just as an example, there have been many, many papers that actually talked about new materials being discovered with AI, hundreds of them. But then when chemists and chemical engineers went into the lab to actually synthesize those, they were not synthesizable. So, it’s not enough to just get a compound. It’s really important to figure out how you actually produce that. This is not about regulation, but it’s still the same mindset.

I think AI can automate certain entry-level things, but in many places, it will work alongside humans. And how do we actually come up with the right structures and the vision to actually enable that? I think this would be important for us to move forward.

CURT NICKISH: Yeah. Let’s bring it back to the senator. We’re talking about jobs here and humans interacting with it. The entry-level job disruption is fascinating, right? It used to be the traditional way to get started at a company. You hire these interns in entry-level jobs, and if you don’t need them, what happens to that whole pipeline? It’s a little concerning. So, one question we have from the audience, from someone named Scott: At what point will we realize that AI is impacting the job market? Is there like a specific unemployment number you’d want to see? Other indicators? How do we know that it’s a problem, because jobs are being created and lost all the time?

BARRY FINEGOLD: We study data all day long in my office. We’re looking at the numbers, but there are things I look at, like law school applications are way up. Kids can’t get jobs. So that’s an indicator right now. How many kids are going to graduate school? And, as I said, the calls we’re getting right now.

I spent a lot of time talking to CEOs and businesses. What are you concerned about? What are you trying to fill? When they’re saying entry-level jobs are just not on my priority, then who hires these people and ultimately who trains people? I think the challenge we had with COVID, we had young people working remote and they’re not getting the face-to-face. I think it’s just getting harder. I think we found a way to work without the entry-level. Young people are going to have to think out of the box. We’re going to have to help them think out of the box. I think it’s super challenging.

If I could just redirect back to what we were talking about with global AI and why we shouldn’t have regulation. I’m a little jaded. I saw Mission Impossible this weekend. So, if you happen to see that movie, you’re like, oh, boy. But I guess I ask the question, why is it so bad to have a kill switch in a model for a major company or something like that? Why is that such a bad thing? Why is it so bad to have whistleblower protection?

Because there are super smart people out there. These aren’t fringe people. These are some of the smartest people in the world that have said, yes, we think in the next 20 years there could be a catastrophic event due to AI. And I will continue to say this. We can have both. We can have innovation. And we can have protection. It’s not a zero-sum game.

Andrei is already ready to go at me. The mic is yours. The mic is yours.

CURT NICKISH: I mean, it’s interesting that there’s talk about super intelligence and artificial general intelligence, AGI. Are we right to be focusing on these short-term risks when some people say there’s a big existential risk? Is that actually the big regulatory thing we should be worrying about or talking about here?

ANDREI HAGIU: On the existential risk, I will just… I mean, I’m happy to discuss this over cocktails. I love science fiction. I honestly have not heard a single, like, reasonably articulated, really based-in-science way in which, you know, the existential risk happens. There’s a lot of debate there, so I’ll stay away from that. I’ll stay away from that one.

Thank you.

On the other hand, I do agree with the senator’s point. So, I love the terminology guardrails. Somehow, like, I think guardrails sounds a lot better to me than regulation.

So, when you say, for example, kill switch, I’m much more amenable to this than say, we’ll put a cap on the number of parameters just because we’re afraid of some nebulous risk that we read about in Asimov’s Foundation or something, like the AI or whatever, Skynet, taking over. So, I totally agree. There are reasonable ways to say, listen, we want to have certain frameworks, but you’re free to experiment. We can’t pretend that we know exactly what’s going to happen.

DIVYA SRIDHAR: And I think the kill switch is particularly important in the agentic AI space. And you guys have probably been hearing that word; the term has been floating around a lot in the last two or three months. Instead of the AI being a support tool, it’s now becoming the decision maker. So, it’s actually not just doing the tasks you tell it to do, but doing them automatically, because it’s being trained to do that.

So, this agentic AI space is one where I think it’s one to watch and it’s also one that might need that kill switch option. Because AI, to be frank, doesn’t have the emotional guardrails that we do. It doesn’t know how to regulate, right? So being able to provide it with that and having a human in the loop is important. And so, there’s been a lot of bills that have debated the human in the loop aspect of AI and whether it should exist on its own or whether there should be a human.
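As a rough sketch of the human-in-the-loop and kill-switch ideas discussed here, the following Python illustrates a gate that routes high-impact agent actions through explicit human approval and lets an operator halt everything; the action names, the approval flow, and the kill-switch flag are hypothetical assumptions, not a description of any real system.

# Illustrative human-in-the-loop gate for an agentic AI system.
# Action names and the approval flow are hypothetical.
KILL_SWITCH_ENGAGED = False
HIGH_IMPACT_ACTIONS = {"send_payment", "delete_records", "contact_customer"}

def execute_with_oversight(action: str, payload: dict) -> str:
    if KILL_SWITCH_ENGAGED:
        return "blocked: kill switch engaged"          # operator halt overrides everything
    if action in HIGH_IMPACT_ACTIONS:
        answer = input(f"Agent wants to '{action}' with {payload}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked: human reviewer declined"  # agent does not act autonomously
    return f"executed: {action}"

if __name__ == "__main__":
    print(execute_with_oversight("draft_reply", {"to": "customer"}))  # low impact, runs
    print(execute_with_oversight("send_payment", {"amount": 100}))    # requires approval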

ANDREI HAGIU: Can I ask a question? Just again, the provocative question is always, why do we think that we as a society need to impose the kill switch, and the companies are not going to figure out on their own that they should put in a kill switch? I always come back to this, like if I’m… No, but if you’re a company, like again, there are significant pressures to not be irresponsible.

I mean, do we think that…

Are you sure? I mean…

I’m talking to the “Is Business Broken?” crowd. I’m talking to the “Is Business Broken?” crowd.

ASU OZDAGLAR: Well, it might be, except that’s not what we are seeing now. There’s a lot of focus on AGI, and there’s sort of this race to the bottom mentality, really competitive in terms of bringing these new capabilities. I think they need to worry about reputational risks, but it’s not clear right now that they are moving with that mindset.

There’s a lot of, I think, rush to be able to get these models out. We’ve seen examples of this when several companies moved AI products into education without satisfactory results. So, you would think that responsibility, that social responsibility, should also be embedded in the way things are developing, but right now I think the system is one where there’s a lot of uncertainty, there’s a lot of wanting to move fast, and I’m not sure if that’s being incorporated.

CURT NICKISH: I’m going to end with, we’ll make this lightning roundish, but we’ll go through a couple of questions that we have. Things that we haven’t touched on but are really good points. Could someone touch a little bit on the environmental repercussions and potential harms of AI? We have not talked about that. Should companies be regulated around use of resources?

DIVYA SRIDHAR: Yeah, I mean, I was just reading about that as well. I think there are two things there. There’s the enormous carbon footprint, and we know how East Asia and other parts of the world feel about it versus us, the Western Hemisphere. And then the data centers’ impact. The fact that we’re investing so much, this administration, in data centers, which I don’t disagree with, but also, is there enough data that shows that those data centers are going to have implications down the road, right, on our carbon footprint? So, I think there’s this balance that needs to be…

CURT NICKISH: Yeah, well, it’s happening in the middle of this feeling of an arms race, right? That there’s this rush everywhere to push forward, okay? So, we acknowledge that.

ASU OZDAGLAR: Yeah, I think we do. There’s a lot of concern around that. There’s work around that to reduce the energy needs of the models. It’s right now really a lot, and it raises concerns.

BARRY FINEGOLD: Can I just say one thing real quick? We think a lot about this in Massachusetts. I think all of you pay high energy costs. Well, the reason why you do is because we want to change how we get our energy. We export $21 billion a year to other states and other countries for energy. We want to, in theory, grow it here: solar, wind, many others. If we can do that, then I think we’d be a lot more environmentally friendly.

So, we’re working on that as a state, and it’s really, really hard to transform our economy, from a fossil fuel economy to a green economy. We’re trying to do that in Massachusetts, and I think long term, that’ll be helpful for this issue.

ANDREI HAGIU: I mean, I just wanted to echo this. Maybe this will be the impetus, because of the huge energy requirements. Maybe this will be the impetus to transition to nuclear or something that’s more sustainable than what we have now, just because we can’t produce the energy we need for AI with what we have.

CURT NICKISH: Let me just close here and ask each of you to just say, if you could implement one AI policy tomorrow, one concrete, impactful, realistic step, what would it be? I’m going to start with a lawmaker who’s probably thought about this.

BARRY FINEGOLD: That’s a tough question, just one thing. But I think having some model reporting requirements, having some kill-switch provisions, allowing there to be whistleblower protections would be some of the things that I would basically like to have, which I don’t think would disrupt a lot. I think those are guardrails.

CURT NICKISH: So, the self-regulator here, what would you like to see?

DIVYA SRIDHAR: Companies need independent self-regulatory accountability organizations to support them with ensuring they’re meeting the best practices and going above and beyond. So that way they are putting in the defaults that they’re supposed to be putting into these new tools, to agentic AI and so on. So, there is a kill switch at some point, right?

CURT NICKISH: Asu?

ASU OZDAGLAR: Well, it is a very hard question. I think there’s many things to do. Maybe thinking about robust auditing frameworks for safety of these models, and that ties back to the algorithmic transparency. You know, having mechanisms for red teaming at least for certain places for safety guarantees. Thinking about ex-ante, ex-post, different kinds of audit schemes. So that would be, you know, at least in the first run.
 
CURT NICKISH: Yeah, got it. Andrei, tomorrow, do you have one?

ANDREI HAGIU: I don’t, and I’m going to avoid your question by going back to the question about education that the senator mentioned. So, I find that topic very interesting. It’s like, what do we do about entry-level jobs? And can we identify how much of that is due to AI?

At Questrom, we have a new AI initiative. I think this sounds like a great project for us to figure out: how do you get students to learn skills that can get them into new types of entry-level jobs? Presumably, the previous entry-level jobs don’t work. We need to get them to something different. I think, to me, it seems clear there’s a market failure there. I don’t know what the answer is, but this sounds like exactly what academia could contribute here.

CURT NICKISH: Yeah. I think if there is one takeaway today, it’s that we can make choices about all this. It doesn’t seem too late. We have time to make them wisely. And we really appreciate all of you for contributing to understanding this problem and identifying ways forward. Thank you so much for attending. And please join me in giving this outstanding panel a big hand.

That’s Senator Barry Finegold, Divya Sridhar, Asu Ozdaglar, and Andrei Hagiu. We’ll be back with more episodes in the fall, and we’d love it if you would please rate and follow Is Business Broken? wherever you get your podcasts. If you follow the show, you’ll be sure to get the new episodes when they come out this fall.

And in the meantime, you can check out any episodes you may have missed. Thanks for listening to Is Business Broken? I’m Curt Nickisch.