Welcome to Entanglements. In this episode, hosts Brooke Borel and Anna Rothschild ask: Should tech companies — and the billionaires who often run them — decide for the rest of us how artificial intelligence is deployed? In the absence of clearer regulatory guidance, ChatGPT and other products arrive with little input from experts, policymakers, or the broader public. And for many observers, whether this approach is the right one is a matter of stark debate.
As always, to dig in, our hosts invited two experts with differing opinions to share their points of view in an effort to find some common ground. The point isn’t to both-sides an issue or to try to force agreement. Instead, they aim to explore the nuance and subtleties that are often overlooked in heated online forums or in debate-style media.
Their guests this week are two book authors who have covered AI from essentially opposing perspectives: Greg Beato and Adam Becker.
Below is the full transcript of the podcast, lightly edited for clarity. New episodes drop on Wednesdays. You can also subscribe to Entanglements on Apple Podcasts and Spotify.
Ted Cruz: How harmful would it be to winning the race for AI if America goes down the road of the EU and creates a heavy-handed prior approval government regulatory process for AI?
Sam Altman: I think that would be disastrous.
[MUSIC]
Brooke Borel: You’re listening to Entanglements, the show where we dive into hot-button topics in science and see if we can find some common ground. I’m Brooke Borel, articles editor at Undark magazine.
Anna Rothschild: And I’m science journalist Anna Rothschild.
Brooke Borel: This time we’re going to talk about who should be making decisions on potentially society-changing tech.
Anna Rothschild: There are definitely a lot of strong opinions on that.
Brooke Borel: And that’s why we’re here. So, Anna, what you just heard was from a congressional hearing this past May, when Senator Ted Cruz asked Sam Altman, the CEO of OpenAI, about regulating AI tech.
Anna Rothschild: And OpenAI is the company that makes ChatGPT.
Brooke Borel: Right. At this hearing, the CEOs of several tech companies were testifying about the state of their AI products, including ChatGPT, and Sam Altman said this:
Sam Altman: If we have rules about what data we can train on that are not competitive with the rest of the world, then things can fall apart. If we are not able to build the infrastructure, and particularly if we’re not able to manufacture the chips in this country, the rules can fall apart. If we can’t build the products that people want, that naturally win in the market — and I think people do want to use American products, we can make them the best — but if we’re prevented from doing that, people will use a better product made by somebody else that is not stymied in the same way.
Anna Rothschild: I mean, I can’t say that I’m terribly surprised to hear that from a tech CEO.
Brooke Borel: Right.
Anna Rothschild: Though I do remember a few years ago, Sam Altman seemed a lot more open to regulation, at least when he was talking to the media.
Brooke Borel: Yeah, that’s absolutely true. But regardless of what the companies want, for now there isn’t really any significant regulation on this tech yet, at least in the U.S. And that means companies like OpenAI are making big decisions for the rest of us about how and when this tech is deployed. But not everyone is happy about that.
Anna Rothschild: Surprise, surprise.
Brooke Borel: Yes, which is what gets us to our question of the day: Should tech CEOs be the ones making that choice? And if not, then who?
Anna Rothschild: Good question.
Brooke Borel: Thank you. So, to dig into that, I have two book authors on today, who have written on this very topic — and who have wildly different perspectives.
[MUSIC]
Brooke Borel: So the question of the day is, should companies decide our AI tech future?
Greg Beato: Yes, they absolutely should.
Brooke Borel: And why is that?
Greg Beato: I think I like that question — we’ve had discussions about it — because it’s kind of a troll, right?
Brooke Borel: Yeah, a little bit.
Greg Beato: And at first I was like, “Oh, well I don’t like that question because it implies unilateral ability to decide the future.” But I don’t think companies have that unilateral ability. Once you put it in the realm of a commercial enterprise, you’re talking about getting consent from customers. And so you’re immediately shifting to a multi-stakeholder model. And that’s what’s key for me.
Brooke Borel: This is Greg Beato. He writes about tech and culture, and his work has appeared in places like The New York Times, Wired, and Reason, and a whole slew of other major outlets. And in 2024, he also co-authored a book with Reid Hoffman, the co-founder of LinkedIn. That book is called “Superagency: What Could Possibly Go Right with Our AI Future.” Here’s Greg on the impetus for that book.
Greg Beato: It was predicated, in part, on the fact that between the years 2015 and 2020, I was reading a lot of books about technology and they were all essentially: What could possibly go wrong? And, you know, they’d have all the ways that technology would fail and undermine different aspects of human life. And then they’d have a final chapter that would say, “But if we do this, you know, maybe this.” And so I just was like, “Well, what if we reversed this book and said: What could possibly go right?”
Brooke Borel: So, early on in this project, Greg and Reid were going to focus on a bunch of different technologies. But then large language models, which are the type of AI that powers products like ChatGPT, started coming online. And Greg and Reid think this tech is going to be transformative, so they made it the focus of the book.
Greg Beato: If it really is the biggest thing since steam power, it’s not just going to make life 10 percent more efficient in [a particular] realm, or increase our economy by 20 percent. It’s like steam power — it didn’t create the nation state, but it enabled the nation state.
Brooke Borel: Do you consider yourself a techno-optimist, or is there another term you prefer?
Greg Beato: Well, I consider myself a techno-pluralist. And that means that I’ve always felt that pluralism is my natural sort of state because I have very strong opinions about how I want to live my life, but I don’t really want to press them on other people.
Brooke Borel: By the way, pluralism is the idea that there is a diversity of perspectives and beliefs and ideologies that can coexist in a society.
Greg Beato: You know, it’s not that I’m a John Stuart Mill scholar, but I like his idea about experiments in living, in that a good society gives you different options to explore different ways of creating meaning for yourself and patterns of living. And technological innovation and entrepreneurship are really, in my mind, what can drive plurality in meaningful ways.
You need the equipment and infrastructure to pursue different ways of living, and you get that through innovation and entrepreneurship. And so when you have that going, you might say, “These are things I value.” And I say, “Well, but these are the things I value.” And we could sort of get an overlap. And I think now, because technology’s getting more powerful, we ought to have more opportunities rather than fewer to have different ways of living.
Brooke Borel: So Greg is sort of on the same page as Sam Altman: Basically, let tech companies do their thing and let users or consumers decide what’s useful to them. But he’s not totally against regulation, either.
Greg Beato: I think it needs to be very choiceful, very conservative in the sense that it should never be preemptive. It should be contingent on perceived risks, but even more so on real risks. And you often don’t really understand the real risks until there’s some level of deployment. And I also just want to point out that when we talk about regulation, we usually shorthand that to mean statutory regulation from elected government bodies. In reality, regulation happens in many different ways: through commercial imperatives, social norms, societal trust, and how willing the public is to comply with official regulations.
Anna Rothschild: So he’s saying even though tech CEOs are pretty free to do whatever they want right now, society still puts some guardrails on that based on what we decide to use or buy or engage with.
Brooke Borel: Exactly, that’s right. Greg actually argues that releasing ChatGPT, or these other products, in this manner is democratic.
Greg Beato: AI has been finding its way into commercial products since the 2010s and probably before that, whether it’s recommendation engines, news curation, all these different types of things. But the difference with ChatGPT was that it was this general purpose technology that you could choose how you use it. And you also had to make an affirmative choice to use it. It wasn’t just something that a platform folded into its interface — you had to go to the website, ask a question, and use it. And then you could do all kinds of different things with it: It wasn’t prescribed how you would use it.
And so as soon as that happened, then you started seeing people say, “This is a threat to democracy. We need to put a ban on new development of this stuff until we can make some laws or create some oversight bodies to regulate it.” And I just found it pretty ironic that finally, when the public has a chance to use these technologies in hands-on ways, that was considered undemocratic and a threat to democracy versus, “Let’s get this thing back behind closed doors where a small number of experts can decide how it should be deployed and used.”
Brooke Borel: Here, Greg used the analogy of the early days of the automobile.
Greg Beato: You need to build the engine and the wheels before you build the taillights, right? With cars, we didn’t start with stoplights and guardrails. We started with engines and wheels and figured out where the issues are and then regulated over time.
Brooke Borel: But people got hurt, right? I mean, without traffic laws and seat belts and all those things, people did get hurt, if you’re using that analogy.
Greg Beato: If your standard is that the only way we can have progress is that we reduce risk entirely, that’s just not a realistic standard, right? That’s a recipe for: You never change anything. Because innovation requires change and change requires risk. It’s the unknown.
Anna Rothschild: I mean, I do see his point. But there are some known risks of AI, right?
Brooke Borel: Yeah, it’s an interesting but imperfect analogy. And some people would definitely argue that there are clear risks with AI worth considering right now. We know, for example, that AI spreads misinformation and generates fake content.
Anna Rothschild: Which some people say is a threat to democracy.
Brooke Borel: Yes. And on that note, my next guest is very concerned that leaving decisions about this tech in the hands of CEOs is destroying democracy. And that’s part of why I thought it’d be interesting to pair these two together.
[MUSIC]
Brooke Borel: Do you think that companies should decide our tech future, specifically when it comes to AI?
Adam Becker: No.
Brooke Borel: All right, why not?
Adam Becker: Because companies are driven by profit motive and not, you know, what’s best for the world, is my short answer.
Brooke Borel: This is Adam Becker. He’s a journalist and, by training, an astrophysicist who has written for The New York Times, the BBC, Scientific American, and many other publications — including, I should say, Undark. His recent book is called “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity.”
Adam Becker: A longer answer would be that companies are very good devices for separating people from their own best interests — and the best interests of the world — because they create a kind of misalignment of incentives. They replace questions of value and value for society in a broad and deep sense with questions about temporary profit motives. And as a result — especially with tech companies in the Bay Area, or just tech companies as a whole, mired in startup culture and venture capital culture — they have a tendency to create enormous amounts of hype around things that may not or do not deserve that amount of hype.
Anna Rothschild: I take it that Adam doesn’t think that large language models are the steam power of the 21st century?
Brooke Borel: You are correct. Very different view here, like I said. He thinks the hype about these AI products is way overblown in terms of how powerful they are, how useful they are, and on and on.
Adam Becker: I am an AI skeptic. And I also don’t much care for calling the current generative AI systems, like large language models, “AI.” I feel like that’s a marketing term. It’s a term that has sort of deflated in meaning over the 30 years since I was a kid. Back then, if you said “artificial intelligence,” people thought that you meant Commander Data on “Star Trek.” And now AI is like really good auto-complete.
If I went back in time 30 years and said: “Oh yeah, I can talk to an AI on my phone,” or, you know, “I have an AI,” I would’ve thought, “Oh, wow, that’s amazing.” And then if I showed 10-year-old me, “Here’s the AI,” and it was a chat window with ChatGPT and it’s hallucinating every other sentence, I think I would’ve been pretty unimpressed.
Brooke Borel: For our purposes, we are still going to call these products AI. And Adam does think that they are going to change the world, but for a completely different reason compared to Greg. He thinks that the impacts of AI products go way beyond the tech itself.
Anna Rothschild: You mean like AI getting too smart and wiping out humanity?
Brooke Borel: No, not even that. He’s concerned about the effects that are happening right now: The environmental impact, because it requires a lot of energy to run these large language models. The data impact, because it takes a lot of existing and sometimes copyrighted material to train these products.
Anna Rothschild: Ah, got it.
Brooke Borel: And there is another human toll. Because these models are built on so many existing things that humans already created, the training data includes some material that is not so great.
Anna Rothschild: Yeah, like, content that is racist or sexist or otherwise harmful.
Brooke Borel: Right. In order to train the models to avoid spewing this sort of stuff back out at users, actual people need to intervene. And then they can be exposed to some nasty stuff, right? Some of that work is exported to workers in lower-income countries, too, so it raises all sorts of ethical questions. And Adam just doesn’t think tech CEOs are the people who should be in charge of answering them.
Adam Becker: In my ideal world, these are not profit-driven or growth-driven entities in the first place that are creating these things. And so in that world, I feel like some organization develops a large language model and says, “Hey, we’ve got this thing, let’s talk about it.” And there’s a public discussion and debate that involves people talking about it online. It involves policymakers and experts — scientists, sociologists, experts in the humanities — talking about this and saying, “What would it mean to have this technology fully developed?”
Because the thing is, I think that one of the great myths that the tech industry has been very, very successful in convincing the rest of the world of is that the development of technology is somehow inevitable or on rails — that there’s just an obvious next step, and that what the tech industry does is take those steps. And so we can go faster or slower, but ultimately, technology is inevitable. And that’s just not true. The development of every single technology has been a story filled with human choice and contingency and chance. I don’t see why we should just say, “OK, you know, the tech industry gets to make whatever they want, and the decisions that go into the development of technology are just going to be made by small groups of democratically unaccountable people inside of a few companies.”
Brooke Borel: What do you think is the biggest risk?
Adam Becker: I think the biggest risk of letting companies do their thing is that they’re not only not going to address the very biggest problems that face humanity today, but exacerbate them. And honestly, I think we are seeing the single biggest risk play out right now in the support that the tech industry has shown for destroying American democracy, and the contempt that the leaders of the tech industry have for democracy as a whole.
Brooke Borel: We followed up with Adam to elaborate on this point, and he brought up examples including: Elon Musk’s attempt at a $1 million sweepstakes to entice people to vote for Trump in the 2024 presidential election and the presence of tech billionaires at Trump’s inauguration.
Adam Becker: And so, you know, the risk is the same one that Louis Brandeis pointed out almost a hundred years ago: We can either have a great concentration of wealth in our society, or we can have democracy, but we can’t have both.
So I think the risk is that the current system leads to massive inequality in the distribution of wealth and resources. And that leads to an erosion of democracy and the inability of our society to address the biggest problems that actually face us, in favor of addressing the short-term interests of the extraordinarily wealthy and powerful. So instead of addressing climate change, we’re cutting taxes on the wealthy. That’s literally what the U.S. government just did. And that’s horrifying. So yeah, I think that’s the risk. I think we’re seeing it.
Anna Rothschild: Oh, man, Brooke. He does not like AI.
Brooke Borel: No, he doesn’t. And that definitely colors his opinion on whether or not the tech CEOs should be allowed to just release it.
Anna Rothschild: Right. How on earth are these two going to find common ground?
Brooke Borel: Well, no surprise here: Greg and Adam did not agree on the usefulness of AI. They also do not agree on whether the current model for deploying the technology is democratic. In fact, they have wildly different perspectives on that.
[MUSIC]
Brooke Borel: So you two have books on similar topics, right? That came out in the same year.
Adam Becker: Yep.
Brooke Borel: But have you ever met before?
Adam Becker: No.
Greg Beato: Have not, no.
Brooke Borel: We got right to the heart of their disagreement, which is: What does it mean to deploy AI democratically?
Greg Beato: We’re premising this discussion on democracy, and I think it’s super ironic that as soon as there was a tool that was available and accessible to millions of people, all of a sudden there were people saying, “Oh my God, we’ve got to save democracy by prohibiting this tool and putting a small number of regulators and industry experts in charge of how this thing gets developed.” How would that happen? Maybe Adam has a strategy for how you might develop AI in a more precautionary-principle manner.
Adam Becker: I mean, regulation and input from experts are actually part of the function of democracy. It really seems like you are conflating market success with democracy, and the two are manifestly different, right? I’m really surprised to hear you talking about AI and these concerns about democracy in the way that you are, Greg, because we’ve seen this movie before.
This is the same thing that happened with social media. It was widely popular and taken up by enormous numbers of people. And it did erode the fabric of democracy. And it’s largely responsible for the crisis of democratic governance that we see here in the U.S. and around the world. And I think that we could have avoided that if there had been stronger regulation around how social media functioned and how social media moderated speech, for example.
Anna Rothschild: OK, that’s an interesting analogy. I see his point: Just because something is popular and gets taken up by a lot of people doesn’t mean it’s actually good for those people.
Brooke Borel: Right. And we’re not here to talk about social media today, but let’s stick with this for a minute. Because I think it is a helpful example, especially when you see why Greg and Adam think so differently about it.
Greg Beato: I would say that it’s kind of a mixed bag, that what it did was erode the way that we used to achieve consensus through gatekeeping. Without social media, we wouldn’t have had Black Lives Matter, we wouldn’t have had #MeToo, at least at the same scale that we had them. There are all these things that came from social media that are, to my mind, yes, broadly supportive of democracy, diversity, and greater inclusivity in public discourse.
Brooke Borel: In other words, Anna, social media has also been largely unregulated since its beginning. And although there are plenty of criticisms about how that all turned out, Greg is arguing that a lot of positive things came out of these platforms.
Anna Rothschild: Right, it’s not all doomscrolling and brain rot.
Brooke Borel: Right. There’s plenty of that, but there are other things, too. And these two went on like this for a while. But bringing it back to AI, I asked Greg: Even if these products are getting out to the public, which then has an opportunity to accept or reject them, should a handful of CEOs be making the decisions on deployment? And basically, he doesn’t think that these decisions truly are just in the hands of the CEOs.
Greg Beato: Well, you know, we went from one frontier model at the end of 2022 to dozens if not hundreds of frontier models. And [today] there are certainly close to 10 or a dozen big corporate players.
Brooke Borel: By the way, a frontier model just means a new AI model that is super big and powerful and is pushing new boundaries on what’s possible for the tech.
Greg Beato: And so there’s a lot of choice out there. And none of these models can say, “Here’s how we’re going to do it, and that’s the way it is.” There has to be uptake from users. When you say, “Well, should it be in the hands of these people?” It’s really in the hands of users, and they have a choice not to use it at all. Or they have a choice to pick and choose from many different approaches, whether it’s open source or different companies. There’s a lot of choice right now.
It could consolidate a lot. That’s usually what happens, right? There were hundreds of car makers in America at one time. There were thousands of beer manufacturers. And so, if we were moving towards a space where it’s just three places making models, then I think it would be more of an issue. But right now we’re on an upward trajectory, and it’s really about: Do people use it or not? And right now, I think OpenAI alone has 500 million weekly, or maybe even daily users? And so that to me is saying that it’s not being shoved down people’s throats. People are saying, “I want to use this thing.” They’re picking it up and using it.
Brooke Borel: Adam, what do you think about that?
Adam Becker: I agree that some people want to use these tools. But one of the things that I’m hearing you say in response to my concern about democratic accountability and democracy with this stuff: You keep coming back to sort of consumer choice in the market, and I really don’t think that that’s the same thing. Just because a product is popular does not mean that there’s a democratic mandate for that product to exist in the form that it does. Because free markets are notoriously terrible at solving all kinds of problems. They’re great at solving certain problems, but they’re really very limited tools. And so they’re not good at, for example, balancing external harms that are not properly priced into the market.
Anna Rothschild: For example, the market isn’t accounting for the impact of AI on climate change.
Brooke Borel: Yes, exactly. And Adam brought up the example of the energy use of AI in a simple Google search.
Anna Rothschild: You mean that AI-generated text that you see at the top of your screen when you Google something? Because I hate that.
Brooke Borel: Yeah, and you’re not the only one.
Adam Becker: The other thing is, yes, there are people who want to use these AI tools for various things, but it’s also true that AI is being put into applications where at least a large number of people don’t want to see it. And in some cases, most people don’t want to see it. You know, there’s been enormous public pushback on Google incorporating AI into its search results, but Google went ahead and did it anyway. And the public consensus seems to be that it made search worse, and also made it 10 times more energy intensive. But Google did it anyway and there’s not much that the rest of us can do about it.
Brooke Borel: By the way, we fact-checked that figure, and it’s based in part on an estimate from the research company Digiconomist. More recently, Google released its own report on the energy use of its large language model, Gemini, which powers Google AI search. And the company’s estimate across Gemini apps is 12.5 times lower than this one from Digiconomist. But none of these estimates are peer-reviewed, and it’s best to take them with a grain of salt.
Adam Becker: And that sort of brings me to the other point, which is: The small number of people at these companies are making the decisions about how these products are going to enter into the public sphere. And because there has not been regulatory pushback from the government — in part because we’re here in the U.S. experiencing an authoritarian takeover at the moment from a techno-corporate-backed authoritarian administration — we don’t have input into how these technologies are being incorporated into our lives and into the public sphere.
Brooke Borel: Adam, I do have a question for you, a follow-up.
Adam Becker: Yeah, sure.
Brooke Borel: So if the CEOs maybe are not who you think should be in place to sort of decide, “We’re putting this technology out there, whether you like it or not,” you’re talking about experts and regulators having a role in putting some guardrails there. Regulators and lawmakers aren’t always the same as experts and don’t always have the technological expertise to be making these decisions. So with that in mind, are regulators or lawmakers any better placed to be making these decisions than the CEOs?
Adam Becker: Well, first of all, they’ve got a mandate to do it, right? That’s what democracy is, right? But also, I would argue that they are better placed for it in that they are not working off of pure profit motive or growth motive, right, the way that a startup or a corporation is. And yes, regulators are not the same thing as experts, but we should work to make sure that regulators are well informed. And there has been a lot of success with that in the past.
Greg Beato: Can I just jump in?
Brooke Borel: Please.
Greg Beato: So I just want to make it clear that I don’t think it’s an argument of free markets versus regulation. I’ve not used the word free market. All markets are conditioned in various ways. There’s no such thing as an operational free market in the world, I don’t think, right? They’re always regulated in various ways, and that regulation is usually important to how they function. And I’m also not saying that there shouldn’t be any place for the government in determining the future of technology. When I talk about being a pluralist, my intent is to say, “Well, we should have multiple stakeholders who are all capable and well resourced so that they are both competing and collaborating with each other.”
I kind of take it as a given that the government is always going to be a very strong stakeholder because it has a monopoly on creating laws and on using force to compel compliance with those laws. None of the other stakeholders — and I would also include the news media, academia, the public at large, and commercial interests — has that power that the government has. So I always assume that the government is going to be a major player in how society works out how it moves forward.
Brooke Borel: So with that context, it sounds like you’re both saying that there’s going to be regulation in some way. But maybe [your perspectives] differ on the point at which it happens within the development and deployment of AI?
Greg Beato: Well, I mean, I think that the internet — people often, you know, describe it as, “Oh, well there was no regulation.” But one of the things we talk about in the book was all of the things that happened during the Clinton administration that were, in fact, sort of like regulatory decisions. [It was] definitely a light touch, but it gave clarity as to, “What is our policy and how will we move forward?”
There’s probably a need for more of that. We need a government that uses AI effectively when other states are embracing it as well. And so I was amazed, for example, that in the two debates we had during the presidential election, AI was not mentioned once.
Brooke Borel: Just to clarify, Kamala Harris did mention AI in a presidential debate in September 2024, but it was very brief.
Greg Beato: It didn’t seem to me to be a great thing. If we’re really on track for AGI, which a lot of people think we’ll reach by 2027: Do we really want the National Weights and Measures to be in the hands of Donald Trump?
Brooke Borel: Here, Greg’s referencing artificial general intelligence, or AGI, which is the point at which AI can match human performance across a wide range of tasks. The technology isn’t there yet, and there are wildly different estimates on when that might happen and how it might affect humanity.
Greg Beato: No one made that a campaign issue, you know?
Adam Becker: I think that was good.
Brooke Borel: Yeah. Why?
Adam Becker: Because I don’t think that we’re on track for 2027 and I think that the campaign to get people to think that we are is fundamentally misguided. I think it was a distraction, or would’ve been a distraction, from the real issues in the campaign. I think large language models and these other generative AI tools need to be regulated because they have concrete harms that they’re perpetrating right now. And so we need to do what we can to minimize or remove those harms, and that requires judicious regulation, something that this government’s not interested in. That’s the problem.
Brooke Borel: So after hearing each other’s perspective on this — and in particular on this question of whether these companies should be deciding our collective future when it comes to tech, and specifically AI — what is one thing that you wish the other would consider more seriously?
Greg Beato: I guess I would just say that I do think commercial activity, free enterprise capitalism, whatever you want to call it: I don’t think it’s always about, “Oh, it’s just a money grab and it’s exploitative by nature.” I agree with Adam in the sense that I don’t think democracy and consumer choice are the same thing, but I think that consumer choice is an enabler of democracy. And so consider that: the power of the market to actually create meaningful choices for people in life.
Brooke Borel: And Adam, what’s one thing that you wish Greg would consider more?
Adam Becker: Pretty much the opposite of what Greg said.
Brooke Borel: OK, great. Great.
Adam Becker: I would like Greg to consider that perhaps free enterprise and the market are often at odds with a healthy, properly functioning democracy. And that, in particular, allowing massive concentrations of wealth erodes the fabric of democracy, as Supreme Court Justice Louis Brandeis said almost a century ago.
Brooke Borel: And did either of you hear anything that you found particularly compelling or at least something that made you think a little bit more about your perspective?
Adam Becker: Listening to Greg told me that I need to go take a look at how much of the climate impact numbers are current versus projections. And also take a look at what people on all sides of this debate are using those projections to justify.
Brooke Borel: And you, Greg?
Greg Beato: No, I honestly think that it’s just a good exercise in trying to understand how you could potentially shift people into considering a different perspective. I didn’t move Adam a millimeter.
Brooke Borel: Maybe you didn’t really move each other too much.
Greg Beato: So, you know, that’s that.
Brooke Borel: And how are you both feeling now? Were you surprised about how this conversation went?
Greg Beato: Well, the premise of our book is that we should be having discussions with people from different perspectives. I think I told you in advance that I was on the fence about this, just because I kind of assumed it would play out the way it has. But then I was like, “Well, I’m not living the premise of the book if I don’t try to engage with people that have a different perspective.”
Brooke Borel: And how about you, Adam?
Adam Becker: I think I agree with Greg that this conversation went roughly the way that I thought it would. I’m not super surprised by anything that’s happened in this conversation. But in addition to being a worrier, I’m also generally a fairly hopeful and cheerful person, I think, and I’m not quite sure how I square that circle, but I do. And I had hoped that the conversation would be, at the very least, fun. And it was, you know?
Brooke Borel: I had fun. Greg, did you have fun?
Greg Beato: I’m not going to — I have a pretty high bar for fun.
Brooke Borel: You have a high bar for fun. We can’t even agree that it was fun. That’s OK.
Adam Becker: Wow. Amazing.
[MUSIC]
Brooke Borel: These two are probably not going to hang after this, unlike some of our other guests.
Anna Rothschild: Yeah, that is pretty clear. I don’t think they’re going to get a drink.
Brooke Borel: I will say, though, I think it’s interesting — it does seem like they have a lot of the same interests and they both seem to have some shared values about this tech as well.
Anna Rothschild: Yeah, they both want more people to have a say in this technology. It’s just that they disagree on the stage at which the public gets involved.
Brooke Borel: And they also disagree on the usefulness, like how life-changing this technology actually will be and also what risks it poses.
Anna Rothschild: Yeah, and they also have very different opinions about what it means for something to be democratic in some ways.
Brooke Borel: I mean, I will say that with Greg’s take, I don’t think that he was thinking of democratic in terms of a political system.
Anna Rothschild: Right.
Brooke Borel: I think that his idea of democratic in this sense is that instead of having these technologies behind closed doors in university labs or government labs, having them deployed so that people can actually use them and interact with them and kind of know what’s going on — that’s what he is calling democratic in this case, where can actually be involved in working on these things and using them.
Anna Rothschild: For sure. I mean, I don’t really know what’s right: If you give people the theoretical knowledge of something and let them decide, or if people have to kind of see how this technology works in practice, before they can make a real decision about it.
Brooke Borel: I’m really curious to hear what our listeners think about all of this.
Anna Rothschild: Yeah, please, send us an email to [email protected]. We would love to hear from you.
Brooke Borel: And that’s it for this episode of Entanglements, brought to you by Undark magazine, which is published by the Knight Science Journalism Program at MIT. Our amazing producer and editor is Samia Bouzid. This show is fact-checked by Undark deputy editor Jane Reza. Our production editor is Amanda Grennell, and Adriana Lacey is our audience engagement editor. Special thanks to our editor in chief, Tom Zeller Jr. I’m Brooke Borel.
Anna Rothschild: And I’m Anna Rothschild. Thanks for listening. See you next time.