If 2023 was the year artificial intelligence became a household topic of conversation, it’s in many ways because of Sam Altman, CEO of the artificial intelligence research organization OpenAI. Altman, who was named TIME’s 2023 “CEO of the Year,” spoke candidly about his November ousting—and reinstatement—at OpenAI, how AI threatens to contribute to disinformation, and the rapidly advancing technology’s future potential in a wide-ranging conversation with TIME Editor-in-Chief Sam Jacobs as part of TIME’s “A Year in TIME” event on Tuesday.
Altman shared that his sudden mid-November removal from OpenAI proved a learning experience—both for him and the company at large. “We always said that some moment like this would come,” said Altman. “I didn’t think it was going to come so soon, but I think we are stronger for having gone through it.”
Altman insists that the experience ultimately made the company stronger—and proved that OpenAI’s success is a team effort. “It’s been extremely painful for me personally, but I just think it’s been great for OpenAI. We’ve never been more unified,” he said. “As we get closer to artificial general intelligence, as the stakes increase here, the ability for the OpenAI team to operate in uncertainty and stressful times should be of interest to the world.”
“I think everybody involved in this, as we get closer and closer to super intelligence, gets more stressed and more anxious,” he explained of how his firing came about. The lesson he came away with: “We have to make changes. We always said that we didn’t want AGI to be controlled by a small set of people, we want it to be democratized. And we clearly got that wrong. So I think if we don’t improve our governance structure, if we don’t improve the way we interact with the world, people shouldn’t [trust OpenAI]. But we’re very motivated to improve that.”
The technology has limitless potential, Altman says—“I think AGI will be the most powerful technology humanity has yet invented”—particularly in democratizing access to information globally. “If you think about the cost of intelligence and the quality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that,” he said, “it’s a very different world. It’s the world that sci-fi has promised us for a long time—and for the first time, I think we could start to see what that’s gonna look like.”
Still, “like any other previous powerful technology, that will lead to incredible new things,” he says, “but there are going to be real downsides.”
Altman admits that there are challenges that demand close attention. One particular concern to be wary of, with 2024 elections on the horizon, is how AI stands to influence democracies. Whereas election interference circulating on social media might look straightforward today—“troll farms…make one great meme, and that spreads out”—Altman says that AI-fueled disinformation stands to become far more personalized and persuasive: “A thing that I’m more concerned about is what happens if an AI reads everything you’ve ever written online … and then right at the exact moment, sends you one message customized for you that really changes the way you think about the world.”
Despite the risks, Altman believes that if AI is deployed safely and placed responsibly in the hands of people—which he says is OpenAI’s mission—the technology has the potential to create a “path where the world gets much more abundant and much better every year.”
“I think 2023 was the year we started to see that, and in 2024, we’ll see way more of it, and by the time the end of this decade rolls around, I think the world is going to be in an unbelievably better place,” he said. Though he also noted: “No one knows what happens next. I think the way technology goes, predictions are often wrong.”
A Year in TIME was sponsored by American Family Insurance, The Macallan, and Smartsheet.