OpenAI’s plans to restructure as a for-profit business mark a significant departure from its roots as a non-profit research lab grounded in a commitment to build artificial intelligence (AI) to “benefit humanity.”
However, the latest changes represent the culmination of a yearslong shift away from the ChatGPT maker’s foundations and toward a typical for-profit startup, experts told The Hill.
“Restructuring around a core for-profit entity formalizes what outsiders have known for some time: that OpenAI is seeking to profit in an industry that has received an enormous influx of investment in the last few years,” said Sarah Kreps, director of Cornell University’s Tech Policy Institute.
Reports first emerged last week that OpenAI was considering restructuring into a public benefit corporation, a for-profit entity aimed at bettering society, and removing the non-profit board’s control over the company.
The shift comes as part of an effort to attract investors amid OpenAI’s latest fundraising round, which the company announced Wednesday had secured $6.6 billion in new funding at a $157 billion valuation.
The news of its potential restructuring was accompanied by the departure of several top OpenAI executives, including chief technology officer Mira Murati.
The latest resignations followed a series of departures earlier this year, including co-founders Ilya Sutskever and John Schulman, as well as machine learning researcher Jan Leike.
OpenAI CEO Sam Altman has sought to dispel any speculation that the recent departures are related to the company’s restructuring plans.
“We have been thinking about that, our board has, for almost a year, independently, as we think about what it takes to get to our next stage,” Altman said at Italian Tech Week in Turin last Thursday, according to Reuters.
Even if unconnected, the turnover at OpenAI and its restructuring plans appear to signal a shift in focus, Kreps noted.
“At least circumstantially, these changes – the shifting emphasis to for-profit, turnover at the top, as well as the dissolution of OpenAI’s super alignment team that focused on AI risk – points to an accelerated move into the boundary-pushing directions of AI research,” she said in a statement.
OpenAI dissolved its Superalignment team in May shortly after Sutskever and Leike announced their departures. The pair ran the team, formed less than a year earlier, that sought to address the potential dangers of superintelligence — AI that is smarter than humans.
“The moves collectively mark a potential departure from the company’s founding emphasis on safety, transparency, and an aim of not concentrating power in the development of artificial general intelligence,” Kreps added.
OpenAI was founded in 2015 as a non-profit AI research company with the goal of developing the technology in a way “that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
It emphasized that as a non-profit, it would seek to “build value for everyone rather than shareholders.”
The company began shifting away from its non-profit foundations in 2019, when it announced plans to restructure as a “capped-profit company.”
Under the new structure, investors could receive a return of up to 100 times their original investment, with the remaining profits going toward the non-profit. OpenAI’s non-profit board would also retain control of the company.
The company explained at the time that its decision was driven by a desire to be able to raise more money for its efforts “while still serving our mission.”
This unique structure would ultimately result in the drama that unfolded at the company last November, when the non-profit board briefly ousted Altman as CEO in a surprise maneuver.
After several days of chaos, in which hundreds of OpenAI employees threatened to resign, the company brought Altman back as CEO and formed a new board, removing all but one of the members who had been part of the ouster.
“The circus show and the comedy show that happened in the potential coup d’etat of OpenAI last fall, that was the straw that broke the camel’s back, where this model couldn’t work,” Wedbush Securities analyst Dan Ives told The Hill.
Ives said he sees no way OpenAI could have stayed a non-profit, especially after the 2022 release of its incredibly popular ChatGPT tool.
“It was a matter of time that this was going to happen, and they ripped the band-aid off,” he said, adding, “I think it was known within the industry, known within the venture community, known on the Street, that this would happen.”
“They’re a victim of their own success,” Ives added. “If they didn’t have a moment that would really change the tech, and I’d say enterprise consumer landscape going forward, we wouldn’t be talking about change to a for-profit model.”
Even while OpenAI operated as a non-profit and then a “capped-profit” company, critics long questioned its commitment to its founding ideals.
“There’s been a big question for a long time around about whether OpenAI is really grounded in the public interest mission that it was founded on, and whether the fact that it’s a non-profit means anything at all,” said Mark Surman, president of Mozilla Foundation, which has been an advocate for open-source AI.
He suggested the move to restructure the company could provide clarity on this front.
“Something good might come out of this move to take it private, which is just to put OpenAI in a position where it has to be honest about what it is,” Surman said. “It’s a fast moving, mid-stage, very successful startup.”
“We also need publicly oriented, open-source AI that is built in a way that has safety in mind and that everybody can rely on,” he argued. “Let’s just not fool ourselves that OpenAI is a path to that.”
Similar complaints were at the center of a lawsuit that billionaire tech mogul Elon Musk filed against Altman and OpenAI in May.
Musk, who helped found the company, alleged that Altman and fellow co-founder Greg Brockman “assiduously manipulated Musk into co-founding their spurious non-profit venture.”
“After Musk lent his name to the venture, invested significant time, tens of millions of dollars in seed capital, and recruited top AI scientists for OpenAI Inc., Musk and the non-profit’s namesake objective were betrayed by Altman and his accomplices,” the lawsuit reads.
These concerns about OpenAI’s commitment to its founding ideals also highlight the need for regulation, said Julia Stoyanovich, director of the Center for Responsible AI at New York University.
“This really underscores that OpenAI never really meant to be thinking about the well-being of everybody as a central priority and this was always on their mind that they are a commercial entity,” Stoyanovich told The Hill.
“Now this is abundantly clear, and we need to step up our efforts to regulate the use of the technology that they produce to make sure that they don’t destroy society further,” she added.