
OpenAI’s restructuring plot raises red flags

Since the launch of ChatGPT in late 2022, OpenAI has become a household name in the artificial intelligence space, valued at over $80 billion earlier this year. As essentially a capped-profit company governed by a not-for-profit board, OpenAI is unlike other companies such as Elon Musk’s xAI and Anthropic. Because it is led by a not-for-profit board, OpenAI is bound to its humanitarian mission and to working for “the benefit of all,” according to its charter. That mission is headed up by Chief Executive Officer Sam Altman, the public face of artificial intelligence and a widely admired leader. Recently, though, Altman’s and OpenAI’s reputations have come under threat amid reports that the company may change its structure and leave its original mission behind.

ChatGPT’s rise to fame hasn’t been without its challenges. In Nov. 2023, about a year after ChatGPT’s initial launch, Altman was ousted by the Board of Directors because “he was not consistently candid in his communications with the board,” according to the company’s blog post. Soon after, Microsoft offered Altman and company President Greg Brockman positions leading a new research team. Shocked and frustrated by the sudden transition, investors and fellow employees pushed back against the decision in solidarity with Altman. Only a few days later, Altman and Brockman were reinstated to their positions after a sweeping restructuring of the Board of Directors.

Turbulent workplace politics put eyes on OpenAI, but they also demonstrated to investors the immense loyalty that OpenAI’s employees have to their leadership, a promising win for the company. Post-restructuring, OpenAI seemed well positioned to capitalize on its recently expanded $10 billion deal with Microsoft. Since then, OpenAI has continued to campaign for funding and is currently in talks with investors for $6.5 billion that would raise the company’s valuation to nearly $150 billion. Should the deal be completed, that valuation would be double that of chip giant Intel and would surpass Elon Musk’s SpaceX.

Amid this campaign, a new development has emerged. In the past week, reports have revealed that OpenAI is considering transitioning away from not-for-profit board control to a for-profit public benefit corporation. This would mean that rather than simply focusing on helping humanity, the company would also be obligated to deliver value to shareholders. Despite promises from OpenAI spokespeople that the company will retain a non-profit arm and the values that come with it, should the restructuring occur, OpenAI will no longer be effectively bound by its humanitarian mission or its not-for-profit board.

While investors are thrilled at the possibility of increased returns, everyday people and those who fear AI should be concerned. A change in OpenAI’s structure would likely mean a change in the priorities listed in its charter. As of now, the charter states that the company is “committed to doing the research required to make AGI safe,” referring to artificial general intelligence, but its actions say otherwise, even before the restructuring plot. In May 2024, just a year after announcing its creation, OpenAI announced the dissolution of its Superalignment team, a group dedicated to mitigating long-term safety risks associated with AGI. Right before that announcement, passionate AI safety proponents Ilya Sutskever and Jan Leike left the company. Leike criticized the company soon after in a post on X, saying that “over the past years, safety culture and processes have taken a backseat to shiny products.” He isn’t the only one concerned about management’s alignment and safety precautions. Numerous other employees have raised concerns about the management team’s commitment to its creed.


OpenAI’s safety claims are full of empty promises and questionable execution. Though OpenAI announced the creation of a Safety and Security Committee, it faced major criticism for stacking the committee with company insiders. Surely there is no conflict of interest. OpenAI has since announced that it would move to make the committee more independent, but only after the initial backlash. Convenient.

At the same time, OpenAI is looking to shape the regulations that might one day control it. The company has registered multiple lobbyists, and Altman has been named to the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board. Also convenient.

Couple this with the move away from not-for-profit oversight, and there’s no doubt that OpenAI’s safety mission will fall further by the wayside. What this means for our global community, we don’t know, but it is certainly worrying. The nuances of AI regulation and its impact on our health and wellbeing are still being worked out. Though regulations are in progress, AI companies need to hold themselves accountable for their potential impact. OpenAI might not be setting the best example.

