OpenAI’s $100 billion Microsoft pact could blur its ‘mission’ further
“We are actively working to finalize contractual terms,” OpenAI said last week, referring to a sweeping plan to restructure its powerful non-profit arm.
Remember the one that fired chief executive officer (CEO) Sam Altman back in November 2023 and nearly gave Microsoft executives a collective heart attack? It’s being given an equity stake worth over $100 billion that will redraw the company’s future.
OpenAI has spent the better part of two years seeking greater freedom from both Microsoft and the stiff founding principles that allowed its governing board to oust its CEO.
Now, a non-binding memorandum of understanding with its largest shareholder points to a future where Microsoft keeps privileged access to OpenAI’s technology, but OpenAI can also court new investors and expand deals with other cloud service providers—not just Microsoft.
Microsoft, for its part, likely had more leverage than OpenAI during talks. Over the last few months, it seems to have gone out of its way to signal that it wasn’t so reliant on OpenAI, releasing its own proprietary AI models under the MAI-1 label in August and buying technology from OpenAI’s arch-rival Anthropic.
Microsoft CEO Satya Nadella has always been a master of diversification, having strengthened the company’s footholds in cloud computing (Azure), gaming (Xbox), operating systems (Windows), office software (Office 365), professional networking (LinkedIn) and AI.
Branching into other sources for AI models not only spread his bets on picking a winner; it also meant Nadella didn’t need Altman as much as Altman needed Nadella.
OpenAI’s CEO is, after all, concurrently dealing with tough competition from Google and Anthropic, projections to burn through $115 billion over the next four years and myriad lawsuits. One thing that might help would be for the company to drop its hollow rhetoric about building AI systems for the greater good.
The non-profit entity it originally set up to “benefit humanity… unconstrained by a need to generate financial return” has turned into a moral albatross around Altman’s neck.
Sure, it helped motivate OpenAI researchers racing to build super-intelligent AI systems by designating them good guys. But the obligations Altman originally put in place allowed earlier board members to fire him when they decided his duplicity imperiled OpenAI’s noble mission.
Now the non-profit has been offered an unprecedented gift that looks like a poisoned chalice: a $100 billion endowment that could further erode its integrity.
OpenAI said that its non-profit would get a new equity stake in its for-profit business that “exceed[s] $100 billion,” making it one of the world’s best-resourced philanthropic organizations. OpenAI has already promised $50 million in grants to charities promoting AI in education, media and civic life.
That all seems fine, except for an incentive structure that sees the non-profit’s actions shaped by the company it relies on for cash. OpenAI’s non-profit board legally approves some of its biggest product launches and decides whether a new model is safe to release.
But the board could be put under greater pressure to rubber-stamp product launches and support growth with resources linked more tightly to OpenAI’s commercial success.
Even after a revamp following Altman’s firing, OpenAI’s board was already shirking its obligations.
One example: GPT-4o was rushed out in May 2024 to pre-empt Google’s debut of a rival AI model, compressing months of safety testing into a single week. OpenAI’s board let the model through despite concerns from the company’s safety team.
That decision was central to a recent lawsuit after a teenager died by suicide following months of interactions with ChatGPT. OpenAI had claimed its AI detected self-harm prompts 95% of the time, but the figure was based on controlled, single-prompt tests, not the multiple turns in conversations most users have.
After the lawsuit, OpenAI admitted its safeguards could break down during longer exchanges.
OpenAI’s restructuring still needs a sign-off from regulators in California and Delaware, who must assess whether it serves the public interest and keeps safety in check. If either attorney general (AG) objects, the deal could face delays or forced changes; checks and balances could come back to bite Altman.
Still, it’s hard to imagine California’s AG blocking a deal that enriches one of the state’s most prized exports. Till then, OpenAI’s humanitarian mission looks increasingly like a branding exercise. Perhaps that was the point. ©Bloomberg