OpenAI is now ClosedAI
So, guys, it finally happened. We’ve seen Sam Altman talking about OpenAI being a non-profit and how much of a hardship that was. It looks like this will be resolved soon: leaks indicate the company will become for-profit, and Sam has reportedly decided to give himself 7% in shares. That’s $10.5 billion. Yes, *billion*. But wait, just four months ago, Sam claimed something totally different.
“It’s so deeply unimaginable to people to say, ‘I don’t really need more money,’” Sam told the world with a straight face in May 2024. “If I were to say I’m going to try and make a trillion dollars with OpenAI, it would save a lot of conspiracy theories.”
Fast forward a few months, and here we are, discussing his $10.5 billion slice of the AI pie. The irony writes itself.
A big part of why OpenAI was formed in the first place was to make sure AI development and advancement wouldn’t be hoarded by a couple of big tech companies that don’t care about safety, legal boundaries, or copyright, but would instead be shared, safely, with everyone, so that AI could benefit all.
So these are the three pillars OpenAI was founded on:
- AI should be safe
- AI should benefit all
- AI should be open
And they got so far ahead of everyone else that they saw an opportunity to make money to fund their pursuit without being too dependent on their backers.
Now you have a company with a board that is supposedly there to keep it in line with these core pillars, which creates a conflict with a CEO who knows how to make money and build businesses. As if it weren’t odd enough to watch a non-profit become a market leader, those conflicts ultimately left Sam unemployed for a weekend, and today we see OpenAI falling behind other companies like Anthropic.
With all the moves across the industry, and competitors getting ahead of OpenAI in advancements and model improvements, you could sense, through Sam’s remarks about the company structure, that something was about to change, something far bigger than creating a for-profit arm controlled by the nonprofit. And then came yesterday.
To be clear, I have nothing against people making a fortune from their businesses. In fact, I love seeing entrepreneurs and visionaries strike gold when they’ve built something world-changing. Capitalism has rewards, and if you’ve built the ship that’s steering the future of AI, by all means, cash in.
What feels off here is the calculated coyness of Sam’s narrative. Is he the benevolent genius who’s “doing it because he loves it,” or is he just the next Silicon Valley mogul quietly securing his billions while giving us the “aww shucks” routine? You never quite know where he stands, and that’s the real problem. It’s not the money; it’s the way it’s framed.
And for me, the big problem here is consistency. I remember Justin Kan saying that when he founded Justin.TV, he just wanted to build a successful company, and he followed that principle and made it. I have no problem with OpenAI, which is a CLOSED-source company, coming out and deciding to make even more than the billions it already raised from investors in order to topple its competitors. The problem is the principles it claimed in the first place.
While it was a nonprofit, OpenAI gathered the trust of potential competitors, researchers, universities, and even lawmakers to access and harness critical data. The data gathered and used to build their LLMs was acquired on the strength of the principles OpenAI was founded on in the first place, and those principles existed not to deceive everyone (or so we thought) but to actually have a company that would fight for everyone. Now they are about to blow all of that up. For me this is major reputational damage, and it makes me wonder whether I’ll keep using OpenAI’s tools.
Because it doesn’t make sense anymore to say you are open when even your models aren’t open for anyone to access.
On top of that, I’m seeing a lot of criticism of Sam, because he went from having no equity in the company to reportedly getting 7% of it. Look, it’s totally fine to want to get rich. There’s no shame in reaping the rewards of hard work and innovation. Heck, if you’re transforming the way we work, communicate, and even think, why shouldn’t you get your cut? But don’t tell us you’re indifferent to wealth while you’re making moves like this.
If Sam had just come out and said, “Look, I’ve built something world-changing, and I’m taking my well-earned cut,” people might actually be on board with it. It’s not the fact that he’s cashing in — it’s the messaging that’s so out of touch. The narrative of building “safe AI for all” seems to crumble under the weight of these billion-dollar moves.
The SuperAlignment team, which was supposed to ensure AI safety and ethical development, was disbanded, while OpenAI cozied up to Microsoft, giving it a seat on the board. How’s that for transparency and accountability? The more this unfolds, the clearer it becomes that OpenAI’s original pillars of safety, accessibility, and openness are eroding as the company’s bottom line takes precedence.
We started with promises that OpenAI was here to democratize AI, making its benefits accessible to all. Now, the only thing being democratized is a corporate shuffle designed to line pockets while competitors like Anthropic gain ground in actually advancing AI. The irony is striking, given that OpenAI was supposed to prevent a two-company monopoly over AI and now seems poised to create its own, with Microsoft’s backing firmly in place.
At this point, “ClosedAI” might be a more fitting name. Because it’s clear that OpenAI isn’t as open — or altruistic — as it once claimed to be.
This pivot at OpenAI, wrapped in the cloak of a for-profit venture, holds a mirror up to the soul of the tech industry, where the deception of companies that claim to want to “save the world” has long been exposed. The narrative twist here is not just Altman’s; it is Silicon Valley’s, where the lines between social good and shareholder value blur into an indistinguishable haze, or at least that is what they want us to see.
The tech community, fooled by OpenAI’s promises, seems to have failed to learn from the past. “Don’t be evil” and “AI for all” are just idealistic marketing slogans, nothing more, designed precisely to cultivate our empathy. When will the tech giants have their feet held to the fire over their original ideals? I think it will take a while. This is just another chapter in the “fake it till you make it” saga, leaving us all to ponder whether any AI company is truly building for “all of humanity” (no, Elon won’t do that), or whether AI will exist only for those who can afford it. As the dust settles, one thing is clear: the conversation around AI ethics is far from over, and it’s something society cannot ignore.