Ilya Sutskever Left OpenAI to Save the World. His New Company Just Raised $2B With No Product.

March 28, 2026

SSI raised $2B at a $32B valuation from Alphabet — with no product, no revenue, no users. Ilya Sutskever's bet is that removing every commercial pressure is the only way to build superintelligence safely.

There is a particular kind of Silicon Valley story that only makes sense if you stop trying to apply normal business logic to it.

Ilya Sutskever co-founded OpenAI in 2015 alongside Sam Altman, Greg Brockman, and Elon Musk. He spent nearly a decade as its chief scientist, overseeing the training of GPT-2, GPT-3, and GPT-4, and the foundational research that made the company worth $300 billion. Then, in May 2024, he quit. A month later, he founded Safe Superintelligence Inc. — SSI — with a single stated mission: build superintelligence safely, and do nothing else.

No chatbot. No API. No enterprise sales. No distraction.

This week, SSI announced it had raised $2 billion at a $32 billion valuation, led by Alphabet — Google's parent company. The company has no product. No revenue. No users. It has Ilya Sutskever, a handful of world-class researchers, and a pitch that amounts to: trust us, we are going to build the most important technology in human history, and we are going to do it correctly.

Alphabet's decision to lead this round is the story within the story. Google's own AI division, Google DeepMind — led by Demis Hassabis — is one of the best-resourced AI research organizations on the planet. And yet Alphabet led a $2 billion round for a direct competitor. The reasoning, most likely, is portfolio hedging. If Sutskever is right that SSI will build superintelligence first, owning a piece of that outcome is worth the price of admission regardless of what DeepMind produces.

The funding math is worth examining. OpenAI recently completed a $40 billion round at a $300 billion valuation. Anthropic's last round valued it at $61 billion. SSI, with no product, sits at $32 billion. What you are pricing when you invest in SSI is not a business. You are pricing the probability that Ilya Sutskever is the person most likely to build AGI, multiplied by whatever you believe AGI is worth.
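That pricing logic can be made concrete with back-of-the-envelope arithmetic: if an investor values SSI as the probability it builds superintelligence first multiplied by the payoff if it does, then the implied payoff is the valuation divided by that probability. The probabilities below are illustrative assumptions, not figures from the round.

```python
# Back-of-the-envelope sketch with assumed probabilities.
# If valuation ~= P(SSI gets there first) * payoff,
# then implied payoff = valuation / probability.
valuation = 32e9  # SSI's reported $32B valuation

for p in (0.01, 0.05, 0.10):
    implied_payoff = valuation / p
    print(f"P = {p:.0%} -> implied payoff ~ ${implied_payoff / 1e12:.1f} trillion")
```

Even at a generous 10% chance of success, the bet only pencils out if the payoff is on the order of hundreds of billions of dollars.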

Apparently Alphabet believes that probability, multiplied by that value, comes out comfortably above $32 billion.

What is Sutskever actually building? Nobody knows, which is the point. SSI has published almost nothing. It hires quietly, does not attend conferences, and does not give interviews. The entire operating thesis is that the best way to build superintelligence safely is to remove every pressure that would cause you to cut corners — shipping timelines, customer demands, board pressure, press cycles.

This is the direct opposite of how Sam Altman runs OpenAI. Altman is perpetually on camera, perpetually announcing, perpetually shipping. ChatGPT has hundreds of millions of users. The company runs Sora, DALL-E, the GPT API, an enterprise business, and a consumer app that is one of the most-used pieces of software on earth. The pressure to ship is baked into the DNA.

Sutskever's bet is that this pressure is precisely what will cause OpenAI to get it wrong. That the race dynamics of the current AI industry are incompatible with the care required to build a truly safe superintelligent system. And that the only way to win correctly is to opt out of the race conditions entirely — which requires enough capital that you never have to compromise.

Two billion dollars buys a lot of runway to be careful.

Whether Sutskever is right will probably be the most important question in technology over the next decade. The fact that Alphabet is now betting alongside him — even as they fund and develop their own systems at DeepMind — suggests that even the people most likely to beat SSI are not entirely sure they will.

Deep Dive

See our previous coverage on the AI safety paradox: Anthropic's Most Powerful Model Just Leaked

The Signal — AI & software intelligence. 4x daily. Free.
