Ilya Sutskever and Friends Found Safe Superintelligence Inc.
Dr. Ilya Sutskever, Daniel Gross, and Daniel Levy, writing on the website of their new company:
Building safe superintelligence (SSI) is the most important technical problem of our time.
We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence Inc…
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
“Superintelligence” is not a word in most dictionaries; it’s meant as a catch-all alternative to “artificial general intelligence,” describing a computer system as smart as, or even smarter than, humans. Dr. Sutskever is one of OpenAI’s co-founders, and he served as its chief scientist until he abruptly resigned in May. Levy is also an OpenAI alumnus, while Gross comes from Apple, where he worked on machine learning. OpenAI’s mission, as posted on its website, is to “ensure that artificial general intelligence benefits all of humanity.” I assume Dr. Sutskever’s new company is using “superintelligence” instead of “AGI” or simply “artificial intelligence” because he tried to accomplish that goal at OpenAI and apparently failed, so now the mission has to be slightly modified to try it all again.
The last line I quoted, about “distraction by management overhead,” seemingly alludes to OpenAI’s obvious loss of direction. It’s true that OpenAI has become commercialized, which is potentially concerning for the safe development of AGI (OpenAI’s stated mission), but I guess the mission doesn’t matter anymore if Sam Altman, the chief executive, wants to eliminate board oversight of his company in the near future. Thus, Safe Superintelligence: a boring name for a potentially boring company. Safe Superintelligence probably won’t create the next GPT-4, the large language model that powers ChatGPT, or advance major research projects, because it’ll struggle to raise the kind of capital OpenAI has. It won’t have deals with Apple or Microsoft, and it certainly won’t be motivated by profit the way Altman’s company now is. Safe Superintelligence is the new OpenAI, whereas the real OpenAI is more akin to “Commercial AI.”
Is the commercialization of AI a bad thing? Probably not, but some doomsayers believe it is because AI could “go rogue” and destroy humanity. I think the likelihood of such an event is minimal, but I also believe AI research institutes like Safe Superintelligence should exist to study the effects of powerful computer systems on society. I don’t think Safe Superintelligence should build anything new the way OpenAI did (it’s best to leave the building to the companies with capital), but that kind of oversight should exist in a well-balanced industry. If OpenAI cooks up a contraption with the potential to do harm, Safe Superintelligence should be able to probe it and understand how it works. It’s best to think of Safe Superintelligence and OpenAI as collaborators, not just competitors, especially if OpenAI truly does disband its board.
Let’s hope Safe Superintelligence actually lives up to its name, though, unlike OpenAI. AI is like a drug for the business world right now: OpenAI dabbled in making a consumer product, ChatGPT, which was intended to be a limited research preview when it launched in November 2022; the product went viral, and the company’s entire corporate strategy shifted from safe AGI development to moneymaking. If Safe Superintelligence, contrary to my prediction, achieves a scientific breakthrough and a hit consumer product, it’s quite possible it’ll get carried away just like OpenAI did. Either Safe Superintelligence has more self-restraint than OpenAI (probably the case), or it’ll suffer the same fate.