Ilya Sutskever, the former chief scientist of OpenAI, has launched a new AI startup called Safe Superintelligence (SSI). The company has already secured over $1 billion in funding from a notable group of investors, including a16z, Sequoia, DST Global, and SV Angel. That level of backing signals substantial investor confidence in SSI's mission to prioritize AI safety.
Before establishing SSI, Sutskever co-led the now-dismantled Superalignment team at OpenAI, which was dedicated to researching how to keep advanced AI systems safe and aligned. That background underpins SSI's stated commitment to responsible AI development.
Sutskever's departure from OpenAI was a significant event in the AI industry. His exit followed a highly publicized conflict involving former board members and OpenAI CEO Sam Altman over the company's direction and internal communication.
Sutskever's experience at OpenAI, particularly with the Superalignment team, has clearly shaped SSI's safety-focused mission, but the company has not yet disclosed specific research plans. That ambiguity adds to the intrigue surrounding the startup and fuels curiosity about its eventual impact on the AI landscape.