Via The Neuron
“Ilya Sutskever is kind of a big deal in AI, to put it lightly.
Part of OpenAI’s founding team, Ilya was Chief Scientist (read: genius) before being part of the coup that fired Sam Altman.
Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.
If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.
No one knows when this superintelligence will arrive, but when it does, it will have the power to impact humanity in massively positive or negative ways.
Safe Superintelligence (SSI) is being formed to, you guessed it, ensure that impact is positive.
Why it matters: We know little about SSI other than that it’s a research lab reminiscent of OpenAI’s early days, with the singular focus of building superintelligence with “no distraction by management overhead or product cycles.”
If the last part of that sentence sounded like a dig at OpenAI to you, you’re not alone. In part, the announcement post and Ilya’s Bloomberg interview read as, “We’re the ones who are actually going to build safe AI, not AI with a hint of ‘trust and safety’ like the rest of Big Tech.”
And while we believe SSI when it says its “business model means safety, security, and progress,” we wouldn’t be shocked to see a $1B+ fundraising announcement soon.
”