
Ilya Sutskever Has a New Plan for Safe Superintelligence

OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.


Via The Neuron

Ilya Sutskever is kind of a big deal in AI, to put it lightly.
A member of OpenAI’s founding team, Ilya served as Chief Scientist (read: genius) before taking part in the board coup that fired Sam Altman.
  • The details of the saga are still under wraps, but part of the disagreement likely stemmed from Ilya believing that not enough resources were being dedicated to ensuring OpenAI’s products were safe.
  • Since then, we haven’t heard much from him, prompting endless “Where’s Ilya?” and “What did Ilya see?” memes on X and Reddit.
Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.
If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.
A sneak peek at future content 🙂
No one knows when this superintelligence will arrive, but when it does, it will have the power to impact humanity in massively positive or negative ways.
Safe Superintelligence (SSI) is being formed to, you guessed it, ensure that impact is positive.
“At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale. After this, we can say we would like it to be a force for good.”
— via Bloomberg
Why it matters: We know little about SSI other than that it’s a research lab reminiscent of OpenAI’s early days, with the singular focus of building superintelligence with “no distraction by management overhead or product cycles.”
If the last part of that sentence sounded like a dig at OpenAI to you, you’re not alone. In part, the announcement post and Ilya’s Bloomberg interview read as, “We’re the ones who are actually going to build safe AI, not AI with a hint of ‘trust and safety’ like the rest of Big Tech.”
And while we believe SSI when it says its “business model means safety, security, and progress,” we wouldn’t be shocked to see a $1B+ fundraising announcement soon.

  • Ilya Sutskever, Daniel Gross, and Daniel Levy announce Safe Superintelligence, an American company “with one goal and one product: a safe superintelligence”
  • Ilya Sutskever discloses his plans for Safe Superintelligence, says “we mean safe like nuclear safety as opposed to safe as in ‘trust and safety’”, and more
