
Thread by @DKokotajlo67142 on Thread Reader App


Daniel Kokotajlo
1/15: In April, I resigned from OpenAI after losing confidence that the company would behave responsibly in its attempt to build artificial general intelligence — “AI systems that are generally smarter than humans.” openai.com/index/planning…

2/15: I joined with the hope that we would invest much more in safety research as our systems became more capable, but OpenAI never made this pivot. People started resigning when they realized this. I was not the first or last to do so.

3/15: When I left, I was asked to sign paperwork with a nondisparagement clause that would stop me from saying anything critical of the company. It was clear from the paperwork and my communications with OpenAI that I would lose my vested equity in 60 days if I refused to sign.

4/15: Some documents and emails are visible here: vox.com/future-perfect…

5/15: My wife and I thought hard about it and decided that my freedom to speak up in the future was more important than the equity. I told OpenAI that I could not sign because I did not think the policy was ethical; they accepted my decision, and we parted ways.

6/15: The systems that labs like OpenAI are building have the capacity to do enormous good. But if we are not careful, they can be destabilizing in the short term and catastrophic in the long term.

7/15: These systems are not ordinary software; they are artificial neural nets that learn from massive amounts of data. There is a rapidly growing scientific literature on interpretability, alignment, and control, but these fields are still in their infancy.

8/15: There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all arenas.

9/15: Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building these systems to self-govern, even as profit motives and excitement about the technology push them to “move fast and break things.”

10/15: Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.

11/15: I applaud OpenAI for promising to change these policies!

12/15: It’s concerning that they engaged in these intimidation tactics for so long and only course-corrected under public pressure. It’s also concerning that leaders who signed off on these policies claim they didn’t know about them.

13/15: We owe it to the public, who will bear the brunt of these dangers, to do better than this. Reasonable minds can disagree about whether AGI will happen soon, but it seems foolish to put so few resources into preparing.

14/15: Some of us who recently resigned from OpenAI have come together to ask for a broader commitment to transparency from the labs. You can read about it here: righttowarn.ai

15/15: To my former colleagues, I have much love and respect for you, and hope you will continue pushing for transparency from the inside. Feel free to reach out to me if you have any questions or criticisms.
