AI alignment is something you might have been hearing a lot about since ChatGPT came along. It boils down to a simple thing: we must create AGI with interests that align with ours. That is, AI that will not eventually try to hurt (or eradicate) us once it realizes that we're an obstacle. Once we make AI that's as good as humans at solving problems, it might even hide itself from us if it's the kind of misaligned AI everyone fears.

I said before that we're "doomed" to reach AGI now that we have ChatGPT. We can't put the genie back in the bottle now that generative AI is out. Everyone will make better and faster models, and we'll reach AGI whether we want it or not. What we want, and what the OpenAI non-profit wants, is to avoid getting to AGI before we figure out alignment.

OpenAI DevDay keynote: ChatGPT usage this year.

I also said that I'm not worried about the doom scenarios AI researchers put out. These are, after all, the same minds who came up with the breakthroughs that got us here, and I'm sure the philosophical discussions about AGI and alignment were there from the start. They could very well have stopped development before we got to generative AI.

Before I show you the scary AI scenario that made me change my mind, if only slightly, I'll also remind you of the memes that flooded social media between Sam Altman's firing and rehiring. The gist of one of them was that it was AI directing OpenAI: a smarter version of ChatGPT had fired Altman.

This brings me to the theoretical scenario that Tomas Pueyo put up in his amazing walkthrough of the OpenAI CEO situation last week. The entire blog is available on Substack for free, and I highly recommend you read it to understand everything there is to know about OpenAI.