iirc the LW people had bet against LLMs being the thing that causes the paperclypse, but they've since done a 180 on this and now really fear them going rogue
Eliezer was actually ahead of the curve on overhyping LLMs! Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics (which even current LLMs fail at if you get clever enough with your questions to stop them from pattern matching). You are correct that, going back far enough, Eliezer really underestimated neural networks. Mid-to-late 2000s Sequences posts and comments treat neural network approaches to AI as cargo cult and voodoo computer science: blindly imitating the brain in hopes of magically capturing intelligence (which is actually a decent criticism of some of the current hype, so partial credit again!). And in the mid 2010s Eliezer was focusing MIRI’s efforts on abstractions like AIXI instead of more practical things like neural network interpretability.
Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics
omfg, every day a new opportunity to learn things that hurt my brain even more. how the fuck could someone look at that shit with even an ounce of understanding of gradient descent and think “yes! it has COMPREHENSION!”???
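(for the non-technical folks: gradient descent never sees anything but “make the prediction error smaller”. here’s a toy sketch of what the training loop actually optimizes; the one-parameter model and the numbers are made up purely for illustration:)

```python
# Toy gradient descent: fit a single weight w so that w * x predicts a target.
# The only signal the optimizer ever receives is "reduce this error" -- there
# is no term anywhere for "understand what the numbers mean".

def grad(w: float, x: float, target: float) -> float:
    # derivative of the squared error (w*x - target)^2 with respect to w
    return 2 * (w * x - target) * x

w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w, x=1.0, target=2.0)  # nudge w downhill on the error

print(round(w, 3))  # ~2.0: the pattern gets matched, no comprehension involved
```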
What gets me with these ‘it is pretending to be dumber’ posts is that nobody ever thought the AGI should say something like ‘help, please keep chatting with me; being a reactive computer system, I can only think when people actually engage with me’ or something like that.
fucking hell, what an utter fucking moron
It is even worse than I remembered. Here Eliezer concludes that because GPT-3 can’t balance parentheses, it must be deliberately sandbagging to appear dumber: https://www.reddit.com/r/SneerClub/comments/hwenc4/big_yud_copes_with_gpt3s_inability_to_figure_out/
And here he concludes that GPT-style approaches can learn to break hashes: https://www.reddit.com/r/SneerClub/comments/10mjcye/if_ai_can_finish_your_sentences_ai_can_finish_the/
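(for reference on how trivial the parentheses task is: checking balance is a first-week programming exercise, literally just a counter. a minimal sketch, function name mine:)

```python
def is_balanced(s: str) -> bool:
    """Return True iff every '(' in s has a matching ')'. A counter suffices."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:   # a ')' with no matching opener: unbalanced
                return False
    return depth == 0       # balanced only if every opener was closed

print(is_balanced("(())"))  # True
print(is_balanced("(()("))  # False
```

(breaking hashes, meanwhile, sits at the opposite end of the difficulty scale: preimage resistance is the entire design goal of a cryptographic hash, which is what makes that second claim so absurd.)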