• 18 Posts
  • 33 Comments
Joined 1 year ago
Cake day: June 26th, 2023

  • Thanks. I’m not trying to push anything here. She has some other interesting articles, and I came across this one on her blog while reading those. I didn’t see any discussion of it online and thought this would be a good community for evaluating its claims. My view is that pieces like this, which try to draw concrete claims from data, are best discussed and picked apart, especially since it didn’t come across as bad faith to me, but I understand that not everyone may share that view.

  • This is a silly argument:

    […] But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’

    That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.

    ‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.

    That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the turn-of-the-century worry that NYC would be buried under mountains of horse poop, right before cars took over. Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.

    EDIT: From the paper:

    The remainder of this paper will be an argument in ‘two acts’. In ACT 1: Releasing the Grip, we present a formalisation of the currently dominant approach to AI-as-engineering that claims that AGI is both inevitable and around the corner. We do this by introducing a thought experiment in which a fictive AI engineer, Dr. Ingenia, tries to construct an AGI under ideal conditions. For instance, Dr. Ingenia has perfect data, sampled from the true distribution, and they also have access to any conceivable ML method—including presently popular ‘deep learning’ based on artificial neural networks (ANNs) and any possible future methods—to train an algorithm (“an AI”). We then present a formal proof that the problem that Dr. Ingenia sets out to solve is intractable (formally, NP-hard; i.e. possible in principle but provably infeasible; see Section “Ingenia Theorem”). We also unpack how and why our proof is reconcilable with the apparent success of AI-as-engineering and show that the approach is a theoretical dead-end for cognitive science. In “ACT 2: Reclaiming the AI Vertex”, we explain how the original enthusiasm for using computers to understand the mind reflected many genuine benefits of AI for cognitive science, but also a fatal mistake. We conclude with ways in which ‘AI’ can be reclaimed for theory-building in cognitive science without falling into historical and present-day traps.

    That’s a silly argument. It sets up a strawman and knocks it down: just because you create a model and prove something in it doesn’t mean the proof has any relationship to the real world. In particular, NP-hardness is a worst-case notion; it doesn’t show that the instances anyone actually cares about are infeasible.
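
    To make that concrete, here’s a minimal Python sketch (my own toy example, not anything from the paper): SAT is the canonical NP-complete problem, so it’s “provably infeasible” in exactly the same worst-case sense, yet small or structured instances fall instantly, and modern solvers routinely handle industrial instances with millions of variables.

        from itertools import product

        # CNF formula: each clause is a list of literals; a positive int i
        # means "variable i", a negative -i means "not variable i".
        clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]

        def satisfiable(clauses, n_vars):
            # Brute force: exponential in the worst case, instant for small n.
            for bits in product([False, True], repeat=n_vars):
                assignment = {i + 1: b for i, b in enumerate(bits)}
                if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                       for clause in clauses):
                    return assignment
            return None

        print(satisfiable(clauses, 3))  # {1: False, 2: True, 3: False}

    Worst-case hardness tells you nothing about the distribution of instances you actually face, which is why the Ingenia proof doesn’t rule out AGI any more than Cook’s theorem rules out SAT solvers.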


  • BitSound@lemmy.world to Linux@lemmy.ml · The Dislike to Ubuntu
    2 months ago

    Canonical lives and dies by the BDFL model. It let them do great work early on, popularizing Linux with lots of polish, and they still do good work when external forces demand it, like contributing upstream. The model falters when they have their own sandbox to play in, because under a BDFL any internal feedback like “actually, this kind of sucks” just gets brushed aside. It doesn’t help that the BDFL here is the company’s CEO, founder, and funder, paying everyone who works there. People generally don’t like risking their job to point out that the emperor has no clothes; it’s easier to shrug and let the internet do it for you.

    Here are two good examples where internal feedback failed and the whole internet had to chime in to say that the hiring process did indeed suck:

    https://news.ycombinator.com/item?id=31426558

    https://news.ycombinator.com/item?id=37059857

    “markshuttle” in those threads is the owner/founder/CEO, i.e. Mark Shuttleworth.