• db2@lemmy.world · +35/−2 · 16 days ago

      In case anyone doesn’t get what’s happening, imagine feeding an animal nothing but its own shit.

      • Stern@lemmy.world (OP) · +12 · 16 days ago

        I use the “Sistermother and me are gonna have a baby!” example personally, but I am an awful human, so…

      • BassTurd@lemmy.world · +10 · 16 days ago

        Not shit, but isn’t that what brought about mad cow disease? Farmers were feeding cattle brain matter that contained infectious prions. Idk if it was cows eating cow brains or other animals though.

        • _cnt0@sh.itjust.works · +11 · 16 days ago

          It was the remains of fish, which we ground into powder and fed to other fish and sheep, whose remains we ground into powder and fed to other sheep and cows, whose remains we ground into powder and fed to other cows.

    • Snowclone@lemmy.world · +22 · edited · 16 days ago

      It’s more “we are so focused on stealing and eating content, we’re accidentally eating the content we or other AI made, which is basically like incest for AI, and they’re all inbred to the point they don’t even know people have more than two thumb-shaped fingers anymore.”

    • rottingleaf@lemmy.world · +3 · 16 days ago

      News like this makes me want to live to see the time when our world is interesting again. Real AI research, something new instead of the Web we have, something new instead of the governments we have. It’s just that I’m scared of what lies between now and then. Parasites die hard.

    • jimmy90@lemmy.world · +2 · 15 days ago

      or “we’ve hit a limit on what our new toy can do and here’s our excuse why it won’t get any better and AGI will never happen”

  • rickdg@lemmy.world · +19 · 16 days ago

    Old news? This seems to have been the subject of several papers for some time now. Synthetic data has already been used successfully in very specific domains.

    • SomeGuy69@lemmy.world · +3/−2 · 16 days ago

      Yup, old news and wrong news. There are also so many people here who hate AI but don’t understand how it works. Pretty disappointing for a technology community.

  • aggelalex@lemmy.world · +19/−1 · 15 days ago

    So AI:

    1. Scraped the entire internet without consent
    2. Trained on it
    3. Polluted it with AI generated rubbish
    4. Trained on that rubbish without consent
    5. Is now in need of a lobotomy

  • gravitas_deficiency@sh.itjust.works · +17 · edited · 14 days ago

    Uh, good.

    As an engineer who cares a LOT about engineering ethics, it is absolutely fucking infuriating watching the absolute firehose of shit that comes out of LLMs and public-consumption audio, image, and video ML systems, juxtaposed with the outright refusal of companies and engineers who work there to accept ANY accountability or culpability for the systems THEY FUCKING MADE.

    I understand the nuances of NNs. I understand that they’re much more stochastic than deterministic. So, you know, maybe it wasn’t a great idea to just tell the general public (which runs a WIDE gamut of intelligence and comprehension ability - not to mention, morality) “have at it”. The fact that ML usage and deployment in terms of information generating/kinda-sorta-but-not-really-aggregating “AI oracles” isn’t regulated on the same level as what you’d see in biotech or aerospace is insane to me. It’s a refusal to admit that these systems fundamentally change the entire premise of how “free speech” is generated, and that bad actors (either unrepentantly profit driven, or outright malicious) can and are taking disproportionate advantage of these systems.

    I get it - I am a staunch opponent of censorship, both personally and as a software engineer. But the flippant deployment of literally society-altering technology alongside the outright refusal to accept any responsibility, accountability, or culpability for what that technology does to our society is unconscionable and infuriating to me. I am aware of the potential that ML has - it’s absolutely enormous, and could absolutely change a HUGE number of fields for the better in incredible ways. But that’s not what it’s being used for, and it’s because the field is essentially unregulated right now.

  • pyre@lemmy.world · +16 · edited · 16 days ago

    oh no are we gonna have to appreciate the art of human beings? ew. what if they want compensation‽

  • draughtcyclist@lemmy.world · +16/−1 · 16 days ago

    I’ve been assuming this was going to happen since it’s been haphazardly implemented across the web. Are people just now realizing it?

    • DeathbringerThoctar@lemmy.world · +20/−1 · 16 days ago

      People are just now acknowledging it. Execs tend to have a disdain for the minutiae. They’re like kids that only want to do the exciting bits. As a result, things get fucked because they don’t really understand what they’re doing. As Muskrat would say, “move fast and break things.” It’s a terrible mindset.

  • Hugin@lemmy.world · +14/−1 · 16 days ago

    The solution for this is usually counter-training. Granted, my experience is on the opposite end: training AI vision systems to ID real objects.

    So you train up your detector AI on hand-tagged images. When it gets good, you use it to train a generator AI until the generator is good at fooling the detector.

    Then you train the detector on new tagged real data plus the new AI-generated data. Once it’s good at detection again, you train the generator AI against the new detector.

    Repeat several times and you usually get a solid detector, plus a good generator as a side effect. See the sketch below.

    The thing is, you need new real human-tagged data for each new generation. None of the companies want to generate new human-tagged data sets, as it’s expensive.
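
    Roughly, the loop looks like this. This is a toy, runnable PyTorch sketch under made-up assumptions (1-D data, tiny models, arbitrary hyperparameters), not any specific production setup; fresh_real_batch is an invented stand-in for the human tagging step.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy stand-ins: detector classifies samples as real/fake,
    # generator maps noise to samples.
    detector = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def fresh_real_batch(n=128):
        # Hypothetical stand-in for new human-tagged real data, collected
        # anew each generation: the expensive step nobody wants to pay for.
        return torch.randn(n, 1) * 0.5 + 3.0

    for generation in range(5):
        # 1) Retrain the detector on fresh real data plus the
        #    current generator's output (labels: real=1, fake=0).
        for _ in range(200):
            real = fresh_real_batch()
            fake = generator(torch.randn(128, 4)).detach()
            loss_d = (bce(detector(real), torch.ones(128, 1))
                      + bce(detector(fake), torch.zeros(128, 1)))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # 2) Retrain the generator until it fools the updated detector
        #    (generator wants its output labeled real=1).
        for _ in range(200):
            fake = generator(torch.randn(128, 4))
            loss_g = bce(detector(fake), torch.ones(128, 1))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    ```

    If you swap fresh_real_batch for a previous round’s generator output, you get something like the self-consumption problem this thread is about.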

  • levzzz@lemmy.world · +12/−3 · 16 days ago

    Fake news, just like that one time Nightshade “killed” Stable Diffusion (it literally had no effect). Flux came out not long ago, and it’s better than ever.