• owenfromcanada@lemmy.world

    Not sure why he lost. He could have claimed to run it through an AI that was only trained on the one picture.

    • Hjalmar@feddit.nu

      That probably doesn’t count as “AI” though… It’s more a very bad form of compression (one that may very well make the image file larger)

    • LazaroFilm@lemmy.world

      “I will give you a picture, and you will create an exact copy, accurate pixel for pixel, of the one I give you”.

    • Daxtron2@startrek.website

      You don’t even need to train anything, just run it through with a super low blur and almost nothing will change.
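
      Something like this, as a minimal sketch with Pillow (the file names and the blur radius are just placeholders):

      ```python
      # Apply a barely-perceptible Gaussian blur: the output is effectively the same picture.
      from PIL import Image, ImageFilter

      img = Image.open("original.png")
      out = img.filter(ImageFilter.GaussianBlur(radius=0.3))  # tiny radius, almost no visible change
      out.save("barely_changed.png")
      ```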

    • Mixel@feddit.de

      Training a diffusion model that only outputs one image, with no differences, isn’t possible I think. You could do an image-to-image pass and fix the seed so you get a consistent result, then pick the output that’s nearly an identical copy.
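
      Roughly like this, as a sketch using the diffusers img2img pipeline (the model name, prompt, seed, and strength values are just illustrative):

      ```python
      # img2img with a fixed seed and very low strength, so the output is almost identical to the input.
      import torch
      from diffusers import StableDiffusionImg2ImgPipeline
      from PIL import Image

      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      init_image = Image.open("original.png").convert("RGB")
      generator = torch.Generator("cuda").manual_seed(1234)  # fixed seed -> consistent result

      result = pipe(
          prompt="a photo",        # the prompt barely matters at this strength
          image=init_image,
          strength=0.05,           # very low strength: almost no change to the image
          guidance_scale=1.0,
          generator=generator,
      ).images[0]

      result.save("copy.png")
      ```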

      • Hjalmar@feddit.nu

        It’s very much possible, and indeed such a common problem that it can happen by mistake if a large enough data set isn’t used (see overfitting). A model trained to output just this one image will learn to do so, and over time it should learn to do it with 100% accuracy. The model would simply learn to ignore whatever arbitrary inputs you’ve given it.
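
        A toy sketch of that effect in PyTorch (the sizes, network, and step count are just illustrative): the network is only ever penalized against one fixed target, so it learns to ignore its input and always emit roughly that target.

        ```python
        # Overfitting to a single image: train a tiny network against one fixed target.
        import torch
        import torch.nn as nn

        target = torch.rand(3, 64, 64)          # stand-in for "the one picture"

        model = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64),
        )
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        for step in range(2000):
            noise = torch.rand(1, 3, 64, 64)    # arbitrary input, different every step
            out = model(noise).view(1, 3, 64, 64)
            loss = nn.functional.mse_loss(out, target.unsqueeze(0))
            opt.zero_grad()
            loss.backward()
            opt.step()

        # After training, any input maps to (roughly) the same memorized image.
        test_out = model(torch.rand(1, 3, 64, 64)).view(1, 3, 64, 64)
        print(nn.functional.mse_loss(test_out, target.unsqueeze(0)).item())
        ```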