• 100@fedia.io · 5 months ago

    Time to fill sites’ code with randomly generated garbage text that humans will never see but crawlers will gobble up?

  • elgordino@fedia.io · 5 months ago

    The TikTok spider has been a real offender for me. For one site I host, it burned through 3 TB of data over 2 months requesting the same 500 images over and over. It was ignoring robots.txt too, so I ended up having to block its user agent.
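
    In case it helps anyone, the user-agent block itself is tiny if you’re on Nginx. This is just a sketch, and it assumes the crawler identifies itself as “Bytespider” — check your own access logs for the exact string before copying it:

        # Refuse requests from the TikTok/ByteDance crawler by user agent.
        # "bytespider" is an assumption — match whatever shows up in your logs.
        if ($http_user_agent ~* "bytespider") {
            return 403;
        }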

    • dan@upvote.au · 5 months ago

      Are you sure the caching headers your server is sending for those images are correct? If your server is telling the client to not cache the images, it’ll hit the URL again every time.

      If the image at a particular URL will never change (for example, if your build system inserts a hash into the file name), you can use a far-future expires header to tell clients to cache it indefinitely (e.g. expires max in Nginx).
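
      A minimal sketch of that in Nginx, assuming the hash-named files live under a path like /assets/ (the location is just an example):

          # Hash-named files never change, so tell clients to cache them indefinitely.
          location /assets/ {
              expires max;  # far-future Expires plus a long max-age Cache-Control
          }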

      • elgordino@fedia.io · 5 months ago

        Thanks for the suggestion; it turns out there are no cache headers on these images. They indeed never change, so I’ll try that update. Thanks again!

  • marcos@lemmy.world · 5 months ago

    TBF, pushing a site to the public while adding a “no scraping” rule is a bit of a shitty practice; and pushing it while adding a “no scraping, unless you are Google” rule is a giant shitty practice.

    Rules for politely scraping the site are fine. But there will always be people who disobey them, so you must actively enforce those rules too. So I’m not sure robots.txt is really useful at all.
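
    For reference, the “only Google gets in” setup looks something like this in robots.txt (illustrative only):

        # Googlebot may crawl everything...
        User-agent: Googlebot
        Disallow:

        # ...everyone else is told to stay out entirely.
        User-agent: *
        Disallow: /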

    • breakingcups@lemmy.world · 5 months ago

      No it’s not; what a weird take. If I publish my art online for enthusiasts to see, it’s not automatically licensed for everyone to distribute. If I specifically want to forbid entities I have huge ethical issues with (such as Google, OpenAI, et al.) from scraping and transforming my work, I should be able to.

      • marcos@lemmy.world · 5 months ago

        Nothing in my post (or in robots.txt) has any relation to distributing your content.

        • BlackPenguins@lemmy.world · 5 months ago

          What else would they scrape your data for? Sure, some of it could be for personal use, but most of the time it will be to redistribute it in a new medium, like a recipe app importing recipes.

          • lud@lemm.ee · 5 months ago

            Indexing is what “scrapers” mostly do.

            That’s how search engines work. If you don’t allow any scraping, don’t be surprised when you get no visitors.