• neo [he/him]@hexbear.netOP · 10 points · 6 months ago

      Of course it’s voluntary, but if entities like OpenAI say they will respect it, then presumably they really will.

      • Onno (VK6FLAB)@lemmy.radio · 18 points · 6 months ago

        Couple of things:

        1. Do you believe anything coming out of OpenAI when it’s abundantly clear that they’ll say anything to protect their bottom line?
        2. OpenAI are not the only people harvesting data and selling it to interested parties.
        3. There is no legal requirement to adhere to the standard and I’d be shocked if any court in the USA could understand the issue, let alone enforce a voluntary standard.
        4. The amount of automated data collection online is staggering. On my own services it accounts for 50% of the hits. Good luck with policing that.
        • neo [he/him]@hexbear.netOP · 7 points · 6 months ago

          I agree with your points 2-4, but I have observed on my own website that the crawlers that don’t respect it won’t, and the crawlers that do respect it will.

          • the_itsb [she/her, comrade/them]@hexbear.net · 6 points · 6 months ago

            How did you find this information? I know how to check traffic for my website, but idk how to get from “list of IPs” to “these ones are crawlers”

            apologies if this is a silly question

            • neo [he/him]@hexbear.netOP · 2 points · 6 months ago (edited)

              I used to sit and monitor my server access logs, and you can tell by the access patterns. Many of the well-behaved bots announce themselves in their user agents, so you can see when they’re active. I could see them crawl the main body of my website but never visit a subdomain that is clearly linked from the homepage yet disallowed in my robots.txt.

              On the other hand, spammy bots that are trying to attack you often have access patterns that probe your website for common configurations of popular CMSes like WordPress. They don’t tend to crawl.

              Google also provides a tool to test robots.txt, for example.
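
              If it helps to get from “list of IPs” to “these ones are crawlers”, here is a rough sketch of how that log reading could be automated. It assumes the common nginx/Apache “combined” log format, and the bot-name list and the disallowed path prefix are placeholders for whatever your own robots.txt actually says:

```python
# Rough sketch: group an access log by user agent and flag likely crawlers.
# Assumes the nginx/Apache "combined" log format; KNOWN_BOTS and DISALLOWED
# are placeholders for whatever your own robots.txt actually disallows.
import re
import sys
from collections import defaultdict

LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)
KNOWN_BOTS = ("Googlebot", "bingbot", "GPTBot", "CCBot", "Applebot")
DISALLOWED = "/private/"  # hypothetical Disallow: prefix from robots.txt

stats = defaultdict(lambda: {"count": 0, "paths": set()})
for line in sys.stdin:
    m = LINE.match(line)
    if not m:
        continue
    entry = stats[m["ua"]]
    entry["count"] += 1
    entry["paths"].add(m["path"])

for ua, entry in sorted(stats.items(), key=lambda kv: -kv[1]["count"]):
    declared = any(bot in ua for bot in KNOWN_BOTS)        # announces itself
    read_robots = "/robots.txt" in entry["paths"]          # fetched the rules
    hit_disallowed = any(p.startswith(DISALLOWED) for p in entry["paths"])
    print(f"{entry['count']:6} hits  robots.txt={read_robots!s:5} "
          f"disallowed={hit_disallowed!s:5} declared-bot={declared!s:5} {ua}")
```

              Feed it the log on stdin (e.g. python3 botcheck.py < access.log) and compare which user agents fetched robots.txt against which ones wandered into the disallowed paths anyway.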

      • space_comrade [he/him]@hexbear.net · 4 points · 6 months ago

        but if entities like OpenAI say they will respect it then presumably they really will.

        Eh, will they really? It’d be pretty hard to prove they didn’t respect it.

    • nossaquesapao · 6 points · 6 months ago

      Can it at least work as a way to legally register non-consent?

    • CarbonScored [any]@hexbear.net · 3 points · 6 months ago (edited)

      It’s not about relying on it, it’s about changing the behaviour of web crawlers that respect ’em, and as someone who has adminned a couple of scarily popular sites over the years, I can tell you that’s a surprisingly high percentage of them.

      If someone wants to get around it, they obviously can, but this is true of basically all protective measures ever. Doesn’t make them pointless.
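
      For reference, “respecting it” on the crawler side usually just means checking robots.txt before fetching anything. Here is a minimal sketch with Python’s standard urllib.robotparser, where the robots.txt contents, user agents, and URLs are all made up:

```python
# Minimal sketch of what a well-behaved crawler does before fetching a page.
# The robots.txt contents, user agents, and URLs below are illustrative only.
from urllib import robotparser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

for agent, url in [
    ("GPTBot", "https://example.com/post/1"),
    ("SomeOtherBot", "https://example.com/post/1"),
    ("SomeOtherBot", "https://example.com/private/drafts"),
]:
    verdict = "fetch" if rp.can_fetch(agent, url) else "skip"
    print(f"{agent:>12}  {url}: {verdict}")
```

      Anything running a check like this simply skips the disallowed URLs; anything that doesn’t was never going to ask in the first place.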

  • pooh [she/her, love/loves]@hexbear.net · 18 points · 6 months ago

    I too am a fan of privacy, but on the other hand… I kind of like the idea that we’re training AIs to convince people to become communists and/or change their gender

  • henfredemars@infosec.pub · 12 points · 6 months ago

    Such a measure merely punishes entities that respect the rules. If the content can be accessed, it will be scraped and used to train AI.

  • CarbonScored [any]@hexbear.net · 11 points · 6 months ago

    Let’s be honest, AI should be incorporating hexbear. If anything, we should have a hundred site mirrors with a free-for-all robots.txt.

    After all, this site’s only praxis is posting, so why not use it to fill the AI with programming-communism?

  • farting_weedman [none/use name]@hexbear.net · 1 point · 5 months ago

    No, robots.txt doesn’t solve this problem; scrapers just ignore it. The idea behind robots.txt was to be nice to the poor Google web crawlers and direct them away from useless stuff that would be a waste to index.

    They could still be fastidious and follow every link; they’d just be ignoring the “nothing to see here” signs.

    You beat scrapers with recursive loops of links that start from 4pt black-on-black divs and lead to page content that isn’t easily told apart from useful, human-created content.

    Traps and poison, not asking nicely.
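
    For what that looks like in practice, here is a sketch of the trap idea: a tiny server whose entry link is effectively invisible to humans but leads a rule-ignoring crawler into an endless maze of generated pages. The paths, styling, and server setup are all illustrative, not any particular real deployment:

```python
# Sketch of a crawler trap: an endless maze of generated pages behind a link
# that humans are unlikely to notice. All paths and styling here are made up.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

def trap_page(depth: int) -> str:
    # Each trap page links to several deeper, randomly named trap pages.
    links = "".join(
        f'<a href="/trap/{depth + 1}/{random.getrandbits(32):08x}">more</a> '
        for _ in range(5)
    )
    return f"<html><body><p>nothing here</p>{links}</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/trap/"):
            body = trap_page(self.path.count("/"))
        else:
            # The real page: the trap entry is a tiny black-on-black link.
            body = (
                "<html><body><h1>Welcome</h1>"
                '<a href="/trap/0/start" '
                'style="font-size:4pt;color:#000;background:#000">.</a>'
                "</body></html>"
            )
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

    A real setup would also fill the trap pages with plausible-looking junk text (the poison) and list /trap/ in robots.txt, so the well-behaved crawlers never see it and only the rule-breakers get stuck.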