• joshchandra@midwest.social · 2 months ago

        What I mean is that if people keep making it produce garbage tied to some keyword or phrase, and people publish said garbage, that’ll only strengthen the association the models learn between the bad data and that keyword, so AI results for such trees will drift even further from the truth.

        • KeenFlame@feddit.nu · 2 months ago

          Publishing fake data that outweighs the data on the real plant is one way, but that doesn’t require a plant; you can already publish bad images on any subject today.

          • joshchandra@midwest.social · 2 months ago

            Right, but I think it’d be harder to get it to unlearn the wrong data if the topic itself is obscure.

            • KeenFlame@feddit.nu · 2 months ago

              Ah, I think I get you, but unfortunately it would probably be a lot easier to unlearn an obscure topic, not the other way around. Poisoning is done at the pixel level, if that makes sense.
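
              For anyone wondering what “pixel level” means here, a rough sketch: a small, mostly imperceptible perturbation is added to every pixel of an image before publishing it. This example just uses random bounded noise to show the mechanics; real poisoning tools optimize the perturbation against a target model instead, and the file names are made up.

              ```python
              # Minimal sketch of pixel-level perturbation (random noise stands in
              # for an optimized poisoning perturbation; file names are hypothetical).
              import numpy as np
              from PIL import Image

              def perturb(path_in: str, path_out: str, epsilon: float = 4.0) -> None:
                  """Add bounded per-pixel noise (at most +/- epsilon intensity levels)."""
                  img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
                  noise = np.random.uniform(-epsilon, epsilon, size=img.shape)
                  poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
                  Image.fromarray(poisoned).save(path_out)

              perturb("real_plant.png", "poisoned_plant.png")
              ```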