• Ultraviolet@lemmy.world

        There should be some sort of law that if you want to offload decisions to AI, the person who decides to let the AI make those decisions has to step up and take full civil and criminal liability for everything it does.

          • jonne@infosec.pub

            Yes, one person we can pin all of humanity’s sins on, and then we just kill them. It’s almost like a religious ritual.

        • ForgotAboutDre@lemmy.world

          No, every decision maker in the chain of command should be responsible. They should know what the intelligence is based on, whether the people sharing the information are competent, and they should be validating that information.

          Using AI to perform these tasks requires gross negligence at several stages. However, it does appear that killing civilians and children is the intended outcome, so negligence about the AI is likely just a cover.

  • crazyCat@sh.itjust.works

    This is fucking insane dystopian shit. It’s worse than I thought, and it has become real sooner than I thought it would. Bloody hell.

  • frontporchtreat@lemmy.ca

    Yeah, we’re getting really good at teaching computers to analyze satellite imagery and other forms of spatial data to find the spots we want. All we have to do is decide whether we put green spaces, Walmarts, or bombs in those spots.
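
    To make the “find the spots we want” idea concrete, here is a toy sketch with entirely made-up layers and weights (not any real planning or targeting pipeline): score every cell of a spatial grid from a few data layers and take the top-scoring cells. The human decision is just which layers matter and how they’re weighted.

    ```python
    # Toy site scoring over a spatial grid. All layers and weights are
    # hypothetical; the point is that "finding the spots we want" reduces
    # to choosing which layers to use and how to weight them.
    import numpy as np

    rng = np.random.default_rng(1)
    H, W = 100, 100

    # Hypothetical input layers, each normalised to [0, 1].
    population_density = rng.random((H, W))
    road_proximity = rng.random((H, W))
    vegetation_index = rng.random((H, W))

    # A park planner, a retailer, and a military analyst would weight the
    # very same layers completely differently - that choice is the human part.
    weights = {"population": 0.5, "roads": 0.3, "vegetation": -0.2}

    score = (weights["population"] * population_density
             + weights["roads"] * road_proximity
             + weights["vegetation"] * vegetation_index)

    # Report the ten highest-scoring cells as candidate "spots".
    flat = np.argsort(score, axis=None)[-10:][::-1]
    rows, cols = np.unravel_index(flat, score.shape)
    for r, c in zip(rows, cols):
        print(f"candidate cell ({r}, {c}) score={score[r, c]:.3f}")
    ```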

  • spiderkle@lemmy.ca

    A machine learning algorithm is only as good as the data you feed it. Flawed or incomplete data just puts out false or biased positives, and some systems will always try to output whatever you might accept. So if the approach really was “it’s like a streetlight, when it’s green we bomb”, then that’s really dark Skynet stuff. But IMHO, blaming a machine just sounds like a bad PR excuse for getting out of a potential crimes-against-humanity charge in Den Haag.
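
    To make the “it will output whatever you might accept” point concrete, here is a minimal sketch on synthetic data (nothing to do with any real system): a classifier trained on biased labels keeps flagging “targets” even on pure noise, and its predict call has no built-in way to say “I don’t know”.

    ```python
    # Garbage in, confident garbage out - a minimal synthetic illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Features are pure noise; training labels are biased 80/20 toward "target".
    X_train = rng.normal(size=(1000, 10))
    y_train = (rng.random(1000) < 0.8).astype(int)

    model = LogisticRegression().fit(X_train, y_train)

    # New, equally meaningless inputs: the model still returns a verdict for
    # every single one, and because "target" was the majority label it ends
    # up flagging essentially everything.
    X_new = rng.normal(size=(500, 10))
    predictions = model.predict(X_new)
    print("fraction flagged as 'target':", predictions.mean())
    ```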

    • jonne@infosec.pub

      Both this use and corporate use of AI aren’t really about making things better; they’re about making sure nobody has responsibility for anything. A human might have issues picking out a target that kills 20 innocent people on the off chance a Hamas fighter might be there, and might hold back a little if they’re worried the ICC could come knocking, or a critical newspaper article could come out calling them a merchant of death. AI will pop out coordinates all day and night based on the thinnest evidence, or even no evidence at all. Same with health insurers using AI to deny coverage, AI finding suspects based on grainy CCTV footage, etc., etc.

      Nobody’s responsible, because ‘the machine did it’ and we were just following its lead. In the same way that corporations aren’t really held responsible for crimes a private person couldn’t get away with, AI is another layer of insulation between ‘externalities’ and anyone facing consequences for them.

    • prime_number_314159@lemmy.world

      My favorite ML result was (details may be inaccurate, I’m trying to recall from memory) a model that analyzed scan images from MRI machines and reported far more confidence in the problems it was detecting if the image was taken on a machine with an older manufacture date. The training data had very few negative results from older machines, so the assumption that any image taken on an old machine showed the issue fit the data.

      There was speculation about why that pattern existed in the training data, but the pattern-noticing machine sure noticed the pattern.
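
      A rough reconstruction of that effect on synthetic data (not the actual study): if nearly all positive training cases come from old scanners, the model learns “old scanner ⇒ problem” as a shortcut and stays confident on old-scanner images even when the real signal is absent.

      ```python
      # Shortcut learning on a spurious "machine age" feature (synthetic data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(42)
      n = 2000

      disease = rng.integers(0, 2, size=n)           # ground truth, roughly 50/50
      signal = disease + rng.normal(0, 2.0, size=n)  # weak "real" image signal

      # Selection bias: positives come from old machines far more often.
      old_machine = np.where(disease == 1,
                             rng.random(n) < 0.9,    # 90% of positives: old machine
                             rng.random(n) < 0.1)    # 10% of negatives: old machine
      X_train = np.column_stack([signal, old_machine.astype(float)])

      model = LogisticRegression().fit(X_train, disease)
      print("learned weights [signal, old_machine]:", model.coef_[0])

      # An input with no real signal still gets a high probability when it
      # comes from an old machine, because the shortcut feature dominates.
      healthy_old = np.array([[0.0, 1.0]])
      healthy_new = np.array([[0.0, 0.0]])
      print("P(problem | no signal, old machine):", model.predict_proba(healthy_old)[0, 1])
      print("P(problem | no signal, new machine):", model.predict_proba(healthy_new)[0, 1])
      ```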

  • AutoTL;DR@lemmings.world (bot)

    This is the best summary I could come up with:


    As Israel resumes its offensive after a seven-day ceasefire, there are mounting concerns about the IDF’s targeting approach in a war against Hamas that, according to the health ministry in Hamas-run Gaza, has so far killed more than 15,000 people in the territory.

    The latest Israel-Hamas war has provided an unprecedented opportunity for the IDF to use such tools in a much wider theatre of operations and, in particular, to deploy an AI target-creation platform called “the Gospel”, which has significantly accelerated a lethal production line of targets that officials have compared to a “factory”.

    The Guardian can reveal new details about the Gospel and its central role in Israel’s war in Gaza, using interviews with intelligence sources and little-noticed statements made by the IDF and retired officials.

    This article also draws on testimonies published by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, which have interviewed several current and former sources in Israel’s intelligence community who have knowledge of the Gospel platform.

    In the IDF’s brief statement about its target division, a senior official said the unit “produces precise attacks on infrastructure associated with Hamas while inflicting great damage to the enemy and minimal harm to non-combatants”.

    Multiple sources told the Guardian and +972/Local Call that when a strike was authorised on the private homes of individuals identified as Hamas or Islamic Jihad operatives, target researchers knew in advance the number of civilians expected to be killed.


    The original article contains 1,734 words, the summary contains 241 words. Saved 86%. I’m a bot and I’m open source!

  • Adub@lemmy.world

    Can AI target what story might next outrage people on the political fringes? I’d like to not hear about Al Jazeera and the Daily Caller any more.