• Barry Zuckerkorn@beehaw.org · 5 months ago

    Your scenario 1 is the actual danger. It’s not that AI will outsmart us and kill us. It’s that AI will trick us into trusting it with more responsibility than it can reliably handle, with disastrous results.

    It could be small-scale, low-stakes stuff, like an AI designing a menu that humans blindly cook from. Or it could be higher-stakes stuff that affects election results, crashes financial markets, causes a military to target the wrong house, etc. The danger has always been that humans will act on the information provided by a malfunctioning AI, not that AI and technology will form a closed loop with no humans involved.

    • Lvxferre@mander.xyz · 5 months ago

      Yup, it is a real risk. But on the lighter side, it’s a risk that we [humanity] have been fighting against since forever - the possibility of some of us causing harm to others not out of malice, but out of overconfidence and similar character flaws. (In this case: “I assume that the AI is reliable enough for this task.”)