• Hagels_Bagels@lemmygrad.ml · 1 year ago

    Great. Now people are going to read a bunch of BS generated by a language model and confidently spread “hallucinations” around as facts.

        • itchy_lizard@feddit.it · 1 year ago

          No, that’s exactly how this stuff works. Lay off 80% of the writers and keep all your fact-checkers and editors.

      • salient_one@lemmy.villa-straylight.social · 1 year ago

        Probably, though that may be too optimistic an assumption. Even so, I believe it will still result in more mistakes, simply because it’s harder to spot errors in an existing text than to avoid putting them there in the first place by fact-checking beforehand and then having another person proofread.

        One of the reasons is that LLMs feel no guilt when they hallucinate, while most humans don’t like to lie or to be too lazy to fact-check; and even those who don’t care still have to worry about getting caught and damaging their reputation, a concern LLMs, again, don’t have. Besides, stating something false as a fact in an article can’t be called an honest mistake (it’s negligence at best), unlike an editor missing something (due to a looming deadline, perhaps), especially when the whole arrangement rests on the assumption that there won’t be too many hallucinations, which is far from certain.