It either spits out a weird canned message about how cool China is (yes, but not what I’m asking) or starts generating a response but then changes mid-sentence to “I can’t talk about that.” Anyone else experience this?

  • kredditacc@lemmygrad.ml · 31 points · 6 days ago

    People should stop treating LLMs as sources of truth. They were trained on data from the Internet; some of that data is primary-source material, but most of it is derivative, and some of it is itself AI-generated. An LLM is a search engine that doesn’t cite its sources, plus a word mixer.

    Anyway, if you really want to learn history from the Chinese perspective, you have no choice but to learn Chinese.

    • FuckBigTech347@lemmygrad.ml · 3 points · 4 days ago

      “LLM is a search engine”

      I wouldn’t even call them that. Any full-text search engine reliably returns what’s actually in its index and doesn’t make shit up. It also doesn’t need a GPU to number-crunch or the latest beast of a machine to run on. For example, any PostgreSQL table can be used as a search engine index via tsvector.
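      To make that concrete, here’s a minimal sketch of the tsvector approach (Python with psycopg2; the articles table, its body column, the search terms, and the connection string are made-up placeholders, not anything from this thread):

      ```python
      import psycopg2

      # Connection string is a placeholder; point it at your own database.
      conn = psycopg2.connect("dbname=example")
      cur = conn.cursor()

      # One-time setup: a GIN index over a tsvector expression turns the
      # hypothetical "articles" table into a full-text search index.
      cur.execute("""
          CREATE INDEX IF NOT EXISTS articles_body_fts
          ON articles
          USING GIN (to_tsvector('english', body));
      """)
      conn.commit()

      # Search: match rows containing both terms, ranked by relevance.
      cur.execute("""
          SELECT id, ts_rank(to_tsvector('english', body), query) AS rank
          FROM articles, to_tsquery('english', 'history & china') AS query
          WHERE to_tsvector('english', body) @@ query
          ORDER BY rank DESC
          LIMIT 10;
      """)
      for article_id, rank in cur.fetchall():
          print(article_id, rank)

      cur.close()
      conn.close()
      ```

      Every result comes straight from rows stored in the table, which is the point: nothing in the output is invented.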

    • certified sinonist@lemmygrad.ml · 13 points · 6 days ago

      LLMs have great use cases like erotically roleplaying with yourself or creating incredibly shit search engine summaries. That’s the beginning and the end of their application, though, and I reel when I consider that some people are putting in questions and taking the generated answers as any sort of truth.

      Genuinely think it’s a huge problem that, another generation down the line, “AIs will lie to you” won’t be taken as common sense.

      • amemorablename@lemmygrad.ml · 3 points · 6 days ago

        Hopefully we will see improvements in architecture and alignment research long before LLMs become so normalized that they’re taken for granted. Right now there seems to be a push to use LLMs as they are, to cash in on investments, but they are far from a point where research can chill.

    • amemorablename@lemmygrad.ml · 13 points · 6 days ago

      💯 The best LLM is still worse than Wikipedia when it comes to getting factual information, and that’s saying something, considering the “pretense of neutrality yet heavily biased” problems Wikipedia has. At least with Wikipedia you can see what sources a page is using. With an LLM, as a user, you have no idea where it’s getting its ideas from.

      • bobs_guns@lemmygrad.ml · 9 points · 6 days ago

        And in a lot of cases it has just made everything up in a way that sounds as plausible as possible.

        • amemorablename@lemmygrad.ml · 8 points · 6 days ago

          Yeah, LLMs are great at being confidently wrong. It’s a little terrifying sometimes how good they are at it, especially when you consider the people who don’t realize they’re effectively BSing machines. Not that they’re intentionally designed to BS; it’s more a side effect of the fact that they’re supposed to continue the conversation believably, and they can’t possibly get everything right, even if everything had a universally agreed-upon right answer (which is often not the case with humans to begin with).

          As with a con artist, how shaky they are on the facts becomes much more apparent once they get into a subject the user knows really well. I’ve occasionally used an LLM to help jog my memory on something and then cross-referenced it with other sources, but trusting one on its own as a sole source is always a bad idea.