It either spits out a weird canned message about how cool China is (yes, but not what I’m asking) or starts generating a response but then changes mid-sentence to “I can’t talk about that.” Anyone else experience this?

  • certified sinonist@lemmygrad.ml
    6 days ago

LLMs have great use cases like erotically roleplaying with yourself or generating incredibly shit search engine summaries. That’s the beginning and the end of their application, though, and I reel when I consider that some people are typing in questions and taking the generated answers as any sort of truth.

    Genuinely think it’s a huge problem that a generation down the line, “AIs will lie to you” won’t be taken as common sense.

    • amemorablename@lemmygrad.ml
      6 days ago

      Hopefully we will see improvements in architecture and alignment research long before LLMs reach a point of normality that is taken for granted. Right now there seems to be a push to cash in on investments by using LLMs as they are, but they are nowhere near a point where research can afford to stand still.