This is both upsetting and sad.

What’s sad is that I spend about $200/month on professional therapy, which is on the low end. Not everyone has those resources. So I understand where they’re coming from.

What’s upsetting is that this user TRUSTED ChatGPT enough to follow through on its advice without ever critically challenging it.

Even more upsetting is that this user at least admitted to their mistake. I guarantee you there are thousands like OP who weren’t brave enough to admit it and are probably, to this day, still using ChatGPT as a support system.

Source: https://www.reddit.com/r/ChatGPT/comments/1k1st3q/i_feel_so_betrayed_a_warning/

  • mbtrhcs@feddit.org · 6 days ago

    I have a compsci background and I’ve been following language models since the days of the original GPT and BERT. Still, the weird and distinct behavior of LLMs hadn’t really clicked for me until recently, when I really thought about what “model” meant, as you described. It’s simulating what a conversation with another person might look like structurally, and it can do so with impressive detail. But there is no depth to it, so logic and fact-checking are completely foreign concepts in this realm.

    Looked at this way, it also becomes very clear why it’s so unproductive and meaningless when people frustratedly tell an LLM something like “that didn’t work, fix it”: what would follow that kind of prompt in a human-to-human conversation? Structurally, an answer that looks very similar! Therefore the LLM will once more produce a structurally similar answer, but there is literally no reason why it would be any more “correct” than the prior output.
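    A minimal, purely illustrative sketch of that dynamic (the generate() function below is a stand-in, not any real model API): the frustrated follow-up just becomes more conversation for the exact same sampling process to imitate.

    ```python
    import random

    # Stand-in for a real LLM call: it only produces text that *looks* like a
    # plausible next reply in the conversation. Nothing here knows whether an
    # earlier answer was actually correct.
    def generate(conversation):
        templates = [
            "Apologies for the confusion! Here is a corrected version: ...",
            "You're right, that was wrong. Try this instead: ...",
            "Good catch! The updated approach would be: ...",
        ]
        return random.choice(templates)

    conversation = [{"role": "user", "content": "Write a function that parses this file."}]
    conversation.append({"role": "assistant", "content": generate(conversation)})

    # The frustrated follow-up is just appended as more text...
    conversation.append({"role": "user", "content": "That didn't work, fix it."})

    # ...and the same process runs again. The reply is structurally an
    # "apology + fix", but no step checks it against reality, so it is no more
    # likely to be correct than the first attempt.
    conversation.append({"role": "assistant", "content": generate(conversation)})
    print(conversation[-1]["content"])
    ```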

    • AFK BRB Chocolate@lemmy.world · 6 days ago

      That’s right, you have it exactly. When the prompt says the prior output was wrong, the program is supposed to apologize and reprocess to give a different output, but it uses the exact same process that produced the wrong output in the first place.