This is both upsetting and sad.

What’s sad is that I spend about $200/month on professional therapy, which is on the low end. Not everyone has those resources. So I understand where they’re coming from.

What’s upsetting is that this user TRUSTED ChatGPT and followed its advice without critically challenging it.

Even more upsetting is that this user admitted their mistake. I guarantee you there are thousands like OP who weren’t brave enough to admit it, and who are probably, to this day, still using ChatGPT as a support system.

Source: https://www.reddit.com/r/ChatGPT/comments/1k1st3q/i_feel_so_betrayed_a_warning/

  • AFK BRB Chocolate@lemmy.world · +35/−1 · 6 days ago

    People really misunderstand what LLMs (Large Language Models) are. That last word is key: they’re models. They take in reams of text from all across the web and make a model of what a conversation looks like (or what code looks like, etc.). When you ask it a question, it gives you a response that looks right based on what it took in.
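
    To make that concrete, here's a toy sketch of a "language model" in Python. It's a deliberate caricature (real LLMs are neural networks trained on trillions of words, not word-pair counts), but the job description is the same: produce a continuation that looks statistically plausible.

    ```python
    import random
    from collections import defaultdict

    # Toy "model of what text looks like": record which word follows which
    # in the training text, then generate by sampling a plausible next word.
    training_text = (
        "i am here to help . i am happy to help you with that . "
        "you are right , i am sorry about that ."
    )

    follows = defaultdict(list)
    words = training_text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

    def generate(start, max_words=10):
        out = [start]
        for _ in range(max_words):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))  # plausible, not understood
            if out[-1] == ".":
                break
        return " ".join(out)

    print(generate("i"))  # e.g. "i am happy to help you with that ."
    ```

    Nothing in there knows what "help" means; it only knows what tends to come after it.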

    Looking at how they do with math questions makes it click for a lot of people. You can ask an LLM for a mathematical proof, and it will give you one. If the equation you asked it about is commonly found online, it might be right because its database/model has that exact thing multiple times, so it can just regurgitate it. But if not, it’s going to give you a proof that looks like the right kind of thing, but it’s very unlikely to be correct. It doesn’t understand math - it doesn’t understand anything - it just uses its model to give you something that looks like the right kind of response.

    If you take the above paragraph and replace the math stuff with therapy stuff, it’s exactly the same (except therapy is less exacting than math, so it’s less clear that the answers are wrong).

    Oh and since they don’t actually understand anything (they’re just software), they don’t know if something is a joke unless it’s labeled as one. So when a redditor made a joke about using glue in pizza sauce to help it stick to the pizza, and that comment got a huge number of upvotes, the LLMs took that to mean it was a useful thing to incorporate into responses about making pizza, which is why that viral response happened.

    • mbtrhcs@feddit.org · +19 · 6 days ago

      I have a compsci background and I’ve been following language models since the days of the original GPT and BERT. Still, the weird and distinct behavior of LLMs didn’t really click for me until recently, when I really thought about what “model” means, as you described. It’s simulating what a conversation with another person might look like structurally, and it can do so with impressive detail. But there is no depth to it, so logic and fact-checking are completely foreign concepts in this realm.

      When looking at it this way, it also suddenly becomes very clear why people frustratedly telling LLMs things such as “that didn’t work, fix it” is so unproductive and meaningless: what would follow that kind of prompt in a human-to-human conversation? Structurally, an answer that looks very similar! Therefore the LLM will once more produce a structurally similar answer, but there is literally no reason why it would be any more “correct” than the prior output.
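
      Caricatured in code (every name below is made up; a real chat API differs only in plumbing):

      ```python
      import random

      # Hypothetical stand-in for a chat-completion call: it only ever samples
      # a reply that is statistically plausible after the conversation so far.
      # "That didn't work, fix it" is just more context; it carries no error
      # signal the model can act on.
      def complete(messages):
          plausible_replies = [
              "Apologies! Here is a corrected version: <similar-looking code>",
              "You're right, my mistake. Try this: <similar-looking code>",
          ]
          return random.choice(plausible_replies)  # same process as before

      messages = [
          {"role": "user", "content": "Write a function that parses this file."},
          {"role": "assistant", "content": "<plausible-looking code>"},
          {"role": "user", "content": "That didn't work, fix it."},
      ]

      print(complete(messages))
      ```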

      • AFK BRB Chocolate@lemmy.world · +12 · 6 days ago

        That’s right, you have it exactly. When the prompt is that the prior output is wrong, the program is supposed to apologize and reprocess with a different output, but it uses the same process.

    • Honytawk@lemmy.zip · +9 · 6 days ago (edited)

      LLMs are our attempt to teach computers how to speak, the way we would teach a baby.

      They can string together sentences, but the cognitive ability just isn’t there yet.

      You wouldn’t have a baby as your therapist either. But you can use it as creative input to inspire new ideas for yourself.

      • AFK BRB Chocolate@lemmy.world · +10 · 6 days ago (edited)

        We’d be better off talking about AI if no one used words like intelligence, cognition, think, understand, know, learn, etc. They don’t do any of those things.

    • JohnnyEnzyme@lemm.ee · +6 · 6 days ago (edited)

      And unless I’m quite mistaken, any specific correction one might contribute to the LLM in question (e.g. pointing out a hallucination) is, generally speaking, roundly and enthusiastically embraced and even celebrated by the LLM, then immediately and completely ignored.

      I.e., they’re not programmed to listen to our feedback in a meaningful, educational way, only to keep munching on the databases their doggie-daddies have sicced them upon.

      EDIT: that cynicism / critique aside, ChatGPT in particular has been hugely useful in my language-learning, and there’s no question to me that it’s improved a lot, just across the last few months. FWIW

      • AFK BRB Chocolate@lemmy.world · +2 · 6 days ago

        “I.e., they’re not programmed to listen to our feedback in a meaningful, educational way”

        Right, because “listen” and “educational” don’t apply to a software application like this. It has a model built from processing a truly huge amount of text. Your lone correctional prompt doesn’t change that model at all while you’re chatting; at most it gets folded into some future training run, where it’s a drop in the ocean.
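
        In code terms, roughly (a caricature; every name here is made up):

        ```python
        # The weights are effectively read-only at chat time; your feedback
        # only ever lands in the context list, which is discarded when the
        # conversation ends.
        def sample(weights, context):
            return "a plausible continuation of: " + context[-1]

        weights = {"frozen": True}  # never modified by chatting
        context = ["user: that's wrong, fix it"]

        print(sample(weights, context))  # feedback shaped the context, not the model
        ```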

        And sure, as a tool, LLMs can be very useful. I managed a software engineering organization at an aerospace company for a lot of years, and I set a number of constraints on how the LLM could be used (the company had one inside the firewall, so there weren’t IP issues), but I definitely encouraged its use. Essentially I was concerned about our software engineers using it in any way where they counted on it being correct, because it often wouldn’t be. But it was great for things like suggesting test cases for a piece of code.

  • Akt0@reddthat.com · +60/−1 · 7 days ago

    More upsetting still is that they think this only applies to their circumstances. It has been proven to fail at all of those things they listed at the end.

    • driving_crooner · +19/−1 · 7 days ago

      I was about to say that. My wife is a pastry chef; she tried to use it to create dessert recipes, and it was just recycling old recipes and methods without any creativity, mixing ingredients that don’t work well together.

      For coding it works well if you ask for small scripts and functions one at a time and you put it all together yourself.

      • Thassodar@lemm.ee · +6/−1 · 7 days ago

        NGL I use it for bonus questions when I do my paid weekly trivia gig.

        • davidgro@lemmy.world · +2/−4 · 7 days ago

          You mean answering them (and accepting wrong answers sometimes) and not generating them with wrong answers, right?

          right?

  • GreenKnight23@lemmy.world · +9 · 6 days ago (edited)

    you feel stupid, because it was stupid.

    at least you’re alive and can make different stupid mistakes and learn from those too.

    eventually, the stupid will evolve into wisdom. and that’s how you become an adult.

  • undeffeined@lemmy.ml · +12 · 6 days ago

    One of the problems, in my opinion, is calling it Artificial Intelligence. It leads people to think it’s something more than it is: a text predictor.

  • Captain Aggravated@sh.itjust.works · +5 · 5 days ago

    See, I came to a similar realization under what I hope are less critical circumstances: I’m thinking of starting a YouTube channel and asked both ChatGPT and Gemini to help brainstorm a channel name/branding. And I came to that same realization… this thing is a total yes-man. Everything I thought up was a good idea.

  • Zomg@lemmy.world · +4 · 6 days ago

    Thankfully the user questioned it, verified, and learned about its problems. Most people wouldn’t have done that.

  • Kaput@lemmy.world · +10/−1 · 7 days ago

    I’ve used it to try to put a name to a health problem I’ve been having for years. Google has been no help with this, so I thought I’d give it a try. It did give a few names that I could then Google and eliminate or research a bit further. I don’t trust it to help, and it tries very hard to get me to consult it: “Are those symptoms you experience?” “Would you like some advice on…” I never answer those.

    I just ask very generic questions and come back later in a different thread to cross-question it. For example: “please make a list of ailments linked to this list of symptoms,” versus later asking “please describe the symptoms of ailment X,” and seeing if they still match. Then I Googled actual medical organizations’ sites and approached my doctor with “here are my problems, they seem to match this disease, is that plausible, can we do the test?” ChatGPT is a crutch to find words and to navigate enshittified Google.

    • username@lemm.ee · +3 · 6 days ago

      Agreed. Sometimes I’ve used it to find words too.

      Language models are extremely useful tools for some specific purposes; people just need to know how and when to use them, taking into account that they aren’t intelligent at all.

  • Ledericas@lemm.ee · +5 · 6 days ago (edited)

    It’s not useful for resumes/CVs either; you’re better off having the resume subreddit review yours than ChatGPT, if you’re going that far.

  • xia@lemmy.sdf.org · +5 · 7 days ago

    In my experience it gets worse as the amount of context increases. I’ll often re-ask a question w/o history just for comparison (getting significantly different results), and for the same reason I can’t imagine the “memory” feature being beneficial (I quickly turned that off).
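
    The shape of the problem, sketched in Python (the function is a made-up stand-in for any chat API):

    ```python
    # Hypothetical stand-in for a chat API call: the reply is conditioned on
    # everything in `messages`, relevant to the question or not.
    def complete(messages):
        return f"answer conditioned on {len(messages)} prior turns"

    long_history = [{"role": "user", "content": f"turn {i}"} for i in range(40)]
    question = {"role": "user", "content": "What does this error mean?"}

    print(complete(long_history + [question]))  # colored by 40 turns of context
    print(complete([question]))                 # same question, clean slate
    ```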

  • stupidcasey@lemmy.world · +5/−1 · 6 days ago

    The five true geniuses in the world must see it as a marvel of human engineering; they just never see the response telling them to eat glue.