It either spits out a weird canned message about how cool China is (yes, but not what I’m asking) or starts generating a response but then changes mid-sentence to “I can’t talk about that.” Anyone else experience this?
And in a lot of cases it has just made everything up in a way that sounds as plausible as possible.
Yeah, LLMs are great at being confidently wrong. It's a little terrifying how good they are at it, especially for people who don't realize they're effectively BSing machines. Not that they're intentionally designed to BS, but it's a side effect of the fact that they're built to continue the conversation believably, and they can't possibly get everything right, even on questions that have a universally agreed-upon answer (which many don't, even among humans).
As with a con artist, how shaky they are on the facts becomes most apparent when they get into a subject the user knows really well. I've occasionally used an LLM to help jog my memory on something, but I always cross-reference it with other sources; trusting one alone as a sole source is always a bad idea.