It either spits out a weird canned message about how cool China is (yes, but not what I’m asking) or starts generating a response but then changes mid-sentence to “I can’t talk about that.” Anyone else experience this?
As others have said, LLMs aren’t “fact machines”; they aggregate patterns from their training data and reword them into a plausible-sounding response to the prompt. An LLM trained on the English-language internet is naturally going to have a huge number of anti-China articles in its data set.