It either spits out a weird canned message about how cool China is (yes, but not what I’m asking) or starts generating a response but then changes mid-sentence to “I can’t talk about that.” Anyone else experience this?
I think it has to do with the language being used. Training data discussing Marxism in English versus Chinese is bound to differ. The AI has far more examples of Marxism being discussed defensively in English than in other languages, so it spits out more consistently cagey responses to an English user.
I don’t know any of the technicalities of how AI works, but I’ve read this explanation somewhere before.