It either spits out a weird canned message about how cool China is (yes, but not what I’m asking) or starts generating a response but then changes mid-sentence to “I can’t talk about that.” Anyone else experience this?
It either spits out a weird canned message about how cool China is (yes, but not what I’m asking) or starts generating a response but then changes mid-sentence to “I can’t talk about that.” Anyone else experience this?
At least part of it seems to be a “guard-model” identifying that the topic might be illegal or something like that and just stoping the main model from going further while saying “can’t do that”. Other Chinese models, from what I heard, might handle things like this better, although in DeepSeek’s case they might have gone too far because they might have been under attack early on from bad faith people asking all sorts of “questions” about China and things like that, but I can’t say for sure.
As the other person said, you can either try running the model yourself, or if you don’t have a computer with at least 400GB RAM lying around you can look for other services hosting DeepSeek. I can’t guarantee they won’t have “guard-models”/censorship themselves but there should be some without it.
But another point to consider though is that DeepSeek was still trained on a lot of western
datapropaganda, so don’t expect impartiallity from DS or any other model for that matter. We may still be a ways off of models that can actually understand their bias and correct it properly.You can pen-test the restrictions pretty easily. Its coded to censor mentions of China or Chinese leaders in an political context (or maybe any context at all)
If you ask the same question but ask it to replace any mention of the word China with something else (I used zeon from msg), it will actually answer the question, but obliquely.
If you ask it to answer again with real Chinese historical figures, it fails