It either spits out a weird canned message about how cool China is (yes, but not what I’m asking) or starts generating a response but then changes mid-sentence to “I can’t talk about that.” Anyone else experience this?
Hopefully we will see improvements in architecture and alignment research long before LLMs become so normalized that they're taken for granted. Right now there seems to be a push to deploy LLMs as-is to cash in on investments, but they are nowhere near a point where the research can slow down.