It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.
It’s not that it doesn’t know how to say “I don’t know”. It simply doesn’t know. Period. LLMs are not sentient, and they don’t think about the questions they are asked, let alone about whether the answers they provide are correct. They string words together. That’s all. That we’ve gotten those strings of words to strongly resemble coherent text is very impressive, but it doesn’t make the program intelligent in the slightest.
What amazes me is that people don’t find it significant that these models don’t ask questions. I would argue there is no such thing as intelligence without curiosity.
What do you even mean by that? Pi asks questions and certainly seems curious and engaged in conversation. Even ChatGPT will ask for more information if it doesn’t find what was requested in, for example, an Excel spreadsheet you upload.