It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.
This is what LLMs can’t do, though. They can’t use what they understand because they don’t understand anything. They can’t infer, they can’t reason, they can’t evaluate or compare. They can spit out words that make it look like they did those things, but they didn’t.
Here I think you are behind on the literature. LLMs can infer and reason, and there is a whole series of papers that evaluate LLMs for these properties the exact same way we evaluate humans. So if you can’t trust those metrics, then you cannot even assert that humans can reason, infer, and understand.
https://arxiv.org/html/2403.04121v1
Good read from a group of computer scientists at Arizona State. Their conclusions are the same as mine but they illustrate the problems better than I ever could.
You linked a paper on planning in LLMs. Planning is largely the domain of reinforcement learning. The paper you linked conflates reasoning with planning, and combined with the obviously biased prose, the authors really don’t seem credible. I prefer nuanced and careful evaluations such as: https://www.sciencedirect.com/science/article/pii/S2949719123000298
Without commenting on the content of the paper,
Hm. 🤔
Notice that there are methods, data, and peer reviews that I can freely scrutinize. All things your opinion piece lacks.