We’ve already got LLMs that can simulate conversing with those dead people to some degree; I wouldn’t say they’re beyond the reach of any technology. In a few years the simulations might be good enough that you can’t tell the difference.
No, they are beyond the reach of any technology. An Isaac Newton themed chatbot isn’t actually the spirit of Isaac Newton. It’s a chatbot.
Can’t wait to ask my dead grandma to write a Python script for me
People downvoted you, but I don’t think they touched on the main idea; I don’t want to show Einstein modern physics for my entertainment, I want to teach it to him so he can be amazed.
This would require actual conversations with those dead people to compare against, which we obviously can’t have.
There is simply not enough information about a dead person to train a comprehensive model of how they would respond in arbitrary conversations. You may be able to train with some depth in their field of expertise, but the whole point is to talk about things they have no experience with, or at least things which weren’t known then.
So sure, maybe we get a model that makes you think you’re talking to them, but that’s no different than just having a dream or an acid trip where you’re chatting with Einstein.
True. And even if we did, most of them would be super racist, anyway. Just like chatbots from a few years ago!
Wait, maybe we do have the necessary technology… Hooray? Lol.
Well, I’d think you’d test that model against living authors with similar inputs, make comparisons, and refine the process till nobody can tell the difference. We’ll never get all the way there, but I bet we’ll get close enough.
As for when… Who knows?
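The testing procedure described above is basically a blinded discrimination test: show a judge a real author’s answer and the model’s answer to the same prompt, and see how often they can pick the real one. A toy sketch (all data and the judge here are hypothetical, just to illustrate the protocol):

```python
import random

def discrimination_accuracy(pairs, judge):
    """Fraction of trials where the judge correctly picks the real author's
    answer out of a shuffled (real, model) pair. An accuracy near 0.5 means
    the judge is guessing, i.e. the model is indistinguishable."""
    correct = 0
    for real_answer, model_answer in pairs:
        options = [real_answer, model_answer]
        random.shuffle(options)           # blind the judge to the ordering
        guess = judge(options)            # judge returns the answer it thinks is real
        if guess == real_answer:
            correct += 1
    return correct / len(pairs)

# Hypothetical example: a naive judge that always picks the longer answer.
pairs = [
    ("short real reply", "a much longer model reply"),
    ("a fairly long real reply here", "short model reply"),
]
pick_longest = lambda opts: max(opts, key=len)
print(discrimination_accuracy(pairs, pick_longest))  # 0.5 -> this judge can't tell
```

In practice the judge would be a human (or panel), and “refining the process” means retraining until the measured accuracy drops toward chance.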
LLMs are no better than speaking to a clever parrot. It might say the correct words, but it has no understanding, so there is no value in teaching it beyond a petty parlour trick.
“Parrots only repeat” is a pretty archaic view of animal psychology, though.
LLMs are glorified chatbots, but parrots can actually understand to some degree. To what extent is very much debatable.
Here’s a good set of videos from Nativlang on the subject https://www.youtube.com/watch?v=YmkQLDJdhJI&list=PLc4s09N3L2h2lYeVD6pmax3f7qiHxyq3k
I was absolutely being unfair to parrots to get my point across, I apologise to all our feathered friends.
Thank you.
https://www.youtube.com/shorts/RX2Edsmzqlw
Socrates character AI is no fun. He isn’t clever or insightful or skeptical.
As I said, wait a few years. The technology is rapidly advancing.
They might mimic those things in a convincing fashion in a few years, but there wouldn’t be a reason for them to exist. There’s no person behind the curtain, or inside the multilayered, statistically weighted series of if statements.