I think what gets lost in translation with LLMs (and machine vision and similar ML tech) is that it isn’t magic and it isn’t emergent behavior. It isn’t truly intelligent.
LLMs do a good job of tricking us into thinking they are more than they are. They generate a seemingly appropriate response to input based on training, but it's nothing more than a statistical model of the most likely chain of words to follow another chain of words, learned from questions and "good" human responses.
There is no understanding behind it. No higher cognitive process. Just "what words go next based on Q&A training data." Which is why we get well-written answers that are often total bullshit.
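To make that concrete, here's a toy Python sketch of the "most likely next word" idea. The lookup table and the generate function are invented for illustration only; a real LLM learns probabilities over tokens from huge amounts of training data instead of using a hand-written table, but the basic loop of "pick a likely continuation, append it, repeat" is the same.

```python
# Toy next-word predictor: pick the statistically most likely continuation.
# The probabilities below are made up for the example.
toy_model = {
    ("the", "capital", "of"): {"France": 0.62, "Texas": 0.21, "nowhere": 0.17},
    ("capital", "of", "France"): {"is": 0.88, "was": 0.09, "seems": 0.03},
    ("of", "France", "is"): {"Paris": 0.71, "Berlin": 0.18, "Lyon": 0.11},
}

def generate(prompt, steps=3):
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-3:])          # condition on the last three words
        dist = toy_model.get(context)
        if dist is None:
            break
        # Greedy decoding: append the single most probable next word.
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(generate("the capital of"))  # -> "the capital of France is Paris"
```

Note that "Berlin" has a nonzero probability in the table: a plausible-but-wrong continuation can still get picked, which is roughly why confident-sounding nonsense comes out.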
Even so, the tech could easily upend many writing careers.
I've had the GPT-3.5 model give me a made-up source for research. Either that or it told me the source material was related to what I was researching when it wasn't. Either way, it was one of those BS moments; it's called a hallucination, I think.