An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates. Experts say the problem is bigger than that.
It was evident this would happen the moment they poisoned the Internet, the very thing they train their crap on, with their own slop.
They temporarily mitigated it by pirating every existing form of media… but that only works as long as they avoid training on anything published after they poisoned the well, and a model stuck in the past is even more useless for most use cases. So they kept training on their own slop anyway, maybe believing their own lies that they’ll be able to fix it, maybe planning to sell and run just before the bubble bursts.
Last time we fed something its own shit and corpses, we got mad cow disease.
Guess mad “AI”¹ is on the menu for the foreseeable future. 🤷‍♂️
¹ (It’s not even real AI, just fancy applied statistics to make a marginally better — but, thanks to model poisoning, progressively worse — autocomplete.)
Photocopy of a photocopy.