I’m pretty skeptical of the value of these LLMs, but Google’s isn’t even good enough to call it half baked…
These models are ultimately crap in my opinion. They're not optimizing for intelligence or correct answers; they're optimizing for seeming correct and intelligent. And all the feedback they get from users is uninformed, because you only ask an LLM about things you don't already know.
I used to think these tech companies knew what they were doing, at least somewhat. Instead it's just a bunch of business dipshits running from fad to fad, burning tons of money until they fall backwards into a business. Then they monopolize it and enshittify it.
Google’s isn’t even good enough to call it half baked
Google has a problem: they let OpenAI get way ahead of them in institutional knowledge and product capability, and with the people who really understand how these LLMs work being countable on one hand, no amount of money thrown at the problem can bridge that gap.
The smart thing to do would be to develop slowly and not release anything until they have something good - but they have to show something to their investors, so that's not an option. They have to ship on a short timetable, and they simply don't have the capability to ship something on that timetable that doesn't suck.