That’s a terrible way to be using a LLM for generating clinical notes.
Sounds more like trying to use a screwdriver to hammer in screws than an issue with the screwdriver itself.
This is what I think about AI being forced into many things these days. Feels more like an attempt to justify subscription plans than anything actually productive.
In part this is because the SotA model is by far GPT-4, but OpenAI has pigeonholed it into ‘chatbot.’
The earliest versions of it pre-release, when it was being incorporated into Bing, were amazing. Probably the most impressive thing I’ve seen in tech.
But it was too human-like and was freaking users out, so rather than wait for the market to adjust, they did extensive fine-tuning to make a large language model trained to predict human output less likely to produce human-like output.
The problem is that they don’t have a scalpel for this sort of thing, and they ended up with a model that’s very good as a chatbot within a certain scope but significantly impaired at some of the outside-the-box mechanics visible early on.
And because it’s the SotA, everyone is now using it to fine-tune their own models.
So the entire industry is being set back in practical applications outside of “kind of boring chatbot.”
Right. It seemed like a reach when I first heard of it, but that’s how it’s advertised, and the hospital was sold on at least trying it out.