No, not really
Depends on what you’re calling AI. LLMs (and generative AI in general) are garbage for all those things, and most things in general (all things if you take their cost into account). Machine Learning and expert systems can do at least some of that.
I absolutely hate that generative AI is being marketed as though it’s deep learning instead of a fancy Markov chain. But I think I’ve lost the battle over that nomenclature.
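For anyone unfamiliar with the analogy: a Markov chain text generator just records which word follows which and then walks those transitions at random. Here's a minimal word-level sketch (illustrative only — real LLMs condition on long contexts with learned weights, not a lookup table):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain, picking a random successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no observed successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the model"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The "fancy" part of the jab is that an LLM's next-token sampling loop has the same shape as `generate()` — the difference is what sits behind the transition probabilities.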
This. I work at a medical computer vision company, and our system performs better, on average, than radiologists.
It still needs a human to catch the weird edge cases, but studies show humans plus our model have a super high accuracy rate and speed. It’s perfect because there’s a global radiologist shortage, so helping the radiologists we have go faster can save a lot of lives.
But people are bad at nuance. All AI is like LLMs -_-
Case in point: the downvotes are from people who don’t know or care about the difference.
It can say it can, when asked by an investor. And really, what else matters?
Did the author have a stroke by the time they reached the end of the article? The mental gymnastics would be funny if they weren't terrifying.
Wouldn't be surprised if the author used AI too, but then again, bad (or let's call it "weird") journalism isn't all that new.
You’re absolutely right.
I mean really, where do these legends come from? I have tried to make ChatGPT sort through a single document and present clear, organized data that's already in the document as a sorted table. It can't reliably do that. How would it do any kind of complex task? That's just laughable.
I’m convinced that people who are fascinated by LLM chatbots are those who usually aren’t better than a chatbot at whatever they do. That is to say, they can’t do shit.
“I don’t know how to run a shop, but it can’t be that hard, let’s just have AI do it!”