I’ve tried using DeepSeek (first time I’ve ever used an LLM, so maybe I’m being dumb) to help me a little with designing a circuit, because my reference book was leaving out a LOT of crucial information.
The results have been … subpar. The model keeps making quite elementary mistakes, like leaving components floating with missing connections.
I’m honestly kinda disappointed. Maybe this is just a weak area for it. I’ve probably told DeepSeek more about designing the circuit in question than it has told me.
Edit: I realised I was just being dumb, since LLMs aren’t designed for this task.
A general rule that’s helpful to keep in mind with generative AI models is that they can only be as knowledgeable as the material they were trained on. And even then, “can” means potential, not a guarantee: training on material doesn’t mean the model will answer correctly about that material.
Which makes intuitive sense if you compare it to a human, but is easy to miss amid all the black-box hype surrounding AI. No matter how clever a human being is, if they don’t know something, they don’t know it, and thinking about it can only do so much. Now imagine that, but the model is also missing key capabilities humans have, like the ability to ask questions and learn from the answers in real time.
Side note: the one subject I can think of where reasoning alone might actually uncover new knowledge is something like mathematical proofs, where it’s abstract A, B, therefore C logic. And that’s also something LLMs aren’t designed or equipped for.
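To make the “A, B, therefore C” point concrete, here’s a minimal sketch in Lean (my choice of example, not something from the original post) of the kind of purely formal chaining I mean, where the conclusion follows from the premises with no outside knowledge at all:

```lean
-- From "A implies B" and "B implies C", derive "A implies C"
-- purely by composing the two hypotheses.
theorem chain (A B C : Prop) (hab : A → B) (hbc : B → C) : A → C :=
  fun ha => hbc (hab ha)
```

Proof assistants like this can mechanically verify each step, which is exactly the kind of guarantee an LLM’s free-form text output doesn’t give you.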
DeepSeek has some legitimate reasons for hype, but it’s primarily hype relative to other LLMs and their training. There are still a lot of hurdles to getting LLMs past these common problems.