So I never got the chance to play with LLMs because of the phone number requirement. But recently DuckDuckGo added a chat feature that lets you talk to these models, so I have been trying them out.

I know these two models aren’t state of the art, but it’s tragic how they have devoured terabytes of text only to never understand the meanings of the words they use.

I tried talking to them about simple political issues. Once I asked why Elon Musk complains about “woke” while being a multibillionaire. I also asked why it is commonly said that the Israel-Palestine conflict is complicated. Both times they gave very NYT-esque, status-quo-friendly answers, which I think is very dangerous if the user is a naive person looking to delve into these topics. But as I question the premises and assumptions of the answers, they immediately start walking them back and singing a completely different tune. Often I don’t even have to explain the problems I have; just asking them to explain their assumptions is enough.

I got it from saying “Musk is critical of censorship and the lack of free speech” to “Musk is a billionaire in a highly unequal society and his societal criticisms are to be taken with a ton of salt”. For the Palestine one, it started off with a list of reasons behind the complexity. Then I got it to strike them off one by one, eventually concluding that the conflict is one of extreme power imbalance and that Palestinians are clear victims of settler colonialism according to international consensus. My arguments weren’t even that strong, and it just caved almost immediately.

I’m still trying to find use cases for LLMs. Specifically, I would be really happy if I could find a use for a small model like TinyLlama. I find text summarization promising, but I wouldn’t use it on a text I haven’t read before, because LLMs sometimes lie.
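
For reference, here is a minimal sketch of the kind of local summarization I have in mind with TinyLlama via Hugging Face transformers (the model ID is the published one, but the placeholder article, prompt wording, and generation settings are just assumptions for illustration):

```python
# Minimal sketch: summarizing a short text with TinyLlama locally
# (assumes the `transformers` and `torch` packages are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

article = "..."  # paste a text you have already read, so you can check the summary

messages = [{"role": "user",
             "content": "Summarize the following in two sentences:\n\n" + article}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=120, do_sample=False)

# Decode only the newly generated tokens, not the prompt.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```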

  • bobs_guns@lemmygrad.ml

    Never use an LLM as a source of truth. And if you try to use one as a sounding board, all you will get is an echo chamber. The LLM is the ultimate yes man.

  • Shrike502@lemmygrad.ml

    LLMs are just a tool, and just like all other tools under capitalism, they are owned by the bourgeoisie and thus do what the bourgeoisie demands. Think of them as you would factories that produce ammunition and weapons instead of tractors and useful everyday items.

  • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml

    It’s key to keep in mind how LLMs work. Fundamentally, they’re not really different from Markov chains. It’s a giant graph of tokens that the network has been trained on, and all it’s doing is predicting the next most likely token based on the input. What output it produces is directly based on what data it’s been trained on.
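
    To make the Markov chain analogy concrete, here’s a tiny word-level Markov chain in Python (purely an illustration of next-token prediction; a real LLM uses a neural network over tens of thousands of tokens, not a lookup table):

    ```python
    # Toy Markov chain: record which token follows which, then sample continuations.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # "Training": for each token, collect the tokens that followed it.
    transitions = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        transitions[current].append(following)

    # "Generation": start from a token and keep predicting the next one.
    token = "the"
    output = [token]
    for _ in range(8):
        candidates = transitions.get(token)
        if not candidates:
            break
        token = random.choice(candidates)  # frequent successors are picked more often
        output.append(token)

    print(" ".join(output))
    ```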

    I’ve found a few good uses for LLMs. I find they generally do a decent job with text summaries, and they can also be useful for code examples. While the code they produce isn’t necessarily correct, it’s often helpful for pointing you in the right direction, and faster than looking through stuff like Stack Overflow. The caveat there is that you already have to understand what you’re trying to do.

    Another use that I’ve found interesting is to use LLM as a sounding board. If I’m trying to explore an idea, having a chat with it can be useful because the responses can often stimulate new ideas. Accuracy is not an issue in this case because I already understand the topic I’m trying to explore, and the value is in the phrasings that can fall out of the LLM that can give me a mental thread to pull on.

    Similarly, LLMs are also great for practising languages. I’m learning Mandarin right now, and the app I use has a built-in LLM chatbot that you can talk to.

    I think even more interesting are some of the recent stories from China where LLMs are being used to monitor infrastructure and predict where to do proactive maintenance. This is a great use case because these things are great at correlating large volumes of data and doing extrapolations. But the output is simply advice to the human operator, who’s actually making the final decision.

    • KrasnaiaZvezda@lemmygrad.ml

      New robots are also using LLMs both for understanding their environment with cameras, rather than through complicated sensors that might not capture the world as we do, and for controlling movement: basically taking in the data from the robot plus what other LLMs understand from the environment and predicting what inputs are needed to move correctly or to do any given task.

      As LLMs get better they can also come up with better strategies, which is already being used to some extent to have them create, test, and fix code based on outputs and error messages. This should soon allow fully autonomous robots as well, ones that can think by themselves and interact with the world, leading to many advancements like the full automation of work and scientific discoveries.

  • amemorablename@lemmygrad.ml

    I can explain more later if need be, but some quick-ish thoughts (I have spent a lot of time around LLMs and discussion of them in the past year or so).

    • They are best for “hallucination” on purpose. That is, fiction/fantasy/creative stuff. Novels, RP, etc. There is a push in some major corporations to “finetune” them to be as accurate as possible and market them for that use, but this is a dead end for a number of reasons, and you should never ever trust what an LLM says on anything without verifying it outside of the LLM (i.e. you shouldn’t take what it says at face value).

    • LLMs operate on the probability of continuing what is in “context” by picking the next token. This means one could have the correct info on something and, even with a 95% chance of picking it, still hit that 5% and go off the rails (see the small sampling sketch after this list). LLMs can’t go back and edit phrasing or plan out a sentence either, so if one picks a token that makes a mess of things, it just has to keep going. Similar to an improv partner in real life: no backtracking, no “this isn’t the backstory we agreed on”, you just have to keep moving.

    • Because LLMs continue based on what is in “context” (their short-term memory of the conversation, kind of), they tend to double down on what is already said. So if you get one saying blue is actually red once, it may keep saying that. If you argue with it and it argues back, it’ll probably keep arguing. If you agree with it and it agrees back, it’ll probably keep agreeing. It’s very much a feedback loop that way.
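
    Here’s the small sampling sketch mentioned above (the vocabulary and probabilities are made up purely to illustrate the point; real models sample from a distribution over tens of thousands of tokens):

    ```python
    # Toy next-token sampling: a low-probability token still gets picked sometimes,
    # and once it lands in the context there is no backtracking step to remove it.
    import random

    # Made-up distribution for the token after "The sky is"
    next_token_probs = {"blue": 0.95, "red": 0.05}

    context = ["The", "sky", "is"]
    token = random.choices(list(next_token_probs),
                           weights=list(next_token_probs.values()))[0]
    context.append(token)  # roughly 1 run in 20 appends "red"

    # Everything generated after this point is conditioned on the context as-is,
    # which is why the model tends to double down on whatever is already there.
    print(" ".join(context))
    ```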

  • Cyrazure@lemmygrad.ml

    just append “from a marxist perspective” afterwards and you should be good to go

    LLMs are trained to appear as “unbiased” as possible relative to establishment ideas, which just happen to be liberal.

    • KrasnaiaZvezda@lemmygrad.ml

      I was using Llama 3 8B offline and Command R 104B on HuggingChat with a system prompt to always take a Marxist-Leninist/materialist point of view, and it does work quite well. You might need to play around with the wording you use when prompting, but it can do a lot.
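
      For anyone wanting to try the offline half of that, here’s a minimal sketch with llama-cpp-python (the GGUF filename and the prompt wording are placeholders, not the exact setup described above):

      ```python
      # Minimal sketch: local Llama 3 8B chat with a fixed system prompt
      # (assumes llama-cpp-python is installed and a GGUF file was downloaded beforehand).
      from llama_cpp import Llama

      llm = Llama(model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf", n_ctx=4096)

      messages = [
          {"role": "system",
           "content": "Always answer from a Marxist-Leninist, materialist point of view."},
          {"role": "user",
           "content": "Why is the Israel-Palestine conflict so often called complicated?"},
      ]

      out = llm.create_chat_completion(messages=messages, temperature=0.7)
      print(out["choices"][0]["message"]["content"])
      ```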

      Llama 3 8B was trying to do it even for some physics stuff I was doing, so I guess I either need a better system prompt or it’s the kind of thing that a bigger/smarter model might deal with better. lol

  • FuckyWucky [none/use name]@hexbear.net

    yea you can get it to say whatever you want if you manipulate it enough, but by default it’s liberal (that being the dominant ideology and all).

    i found it useful for making small scripts and all. also, 3.5 turbo is quite a bit dumber than 4.0