• naevaTheRat@lemmy.dbzer0.com · 4 months ago

    Huh? A human brain is a complex-as-fuck persistent feedback system. When a nervous impulse starts propagating through the body/brain, whether or not that particular impulse has time to be integrated into consciousness has no bearing on the existence of a mind that would be capable of doing so. It’s not analogous at all.

    LLMs were designed to predict the next token, they do actually do so, but clearly they can do more than that, for example they can solve high school level math problems they have never seen before

    No, see, this is where we’re disagreeing. They can quite often output strings which map to solutions of the problem. Because they have internalised patterns, at other times they will output strings that don’t map to solutions, and there is no logic to the successes and failures that indicates any sort of logical engagement with the maths problem. It’s not like you can say “oh, this model understands division but has trouble with exponentiation”, because it is not doing maths. It is doing string manipulation which sometimes looks like maths.
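    To make the “pattern matching, not maths” point concrete, here’s a toy sketch (a hypothetical bigram next-token model, vastly simpler than any real LLM, trained on made-up text): it memorises which token tends to follow which, so it can emit a plausible-looking answer after “=” without ever doing arithmetic, and it has nothing at all to say about sequences it hasn’t seen.

```python
from collections import defaultdict

def train_bigram(corpus):
    # count which token follows which: pure pattern memorisation, no arithmetic
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def next_token(counts, tok):
    # emit the most frequent follower of tok; shrug at anything unseen
    followers = counts.get(tok)
    if not followers:
        return "?"
    return max(followers, key=followers.get)

corpus = "2 + 2 = 4 ; 3 + 3 = 6".split()
model = train_bigram(corpus)

print(next_token(model, "="))  # a memorised follower of "=", not a computed sum
print(next_token(model, "7"))  # "?" -- no pattern stored for "7", so no "answer" at all
```

    The model’s successes and failures track what was in its training text, not any property of the underlying arithmetic, which is the distinction being argued here.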

    human reasoning is a side effect of their ability to have lots of children.

    This is reductive to the point of absurdity. You may as well say human reasoning is a side effect of quark bonding in rapidly cooling, highly localised regions of spacetime. You won’t actually gain any insight by paving over all the complexity.

    LLMs do absolutely nothing like what an animal mind does; humans aren’t internalising massive corpora of written text before they learn to write. Babies learn conversational turn-taking long before anything resembling speech, for example. There’s nothing like the constant back-and-forth between the phonological loop and the speech centres as you listen to what you just said and make the next sound.

    The operating principle is entirely alien, highly rigid, and simplistic. It is fascinating that it can be used to produce stuff that often looks like what a conscious mind would do, but that is not evidence that it’s doing the same task. There is no reason to suspect there is anything capable of supporting understanding in an LLM; they lack anything like the parts we expect to be present for that.

    • dualmindblade [he/him]@hexbear.netOP · 4 months ago

      Huh? a human brain is a complex as fuck persistent feedback system

      Every time-limited feedback system is entirely equivalent to a feed-forward system, similar to how you can unroll a for loop.
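      A minimal sketch of that unrolling claim (toy numbers, not any actual network): run a feedback step for a fixed number of ticks, then write the very same computation as a straight-line feed-forward chain.

```python
def step(state, x):
    # one tick of a feedback system: the new state folds in the old state
    return 0.5 * state + x

def recurrent(inputs):
    state = 0.0
    for x in inputs:  # feedback: the state loops back every tick
        state = step(state, x)
    return state

def unrolled(x0, x1, x2):
    # the same computation with the loop unrolled into a feed-forward chain,
    # possible only because the run is time-limited to three ticks
    s1 = step(0.0, x0)
    s2 = step(s1, x1)
    s3 = step(s2, x2)
    return s3

print(recurrent([1.0, 2.0, 3.0]) == unrolled(1.0, 2.0, 3.0))  # True
```

      The unrolled version performs the identical sequence of operations, so for any bounded run the feedback loop adds no computational power.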

      No see this is where we’re disagreeing… It is doing string manipulation which sometimes looks like maths.

      String manipulation and computation are equivalent. Do you think that not just LLMs but computers themselves cannot, in principle, do what a brain does?
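      The equivalence in miniature (a made-up rewrite system for illustration, not a claim about how LLMs work internally): addition of unary numerals carried out purely by replacing substrings, with no arithmetic anywhere.

```python
def apply_rules(s, rules):
    # keep applying the first matching rewrite rule until nothing matches
    while True:
        for pattern, replacement in rules:
            if pattern in s:
                s = s.replace(pattern, replacement, 1)
                break
        else:
            return s

# "11" encodes 2 and "111" encodes 3; erasing the "+" performs the addition
print(apply_rules("11+111", [("+", "")]))  # "11111", i.e. 5
```

      String rewriting systems of this kind are Turing-complete, which is why “it’s just string manipulation” doesn’t by itself rule anything out.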

      …you may as well say human reasoning is a side effect of quark bonding…

      No, because that has nothing to do with the issue at hand: humans, LLMs, and rocks all have that in common. What humans and LLMs specifically have in common is that they are the result of an optimization process, and they do things that weren’t specifically optimized for as side effects. LLMs probably don’t understand anything, but it would certainly help them predict the next token if they did understand, so describing them as only token predictors doesn’t help us with the question of whether they have understanding.

      …but that is not evidence that it’s doing the same task…

      Again, I am not trying to argue that LLMs are like people, or that they are intelligent, or that they understand; I am not trying to give evidence of any of this. I’m trying to show that this reasoning (“LLMs merely predict a distribution of next tokens, therefore LLMs don’t understand anything and therefore can’t do certain things”) is completely invalid.