Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • Tommasi [she/her]@hexbear.net · 1 year ago

    I don’t know where everyone is getting these in-depth understandings of how and when sentience arises.

    It’s exactly the fact that we don’t know how sentience forms that makes acting like fucking chatgpt is on the brink of developing it so ludicrous. Neuroscientists don’t even know how it works, so why are these AI hypemen so sure they’ve got it figured out?

    The only logical answer is that they don’t and it’s 100% marketing.

    Hoping that computer algorithms built to superficially mimic neural connections will somehow become capable of thinking on their own if they just get powerful enough is a complete shot in the dark.

    • Nevoic@lemm.ee · 1 year ago

      The philosophy of this question is interesting, but if GPT5 is capable of performing all intelligence-related tasks at an entry level for all jobs, it would not only wipe out a large chunk of the job market, but also stop people from getting to senior positions, because the entry-level positions would be filled by GPT.

      Capitalists don’t have 5-10 years of forethought to see how this would collapse society. Even if GPT5 isn’t “thinking”, its capabilities are what will make a material difference. Even if it never reaches the point of advanced human thought, it’s already spitting out a bunch of unreliable information. Make it slightly more reliable and it’ll be on par with entry-level humans in most fields.

      So I think dismissing it as “just marketing” is too reductive. Even if you think it doesn’t deserve rights because it’s not sentient, it’ll still fundamentally change society.

      • UlyssesT [he/him]@hexbear.net · 1 year ago

        So I think dismissing it as “just marketing” is too reductive.

        And I think buying into the hype enough to say that LLMs are imminently going to match and outpace living organic brains in all of their functions is too credulous.

        it’ll still fundamentally change society

        With the current capitalist system, and given who owns and commands that technology, it’s changing it all right, for the worse.

    • archomrade [he/him]@midwest.social · 1 year ago

      The problem I have with this posture is that it dismisses AI as unimportant, simply because we don’t know what we mean when we say we might accidentally make it ‘sentient’ or whatever the fuck.

      Seems like the only reason anyone is interested in the question of AI sentience is to determine how we should regard it in relation to ourselves, as if we’ve learned absolutely nothing from several millennia of bigotry and exceptionalism. Shit’s different.

      Who the fuck cares if AI is sentient? It can be revolutionary or existential or entirely overrated independent of whether it has feelings or not.

      • Tommasi [she/her]@hexbear.net · 1 year ago

        I don’t really mean to say that LLMs and similar technology are unimportant as a whole. What I have a problem with is this kind of Elon Musk-style marketing, where company spokespeople and marketing departments make wild, sensationalist claims and hope everyone forgets about them in a few years.

        If LLMs are to be handled in a responsible way, we need to have honest dialogue about what they can and cannot do. The techbro mystification about superintelligence and sentience only obfuscates that.