It’s useful for marketing hype and for making credulous consumers believe that a perfect helpmeet program that actually loves them for real is right around the corner. That’s the issue here: a quality that is difficult to define and poorly understood gets assigned to a marketed product; in this case, sentience (or even sapience) to LLMs.
People who insist machine learning lacks any sophistication are just as detached from reality as people who are convinced its sentience is just around the corner. Both camps are blind to its material impact, and it stresses me out that people are busy arguing over woo-woo metaphysical definitions when even a non-conscious GPT model can displace the labor of millions of people, and we’re still light-years away from a socialist organization of labor.
None of the previous industrial revolutions were brought on by a sentient machine; I’m not sure why sentience is relevant to this technology’s potential impact.
Bullshit false equivalency to run interference for the “only equally detached from reality” people like this:
https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims
I don’t think you’re going to change any minds with your naked “both sides” centrist posturing that has an obvious slant favoring LLM marketing hype.
The entire question of sentience is irrelevant to the material impact of the technology. Granting or dismissing that quality to AI is a meaningless distraction.
“both sides” centrist posturing that has an obvious slant favoring LLM marketing hype.
I don’t favor the hype; I’m just not naive enough to dismiss the potential impact of machine learning based on something as immaterial and ill-defined as “sentience”. The entire proposition is ridiculous.
The entire question of sentience is irrelevant to the material impact of the technology.
I actually agree here. On its surface that part is irrelevant, but it keeps getting brought up as part of the marketing hype, and that does have real consequences, including in this thread, where people buying into the LLM hype raise those questions themselves and assign attributes to LLMs that simply aren’t there outside of the aforementioned marketing hype.
I’m just not naive enough to dismiss the potential impact of machine learning
That impact, so far, has been mostly harmful because of who owns and who commands the technology. Analysis of that is fine, but most claims about how “liberating” it will surely be look like idealism to me under the current material conditions and the present system.
EDIT: Besides, you should look again at which position is bringing the sentience talk here:
https://hexbear.net/comment/4292155
And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.
I’m not actually sure there’s much daylight between our views here, except that your concern over its impact seems mostly oriented toward it being used as a cudgel against labor, irrespective of what qualities of competence AI might actually have. I don’t mean to speak for you; please correct me if I’m wrong.
While I think the question of AI sentience is ridiculous, I still think it wouldn’t take much further development before some of these models start meaningfully replicating human competence (i.e., completing some tasks at least as competently as a human). Considering that the previous generation of models couldn’t string more than 50 words together before devolving into nonsense, and the following generation could string together working code with nothing fundamentally different in its structure, it is not out of the question that one or two more breakthroughs could bring them within striking distance of human competence. Dismissing the models as unintelligent misrepresents what I think the threat actually is.
I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that’s hubris.
irrespective of what qualities of competence AI might actually have
That competence mostly registers as a net negative in its present state because of who owns and who commands it. The “competence” isn’t thrilling or inspiring to people who are denied medical care because a computer program “accidentally” rejected their claims, or who experience increasingly sophisticated profiling and surveillance technology, or who previously paid their bills with artistic talent and are now outbid by cheap-to-free treat printing technology.
At ground level, among common people, outside of the science fiction scenarios in their movies and shows and games, asking them to be particularly “curious” about such things when all they feel from them is downward pressure is condescending. I don’t blame some for being knee-jerk against it, or against those scolding them for not being enthusiastic enough.
I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that’s hubris.
That was not my position, though I do, on the side, mock the singularity cultists and the false claims about how close the robot god’s construction is, and I also condemn the reductionist derision of living human beings with edgy techbro terminology like “meat computers” by people trying to boost their favorite LLM products.
No disagreement with anything you just said; apologies for misinterpreting your position.
I don’t know how to reconcile the manic singularity cultists with what I feel is a very real acceleration toward a hellscape of underemployment and hyper-capitalism driven by AI. The urgency AI represents deserves anxious attention, and I at least appreciate the weight those cultists place on a technology I think represents a threat. It feels like people either eagerly await a sentient AGI or mock AI on those same terms of sentience, leaving precious few who are actually, materially concerned with the threats AI represents. That is not at all a way of dismissing the very real ways machine learning is deployed against real people today, but I think there’s a lot of room for it to get worse, and I wish people took that possibility seriously.
No arguments from me here.
It’s especially frustrating because there are very real threats from the technology as it is being applied and commanded, but because the ruling class has so many tech billionaires among them, their version of perceived threats gets the attention and publicity, usually some pop culture shit about robot uprisings (against them specifically).