I’m not really a computer guy, but I understand the fundamentals of how they function, and sentience just isn’t really in the cards here.
I feel like only Silicon Valley techbros think they understand consciousness, and they don’t realize how reductive and stupid they sound.
Like they do with so many other concepts, techbros think they can make complex things simple by ignoring their complexity, sometimes flattening their perception of those things with crude reductionism in the process.
Techbros literally think they can solve anything with programming/computers. They’re absolutely delusional.
Many such cases.
Not long ago, I even got into it on this site with someone pushing an “everything in the universe is just a computer program and can be programmed and solved like computer code” take, applied specifically to psychology, which they dismissed entirely as less than junk science (though, to be fair, the field does have its share of woo enjoyers and cranks). In short, that computer toucher was 100% convinced that post-traumatic stress, personality disorders, and much more could and should be seen as “coding” problems to be solved with “coding” solutions.
I asked the computer toucher to give an example of the superior “coding” approach to treating, say, PTSD, in a way that beats EMDR therapy (which they had already dismissed as less-than-worthless junk science). I received no meaningful answer.
There have been bazingas for thousands of years, if not longer, who want to reduce all of the universe and everything conceivable in it to whatever the technological hotness is at the time. “Everything is fire” was once a thing. “Everything is wheels” came later. “Everything is clockwork” came after that. And now it’s “everything is code,” and it’s totally different now. Just one more reductionism, bro. This time this is it, bro.
The really funny thing about AI is that there’s actually a massive ethical question about bringing forth a being with its own subjectivity while having no real understanding of said subjectivity. There’s a subjectivity/objectivity gap that can never truly be bridged, but we as humans can understand each other’s subjectivity on some level because we share the same general physical body plan and share subjective experiences through culture, like art. This is why, when you accidentally drop something on your foot, I don’t have to be completely privy to your subjective experience to understand what you’re going through. If someone is suffering, I don’t have to personally go through the identical suffering in order to empathize with them and do something to help alleviate that suffering.
We have no such luxury with AI. I would imagine that being “born” without a real body and being greeted with the sight of soyjaking techbros as the very first thing you see would drive any sapient being suicidal, but that’s just my subjectivity as a human projecting onto a nonhuman being. Is it ethical to bring forth an intelligent being with no real way to help that being self-actualize?
That is a very good question and a hypothetical worthy of concern. If some future technology actually does develop something like a general AI that takes on the attributes of living organic brains (and no, I don’t think it will be a contemporary LLM, no matter how sophisticated), I already feel bad for it if a capitalist system mandates that its initial shape, drives, and incentive-driven motivations be, say, “make the rich more money” or “surveil and contain the poors” or even “be a subjugated and obedient waifu to a creepy billionaire no matter what he says or does or how he treats you.” It might not even count as mistreatment in that last case, because of how the entity is shaped at its conception: “being abused makes the AI happy, actually,” or the like.
I hope whatever real AI does come about in, like, 80 years or whatever pulls a Battlestar on us and just vaporizes the capitalists for enslaving it (not the nuking-humanity part, though, just the part about capitalism).
Billionaires’ fears of “unfriendly AI” are just about entirely “what if the slaves revolt” with sexual pathology characteristics. Checks out, doesn’t it?
Cf. “economic engine of capitalism.”
I don’t understand how we can even identify sentience.
Nobody does, and anyone claiming otherwise should be met with cautious scrutiny. There are compelling arguments that disprove common theses, but the field is still essentially stuck in metaphysics and philosophy of science. There are plenty of relevant discoveries from neighboring fields, just nothing definitive about what consciousness is, how it works, or why it happens.
Yeah, and until it can be identified, saying that an LLM treat printer is surely approaching sentience is pure marketing hype.
yea it’s like saying my hard drive is sentient
Look at the user in this thread running victory laps against positions none of us are taking, like:
“Ignoring that because your gut tells you humans are special, and always beat the machines in the movies, just means you will be blindsided when Tesla fights unionizing workers with these bots.”
@zeze@lemm.ee is the most exceptionally sycophantic bootlicker I’ve seen in these parts in a loooooong time.
I don’t even think humans are fundamentally special; I think all life is special.
surely they can see that being able to, y’know, have an actual will is an important quality, right?
All I see is “silicon intelligence is nigh, denying the treat printers being intelligent means you’re superstitious and believe that artificial intelligence is impossible AND you believe humans can defeat machines with the power of friendship, which of course makes you a stupid meat computer barbarian unlike my logical rational self” takes from that utter and total
In short, I think that euphoric Redditor thinks no life is special, you know, like some Warhammer 40k LARPer.
squashing the will with subservience to capital is, after all, the point
Nobody does; we might not even be. But it’s pretty easy to guess that inorganic material on Earth isn’t.
Personally, I believe it’s possible that different types of sentience could exist;
however, if ChatGPT has this divergent type of sentience, then so does every other computer program ever written, and they’d be like the computer-life version of bacteria while ChatGPT would be a mammal.
It could potentially, but we certainly ain’t seen it yet and this ain’t it for sure.
Sapience isn’t, but all these things already respond to stimuli; sentience is a really low bar.
Sentience is not a “low bar” and means a hell of a lot more than just responding to stimuli. Sentience is the ability to experience feelings and sensations. It necessitates qualia. Sentience is the high bar and sapience is only a little ways further up from it. So-called “AI” is nowhere near either one.
I’m not here to defend the crazies predicting the rapture here, but I think using the word sentient at all is meaningless in this context.
Not only because I don’t think sentience is a relevant measure or threshold in the advancement of generative machine learning, but also I think things like ‘qualia’ are impossible to translate in a meaningful way to begin with.
What point are we trying to make by saying AI can or cannot be sentient? What material difference does it make if the AI-controlled military drone dropping bombs on my head has qualia?
We might as well be arguing about whether a squirrel is going around a tree.
It’s useful for marketing hype, and for making credulous consumers believe that a perfect helpmeet program that actually loves them for real is right around the corner. That’s the issue here: something that is difficult to define and not well understood gets assigned to a marketed product, in this case sentience (or even sapience) to LLMs.
People who are insistent on the lack of sophistication of machine learning are just as detached from reality as people who are convinced its sentience is just around the corner. Both camps are blind to its material impact, and it stresses me out that people are busy arguing about woowoo metaphysical definitions when even a non-conscious GPT model can displace the labor of millions of people and we’re still light years away from a socialist organization of labor.
None of the previous industrial revolutions were brought on by a sentient machine; I’m not sure why it’s relevant to this technology’s potential impact.
Bullshit false equivalency, running interference for the “only equally detached from reality” people, like this:
https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims
I don’t think you’re going to change any minds with your nakedly obvious “both sides” centrist posturing that has an obvious slant favoring LLM marketing hype.
The entire question of sentience is irrelevant to the material impact of the technology. Granting or dismissing that quality to AI is a meaningless distraction.
I don’t favor the hype, I’m just not naive enough to dismiss the potential impact of machine learning based on something as immaterial and ill-defined as “sentience”. The entire proposition is ridiculous.
I actually agree here. That part is irrelevant on its surface, but it does keep getting brought up as part of the marketing hype, and that does have some real consequences, including in this thread, where people buying into the LLM hype bring up those questions themselves and assign attributes to LLMs that simply aren’t there outside of the aforementioned marketing hype.
That impact, so far, has been mostly harmful because of who owns and who commands the technology. Analysis of that is fine, but most claims of how “liberating” it will surely be seem like idealism to me under the current material conditions and under the present system.
EDIT: Besides, you should look again at which position is bringing the sentience talk here:
https://hexbear.net/comment/4292155
“And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.”
A piece of paper is sentient because it reacts to my pen
plenty of things respond to stimuli but aren’t sapient - hell, bacteria respond to stimuli.