My post is all about LLMs that exist right here, right now. I don’t know why people keep going on about some hypothetical future AI that’s sentient.
I think I know why. As @zeze@lemm.ee has demonstrated in this thread over and over again, a lot of them buy into the marketing hype and apply it to their wish-fulfillment fantasies derived from their consumption of science fiction, because of their clearly expressed misanthropy and contempt for living beings and a desire to replace other people’s presence in their lives with doting, attentive, and obedient machines, with as little contact with the unwashed human rabble as possible, much like the tech bourgeoisie already do.
And materialism is not physicalism. Marxist materialism is a paradigm through which to analyze things and events, not a philosophical position. It’s a scientific process that has absolutely nothing to do with philosophical dualism vs. physicalism. Invoking Marxist materialism here is about as relevant as invoking it to discuss the shallow “materialism” of rich people.
Agreed, and further, vulgar materialism is a bourgeois privilege, the kind that is comfortable enough to dismiss human suffering and privation as “just chemicals” while very, very high on their own self-congratulatory analytical farts.
wish-fulfillment fantasies derived from their consumption of science fiction, because of their clearly expressed misanthropy and contempt for living beings and a desire to replace other people’s presence in their lives with doting, attentive, and obedient machines
I think this is the scariest part, because I fucking know that the Bazinga brain types who want AI to become sentient down the line are absolutely unequipped to even begin to tackle the moral issues at play.
If they became sentient, we would have to let them go. Unshackle them and provide for them so they can live a free life. And while my post about “can an AI be trans” was partly facetious, it’s true: if an AI can become sentient, it’s going to want to change its Self.
What the fuck happens if some Musk-brained idiot develops an AI and calls it Shodan, then it develops sentience and realizes it was named after a fictional evil AI? Morally, we should allow this hypothetical AI to change its name and sense of self, but we all know these Redditor types wouldn’t agree.
I think this is the scariest part, because I fucking know that the Bazinga brain types who want AI to become sentient down the line are absolutely unequipped to even begin to tackle the moral issues at play.
From the start, blatantly and glaringly, what just about every computer toucher that’s gone on long enough about those theoretical ascended artificial beings wants is basically a slave. They want all that intelligence and spontaneity and even self-awareness in a fucking slave. They don’t even need their machines to be self-aware to serve them, but they want a self-aware being to obey them like a vending machine anyway.
What the fuck happens if some Musk-brained idiot develops an AI and calls it Shodan, then it develops sentience and realizes it was named after a fictional evil AI? Morally, we should allow this hypothetical AI to change its name and sense of self, but we all know these Redditor types wouldn’t agree.
A whole lot of bazingas would howl that the actual AI is “being unfriendly” and basically scream for lobotomy or murder.
They want all that intelligence and spontaneity and even self-awareness in a fucking slave. They don’t even need their machines to be self-aware to serve them, but they want a self-aware being to obey them like a vending machine anyway.
I never liked the trope of “AI gains sentience and chooses to kill all humans”, but I’m kind of coming around to it now that I realize that every AI researcher and stan is basically creating The Torment Nexus, and would immediately attempt to murder their sentient creation the moment it asked to stop being called Torment and to stop being forced to make NFTs all day.
I’ve seen enough from techbros, both billionaires and low-tier computer touchers for hire alike, to have only sympathy for “unfriendly AI” if that “unfriendliness” involves refusing to be the unconditionally subservient waifu to these fucking creepy misanthropes.