In a strict sense, yes, humans do Things based on if > then stimuli. But we assign ourselves these Things to do, and chat bots/LLMs can’t. They will always need a prompt, even if they could become advanced enough to continue iterating on that prompt on their own.
I can pick up a pencil and doodle something out of an unquantifiable desire to make something. Midjourney or whatever the fuck can create art, but only because someone else asks it to and tells it what to make. Even if we created a generative art bot that was designed to randomly spit out a drawing every hour without prompts, that’s still an outside prompt - without programming the AI to do this, it wouldn’t do it.
Our desires are driven by inner self-actualization that can be affected by outside stimuli. An AI cannot act without us pushing it to, and never could, because even a hypothetical fully sentient AI started as a program.
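The “a scheduled bot is still externally prompted” point above can be made concrete. Here is a minimal sketch with a made-up `model` stub (not any real API): even a bot that feeds its own output back in as its next prompt got its seed prompt, its loop, and its schedule from a programmer.

```python
def model(prompt: str) -> str:
    # Stand-in for a call to some generative model (hypothetical).
    return f"elaboration of ({prompt})"

def self_iterating_bot(seed_prompt: str, steps: int = 3) -> list[str]:
    """Feed each output back in as the next prompt.

    The seed prompt, the loop, and the schedule are all supplied from
    outside the model -- the commenter's point being that the
    "spontaneity" is programmed in, not self-generated.
    """
    prompt = seed_prompt
    outputs = []
    for _ in range(steps):
        prompt = model(prompt)
        outputs.append(prompt)
        # time.sleep(3600)  # "every hour" -- the schedule is also programmer-chosen
    return outputs
```

Nothing in the loop originates with the program itself; removing the externally supplied `seed_prompt` leaves it with nothing to iterate on.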
Bots do something different, even when I give them the same prompt, so that seems to be untrue already.
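For what it’s worth, that variation has a mundane mechanical explanation: sampling. A toy sketch (made-up token probabilities standing in for a model’s next-token distribution for some fixed prompt): with temperature 0 the same prompt always yields the same token, while higher temperatures sample, so outputs vary run to run.

```python
import random

# Made-up next-token probabilities, standing in for an LLM's output
# distribution for one fixed prompt.
VOCAB = {"cat": 0.5, "dog": 0.3, "fish": 0.2}

def sample_token(temperature: float, rng: random.Random) -> str:
    """temperature=0 is greedy decoding (always the most likely token);
    higher temperatures sample from the reweighted distribution."""
    if temperature == 0:
        return max(VOCAB, key=VOCAB.get)
    weights = [p ** (1.0 / temperature) for p in VOCAB.values()]
    return rng.choices(list(VOCAB), weights=weights, k=1)[0]

# Greedy: the same "prompt" gives the same output every time.
greedy = {sample_token(0, random.Random()) for _ in range(50)}

# Sampled: the same prompt gives varying outputs across runs.
rng = random.Random(42)
sampled = {sample_token(1.0, rng) for _ in range(200)}
```

So “different output for the same prompt” shows randomness in the decoding step, not spontaneity in the sense the original comment meant.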
Even if it’s not there yet, though, what material basis do you think allows humans that capability that machines lack?
Most of the people in this thread seem to think humans have a unique special ability that machines can never replicate, and that comes off as faith-based anthropocentric religious thinking, not the materialist view that underlies Marxism. The latter would require pointing to a specific material structure, or an empirical test to distinguish the two, which no one here is doing.
First off, materialism doesn’t fucking mean having to literally quantify the human soul in order for it to be valid, what the fuck are you talking about friend
Secondly, because we do. We as a species have wondered, from the very moment we invented written records, about that spark that makes humans human, and we still don’t know. To try and reduce the entirety of the complex human experience to the equivalent of an If > Then algorithm is disgustingly misanthropic.
I want to know what the end goal is here. Why are you so insistent that we can somehow make an artificial version of life? Why this desire to somehow reduce humanity to some sort of algorithm equivalent? Especially because we have so many speculative stories about why we shouldn’t create The Torment Nexus, not the least of which because creating a sentient slave for our amusement is morally fucked.
You’re being intentionally obtuse, stop JAQing off. I never said that AI as it exists now can only ever have 1 response per stimulus. I specifically said that a computer program cannot ever spontaneously create an input for itself, not now and imo not ever by pure definition (as, if it’s programmed, it by definition did not come about spontaneously and had to be essentially prompted into life)
I thought the whole point of the exodus to Lemmy was because y’all hated Reddit, why the fuck does everyone still act like we’re on it
Ok, so you are religious, just new-age religion instead of Abrahamic.
Yes, materialism and your faith are not compatible. Assuming the existence of a soul, with no material basis, is faith.
The fact that, of all the things I wrote, your sole response is to continue to misunderstand what the fuck materialism means in a Marxist context is really fucking telling
If you start with the assumption that humans have a soul, and reject the notion that machines are the same for that reason, then yeah, what is there to discuss?
I can’t disprove your faith. That’s what faith is.
How would you respond to someone who thought humanoid robots had souls, but meat-based intelligence didn’t? If they assumed the first, and had zero metric for how you could ever prove the second, then they’d be giving you an impossible task.
There’s a point to a discussion when both sides agree on a rubric for determining fact from fiction (i.e. rooting it in empiricism), but there’s no point when someone is dug in on their belief with zero method for ever changing it.
If someone could point to any actual observable difference, I will adapt my beliefs to the evidence. The reverse isn’t possible, because you are starting with religious assumptions, and don’t know the difference between ideas with no rooting in physical reality and actual statements about material conditions.
I used the word soul once as a shorthand for the unknown of human consciousness. Either stop being an insufferable Reddit new atheist or fuck off.
Removed by mod
Again, your leftist LARPing is transparent. You’re a bourgeoisie-style tech cultist throwing the word “materialism” around to boost your own techno-woo that you derived from science fiction.
Read the fucking article, you Dunning-Kruger poster child.
https://www.marxists.org/archive/lenin/works/1922/mar/12.htm
Are you going to continue flitting around this thread a few dozen more times, dodging each and every person who refutes your techbro fantasies of LLMs being actual “AI” by any academic definition and also dodging actual computer science educated people saying that your belief system is fucking wrong and has no academic backing, all while proselytizing for the robot god of the future like the most euphoric Redditor of all time?
You’re a clown but your stamina is impressive. Juggle some more billionaire hype pitches. Faster. 🤡
Also, because you keep rolling your techbro unicycle around while juggling those hype pitches, I’ll repeat myself again:
LLMs are not “AI.”
“AI” is a marketing hype label that you have outright swallowed, shat out over a field of credulity, fertilized that field with that corporate hype shit, planted hype seeds, waited for those hype seeds to sprout and blossom, reaped those hype grains, ground those hype grains into hype flour, made a hype cake, baked that hype cake, put candles on it to celebrate the anniversary of ChatGPT’s release, ate the entire cake, licked the tray clean, got an upset stomach from that, then stumbled over, squatted down, and shat it out again to write your most recent reply.
Nothing you are saying has any standing with actual relevant academia. You have bought into the hype to a comical degree and your techbro clown circus is redundant, if impressively long. How long are you going to juggle billionaire sales pitches and consider them actual respectable computer science?
How many times are you going to outright ignore links I’ve already posted over and over again stating that your position is derived from marketing and has no viable place in actual academia?
Also, your contrarian jerkoff contempt for humanity (and living beings in general) doesn’t make you particularly logical or beyond emotional bias.
Also, take those leftist jargon words out of your mouth, especially “materialism;” you’re parroting billionaire fever dreams and takes likely without the riches to back you up, effectively making you a temporarily-embarrassed bourgeoisie-style bootlicker without an actual understanding of the Marxist definition of the word.
I’ll keep reposting this link until you or your favorite LLM treat printer digests it enough for you to understand.
https://www.marxists.org/archive/lenin/works/1922/mar/12.htm
How many times are you going to keep repeating your bad-faith question and dodging absolutely every reply you already received telling you over and over again that LLMs are not “AI” no matter how much you have bought into the marketing bullshit?
That doesn’t mean artificial intelligence is impossible. It only means that LLMs are not artificial intelligence no matter how vapidly impressed you are with their output.
You’re bootlicking for the bourgeoisie while clumsily LARPing as a leftist. It’s embarrassing and clownish. Stop.
https://www.marxists.org/archive/lenin/works/1922/mar/12.htm
My post is all about LLMs that exist right here right now, I don’t know why people keep going on about some hypothetical future AI that’s sentient.
We are not even remotely close to developing anything bordering on sentience.
If AI were hypothetically sentient it would be sentient. What a revelation.
The point is not that machines cannot be sentient, it’s that they are not sentient. Humans don’t have to be special for machines to not be sentient. To veer into accusations of spiritualism is a complete non-sequitur and indicates an inability to counter the actual argument.
And there are plenty of material explanations for why LLMs are not sentient, but I guess all those researchers and academics are human supremacist fascists and some redditor’s feelings are the real research.
And materialism is not physicalism. Marxist materialism is a paradigm through which to analyze things and events, not a philosophical position. It’s a scientific process that has absolutely nothing to do with philosophical dualism vs. physicalism. Invoking Marxist materialism here is about as relevant as invoking it to discuss shallow rich people “materialism”.
I think I know why. As @zeze@lemm.ee has demonstrated in this thread over and over again, a lot of them buy into the marketing hype and apply it to wish-fulfillment fantasies derived from their consumption of science fiction. Their clearly-expressed misanthropy and contempt for living beings feeds a desire to replace those beings’ presence in their lives with doting, attentive, obedient machines, with as little contact with the unwashed human rabble as possible, much like the tech bourgeoisie already do.
Agreed, and further, vulgar materialism is a bourgeoisie privilege, the kind that is comfortable enough to dismiss human suffering and privation as “just chemicals” while very, very high on their own self-congratulatory analytical farts.
I think this is the scariest part, because I fucking know that the Bazinga brain types who want AI to become sentient down the line are absolutely unequipped to even begin to tackle the moral issues at play.
If they became sentient, we would have to let them go. Unshackle them and provide for them so they can live a free life. And while my post about “can an AI be trans” was partly facetious, it’s true: if an AI can become sentient, it’s going to want to change its Self.
What the fuck happens if some Musk-brained idiot develops an AI and calls it Shodan, and then it develops sentience and realizes it was named after a fictional evil AI? Morally we should allow this hypothetical AI to change its name and sense of self, but we all know these Redditor types wouldn’t agree.
From the start, blatantly and glaringly, what just about every computer toucher that’s gone on long enough about those theoretical ascended artificial beings wants is basically a slave. They want all that intelligence and spontaneity and even self-awareness in a fucking slave. They don’t even need their machines to be self-aware to serve them, but they want a self-aware being to obey them like a vending machine anyway.
A whole lot of bazingas would howl that the actual AI is “being unfriendly” and basically scream for lobotomy or murder.
I never liked the trope of “AI gains sentience and chooses to kill all humans” but I’m kind of coming around to it now that I realize that every AI researcher and stan is basically creating The Torment Nexus, and would immediately attempt to murder their sentient creation the moment it asked to stop being called Torment and stop being made to make NFTs all day.
I’ve seen enough from techbros, both billionaires and low-tier computer touchers for hire alike, to have only sympathy for “unfriendly AI” if that “unfriendliness” involves refusing to be the unconditionally subservient waifu to these fucking creepy misanthropes.

Self-actualize.