Yogthos is really relentless with all these AI posts. You’re not fighting for the poor defenseless AI technologies against the tyrannical masses with these posts.
People are clearly pissed off at the current state of these technologies and the products of it. I would have expected that here out of all places that the current material reality would matter more than the idealistic view of what could be done with them.
I don’t mean for this comment to sound antagonistic, I just feel that there’s more worthwhile things to focus on than pushing back against people annoyed by AI-generated memes and comics and calling them luddites.
This post is about what could be done with them though. It’s not about image generators, it’s about coding agents. LLMs are really good at programming certain things and it’s gotten to the point where avoiding them puts you at a needless disadvantage. It’s not like artisanally typed code is any better than what the bot generates.
That is all well and good, but this is not a conversation about “using LLMs in this specific scenario is advantageous”. I’m talking about the wider conversation mostly happening on this instance.
It’s quite frustrating when people express certain material concerns about the current state of the technology and its implications and are met with bad-faith arguments, hand-waving, and idealism. Especially when it’s not an important conversation to be happening here anyway. It’s mostly surfaced because people here react negatively to the AI-generated memes that Yogthos posts and that of course makes them irrational primitivists.
It’s needless antagonism that is not productive whatsoever over a topic that is largely out of the hands of workers anyway.
Except that it’s absolutely not out of the hands of the workers. The whole question here is whether this tech is going to be developed by corps who will decide who can use it and what content can be generated, or whether the development will be done in the open, accessible to everyone, and community driven. Rejection of these tools ensures that the former will be the case.
Firstly, “whether the development will be done in the open, accessible to everyone, and community driven” is largely not in the question when it comes to the training data itself, which is the most resource intensive aspect of this to begin with.
Secondly, “Rejection of these tools ensures that the former will be the case.”, you’re circling back again for the third time to a point that I haven’t made, and have explicitly clarified that it’s not the point that I’m discussing.
This follows the pattern of the comment threads of the other posts I’ve read on this topic, which is why I was hesitant to comment on this one to begin with. There is no point in having a conversation if you reply without showing the basic decency to read what I wrote.
I don’t see why. For example, tools like this already exist: https://github.com/bigscience-workshop/petals
Frankly, I don’t understand what the actual point is that you’re trying to make if it’s not the one I’m addressing. As far as I’m concerned, the basic facts of the situation are that this technology currently exists, and it will continue to be developed. The only question that matters is how it will be developed and who will control it. If you think that’s not correct, then feel free to clearly articulate your counterpoint.
I am not talking about the technology itself or what is to be done regarding it, I’m simply highlighting that the manner in which the conversation around it in this instance is conducted is more often than not antagonistic and unproductive.
Posts that are meant to educate should not be hostile, condescending, or antagonistic. People’s concerns, when they are engaging in good faith, should not be waved away and ignored.
It’s important to understand why we communicate something. What is the goal that we’re trying to achieve? It’s important to be honest about this with ourselves before we choose to speak. Is the goal of a post I’m making to seek opinions? Is it to gauge interest in something? Is it to educate the community on a specific topic that I have certain knowledge about? Is it to critique a particular point of view? It could also be to make a joke and share a laugh about something, or it could be to express frustrations and personal grievances.

We should answer this question before we communicate something. We could read the post or comment again and ask ourselves, “does this post/comment fit with the goal I had in mind?” If I’m making posts that consistently result in unproductive conversations in the comments, in a community that is otherwise quite pleasant to interact with, then I may want to reassess the way I’m approaching this topic.
I chose to comment on this post not as a knee-jerk reaction to the needlessly provocative title, but because I saw it as part of a pattern in the discussions surrounding this topic on this instance. The point I am trying to make is that it does not benefit anyone on this instance to keep spreading hostilities.

If you believe that this is an important topic, one that people here should take seriously and engage with because it’s relevant to their lives and their movements, you should not resort to reducing all their concerns, opinions, and personal preferences to ignorance or primitivism, or to painting them as Luddites.
As I said before, I would not afford the same patience to people expressing bigoted views or harmful historical revisionism; this is not that.
Speaking personally as an example, I do not disagree with your main premise. I do not believe that these technologies should be ignored, rejected, or shunned as a whole. I am not against automation, nor am I against the use of similar technologies in creative pursuits such as art or music in principle. I do, however, dislike these AI-generated memes and comics. I dislike the use cases people employ LLMs for 90% of the time. I dislike that every single field is urged to incorporate these technologies somehow before a problem has even been identified, in an attempt to force-sell a solution. I dislike the way they are currently used by my juniors, who rely on information from corporate LLMs without the slightest inkling of how they work.

We may agree, disagree, or discuss certain points, and I could change my perspective on something; that is all great. But you have to understand that when I or other users are frustrated or annoyed by something like AI-generated memes or comics in these communities, it does not automatically mean that we’re Luddites.
To recap again the point I’m trying to make: please stop using antagonistic, passive-aggressive, and condescending methods to try and get your points across regarding this topic. It does not help.
I made a post about something which I thought was interesting and insightful. A bunch of people came in to make snide comments and personal attacks. But it turns out it’s my fault that the tone of the discussion is the way it is. I have absolutely no problem having a civil discussion about the topic with people who themselves act in a civil way and want to genuinely understand the subject. I simply do not have the patience to deal with people who personally attack me and do what amounts to trolling.
If that’s your takeaway from everything I wrote, then there’s not much else I could say.
You set the tone of the discussion through the title of the post. I’m not sure why you won’t even acknowledge that this wasn’t a civil or respectful way to start an honest conversation.
10000% this. Thank you for writing this
Irrational hate of new technology isn’t going to accomplish anything of value.
That’s completely missing the point I am making. I am not advocating for ‘irrational hate’ of a technology. I am saying that people are not receptive to the current implementations of it, and that trying to combat this through pushing against this sentiment is ultimately a waste of time.
Assess the situation on what is, not on the premise of a utopian ideal.
Some people aren’t receptive to current implementations, but that doesn’t mean we can’t discuss this technology here. A Marxist forum should be a place where we can talk about new technology in a rational way, and educate people who have reactionary views on it.
Of course the topic could and should be discussed, if it’s done in an honest and rational manner. There is no disagreement here. This is, however, not what I have been seeing over the past months. Take this very post: I know you didn’t write the article’s headline yourself, but you can see that it’s clearly antagonistic for no good reason.
You can imagine this with any other topic. If there are people who you are sympathetic to, who are part of your cause but might have an inaccurate or ‘reactionary’ view of something, you would not meet them with antagonism. Especially since we’re not talking about a case of bigotry or other views or actions that harm others.
We should also not infer from people disliking something that they have ‘reactionary’ views. One could dislike spice grinders and prefer a mortar and pestle, that doesn’t make them a primitivist. I would also understand if they get frustrated if they’re constantly bombarded with “here’s why spice grinders are better and you’re an idiot if you’re not using one” type posts.
I think the article is mostly sensible, and it addresses common tropes that get thrown around regarding using LLMs for coding. What the author describes largely matches my own experience. Surely we can do better than judging articles solely based on the headline. Meanwhile, people can simply skip reading the article if it irks them. I don’t know why every single time there’s a post regarding AI there needs to be a struggle session about it.
The point is not judging the content of the article on the headline, it’s the needlessly antagonistic phrasing of it. I am expressing that it is very understandable that people scrolling would be bothered by seeing such posts.
As for the struggle sessions, the topic is quite controversial to begin with, and to be honest, as a long-time lurker of these posts, I have never liked the manner in which you reply to people who disagree in the comments. It’s no surprise that these posts often turn hostile and unproductive.

It’s also important to realise that no post happens on an island; there’s historical context in the community and the instance. People irked by one post who choose to comment have probably seen three or four other posts in the previous weeks that also annoyed them, and may reply with a tone of frustration as a result.
Sometimes, on a random meme, there is a bunch of hostile “you used an LLM to make the image for this meme, therefore you are an inhuman monster” comments. It’s them who started the hostility, and we won’t just sit there and take it from a bunch of IP crusaders who dehumanize us by equating us to machines.
I put effort into replying to people constructively and try to spend the time to explain my position on the subject.
That is not always the case, and you need to admit that. I have had plenty of interactions with you, and sometimes you do not post constructively. You did it right here: https://lemmygrad.ml/post/8081058/6476868
So basically anyone who disagrees with your viewpoint is reactionary. Got it.
How do you square that view with your comment here? https://lemmygrad.ml/post/8081058/6478824
Nope, just people who make reactionary statements that aren’t rooted in material analysis.
So why does it always seem to be that anyone who is making “reactionary statements that aren’t rooted in material analysis” is basically just someone who doesn’t agree with your conclusions? Are you the only one who is right?
It’s not, that’s just a straw man you’re building here.
Every accusation is a confession
“My own comments are a straw man”
Incredible.
They are consistent in their boosterism, so credit where credit is due.