• AssortedBiscuits [they/them]@hexbear.net · 9 points · 2 days ago

    At this point, anti-AI sentiment is just cope. AI is here to stay. For the people against AI, what is the praxis to be undertaken against it? AI, like any other tool, is lifeless but has living users who use, support, and develop it, so the question of praxis against AI becomes a question of praxis against the workers who use, develop, and propagate AI.

    This is why the Luddites failed. The Luddites had enough people to conduct organized raids, but the fact that those machines were installed, and continued to be installed, by other workers meant that the Luddites represented a minority of workers. If they had had a critical mass of workers on their side, that machinery quite simply would not have been installed in the first place. Who else was going to install it, the bourgeoisie, the gentry, and a bunch of merchants involved in the human trafficking of African slaves?

    Those looms didn’t sprout legs and install themselves. They were installed by other workers, workers who, for whatever reason, disagreed with the Luddites’ praxis or ideology. Viewed in this context, it makes sense why the Luddites failed in the end. Who cares if 500 looms got smashed by the Luddites if 600 looms got installed by non-Luddite workers anyway?

    Corps are already starting to build underground data centers, so you and your plucky guerrilla band of anti-AI insurgents can’t just firebomb a data center that’s built from a repurposed nuclear bunker. Pretty much all of the AI scientists who push the field forward are Chinese scientists safely located within the People’s Republic of China, so liquidating AI scientists as class traitors is out of the question. Then what else is left in terms of anti-AI praxis besides coping about it online and downvoting pro-AI articles on some cheap knockoff of R*ddit?

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 7 points · 2 days ago

      This is precisely what I’ve been trying to explain to people as well. Corporations will keep developing this technology. Nothing will stop this. It’s happening. So the only question that matters is: How will it be developed, and who controls it?

      The irony is that fighting against the use of this tech outside corporations guarantees corps become its sole owners. The only rational path is to back community-driven development, just like any other open-source alternatives to corporate tools. Worker-owned. Community-controlled.

      It’s mind-boggling that so many people fail to understand this.

      • GreatSquare@lemmygrad.ml · 5 points · 2 days ago

        I don’t think that was their vibes.

        From the article:

        the point is not to let ourselves be replaced by AIs, but to use them to improve ourselves and our productivity

        My take:

        The role of the programmer is ultimately to solve problems. There are many ways to skin a cat, and the better solutions come from the better programmers.

        Bosses under capitalism have less understanding of the pros and cons of a particular solution, so they will often use their decision-making power to choose the quick solution rather than the best one.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 4 points · 2 days ago

          I mean, that’s been the case all along; that’s why most software is janky. The problem isn’t the technology itself, it’s capitalist relations and the way technology ends up being applied as a result.

          • GreatSquare@lemmygrad.ml · 4 points · 2 days ago

            There are bugs in every system. AI will just create different types of bugs. It’s the nature of technology.

            The hype money being thrown at AI is making the F35 of software out of this shit though. Big Tech accumulated so much cash and had nothing to throw it at after VR didn’t take off.

            Then we get Skynet.

      • Are_Euclidding_Me [e/em/eir]@hexbear.net · +8/−3 · 2 days ago

        Really? I got “if you don’t understand the code you’re producing, then that’s a real problem, not just for you but for software development as a whole”.

          • Are_Euclidding_Me [e/em/eir]@hexbear.net · +5/−4 · 2 days ago

            Hey, I don’t fucking know, I’m not a coder. Maybe people were blindly copy-pasting StackOverflow code into their projects and just hoping it worked well enough. It seems to me LLMs make it easier to write working-but-dangerous code (this article also seems to say this), and I’m not sure making dangerous code easier to produce is a good idea.

            But whatever, again, I’m not a coder, I just wanted to push back a little on your extremely uncharitable reading of an article you don’t like.

              • Are_Euclidding_Me [e/em/eir]@hexbear.net · +4/−1 · 2 days ago

                Ok? I think you’re having a fight with someone who isn’t me! I’m really just trying to say that your reading of the article about vibe coding is extremely uncharitable. The author didn’t seem, to me, like someone who is against making stuff easier for people, but instead someone with worries about whether LLMs might actually be dangerous.

                You can disagree about their danger (you clearly do), but I’m unqualified to speak to their danger (I’m not a coder), and so that aspect of the matter isn’t something I’m eager to discuss, and isn’t something I’ve tried to discuss. All I’ve said is that I think your dismissal of the author of the article as someone who won’t be satisfied until everyone is coding in assembly is wildly off-base.

                • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 6 points · 2 days ago

                  My view is that the author of the article is basically engaging in gatekeeping, saying that people should use particular tools to do coding, and that LLMs make it too easy for people who shouldn’t be coding to produce code. The reality is that the author is simply unhappy that the bar is being lowered.

                  The argument regarding supposed danger is pure nonsense, because any professional development process involves code reviews, testing, and other practices to ensure code quality. Nobody just checks random code into a project and hopes that it works.
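
                  The kind of test gate described can be sketched minimally; the `slugify` function and its tests here are purely illustrative, not from any real project:

```python
# A generated function is only accepted once it passes the project's tests.
def slugify(title: str) -> str:
    """Pretend this body came from an LLM; the tests below gate it either way."""
    return "-".join(title.lower().split())

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_is_idempotent():
    # Running the function on its own output should change nothing.
    assert slugify(slugify("Hello World")) == "hello-world"
```

In CI, a runner like pytest would execute these and block the merge on any failure, regardless of whether a human or a model wrote the function body.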

  • -6-6-6-@lemmygrad.ml · 22 points · 3 days ago

    The argument that workers should capture AI instead of the ruling class is interesting, but let me ask you.

    Has there been a single technology entirely captured by and for the workers in history, ever? Hasn’t every piece of technology been used primarily by the working class, yes, while the direction it develops in and the value it produces are decided by the ruling class? It always has been, unless we can remove them from control of the mode of production…

    I think China is an interesting example of this, where the workers’ party controls the majority of the economy and wouldn’t let a program like DeepSeek threaten to unemploy half of its workforce (America probably does have a larger segment dedicated to programming, though, Silicon Valley and all). Even then, the average worker there has more safety nets.

    • lacaio da inquisição · 11 points · 3 days ago

      If people can build it, it can serve the people. Think of open-weights LLMs. If we got a couple of 32B models that score as high as GPT-4o and Claude 3.5, why not use them? They can be run on mid-to-high-end hardware. There are developers out there doing a good job. It doesn’t need to be a datacenter-and-big-tech-centered scenario.

      • -6-6-6-@lemmygrad.ml · 5 points · edited · 2 days ago

        There are many technologies that serve the people but whose value is nonetheless captured and extracted mainly by the ruling class of our mode of production. Extracting value from them ourselves, for our own projects, doesn’t mean that we own them.

        My point was also that, despite our efforts, corporations and the ruling class will keep building destructive datacenters and Big Tech.

    • GreatSquare@lemmygrad.ml · 16 points · edited · 3 days ago

      The threat I see is the dominance of AI services provided by an oligarchy of tech companies, like Google’s dominance of search. It’s a service that they own.

      Thankfully China is a source of alternative AI services AND open source models. The bonus is that Chinese companies like Huawei are also an alternative source of AI hardware. This allows you to run your own AI models so you don’t necessarily need their services.

      You’re thinking of class war. There’s only one proven way to win that war: the working class rises up, kills some MFers, and takes over. There’s no point smashing the loom; kill the loom owners and take their looms.

      • -6-6-6-@lemmygrad.ml · +5/−1 · 3 days ago

        I’m well aware; I’m just wary of framing this as a tool we need to “take over” when in reality we’ll just interact with it and use it like we do any technology under this mode of production. Any technology, any tool, can realistically be turned that way. I don’t see how AI is special in this regard, other than its obvious uses in coding.

        The mistake I think we can avoid is letting AI make management or executive decisions, because, as the old IBM quote goes, a computer can never be held accountable.

        • GreatSquare@lemmygrad.ml · 13 points · 3 days ago

          As I said, though, AI is CURRENTLY a service offered by the big tech oligarchy, just like search is a tool dominated by Google. They use search as a means of extracting money from the economy. It’s a form of rent.

          DeepSeek broke the service model. Others are following in their footsteps. It’s just a matter of sticking to open-source models to kill off the profitability of an AI oligarchy.

          • lacaio da inquisição · 7 points · 3 days ago

            Google destroyed the opposition when it was building its search engine, and the AI situation is nothing like the Google case. Many websites ship robots.txt files and Terms of Service that are impossible for ordinary people to comply with these days. It’s very hard to scrape, serve, and stay compliant at the same time, and as a small fish you have to. Maintaining a search engine takes an enormous amount of storage, and serving pages with quality requires fast database tooling.

            This gap might be closed by AI, but it couldn’t be before, even though true alternatives like GigaBlast existed.

            The current LLM scene has a vibrant open-weights ecosystem, centered on Hugging Face but only a bit of code away from being served elsewhere. AI uses datasets and text corpora, which can be shared by universities and institutions around the world, as they currently are.

            LLM/AI is within arm’s reach of the people, no matter how much money Big Tech puts into datacenters. The scary part is what Google always did best: lobbying for monopolization. Aside from that, we’re safe.

            • GreatSquare@lemmygrad.ml · 7 points · edited · 2 days ago

              LLM/AI is within arm’s reach of the people, no matter how much money Big Tech puts into datacenters. The scary part is what Google always did best: lobbying for monopolization. Aside from that, we’re safe.

              I think there’s potential danger from other angles.

              Capitalist bosses are looking to downsize their workforces. AI is marketed by Big Tech as the new “outsourcing”, and bosses are dumb enough to pay for it. This is the software version of a manufacturing robot.

              In the meantime, we burn up a lot of atmosphere on data centre electricity to make this slop.

    • 小莱卡@lemmygrad.ml · +15/−1 · edited · 3 days ago

      Has there been a single technology entirely captured and for the workers in history, ever?

      No. Technology has no ideology, which is why we shouldn’t be opposed to using the tools that the ruling class uses against us. The Chinese communists didn’t win the civil war without using guns or without studying military tactics and logistics.

      • mulcahey@lemm.ee · +5/−2 · 2 days ago

        Technology absolutely has an ideology. All technology produces winners and losers, complicates some previous tasks while making others easier, and overlaps heavily with futurism. If tech didn’t have an ideology, we would say the Luddites and the Amish are merely social clubs, not social movements.

        • 小莱卡@lemmygrad.ml · +7/−1 · 2 days ago

          people do ideology, not tech. tech can be used to serve an ideological purpose, but that doesn’t mean tech has an ideology; it’s the people using it who do. To quote Michael Parenti:

          “It is said that cameras don’t lie, but we must remember that liars use cameras.” - Michael Parenti

          The Luddites and the Amish refusing to use tech is not due to tech discriminating against them; their ideology discriminates against tech, sometimes as absurdly as saying that tech is the devil.

          tech is built on laws of nature. think of gravity: does gravity act differently on an anarchist than it does on a libertarian? absolutely not.

          • -6-6-6-@lemmygrad.ml · 3 points · edited · 2 days ago

            I’m not advocating for primitivism or a reactionary refusal to use it. I’m trying to point out that people aren’t going to embrace or accept this technology when it does more harm than good, and it will continue to do so, just as the existence of Linux or other open-source projects doesn’t impede capitalism or its destructiveness in any way. As long as this tech is put to ideological use, it will always be wielded more effectively and powerfully by the dominant ideology that controls the economy than by any open-source project.

            If there is a clear, distinct use-case for this technology that benefits our cause and doesn’t harm workers, great! The example of it being used in that news channel rainpizza mentioned is reasonable.

      • -6-6-6-@lemmygrad.ml · 7 points · 3 days ago

        Absolutely not, and I’m not saying that we shouldn’t. I suppose looking at my response to Yogthos explains my position better.

        Also, I think pointing out that it doesn’t yet have a clear, distinct use-case in politics or against the capitalists isn’t being anti-AI or reactionary. Being cautious with any new technology is reasonable.

      • -6-6-6-@lemmygrad.ml · +2/−1 · 2 days ago

        Correct. We can use carbines and rifle equivalents while the enemy builds massive data centers in third-world countries and marginalized communities, as the technology is used on their side to ramp up global exploitation of the third world, squeezing even more productivity out of its minor white-collar industries while wages keep falling as productivity skyrockets globally.

        I’m glad that while this happens we can have an open-source equivalent. Do you see why people are so glum and dismal about it?

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 13 points · 3 days ago

      I mean, technology will be used to oppress workers under capitalism; that is why Marxists fundamentally reject capitalist relations. However, given that people in the west currently do live under capitalism, the question has to be asked whether this technology should be developed in the open and driven by community, or owned solely by corporations. This is literally the question of workers owning their own tools.

      • -6-6-6-@lemmygrad.ml · 5 points · edited · 3 days ago

        It already is, as far as I’m aware. The issue I’m having is with framing it as a technology we as Marxists can co-opt. If it has its uses in coding or for projects within Marxism, sure, but as far as I’m concerned I don’t really see a valid use in integrating it, as it exists, within parties or politics, other than data storage and organization, and I imagine there are better options for that. Maybe in the future, though.

        As long as capitalism exists, I don’t think we “own” any tools without a proper workers’ party to enforce regulations and protect workers in the West. That is the reason I brought up China. I have no objections to open-source alternatives, but I don’t think our developing open-source tools is going to stop the bulk of this tool’s use from harming workers. Hence my issue with the idea of “owning it”. We certainly can use it, though.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · +8/−2 · 3 days ago

          The only way to know whether a particular technology has an application is by keeping up with it and by using it. I see plenty of people confidently regurgitate misconceptions about this tech because they either haven’t actually tried using it or haven’t kept up with its latest iterations.

          Meanwhile, we absolutely can own tools under capitalism. This has nothing to do with a workers’ party enforcing anything; it’s about people doing the work to create tools by the workers and for the workers. Lemmy itself is an example of this. The same type of tool can be in the hands of corporations and of the workers. There’s no contradiction here.

          • -6-6-6-@lemmygrad.ml · 4 points · edited · 2 days ago

            I’m well aware of the current use-cases. I’m well aware of how far it has come, and by the time I’m done typing this it will have advanced further. This is not a case of “ignorance” or “regurgitation”.

            Personal ownership is different from a class owning it. There are many tools a worker can own, but the working class owning one? It’ll be like any other tool: the rich and elite have much more powerful and effective versions that they can apply, through their ownership of the mode of production, in situations we couldn’t, i.e. unemploying people and harming the working class. Acknowledging that isn’t being a Luddite.

            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · +3/−1 · 2 days ago

              Do you seriously not understand that the scenario where the rich control this tool exclusively is worse than one where there’s a community-owned version of the tool? Do you not understand the problems with closed operating systems like Windows that open alternatives like Linux solve?

              • -6-6-6-@lemmygrad.ml · 3 points · 2 days ago

                Do you seriously not understand that, despite community ownership or use of this tool, its main purpose will be for the ruling class to extract more value and productivity from labor, and that it will do more harm than good?

                Does the existence and use of Linux in our community stop the harm that Windows does? Isn’t it sensible to recognize and point out the harm Windows does, just like the harm done by the companies that will dominate the field of AI in the future?

                • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 4 points · 2 days ago

                  The main purpose of how the ruling class uses this tool will be the same regardless of whether there is a community version of the tool or not. Period.

                  You’re conflating two separate things that have no actual relationship to each other. The question I ask you once again: is it better that Linux exists and provides an alternative for people, or not?

            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 6 points · 3 days ago

              Thanks for the kind words, and that’s a really good application of this tech, actually: one that makes it possible to produce quality content on a budget.

              • rainpizza@lemmygrad.ml · 5 points · 3 days ago

                Have you read about Firebase Studio?

                That’s another interesting application of AI. People from all walks of life (hairdressers, junior devs, restaurant owners) could use it to create a simple app and put it online. I’d like to hear your thoughts on that one.

                • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 8 points · 3 days ago

                  I’ve heard of it, but haven’t had a chance to actually try it out. The concept does seem reasonable on the surface, though. I think an interactive feedback loop is really critical for this sort of stuff: the user can ask the agent to build a feature, then try it out, see that it does what they need, and iterate.

                  A lot of the apps people need are very simple in nature: there tends to be some input form to collect data, some visualization to display it, and some calls to endpoints to send out emails or whatever. It doesn’t need to be beautiful or hyper-efficient; it just needs to work well enough to solve the particular problem a person has. Currently, unless you’re a dev, you’d have to pay tens of thousands of dollars for somebody to build even a simple app for you. This kind of stuff has the potential to lower that barrier dramatically.
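
                  As a toy sketch of that shape of app, here’s a stdlib-only version of the pattern (the form fields and routes are made up for illustration):

```python
# Minimal "collect via form, read back as a list" app: one POST endpoint
# stores submissions in memory, one GET endpoint returns them as JSON.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

entries = []  # in-memory store; a real app would persist this somewhere

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "visualization" half: return everything collected so far.
        body = json.dumps(entries).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # The "input form" half: accept a url-encoded form submission.
        length = int(self.headers["Content-Length"])
        form = parse_qs(self.rfile.read(length).decode())
        entries.append({k: v[0] for k, v in form.items()})
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Submit one form entry, then read the listing back.
urllib.request.urlopen(f"http://127.0.0.1:{port}/", data=b"name=ada&dish=soup")
listing = json.loads(urllib.request.urlopen(f"http://127.0.0.1:{port}/").read())
server.shutdown()
```

That’s roughly the entire surface area of many small-business apps; everything past this is persistence and polish.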

  • mkkhan@lemmygrad.ml · 31 points · 4 days ago

    LLMs really might displace many software developers. That’s not a high horse we get to ride. Our jobs are just as much in tech’s line of fire as everybody else’s have been for the last three decades. We’re not East Coast dockworkers; we won’t stop progress on our own.

    why did I do computer science god I fucking hate every person in this field it’s amazing how much of an idiot everyone is.

    • footfaults@lemmygrad.ml · 19 points · edited · 4 days ago

      You can tell the ones that got A’s in their comp sci classes and C’s in their core/non-major classes by how bloodthirsty they are.

      Me, the enlightened centrist, just got C’s in everything

  • SlayGuevara@lemmygrad.ml · 19 points · 4 days ago

    My party is trying its best to understand and implement AI, and it’s causing some friction within the party. The official stance now adopted is one of ‘we need to understand it and use it to our advantage’ and ‘we need to prevent AI from being solely a thing of the ruling class’, and to me that makes sense. I wasn’t around at the time, but I imagine it was the same with the coming of the internet some decades ago, and we can see how that ended. I hope socialist orgs don’t miss the boat this time.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 18 points · 3 days ago

      I think that’s precisely the correct stance. As materialists we have to acknowledge that this technology exists, and that it’s not going away. The focus has to be on who will control this tech and how it will be developed going forward.

    • footfaults@lemmygrad.ml · 4 points · 3 days ago

      Is there any reading I can do around those discussions? Any debate? I would like to read more about those deliberations if possible.

    • mulcahey@lemm.ee · +3/−3 · 2 days ago

      The difference is that the internet wasn’t built on theft with the explicit goal of disempowering workers.

      • SlayGuevara@lemmygrad.ml · 4 points · 2 days ago

        But neither is AI. It is not sentient; it is pushed in the direction chosen by those controlling it, which currently is the tech oligarchy. Realising that and trying to find a way to navigate it will put you at an advantage.

  • darkernations@lemmygrad.ml · 19 points · edited · 4 days ago

    Thanks for sharing these AI posts.

    Paid employment could mean retraining under socialism. Remember, communism is moneyless, stateless, and classless. The aim of society is the socialisation of all labour to free up time for more leisure, including art. People will still want art from humans without AI, but there’s a difference between that and preserving regression, through Luddism, to maintain less productive paid labour.

    Equating anti-capitalism with anti-corporatism, the appeal to Luddism, the defense of proprietorship, or the appeal to metaphysical creativity is not going to cut it, and that is a low bar for Marxists to clear.

    https://lemmygrad.ml/post/7917393/6409037

  • footfaults@lemmygrad.ml · +33/−8 · 4 days ago

    Every six months, the tone of these “why won’t you use my hallucinating slop generator” posts gets more and more shrill.

    • freagle@lemmygrad.ml · 22 points · 4 days ago

      I think his point that you basically give a slop generator a fitness function, in the form of tests, compilation scripts, and static-analysis thresholds, was pretty good. I never really thought of forcing the slop generator to generate slop randomly until it passes the tests. That’s a pretty interesting idea. Wasteful for sure, but I can see it saving someone a lot of time.
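
      A minimal sketch of that loop, with a random chooser standing in for the model (everything here is illustrative):

```python
import random

def passes_fitness(candidate):
    """The fitness function: tests the candidate must satisfy to be accepted."""
    try:
        return candidate(2, 3) == 5 and candidate(-1, 1) == 0
    except Exception:
        return False

def generate_candidate():
    """Stand-in for an LLM call: picks a random implementation of `add`."""
    return random.choice([
        lambda a, b: a - b,  # wrong
        lambda a, b: a * b,  # wrong
        lambda a, b: a + b,  # correct
    ])

def generate_until_green(max_attempts=1000):
    """Regenerate until the tests pass -- wasteful, but hands-off."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate_candidate()
        if passes_fitness(candidate):
            return candidate, attempt
    raise RuntimeError("no candidate passed the tests")

add, attempts = generate_until_green()
```

In practice the compilation and static-analysis steps would slot in next to `passes_fitness`, each one another hurdle the generated code has to clear before a human ever looks at it.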

      • Imnecomrade@lemmygrad.ml · 5 points · edited · 3 days ago

        Wasteful for sure, but I can see it saving someone a lot of time.

        Many degrowth proposals call for some aggregate reduction in energy use or material throughput. The issue with these proposals is that they conflict with the need to give the entire planet public housing, public transit, reliable electricity, modern water and sewage services, etc., which cannot be achieved by “shrinking material throughput”. Modeling from Princeton University (which may be outdated) suggests that zeroing emissions by 2050 will require 80 to 120 million heat pumps, up to a fivefold increase in electricity transmission capacity, 250 large nuclear reactors (or 3,800 small ones), and a carbon capture and sequestration industry built from scratch. Degrowth policies, while not intending to produce ecological austerity, effectively do so through their fiscal commitment to budgetary constraints, which inevitably require government cuts.

        The point of the paragraph above is to draw an analogy to the controversy over “AI wastefulness”. Relying on manual labor for software development could actually lead to more waste in the long term and a failure to resolve the climate crisis in time. Even though AI requires a lot of power, creating solutions faster (especially in green industries, plus the emissions avoided by humans, such as commuting to work) could have a better and faster impact on reducing emissions.

        • freagle@lemmygrad.ml · 6 points · 3 days ago

          While an interesting point, it relies on a specific assumption: that LLMs are useful in solving the problems you’re talking about. Unfortunately, as we’ve seen from nearly all other advances in human productivity, we just take the surplus labor and apply it to completely wasteful projects:

          • DRM
          • IP litigation
          • IP violation detection
          • Marketing and sales
          • Public relations (7x as many people in PR as in journalism)
          • Imperial militarism and the resultant defense against such
          • Imperial propaganda and the defense against such
          • FOMO
          • More entertaining entertainment
          • Actor salaries
          • Stadium concerts
          • Holograms of celebrities
          • Funkopops and other purely wasteful plastic treats
          • Corporate rebranding
          • Branded corporate swag
          • Drop shipping
          • The war on drugs
          • Student debt collection
          • Medical debt collection
          • Financial abstractions
          • Vanity
          • Fast fashion
          • Fried ranch dressing and other food where unhealthy is the point
          • Analyzing, exploiting, and litigating addiction
          • For profit prisons and the parole system

          I could go on. This is what we choose to spend our surplus labor on, so AI time savings just aren’t going to save us. AI would have to fundamentally change the way we solve certain problems, not improve the productivity of the billions of people who are already wasting most of their careers working on things that make the problems you’re talking about worse, not better.

          Yes, neural-network techniques are useful in scientific applications that fundamentally change how we solve problems. But helping a mid-level programmer get more productive at building AdTech, MarTech, video games, media distribution, subscription services, ecommerce, DRM, and every other waste of human productivity relative to the problems you’re raising is done by LLMs, which are not useful for protein folding, physics simulations, materials analysis, and all the other critical applications of AI that we actually need.

          • Imnecomrade@lemmygrad.ml · 4 points · edited · 3 days ago

            I was just implying that after the revolution, when we live in a socialist society, we would use AI for productive ends and not for these wasteful projects. The items on your list are mostly projects in a capitalist society that serve the ruling class’s interests.

            The only real potential benefit of AI in a capitalist society, besides potentially using it to make tools, services, and content for workers and communist parties to fight back against the system, is the proletarianization and hopefully radicalization (toward socialism) of labor aristocrats as the deepening contradictions of capitalism lead to more unrest amongst the working class.

            • freagle@lemmygrad.ml · 2 days ago

              Yeah, so, I am not saying AI is too wasteful to exist. I am saying it’s too wasteful to be used for worker productivity in the current global capitalist system. Workers in China, Vietnam, India, Pakistan, Bulgaria, Nigeria, and Venezuela are going to use LLMs first and foremost to remain competitive in the race to the bottom for their wages in the global capitalist system, because that’s where we are at and will be for as long as it takes.

              • -6-6-6-@lemmygrad.ml · 2 days ago

                This is exactly what I’m talking about. Lots of privileged first worlders have no idea what this means for the globe. It’s not going to liberate the third world; instead it will place a much heavier burden of productivity on them, as wealth extraction from the meager white-collar stratum is kicked into overdrive and the data centers behind these systems are built in the periphery, or even in poor places at home.

                I mean, look at where Colossus is being built. While we as workers might be able to use AI reasonably and responsibly, it has no more meaning and impact than choosing paper over plastic straws when compared to the ruling class running giant data-centers and energy-intensive slop machines against our “open source!” equivalent.

                The effect of weakening the labor aristocracy is secondary.

                • Imnecomrade@lemmygrad.ml · 2 days ago

                  First, I am only meaning to provide my perspective, even if it turns out to be not perfect or 100% accurate to reality, and I am willing to be corrected or to learn from others. I’m not just some first worlder set in their ways.

                  I think you misunderstood some of the points I made, just as I interpreted freagle’s point about AI wastefulness in a broader sense than was intended. I don’t believe AI as it exists now in the global capitalist system will liberate the third world. My guess is that the ruling class’s use of AI will have a greater impact against the working class’s interests than anything the working class can do to counter it through its own use. We have no control over AI’s existence; it’s a reality we have to live with. The only way AI would improve the working class’s lives across the entire planet would be for a socialist system to become dominant across the world and for the global capitalist hegemony to be overthrown.

                  I’m not denying that the ruling class’s use of AI is a much greater detriment to us than any gains we get from the weakening of the labor aristocracy. However, I believe that as those people start losing their jobs, communist parties will need to start reaching out to them, educating them, and bringing them to our cause so we can develop the numbers and power to overthrow the ruling class, which I believe is of the utmost importance. I honestly don’t believe most labor aristocrats, especially in the West, will be radicalized until they become proletarianized and their material conditions greatly worsen.

      • footfaults@lemmygrad.ml · 4 days ago

        you basically give a slop generator a fitness function in the form of tests, compilation scripts, and static analysis thresholds, was pretty good.

        forcing the slop generator to generate slop randomly until it passes tests.

        I have to chuckle at this because it’s practically the same way that you have to manage junior engineers, sometimes.

        It really shows how barely “good enough” is killing off all the junior engineers, and once I die, who’s going to replace me?
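
        The quoted workflow amounts to plain generate-and-test. A minimal sketch of the loop, with hypothetical `generate_candidate` and `run_tests` stand-ins for the model call and the test-suite gate (a real agent would feed the failing output back into the prompt rather than guess blindly):

```python
import random

def generate_candidate(rng):
    # Stand-in for the LLM call. Here it just guesses a number; a real
    # agent would prompt a model with the task plus prior failures.
    return rng.randint(0, 100)

def run_tests(candidate):
    # Stand-in fitness function: the tests, compilation scripts, and
    # static analysis thresholds that decide whether the slop passes.
    return candidate == 42

def agent_loop(max_iters=10_000, seed=0):
    # Keep generating until the fitness function passes or we give up.
    rng = random.Random(seed)
    for attempt in range(1, max_iters + 1):
        candidate = generate_candidate(rng)
        if run_tests(candidate):
            return candidate, attempt
    raise RuntimeError("no candidate passed the tests")
```

        Nothing in the loop itself is clever; all of the quality control lives in the fitness function, which is exactly why it resembles managing a junior engineer.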

        • freagle@lemmygrad.ml · 4 days ago

          This is absolutely the crisis of aging hitting the software engineering labor pool hard. There are other industries where 60% or more of the trained people are retiring in 5 years. Software is now on the fast track to get there as well.

          • footfaults@lemmygrad.ml · 4 days ago

            This is a great point. I think what is most jarring to me is the speed at which this is happening. I may be wrong but it felt like those other industries, it took at least a couple decades for it to happen, and it feels like tech is doing it in a matter of months?

            • freagle@lemmygrad.ml · 4 days ago

              Nah. It’s two different phenomena with the same end point. Those other industries lost young entrants because of the rise of the college pursuit, and yes, that took decades. But for software we’re still at least 20 years out before we have a retirement crisis.

              Although, we already had one back in 2000 when not enough working-age people knew COBOL.

              Anyway, it’s a historical process. It’s just one we’ve seen over and over without ever learning the lesson.

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 days ago

        I’d much rather the slop generator wastes its time doing these repetitive and boring tasks so I can spend my time doing something more interesting.

        • footfaults@lemmygrad.ml · 4 days ago

          wastes its time doing these repetitive and boring tasks

          To me, this is sort of a code smell. I’m not going to say that every single bit of work that I have done is unique and engaging, but I think that if a lot of code being written is boring and repetitive, it’s probably not engineered correctly.

          It’s easy for me to be flippant and say this and you’d be totally right to point that out. I just felt like getting it out of my head.

          • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 days ago

            If most of the code you write is meaningful code that’s novel and interesting, then you are incredibly privileged. The majority of code I’ve seen in the industry is mostly boring, and a lot of it is just boilerplate.

            • 小莱卡@lemmygrad.ml · 3 days ago

              Absolutely, coders should be spending time developing new and faster algorithms, things that AI cannot do, not figuring out the boilerplate of a dropdown menu on whatever framework. Heck, we don’t even need frameworks with AI.

            • footfaults@lemmygrad.ml · 4 days ago

              meaningful code that’s novel and interesting then you are incredibly privileged

              This is possible but I doubt it. It’s your usual CRUD web application with some business logic and some async workers.

                • footfaults@lemmygrad.ml · 4 days ago

                  Not really. It’s Django and Django Rest Framework, so there really isn’t a lot of boilerplate. That’s all hidden behind the framework.

        • freagle@lemmygrad.ml · 4 days ago

          It’s more that the iterative slop generation is pretty energy intensive when you scale it up like this. Tons of tokens in memory, multiple iterations of producing slop, running tests to tell it’s slop and starting it over again automatically. I’d love the time savings as well. I’m just saying we should keep in mind the waste aspect as it’s bound to catch us up.

          • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 days ago

            I don’t really find the waste argument terribly convincing myself. The amount of waste depends on how many tries it needs to get the answer, and how much previous work it can reuse. The quality of output has already improved dramatically, and there’s no reason to expect that this will not continue to get better over time. Meanwhile, there’s every reason to expect that iterative loop will continue to be optimized as well.

            In a broader sense, we waste power all the time on all kinds of things. Think of all the ads, crypto, or consumerism in general. There’s nothing uniquely wasteful about LLMs, and at least they can be put towards producing something of value, unlike many things our society wastes energy on.

            • freagle@lemmygrad.ml · 4 days ago

              I do think there’s something uniquely wasteful about floating point arithmetic, which is why we need specialized processors for it, and there is something uniquely wasteful about crypto and LLMs, both in terms of electricity and in terms of waste heat. I agree that generative AI for solving problems is definitely better than crypto, and it’s better than using generative AI to produce creative works, do advertising and marketing, etc.

              But it’s not without its externalities, and putting it in an unmonitored iterative loop at scale requires us to at least consider the costs.

              • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 days ago

                Eventually we will most likely see specialized chips for this, and there are already analog chips being produced for neural networks, which are a far better fit. There are selection pressures to improve this tech even under capitalism, since companies running models end up paying for the power usage. And then we have open source models, with people optimizing them to run locally. Personally, I find it mind-blowing that we can already run local models on a laptop that perform roughly as well as models that required a whole data centre just a year ago. It’s hard to say when all the low-hanging fruit will be picked and improvements will start to plateau, but so far it’s been really impressive to watch.

                • freagle@lemmygrad.ml · 3 days ago

                  Yeah, there is something to be said for changing the hardware. Producing the models is still expensive even if running the models is becoming more efficient. But DeepSeek shows us even production is becoming more efficient.

                  What’s impressive to me is how useful the concept of the stochastic parrot is turning out to be. It doesn’t seem to make a lot of sense, at first or even second glance, that choosing the most probable next word in a sentence, based on the statistical distribution of word usages across a training set, would actually be all that useful.

                  I’ve used it for coding before, and it’s obvious that these things are most useful at reproducing code tutorials or code examples, and not at all for reasoning. But there are a lot of code examples and tutorials out there that I haven’t read yet and never will. The ability of a stochastic parrot to reproduce that code using human language as its control input is impressive.
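
                  A toy version of the idea, assuming a simple bigram model (real LLMs condition on the whole context window, not just the previous word, but the principle of picking the statistically likely continuation is the same):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # For each word, count the distribution of the word that follows it
    # across the training text.
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def most_probable_next(follows, word):
    # The "stochastic parrot" step: emit the most common follower.
    return follows[word].most_common(1)[0][0]

# Tiny stand-in corpus; "the" is followed by "cat" twice and "mat" once,
# so the most probable next word after "the" is "cat".
model = train_bigrams("the cat sat on the mat and then the cat ran")
```

                  Swap the toy corpus for a web-scale training set, and the per-word table for a neural network over full contexts, and you get, conceptually, the behavior described above.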

  • 小莱卡@lemmygrad.ml · 3 days ago

    Do you like fine Japanese woodworking? All hand tools and sashimono joinery?

    this should sell it; why would anyone want something more expensive just because it was hand made instead of mass produced?

  • amemorablename@lemmygrad.ml · 4 days ago

    I find the tone kind of slapdash. I feel like the author could have condensed it into a short post about using AI agents in certain contexts, as that seems to be the crux of their argument for usefulness in programming.

    I do think they have a valid point about some in tech acting squeamish about automation when their whole thing has been automation from day one. Though I also think the idea of AI doing “junior developer” level work is going to backfire massively on the industry. Seniors start out as juniors, and AI probably won’t progress fast enough to replace seniors for decades (I could see it replacing some seniors, but not with the level of trust and competency that would allow it to replace all of them). But AI could replace a lot of juniors and effectively lock the field into a trajectory of aging itself out of existence, because too few humans would get the experience needed to take over the senior roles.

    Edit: I mean, it’s already the case that dated systems sometimes use languages nobody is learning anymore. That kind of thing could get much worse.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 3 days ago

      The developer pipeline is the big question here. My experience using these tools is that you absolutely have to know what you’re doing in order to evaluate the code LLMs produce. Right now we have a big pool of senior developers who can wrangle these tools productively and produce good code using them because they understand what the proper solution should look like. However, if new developers start out using these tools directly, without building prior experience by hand, then it might be a lot harder for them to build such intuition for problem solving.

  • Carl [he/him]@hexbear.net · 4 days ago

    See, for coding AI makes a lot of sense, since a lot of it is very tedious. You still need to understand the output to be able to debug it and make novel programs though, because the limitation is that the LLM can only recreate code it’s seen before in fairly generic configurations.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 days ago

      Right, I agree with the author of the article that LLMs are great at tackling boring code like boilerplate, and freeing you up to actually do stuff that’s interesting.

  • m532@lemmygrad.ml · 4 days ago

    Turns out my assumptions about how LLM-assisted programming works were completely wrong or outdated. This new way sounds super efficient.

  • NotMushroomForDebate@lemmygrad.ml · 4 days ago

    Yogthos is really relentless with all these AI posts. You’re not fighting for the poor defenseless AI technologies against the tyrannical masses with these posts.

    People are clearly pissed off at the current state of these technologies and the products of it. I would have expected that here out of all places that the current material reality would matter more than the idealistic view of what could be done with them.

    I don’t mean for this comment to sound antagonistic, I just feel that there’s more worthwhile things to focus on than pushing back against people annoyed by AI-generated memes and comics and calling them luddites.

    • Water Bowl Slime@lemmygrad.ml · 4 days ago

      This post is about what could be done with them though. It’s not about image generators, it’s about coding agents. LLMs are really good at programming certain things and it’s gotten to the point where avoiding them puts you at a needless disadvantage. It’s not like artisanally typed code is any better than what the bot generates.

      • NotMushroomForDebate@lemmygrad.ml · 4 days ago

        That is all well and good, but this is not a conversation about “using LLMs in this specific scenario is advantageous”. I’m talking about the wider conversation mostly happening on this instance.

        It’s quite frustrating when people express certain material concerns about the current state of the technology and its implications and are met with bad-faith arguments, hand-waving, and idealism. Especially when it’s not an important conversation to be happening here anyway. It’s mostly surfaced because people here react negatively to the AI-generated memes that Yogthos posts and that of course makes them irrational primitivists.

        It’s needless antagonism that is not productive whatsoever over a topic that is largely out of the hands of workers anyway.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 3 days ago

          Except that it’s absolutely not out of the hands of the workers. The whole question here is whether this tech is going to be developed by corps who will decide who can use it and what content can be generated, or whether the development will be done in the open, accessible to everyone, and community driven. Rejection of these tools ensures that the former will be the case.

          • NotMushroomForDebate@lemmygrad.ml · 3 days ago

            Firstly, “whether the development will be done in the open, accessible to everyone, and community driven” is largely not in the question when it comes to the training data itself, which is the most resource intensive aspect of this to begin with.

            Secondly, “Rejection of these tools ensures that the former will be the case.”, you’re circling back again for the third time to a point that I haven’t made, and have explicitly clarified that it’s not the point that I’m discussing.

            This follows the pattern of the comment threads of the other posts I’ve read on this topic, which is why I was hesitant to comment on this one to begin with. There is no point in having a conversation if you reply without showing the basic decency to read what I wrote.

            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 3 days ago

              Firstly, “whether the development will be done in the open, accessible to everyone, and community driven” is largely not in the question when it comes to the training data itself, which is the most resource intensive aspect of this to begin with.

              I don’t see why. For example, tools like this already exist https://github.com/bigscience-workshop/petals

              This follows the pattern of the comment threads of the other posts I’ve read on this topic, which is why I was hesitant to comment on this one to begin with. There is no point in having a conversation if you reply without showing the basic decency to read what I wrote.

              Frankly, I don’t understand what the actual point is that you’re trying to make, if it’s not the one I’m addressing. As far as I’m concerned, the basic facts of the situation are that this technology currently exists, and it will continue to be developed. The only question that matters is how it will be developed and who will control it. If you think that’s not correct then feel free to clearly articulate your counterpoint.

              • NotMushroomForDebate@lemmygrad.ml · 3 days ago

                I am not talking about the technology itself or what is to be done regarding it, I’m simply highlighting that the manner in which the conversation around it in this instance is conducted is more often than not antagonistic and unproductive.

                Posts that are meant to educate should not be hostile, condescending, or antagonistic. People’s concerns, when they are engaging in good faith, should not be waved away and ignored.

                It’s important to understand why we communicate something. What is the goal that we’re trying to achieve? It’s important to be honest with ourselves about this before we choose to speak. Is the goal of a post I’m making to seek opinions? To gauge interest in something? To educate the community on a specific topic I have certain knowledge about? To critique a particular point of view? It could also be to make a joke and share a laugh, or to express frustrations and personal grievances.

                We should answer this question before we communicate. We can read the post or comment again and ask ourselves: does this post or comment fit the goal I had in mind? If I’m making posts that consistently result in unproductive conversations in a community that is otherwise quite pleasant to interact with, then I may want to reassess the way I’m approaching the topic.

                I chose to comment on this post not as a knee-jerk reaction to the needlessly provocative title, but because I saw it as part of a pattern in the discussions surrounding this topic on this instance. The point I am trying to make is that it does not benefit anyone here to keep spreading hostilities. If you believe that this is an important topic, one that people here should take seriously and engage with as relevant to their lives and their movements, you should not reduce all their concerns, opinions, and personal preferences to ignorance or primitivism, or paint them as Luddites.

                As I said before, I would not afford the same patience to people expressing bigoted views or harmful historic revisionism, this is not that.

                Speaking personally as an example, I do not disagree with your main premise. I do not believe that these technologies should be ignored, rejected, or shunned as a whole. I am not against automation, nor am I against the use of similar technologies in creative pursuits such as art or music in principle. I do, however, dislike these AI-generated memes and comics. I dislike the use cases people employ LLMs in 90% of the time. I dislike that every single field is urged to incorporate some of these technologies before a problem has even been identified, in an attempt to force-sell a solution. I dislike the way they’re currently used by people who are my juniors, who rely on information from corporate LLMs without a single inkling of how they work. We may agree or disagree, discuss certain points, and I could change my perspective on something; that is all great. But you have to understand that when I or other users are frustrated or annoyed by something like AI-generated memes or comics in these communities, it does not automatically mean that we’re Luddites.

                To recap again the point I’m trying to make: please stop using antagonistic, passive-aggressive, and condescending methods to try and get your points across regarding this topic. It does not help.

                • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 3 days ago

                  I made a post about something I thought was interesting and insightful. A bunch of people came in to make snide comments and personal attacks, but it turns out it’s my fault that the tone of the discussion is the way it is. I have absolutely no problem having a civil discussion about the topic with people who themselves act in a civil way and genuinely want to understand the subject. I simply do not have the patience to deal with people who personally attack me and do what amounts to trolling.

      • NotMushroomForDebate@lemmygrad.ml · 4 days ago

        That’s completely missing the point I am making. I am not advocating for ‘irrational hate’ of a technology. I am saying that people are not receptive to the current implementations of it, and that trying to combat this through pushing against this sentiment is ultimately a waste of time.

        Assess the situation on what is, not on the premise of a utopian ideal.

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 days ago

          Some people aren’t receptive to current implementations, but that doesn’t mean we can’t discuss this technology here. A Marxist forum should be a place where we can talk about new technology in a rational way, and educate people who have reactionary views on it.

          • NotMushroomForDebate@lemmygrad.ml · 4 days ago

            Of course the topic could and should be discussed, if it’s done in an honest and rational manner. There is no disagreement here. This is, however, not what I have been seeing over the past months. Even this very post: I know you haven’t written the article’s headline yourself, but you can see that it’s clearly antagonistic for no good reason.

            You can imagine this with any other topic. If there are people who you are sympathetic to and are part of your cause but might have an inaccurate or ‘reactionary’ view to something, you would not meet them with antagonism. Especially since we’re not talking about a case of bigotry here or other views or actions that harm others.

            We should also not infer from people disliking something that they have ‘reactionary’ views. One could dislike spice grinders and prefer a mortar and pestle, that doesn’t make them a primitivist. I would also understand if they get frustrated if they’re constantly bombarded with “here’s why spice grinders are better and you’re an idiot if you’re not using one” type posts.

            • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 4 days ago

              I think the article is mostly sensible, and it addresses common tropes that get thrown around regarding using LLMs for coding. What the author describes largely matches my own experience. Surely we can do better than judging articles solely based on the headline. Meanwhile, people can just skip the article if it irks them. I don’t know why every single time there’s a post regarding AI there needs to be a struggle session about it.

              • NotMushroomForDebate@lemmygrad.ml · 4 days ago

                The point is not judging the content of the article on the headline, it’s the needlessly antagonistic phrasing of it. I am expressing that it is very understandable that people scrolling would be bothered by seeing such posts.

                As for the struggle sessions, the topic is quite controversial to begin with, and to be honest, as a lurker of these posts for quite some time now, I never liked the manner in which you replied to people disagreeing in the comments. It’s no surprise that these posts often turn hostile and unproductive. It’s also important to realise that no post happens on an island; there’s historical context in the community and instance. People irked by one post who choose to comment have probably seen 3-4 other posts in the previous weeks that also annoyed them, and might reply with a tone of frustration as a result.

                • m532@lemmygrad.ml · 4 days ago

                  Sometimes, on a random meme, there’s a bunch of hostile “you used an LLM to make the image for this meme, therefore you are an inhuman monster” comments. It’s them who started the hostility, and we won’t just sit there and take it from a bunch of IP crusaders who dehumanize us by equating us to machines.

              • footfaults@lemmygrad.ml · 3 days ago

                So why does it always seem to be that anyone who is making “reactionary statements that aren’t rooted in material analysis” is basically just someone who doesn’t agree with your conclusions? Are you the only one who is right?

    • footfaults@lemmygrad.ml · 4 days ago

      Yogthos is really relentless with all these AI posts.

      They are consistent in their boosterism, so credit where credit is due.