• carpoftruth [any, any]@hexbear.net · 1 day ago

    But the global innovation order is shifting. According to the latest Edelman Trust Barometer, change is under way not only in technical capability but also in public sentiment. In China, 72 per cent of people trust artificial intelligence (AI), compared to 32 per cent in the United States and 28 per cent in the United Kingdom. Similar patterns hold across India, Indonesia, Malaysia and Thailand as developing Asian markets consistently outperform Western and developed peers on public trust in innovation.

    that’s a pretty wild difference

    • UmbraVivi [he/him, she/her]@hexbear.net · 1 day ago

      Generative AI is not an inherently evil technology. If I had any trust in Western institutions whatsoever I wouldn’t have as much of an issue with it.

      • 100-com It could be a tool with amazing potential. The latest Steve Yegge blog post is one of the most depressing things I’ve read about software engineering in years…

        This turned out to be the biggest surprise of the new world: agentic coding is addictive. You will hear it more and more often, because it bewitches people once they’ve got the hang of it. Agentic coding is like a slot machine, where each of your requests is a pull of the lever with potentially infinite upside or downside. On any given query, you don’t know if it’s going to one-shot everything you wished for, or delete your repo and send weenie pics to your grandma.

        Every time something good happens, which is often, you get rewarded with dopamine. And when something bad happens, also often, you get adrenaline. The intermittent reinforcement of those dopamine and adrenaline hits creates the core addictive pull. It can become near-impossible to tear yourself away. We had to drag several vibe coders off stage at a conference I was at recently. As we escorted them away from the podium, they would still be wailing, “It’ll work on the next try!”

        How do you know if you’re doing AI right at your company? We’ve noticed that the companies that are winning with AI – the ones happy with their progress – tend to be the ones that encourage token burn. Token spend per developer per unit time is the new health metric that best represents how well your company is doing with AI: an idea proposed by Dr. Matt Beane and playing out in the field as we speak. I see companies saying, “If our devs are spending $100-$300 a day, that’s much less than paying for another human engineer. So if AI makes our devs twice as productive, or in some cases only 50% more, we’re winning.”

        Amp is also more fun. It takes a different design approach, being intentionally team-centric. Amp gamifies your agentic development by making it public, with leaderboards and friendly competition, as well as liberal thread sharing. It all manages to be low-pressure.
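The "token burn" argument in the quoted excerpt is simple breakeven arithmetic. A quick sketch of it below; the $200/day token spend and $1,000/day engineer cost are illustrative assumptions, not figures from the post:

```python
# Back-of-envelope version of the quoted "token burn vs. hiring" argument.
# All dollar figures are illustrative assumptions, not data from the post.

def breakeven_productivity_gain(token_spend_per_day, engineer_cost_per_day):
    """Fractional productivity lift at which daily token spend pays for itself."""
    return token_spend_per_day / engineer_cost_per_day

# Assume $200/day in tokens and a $1,000/day fully loaded engineer cost:
gain = breakeven_productivity_gain(200, 1000)
print(f"tokens break even past a {gain:.0%} productivity lift")  # 20%
```

By this framing, even the post's weaker "only 50% more productive" case clears breakeven comfortably, which is presumably why companies are happy to encourage the spend.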

        • BodyBySisyphus [he/him]@hexbear.net · 23 hours ago (edited)

          I read both this one and RotJD and boy howdy was that a wild ride. It seems like he veers off course with the occasional wild claim like:

          No matter whose vision you subscribe to, something big will happen with AI in the next two years. Compute power for AI is doubling around every 100 days, which makes it likely that we hit AGI by 2028 if not sooner. But I take an optimistic view: Our problems will always be harder than anyone can fully understand, even superhuman intelligences. It will be nice to have them around to help us solve those giant problems.

          But maybe I’m still too skeptical. I’m thinking about the news that Louisiana is building a bunch of new natural gas plants to serve Meta’s new data center. Regardless of how well “AI” functions at solving actual problems (instead of just coming up with ever more elaborate ways to serve ads) or when it plateaus, it seems like we’re lashed to the mast. this-is-fine
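For scale, the "doubling around every 100 days" rate quoted above compounds absurdly fast; a quick back-of-envelope check (the function name is ours, and the arithmetic says nothing about whether more compute gets you to AGI):

```python
# Back-of-envelope check of the quoted growth rate: "compute power for AI
# is doubling around every 100 days". Pure arithmetic on the stated rate;
# it says nothing about whether more compute implies intelligence.

def growth_factor(days, doubling_period_days=100):
    """Multiplicative growth after `days` at one doubling per period."""
    return 2 ** (days / doubling_period_days)

print(f"{growth_factor(365):.1f}x per year")          # ~12.6x
print(f"{growth_factor(3 * 365):.0f}x over 3 years")  # ~2000x
```

So the claim implies roughly a 2000-fold increase in compute by 2028, which is the load those new gas plants are being built to carry.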

          • fox [comrade/them]@hexbear.net · 23 hours ago

            Compute power doubling every 100 days certainly means the text extruder will manifest intelligence, right? The deterministic statistical matrix will definitely come alive and help us tangibly improve stakeholder value, right?

            • BodyBySisyphus [he/him]@hexbear.net · 22 hours ago

              The whole blog series was about how agentic LLM coders could conceivably turn the software industry on its head, which I think is plausible, but then he seamlessly segues into stuff like that quote on the basis of no credible information. There’s also been some noise about world models, which are supposed to better approximate human reasoning, as a way of getting around the limitations of LLMs, but I don’t know how credible those claims are. I think the likely scenario is that these things are useful enough to cause substantial disruption in tech, while the promised resolutions to the contradictions of capitalism stay forever over the next horizon. If LLMs have demonstrated anything, though, it’s that it doesn’t take as much as you’d think to fool a large number of people into believing we’ve reached AGI, and the implications of that are a little scary.

              • danisth [he/him]@hexbear.net · 20 hours ago

                So I haven’t read anything by Steve Yegge before, but looking into it now I see he’s the head of a company that sells tooling built on the very models/agents he says will turn the industry on its head. Not saying he’s wrong, it just seems like everyone who says AI will do X is a person who will profit very much if everyone believes that AI will do X.

                • BodyBySisyphus [he/him]@hexbear.net · 17 hours ago

                  Yeah, the problem is that if the AI is convincing enough at appearing to do X and the rush to adopt happens very quickly, there’s the potential for a lot of damage to get done.

                  Or if AI does in fact do X, it’ll just punch the accelerator on every negative trend in tech.