• BodyBySisyphus [he/him]@hexbear.net
    22 hours ago

    The whole blog series was about how agentic LLM coders could conceivably turn the software industry on its head, which I think is plausible, but then he seamlessly segues into stuff like the quote on the basis of no credible information. There’s also been some noise about world models, which are supposed to better approximate human reasoning, as a way of getting around the limitations of LLMs, but I don’t know how credible those claims are. I think the likely scenario is that these things are useful enough to cause substantial disruption in tech, but the promised resolutions to the contradictions of capitalism will always be over the next horizon. However, if LLMs have demonstrated anything, it’s that it takes less than you’d think to fool a large number of people into believing we’ve reached AGI, and the implications of that are a little scary.

    • danisth [he/him]@hexbear.net
      19 hours ago

      So I haven’t read anything by Steve Yegge before, but looking into it now, I see he’s the head of a company that sells tooling built on the very models/agents he says will turn the industry on its head. I’m not saying he’s wrong; it just seems like everyone who says AI will do X is someone who stands to profit handsomely if everyone believes that AI will do X.

      • BodyBySisyphus [he/him]@hexbear.net
        17 hours ago

        Yeah, the problem is that if the AI is convincing enough at appearing to do X and the rush to adopt happens quickly, there’s the potential for a lot of damage to get done.

        And if AI does in fact do X, it’ll just punch the accelerator on every negative trend in tech.