• BodyBySisyphus [he/him]@hexbear.net
    1 day ago

    I read both this one and RotJD and boy howdy was that a wild ride. It seems like he veers off course with the occasional wild claim like:

    No matter whose vision you subscribe to, something big will happen with AI in the next two years. Compute power for AI is doubling around every 100 days, which makes it likely that we hit AGI by 2028 if not sooner. But I take an optimistic view: Our problems will always be harder than anyone can fully understand, even superhuman intelligences. It will be nice to have them around to help us solve those giant problems.

    But maybe I’m still too skeptical. I’m thinking about the news that Louisiana is building a bunch of new natural gas plants to serve Meta’s new data center. Regardless of how well “AI” functions at solving actual problems (instead of just coming up with ever more elaborate ways to serve ads) or when it plateaus, it seems like we’re lashed to the mast. this-is-fine
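
    For scale, here's a quick back-of-envelope on what the quoted "doubling around every 100 days" would imply if it held steadily; the constant and function names are just illustrative, not anything from the post:

    ```python
    # Back-of-envelope: compounding implied by "compute doubles every 100 days".
    # The 100-day figure is the blog's claim taken at face value, not a measurement.

    DOUBLING_PERIOD_DAYS = 100

    def growth_factor(days: float) -> float:
        """Total multiplier on compute after `days` of steady doubling."""
        return 2 ** (days / DOUBLING_PERIOD_DAYS)

    # Over the "next two years" (~730 days) the quote talks about:
    print(f"{growth_factor(730):.0f}x")  # roughly 150-160x more compute
    ```

    Whether any of that compute buys intelligence is, of course, the entire question.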

    • fox [comrade/them]@hexbear.net
      1 day ago

      Compute power doubling every 100 days certainly means the text extruder will manifest intelligence, right? The deterministic statistical matrix will definitely come alive and help us tangibly improve stakeholder value, right?

      • BodyBySisyphus [he/him]@hexbear.net
        1 day ago

        The whole blog series was about how agentic LLM coders could conceivably turn the software industry on its head, which I think is plausible, but then he seamlessly segues into stuff like the quote above on the basis of no credible information. There’s also been some noise about world models, which are supposed to better approximate human reasoning, as a way of getting around the limitations of LLMs, but I don’t know how credible those claims are. My read on the current scenario is that these things are useful enough to cause substantial disruption in tech, but the promised resolutions to the contradictions of capitalism will always be over the next horizon. However, if LLMs have demonstrated anything, it’s that it takes less than you’d think to fool a large number of people into believing we’ve reached AGI, and the implications of that are a little scary.

        • danisth [he/him]@hexbear.net
          24 hours ago

          So I haven’t read anything by Steve Yegge before, but looking into him now I see he heads a company that sells tooling built on the very models/agents he’s saying will turn the industry on its head. Not saying he’s wrong, it just seems like everyone who says AI will do X is a person who stands to profit handsomely if everyone believes that AI will do X.

          • BodyBySisyphus [he/him]@hexbear.net
            22 hours ago

            Yeah, the problem is that if the AI is convincing enough at appearing to do X, and the rush to adopt happens quickly, there’s the potential for a lot of damage to get done.

            Or if AI does in fact do X, it’ll just punch the accelerator on every negative trend in tech.