I read both this one and RotJD and boy howdy was that a wild ride. It seems like he veers off course with the occasional wild claim like:
No matter whose vision you subscribe to, something big will happen with AI in the next two years. Compute power for AI is doubling around every 100 days, which makes it likely that we hit AGI by 2028 if not sooner. But I take an optimistic view: Our problems will always be harder than anyone can fully understand, even superhuman intelligences. It will be nice to have them around to help us solve those giant problems.
But maybe I’m still too skeptical. I’m thinking about the news that Louisiana is building a bunch of new natural gas plants to serve Meta’s new data center. Regardless of how well “AI” functions at solving actual problems (instead of just coming up with ever more elaborate ways to serve ads) or when it plateaus, it seems like we’re lashed to the mast.
Compute power doubling every 100 days certainly means the text extruder will manifest intelligence, right? The deterministic statistical matrix will definitely come alive and help us tangibly improve stakeholder value, right?
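For what it's worth, the arithmetic behind that extrapolation is easy to check. A quick sketch (the 100-day doubling figure is the quoted article's claim; the rest is just exponents):

```python
# Back-of-envelope: what "compute doubles every 100 days" implies.
# The 100-day doubling period comes from the quoted claim; the math is just 2^(t/T).

def compute_multiplier(days: float, doubling_period: float = 100.0) -> float:
    """Growth factor after `days` at one doubling per `doubling_period` days."""
    return 2.0 ** (days / doubling_period)

print(f"1 year:  ~{compute_multiplier(365):.1f}x")      # ~12.6x
print(f"3 years: ~{compute_multiplier(3 * 365):.0f}x")  # ~2000x
```

So "by 2028" the claim is roughly a 2000x increase in compute. Whether a 2000x bigger text extruder amounts to intelligence is, of course, exactly the leap being questioned.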
The whole blog series was about how agentic LLM coders could conceivably turn the software industry on its head, which I think is plausible, but then he seamlessly segues into stuff like the quote above on the basis of no credible information. There's also been some noise about world models, which are supposed to better approximate human reasoning, as a way of getting around the limitations of LLMs, but I don't know how credible those claims are. I think the most likely scenario is that these things end up useful enough to cause substantial disruption in tech, but the promised resolutions to the contradictions of capitalism will always be over the next horizon. If LLMs have demonstrated anything, though, it's that it takes less than you'd think to convince a large number of people that we've reached AGI, and the implications of that are a little scary.
So I haven't read anything by Steve Yegge before, but looking into it now I see he's the head of a company that sells tooling built on the very models/agents he says will turn the industry on its head. Not saying he's wrong, it just seems like everyone who says AI will do X is a person who stands to profit handsomely if everyone believes that AI will do X.
Yeah, the problem is that if the AI is convincing enough at appearing to do X and the rush to adopt happens quickly enough, there's the potential for a lot of damage to get done.
Or if AI does in fact do X, it’ll just punch the accelerator on every negative trend in tech.
No matter whose vision you subscribe to, something big will happen with AI in the next two years
Although I'm always wrong about everything, I'm still open to Ed Zitron's version of "something big": the bottom falling out of the whole thing (although I'm sure it's too big to fail by now).
It’s a bunch of people selling shovels trying to convince everyone else there’s a gold rush
I’m pretty sure this was the plot of a Pinky and the Brain episode.