ChatGPT is certainly weak at a lot of aspects of storytelling, but I wonder how much the author played with different prompts.
For example, if I go to GPT-4 and say, “Write a short fantasy story about a group of adventurers who challenge a dragon,” it gives me a bog-standard, trope-ridden fantasy story: the usual adventuring party goes into a cave, fights the dragon, kills it, and returns with the gold.
But then if I say, “Do it again, but avoid using fantasy tropes and cliches,” it generates a much more interesting story. Not sure about the etiquette of pasting big blocks of ChatGPT text into Lemmy comments, but the setting turned from generic medieval Europe into more of a weird steampunk-like environment, and the climax of the story was the characters convincing the dragon that it was hurting people and should stop.
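If anyone wants to reproduce this through the API instead of the web UI, the two-step prompt is just a multi-turn chat where the follow-up request keeps the first story in context. Here's a minimal sketch using the official openai Python client (v1.x); the model name and prompt wording are just the ones from my example above, not anything from the article.

```python
# Minimal sketch of the two-step prompting described above,
# using the official `openai` Python client (v1.x).
# Model name and prompt wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user",
     "content": "Write a short fantasy story about a group of adventurers who challenge a dragon."},
]

# First turn: the bog-standard version.
first = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Second turn: keep the earlier story in context and ask for a rewrite.
messages.append({"role": "user",
                 "content": "Do it again, but avoid using fantasy tropes and cliches."})
second = client.chat.completions.create(model="gpt-4", messages=messages)

print(second.choices[0].message.content)
```

The point is just that the second request sees the first story, so the model knows exactly which tropes it's being asked to avoid.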
I find it silly that there are specific AI tools built to do what they're trying to do here, but the article doesn't even mention them. They instead opt for ChatGPT, which can technically do it but isn't meant for this. Writing “Put plainly: AI sucks at this.” just seems disingenuous when they didn't do proper research into the right AI tools for the job, because AI can do this rather well if you know how.