Raccoonn@lemmy.ml to Memes@lemmy.ml · 3 days ago · "AI will never be able to write like me." · cross-posted to: memes@lemmy.world
saigot@lemmy.ca · 3 days ago: If it was done with enough regularity to be a problem, one could just put an LLM like this in between to preprocess the data.
Azzu@lemm.ee · 3 days ago: That doesn't work: you can't train models on another model's output without degrading the quality. At least not currently.
FooBarrington@lemmy.world · 2 days ago: No, that's not true. All current models use output from previous models as part of their training data. You can't solely rely on it, but that's not strictly necessary.
Vashtea@sh.itjust.works · 3 days ago (edited): I don't think he was suggesting training on another model's output, just using AI to filter the training data before it is used.
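The "LLM in between" idea being debated above amounts to a filtering pass over the corpus before training. A minimal sketch of that flow, with a stubbed-out detector standing in for a real classifier model (the function name, marker phrase, and threshold are all illustrative assumptions, not anyone's actual pipeline):

```python
def looks_ai_generated(text: str) -> float:
    """Hypothetical stand-in for an LLM-based detector.

    A real pipeline would call a classifier model here; this stub just
    flags a well-known boilerplate phrase to illustrate the data flow.
    """
    return 1.0 if "as an ai language model" in text.lower() else 0.0


def preprocess_corpus(corpus: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents the detector scores below the threshold,
    so suspected model output never reaches the training set."""
    return [doc for doc in corpus if looks_ai_generated(doc) < threshold]


corpus = [
    "Fresh human-written forum post about raccoons.",
    "As an AI language model, I cannot form opinions.",
]
clean = preprocess_corpus(corpus)
# only the first document survives filtering
```

Whether such a filter catches enough synthetic text in practice is exactly the point under dispute in this thread; the sketch only shows where the filter would sit, not that it works.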