Arthur Besse@lemmy.ml to Technology@beehaw.org · English · 1 year ago
Sarah Silverman and other authors are suing OpenAI and Meta for copyright infringement, alleging that they're training their LLMs on books via Library Genesis and Z-Library
www.thedailybeast.com
cross-posted to: piracy@lemmy.ml
ag_roberston_author@beehaw.org · 1 year ago
This is a strawman. You cannot act as though feeding LLMs data is remotely comparable to reading.
(unattributed reply)
Why not?

ag_roberston_author@beehaw.org · 1 year ago
Because reading is an inherently human activity. An LLM consuming data from a training model is not.
TheBurlapBandit@beehaw.org · 1 year ago
LLMs are forcing us to take a look at ourselves and see if we're really that special. I don't think we are.
Dominic@beehaw.org · 1 year ago
For now, we're special. LLMs are far more training-data-intensive, hardware-intensive, and energy-intensive than a human brain. They're still very much a brute-force method of getting computers to work with language.
pips@lemmy.film · 1 year ago
Because the LLM is also outputting the copyrighted material.
Double_A@discuss.tchncs.de · 1 year ago
So could any human that got inspired by something…