You communicate with co-workers using natural language, but that doesn’t make co-workers useless. You just have to account for the strengths and weaknesses of that mechanism in your workflow.
Sure, in those situations. I find that it doesn’t take much effort to write a prompt that gets me something useful in most situations, though. A lot of people don’t put in any effort, get a bad result, and conclude “this tech is useless.”
It also isn’t telepathic, so the only thing it has to go on when determining “what you want” is what you tell it you want.
I often see people gripe about how ChatGPT’s essay writing style is mediocre and always sounds the same, for example. But that’s what you get when you just tell ChatGPT “write me an essay about X.” It doesn’t know what kind of essay you want unless you tell it. You have to give it context and direction to get good results.
“Just give me this and I’ll do the rest” is actually a pretty great workflow, in my experience. AI isn’t at the point where you can just set it loose to work on its own but as a collaborator it saves me a huge amount of hassle and time.
Fleeing North Korea requires playing a long game: marry someone you hate.
The “I Want To Live” project is going to need a bunch of Korean translators.
You get out ahead of the locomotive knowing that most of the directions you go aren’t going to pan out. The point is that the guy who happens to pick correctly will win big by getting out there first. Nothing wrong with making the attempt and getting it wrong, as long as you factored that risk in (as McDonald’s seems to have done, given that this hasn’t harmed them).
Sometimes people’s heads just fall apart.
Well, rarely.
Okay, once. Sheer coincidence that it was such a prominent person and in such a prominent moment. And that Lee Harvey Oswald just happened to be cleaning his gun by that window at the time. But hey, sometimes weird coincidences happen.
Training an AI does not involve copying anything, so why would you think that fair use is even a factor here? It’s outside of copyright altogether. You can’t copyright concepts.
Downloading pirated books to your computer does involve copyright violation, sure, but it’s a violation by the uploader. And look at what community we’re in; are we going to get all high and mighty about that?
Training an AI on something doesn’t involve copying it.
And under copyleft licensing, they’re allowed to do that. Both to GitHub repositories and Wikipedia.
Why would that matter? You can fork such projects too.
I don’t think you’ve thought through the logistics required for the sort of war where you’d just go around and shoot everyone who lives in hundreds of solar systems. Even assuming they do nothing at all to defend themselves, how do you even find them all?
If you want to argue that Lemmy doesn’t represent users at large, or that the people complaining about AI are a loud minority, go for it.
Yes, that’s exactly what I’m doing. Though specifically this community, not Lemmy as a whole (I’m not a Lemmy user myself for that matter).
Of course it is! We are simultaneously facing a labor shortage and mass unemployment. The important thing is to keep being angry and frightened; the specific subject you’re angry about at any given time is flexible.
You made an assertion about what end users want. I’m an end user and my desires are not the same as your desires.
But if the sentiment is that common, maybe there’s something to it.
Or maybe it’s just a common fallacy. Like argumentum ad populum.
My advice against getting too deeply invested applies to those companies and communities as well.
FTX was a cryptocurrency exchange; how is that remotely similar to NVIDIA?
Can you remind me how those technologies are related, other than the mere accusation of them being “buzzwords”?
Cryptocurrency is actually doing fine, BTW. Just because you don’t find it useful doesn’t mean it’s not useful to other people.
Remember when piracy communities thought that the media companies were wrong to sue switch manufacturers because of that?
It baffles me that there’s such an anti-AI sentiment going around that it would cause even folks here to go “you know, maybe those litigious copyright cartels had the right idea after all.”
We should be cheering that we’ve got Meta on the side of fair use for once.
Look up “overfitting.” It’s a flaw in generative AI training that modern AI trainers have done a great deal to resolve, and even in the cases of overfitting it’s not all of the training data that gets “memorized.” Only the stuff that got hammered into the AI thousands of times in error.
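A toy illustration of what I mean (my own sketch, nothing to do with any real LLM training pipeline): train a simple bigram model on a corpus where one sentence appears a thousand times and other sentences appear once. Under greedy decoding, the model regurgitates the hammered-in sentence verbatim, while the singly-occurring text doesn’t come back intact.

```python
from collections import Counter, defaultdict

# Hypothetical corpus: one sentence duplicated a thousand times,
# two others appearing only once each.
corpus = (["the cat sat on a mat"] * 1000
          + ["a dog ran in the park",
             "birds fly over the hills"])

# Count bigram (word pair) frequencies across the whole corpus.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def generate(start, length=5):
    """Greedy decoding: always pick the most frequent next word."""
    out = [start]
    for _ in range(length):
        if out[-1] not in bigrams:
            break
        out.append(bigrams[out[-1]].most_common(1)[0][0])
    return " ".join(out)

# The duplicated sentence is "memorized" and reproduced verbatim;
# the one-off sentences are not.
print(generate("the"))
print(generate("birds"))
```

Same principle, scaled way down: it’s the heavy duplication in the training data that produces verbatim “memorization,” not training as such, which is why modern dataset deduplication reduces it.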