I promise this question is asked in good faith. I do not currently see the point of generative AI and I want to understand why there's hype. There are ethical concerns, but let's set ethics aside for this question.
In creative works like writing or art, it feels soulless and low-quality. In programming, at best it's a shortcut to avoid deeper learning; at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.
When I see AI ads directed at individuals, the selling point is convenience. But I would feel robbed of the human experience if I used AI in place of human interaction.
So what’s the point of it all?
My last three usages of it:
- A translation
- Looking up which actors from Mars Attacks! had worked together on another movie. I recognized that Pierce Brosnan and Joe Don Baker had both been in GoldenEye and wondered if there were more.
- Name suggestions for a black-and-white cat. I got some funny ones like Oreo, and one kick-ass suggestion: Domino.
I have a friend with numerous mental health issues who texts me long, barely comprehensible messages to update me on how they're doing: no paragraphs, stream-of-consciousness style. So I take those walls of text and ask ChatGPT to summarize them, and they go from a mess of words into an update I can actually understand and respond to.
Another use for me is getting quick access to answers I'd previously have had to spend way more time reading and filtering across multiple forums and Stack Exchange posts to find.
Basically they are good at parsing information and reformatting it in a way that works better for me.
I recently had to digitize dozens of photos from family scrapbooks, many of which had annoying novelty-pattern borders cut into the edges. Sure, I could have just cropped the photos further to hide the stupid zigzagged missing portions. But I had the beta version of Photoshop installed with the generative fill function, so I tried it. Half the time it was garbage, but the other half it filled in a bit of grass or sky convincingly enough that you couldn't tell the photo was damaged. +1 acceptable use case for generative AI, I guess.
I use it in a lot of tiny ways for photo editing. Adobe has a lot of AI integration, and 70% of it is junk right now, but things like sharpening, noise reduction, and the heal brush are great with AI generation behind them now.
Fake frames. Nvidia double benefits.
Note: 'tis a joke. Personally I think DLSS frame generation is cool, since every frame is "fake" anyway.
Learning how to use Linux
“at worst it spits out garbage code that you spend more time debugging than if you had just written it by yourself.”
I’ve not experienced this. Debugging for me is always faster than writing something entirely from scratch.
100% agree with this.
It is so much faster for me to give the AI the API/library documentation than it would be to figure out how that API works on my own. Is it a perfect, drop-in, finished piece of code? No. But that is not what I ask the AI for. I ask it for a simple example, which I can then take, modify, and rework into my own code.
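To make that workflow concrete, here's a minimal sketch using the OpenAI Python client; the model name, file name, and prompt wording are all placeholders I made up for illustration, not anything specific:

```python
# Minimal sketch: paste an excerpt of the library docs into the prompt
# and ask for a short usage example, not finished production code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical file holding the docs section I'm stuck on.
docs_excerpt = open("api_docs_excerpt.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        {"role": "system", "content": "You write minimal, runnable usage examples."},
        {
            "role": "user",
            "content": (
                "Here is the relevant documentation:\n\n"
                f"{docs_excerpt}\n\n"
                "Show me a short example of authenticating and making one request."
            ),
        },
    ],
)

# A starting point to modify and rework, not a drop-in.
print(response.choices[0].message.content)
```

The output is exactly what I described above: an example to rework, not code to ship.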
Amazing for reading docs
Absolutely this. I've found AI to be a great tool for nitty-gritty questions about a development framework. When googling/DuckDuckGo'ing, your query has to match the documentation's wording pretty closely to turn up anything specific. AI seems to be much better at "understanding" the content and matches it against the documentation pretty reliably.
For example, I was reading docs up and down on Elasticsearch's website trying to find all possible values for the status field within an aggregated request. Google only led me to general documentation pages without the specifics. A quick, loosely worded question to ChatGPT, however, handed me the correct answer as well as a link to the exact spot in the docs where it was specified.
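For anyone unfamiliar, the kind of request I mean looks roughly like this with the elasticsearch Python client (8.x-style keyword args); the index and the query here are made up for illustration, and the enumerated status values were the part buried in the docs:

```python
# Sketch of bucketing documents by a "status" field with a terms
# aggregation; index and field names are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="my-index",  # hypothetical index
    size=0,            # we only want the aggregation, not the hits
    aggs={
        "status_values": {
            "terms": {"field": "status"}  # one bucket per distinct value
        }
    },
)

for bucket in resp["aggregations"]["status_values"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```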
AI saves time. There are few use cases for which AI is qualitatively better, perhaps none at all, but there are a great many use cases for which it is much quicker and even at times more efficient.
I'm sure the efficiency argument could be debated, but it makes sense to me in this way: for production-level output AI is rarely good enough, but it is really useful for rapid, imperfect prototyping. If you have 8 different UX ideas for your app that you'd like to test, you could rapidly build prototype interfaces with AI. Once you've picked the best one you'll likely rewrite it from scratch to make sure it's robust, but without AI, building the other 7 would eat up too many man-hours to be worthwhile.
I'm sure others will put forward legitimate arguments about how AI will inevitably creep into production environments and so on, but logistically, speed and efficiency are undeniably helpful use cases.
I wish I could have an AI in my head that would do all the talking for me because socializing is so exhausting
Other people would then have AIs in their heads to deal with the responses.
A perfect world, where nothing is actually being said, but goddamn do we sound smart saying it
Yes
I generate D&D characters and NPCs with it, but that’s not really a strong argument.
For programming, though, it's quite handy. Basically a smarter code completion that takes the already-written code into account. From machine code through assembly up to higher-level languages, I think it's a logical next step to be able to tell the computer, in human language, what you're actually trying to achieve. That doesn't mean it takes over while the programmer switches off their brain, of course, but it has already saved me quite a bit of time.
I use LLMs for search when conventional search engines aren't turning up relevant results, and then I fact-check whatever answers they give me. I especially use them for questions that are easy to verify, like mathematical questions where I can check the validity of the answer, or programming questions where I can read through the solution, check the documentation for any functions used, make sure the output is logical, and tweak things if the LLM's answer is only nearly correct. I always ask LLMs to cite their sources so I can check those too.
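As a concrete version of the "easy to verify" part: if a model claims an antiderivative, you can check it mechanically with sympy instead of taking its word. The question and the claimed answer below are invented for illustration:

```python
# Sketch: suppose an LLM claims the antiderivative of x*exp(-x**2)
# is -exp(-x**2)/2. Verify symbolically instead of trusting it.
import sympy as sp

x = sp.symbols("x")
claimed = -sp.exp(-x**2) / 2                  # the model's (hypothetical) answer
computed = sp.integrate(x * sp.exp(-x**2), x)

# Antiderivatives can differ by a constant; here the difference
# simplifies to exactly zero, so the claim checks out.
print(sp.simplify(computed - claimed))        # -> 0
```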
I also sometimes use LLMs for formatting, like when I copy text off a PDF and the spacing is all funky.
I don't use LLMs for this, but I imagine they would be a better replacement for previous automated translation tools. Translation seems like one of the most obvious applications, since LLMs are just language pattern recognition at the end of the day. Obviously for anything important the output needs to be checked by a human, but it would, for example, allow people to participate in online communities where they don't speak the community's language.
I use it to re-tone and clarify corporate communications that I have to send out regularly to my clients and internally. It has cut way down on the time I used to spend copy-editing my own work. I've saved myself lots of hours doing something I don't really like (copy-editing) and gained more time for the stuff I do like (engineering).
There is no point. There are billions of points, because there are billions of people, and that’s the point.
You know that there are hundreds or thousands of reasonable uses of generative AI, whether it’s customer support or template generation or brainstorming or the list goes on and on. Obviously you know that. So I’m not sure that you’re asking a meaningful question. People are using a tool to solve various problems, but you don’t see the point in that?
If your position is that they should use other tools to solve their problems, that’s certainly a legitimate view and you could argue for it. But that’s not what you wrote and I don’t think that’s what you feel.
There are some great use cases; for instance, transcribing handwritten records and making them searchable is really exciting to me personally. They can also be a great tool if you learn to work with them (and, perhaps most importantly, learn when not to use them, which in my line of work is most of the time).
That being said, none of these cases, or any of the cases in this thread, is going to return the large amounts of money now being invested in AI.
Generative AI is actually really bad at transcription. It imagines dialogue that never happened. There was some institution, a hospital I think? They said every transcription had at least one major error like that.
This is an issue if it's unsupervised, but transcription models are good enough now that with oversight they're usually useful: checking and correcting an AI-generated transcription is almost always quicker than transcribing entirely by hand.
If we approach tasks like these assuming they're error-prone regardless of whether they're done by human or machine, and will always need some oversight and verification, then AI tools can be very helpful in very non-miraculous ways. I think it was Jason Koebler who said on a recent 404 Media podcast that at Vice he used to transcribe every word of every interview he did as a journalist; now he transcribes everything with AI and has saved hundreds of hours of work, but he still manually checks every transcript to verify it.