The real problem with LLM coding, in my opinion, is something much more fundamental than whether it can code correctly or not. One of the biggest problems coding faces right now is code bloat. In my 15 years writing code, I write so much less code now than when I started, and spend so much more time bolting together existing libraries, dealing with CI/CD bullshit, and all the other hair that software projects have started to grow.
The amount of code is exploding. Nowadays, every website uses ReactJS. Every single tiny website loads god knows how many libraries. Just the other day, I forked and built an open source project that had a simple web front end (a list view, some forms – basic shit), and after building it, npm informed me that it had over a dozen critical vulnerabilities, and dozens more of high severity. I think the total was something like 70?
All code now has to be written at least once. With ChatGPT, it doesn’t even need to be written once! We can generate arbitrary amounts of code all the time whenever we want! We’re going to have so much fucking code, and we have absolutely no idea how to deal with that.
This is so true. I feel like my main job as a senior software engineer is to keep the bloat low and delete unused code. It's very easy to write code - maintaining it and focusing on the important bits is hard.
This will be one of the biggest and most challenging problems Computer Science will have to solve in the coming years and decades.
It’s easy and fun to write new code, and it wins management’s respect. The harder work of maintaining and improving large code bases and data goes mostly unappreciated.
I don’t think it’s gonna go that way. In my experience, the bigger the chunk of code you make it generate, the more wrong it’s gonna be; not just more wrong because it’s a larger chunk of code, but exponentially more wrong.
It’s only good for generating small chunks of code at a time.
It won’t be long (maybe 3 years max) before industry adopts some technique for automatically prompting an LLM to generate code to fulfill a certain requirement, then iteratively improve it using test data to get it to pass all test cases. And I’m pretty sure there already are ways to get LLMs to generate test cases. So this could go nightmarishly wrong very, very fast if industry adopts that technology and starts integrating hundreds of unnecessary libraries or pieces of code that the AI just learned to “spam” everywhere, so to speak. These things are way dumber than we give them credit for.
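The generate-then-test loop described above can be sketched roughly like this. This is a toy illustration, not a real pipeline: `call_llm` is a stand-in for an actual model API and just cycles through canned candidate implementations of an absolute-value function.

```python
# Toy sketch of an automated "generate code, run tests, retry" loop.
# call_llm is a stub standing in for a real LLM API call; in a real
# system each retry would also feed back the failing test cases.

CANDIDATES = [
    "def f(x):\n    return x",                     # wrong: fails for negatives
    "def f(x):\n    return -x",                    # wrong: fails for positives
    "def f(x):\n    return x if x >= 0 else -x",   # correct
]

def call_llm(attempt):
    """Pretend to ask the model for a candidate implementation."""
    return CANDIDATES[attempt % len(CANDIDATES)]

def passes_tests(source, test_cases):
    """Execute the candidate and check it against every test case."""
    namespace = {}
    exec(source, namespace)
    f = namespace["f"]
    return all(f(x) == expected for x, expected in test_cases)

def generate_until_passing(test_cases, max_attempts=10):
    """Keep regenerating until all tests pass or we give up."""
    for attempt in range(max_attempts):
        source = call_llm(attempt)
        if passes_tests(source, test_cases):
            return source, attempt + 1
    return None, max_attempts

tests = [(3, 3), (-4, 4), (0, 0)]
source, attempts = generate_until_passing(tests)
```

The worrying part the comment points at is exactly this structure: the loop only optimizes for "tests pass", so whatever bloat or nonsense the model emits alongside a passing answer gets shipped too.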
Oh that’s definitely going to lead to some hilarious situations, but I don’t think we’re gonna see a complete breakdown of the whole IT sector. There’s no way companies/institutions that do really mission critical work (kernels, firmware, automotive/aerospace software, certain kinds of banking/finance software etc.) will let AI write that code any time soon. The rest of the stuff isn’t really that important and isn’t that big of a deal if it breaks for a few hours/days because the AI spazzed out.
Agreed, don’t expect it to break absolutely everything but I expect that software development is going to get very hairy when you have to use whatever bloated mess AI is creating.
I’m here for it, it’s already a complete shitshow, might as well go all the way.
If you have seen the crunch before demos for military projects you might start to think the other way. I doubt the bigger vendors will change much, but you definitely could see contracts being won for shit that will just be AI generated because they got some base manager to eat up their proposal filled with buzzwords. I’d be more worried about it causing more contract bloat and wasted resources in critical systems going to these vaporware solutions. Then you take general government contracts, which go to the lowest bidder, and you are gonna see a ton of AI bullshit start cropping up and bloating our systems because some high-school kid got ChatGPT to make a basic website and now thinks he is the AI website god. Plus I work in the financial sector now, and they have been eating up all the AI buzzwords like fucking hot cakes. The devs all know it will be a shit show, but the executives whose egos tell them it’s a great idea won’t hear any of it, because think of the efficiency and bonuses they could get if they cut the implementation timeline down to a quarter. They don’t realize the vulnerabilities, maintenance costs, and the LLM’s lack of understanding will cause massive long-term issues, regardless of whether they can get a buggy alpha created.
deleted by creator
Yes I agree. I meant the fundamental problem with the idea of LLMs doing more and more of our code, even if they get quite good.
There’s the other half of this problem, which is that the kind of code that LLMs are relatively good at pumping out with some degree of correctness is almost always the kind of code that isn’t difficult to begin with. A sorting algorithm on command is nice, but if you’re working on any kind of novel implementation, then the hard bits are the business logic, which in all likelihood has never been written before and is either sensitive information or just convoluted enough to make turning it into a prompt difficult. You still have to have coders who understand architecture and converting requirements into raw logic to do that even with the LLMs.
Makes the Adeptus Mechanicus look like a realistic future. Really advanced tech, but no one knows how it works
uhhhh let me code here
Every time I’ve used it to generate code, it invents frameworks that don’t exist
CTO material right there.
Lmaooo
Just like an average enterprise architect.
I’ve had some success with it if I’m giving it small tasks and describing them in as much detail as possible. By design (from what I gather) it can only work with stuff it saw in training, which means the language needs to be documented extensively for it to work.
It’s generally good at stuff like WordPress or MediaWiki code; it actually helped me make the modules and templates I needed on MediaWiki. But for both of those there’s like a decade of forum posts, documentation, papers and other material that it could train on. Fun fact: for one specific problem (using a MediaWiki template to display a different message depending on whether you are logged in or not), it systematically gives me the same answer no matter how I ask. It’s only after enough probing that GPT tells me that because of cache issues, this is not possible lol. I figure someone must have asked about this same template somewhere and it’s the only thing in its training set it can work off of to answer that question.
I also always double-check the code it gives me for any error or things that don’t exist.
If I’m going to use AI for something, I want it to be right more often than I am, not just as often!
It actually doesn’t have to be. For example, the way I use GitHub Copilot is I give it a code snippet to generate, and if it’s wrong I just write a bit more code, and then it usually gets it right after 2-3 iterations, and it still saves me time.
The trick is you should be able to quickly determine if the code is what you want which means you need to have a bit of experience under your belt, so AI is pretty useless if not actively harmful for junior devs.
Overall it’s a good tool if you can get your company to shell out $20 a month for it; not sure I’d pay for it out of my own pocket tho.
It… it was a joke. I was implying that 52% was better than me.
Ah ok I guess I misread that. My point is that by itself it’s not gonna help you write either better or shittier code than you already do.
GitHub Copilot is just intellisense that can complete longer code blocks.
I’ve found that it can somewhat regularly predict a couple lines of code that generally resemble what I was going to type, but it very rarely gives me fully correct completions. More often than not, I end up needing to correct a piece or two. To your point, it can absolutely be detrimental to juniors or new learners by introducing bugs that are sometimes nastily subtle. I also find it getting in the way only a bit less frequently than it helps.
I do recommend that experienced developers give it a shot because it has been a helpful tool. But to be clear - it’s really only a tool that helps me type faster. By no means does it help me produce better code, and I don’t ever see it full on replacing developers like the doomsayers like to preach. That being said, I think it’s $20 well spent for a company in that it easily saves more than $20 worth of time from my salary each month.
Relatable
deleted by creator
Because it was marketing hype (read: marketing propaganda).
The trick is you have to correct for the hallucinations and teach it to return to a healthy path when it goes off course. This isn’t possible with current consumer tools.
Wait a second here… I skimmed the paper and GitHub and didn’t find an answer to a very important question: is this GPT3.5 or 4? There’s a huge difference in code quality between the two and either they made a giant accidental omission or they are being intentionally misleading. Please correct me if I missed where they specified that. I’m assuming they were using GPT3.5, so yeah those results would be as expected. On the HumanEval benchmark, GPT4 gets 67% and that goes up to 90% with reflexion prompting. GPT3.5 gets 48.1%, which is exactly what this paper is saying. (source).
deleted by creator
Hmm that’s incorrect. ChatGPT (if you pay for it) does both.
deleted by creator
Oh ok! Got it. I read it as you saying ChatGPT doesn’t use GPT 4. It’s still unclear what they used for part of it because of the bit before the part you quoted:
For each of the 517 SO questions, the first two authors manually used the SO question’s title, body, and tags to form one question prompt and fed that to the Chat Interface [45] of ChatGPT.
It doesn’t say if it’s 4 or 3.5, but I’m going to assume 3.5. Anyway, in the end they got the same result for GPT 3.5 that it gets on HumanEval, which isn’t anything interesting. Also, GPT 4 is much better, so I’m not really sure what the point is. Their stuff on the analysis of the language used in the questions was pretty interesting though.
Also, thanks for finding their mention of 3.5. I missed that in my skim through obviously.
deleted by creator
Yeah I think you’re right on about the students not being able to afford GPT4 (I don’t blame them. The API version gets expensive quick). I agree though that it doesn’t seem super well put together.
Whatever GitHub Copilot uses (the version with the chat feature), I don’t find its code answers to be particularly accurate. Do we know which version that product uses?
If we are talking Copilot then that’s not ChatGPT. But I agree it’s ok. Like it can do simple things well but I go to GPT 4 for the hard stuff. (Or my own brain haha)
Is GPT4 publicly available?
Yes available to anyone in the API or anyone who pays for ChatGPT subscription.
Yes… If you pay $20 a month
Oh that’s possible, not sure which one they used either.
I used ChatGPT once. It created non-functional code. But the general idea did help me get to where I wanted. Maybe it works better as a rubber duck substitute?
Use it as a boilerplate blaster, for shit you could write yourself
Yeah, generating some ideas to get you going might be the best use for this kind of stuff.
That’s how I view AI generated art. It can come up with some really cool mash ups. But you have to do the rest. Anyone just using what it outputs like that’s the end of the story isn’t ‘using it right’ in my opinion.
I’m not sure there’s a way to ‘use art right.’
You’re obviously not an artist. And you managed to completely miss my point.
No, but my husband is and he’s been refining keywords and using all sorts of loras and all other types of jargon that I don’t recall because I’m not interested in doing it myself.
And I didn’t miss your point, I just don’t agree with it.
So what, are you the art version of a military wife? Just throwing "aCtuALly"s out into the void because your husband types words into a field?
Better than being the art version of an asshole, but I see you’re already filling that role.
Right, I expect stuff like Stable Diffusion will become a part of the toolkit actual artists use. The workflows with this stuff are already getting pretty intricate, where people use ControlNet for posing, inpainting for specific details, and so on. I would liken it to doing photography. You can’t just give a camera to anybody and get good results; it takes a person with skill and taste to produce an interesting image.
I did my first game jam with the help of chat gpt. It didn’t write any code in the game, but I was able to ask it how to accomplish certain things generally and it would give me ideas and it would be up to me to implement.
There were other things I knew my engine could do but I couldn’t figure out from the documentation, so I would ask ChatGPT “how do you xyz in Godot” and it would give me step-by-step instructions. This was especially useful for the things that get done in the engine UI and not in code.
I used it to code small things and it worked eventually, whereas if I had just decided to learn coding I’d be stuck, cos I don’t do computers, I do HVAC.
I was pretty impressed with it the other day: it converted ~150 lines of Python to C pretty flawlessly. I then asked it to extend the program by adding a progress bar, and that segfaulted, but it was immediately able to discover the segfault and fix it when I mentioned it. Probably would have taken me an hour or two to write myself, and ChatGPT did it in 5 minutes.
deleted by creator
Condorcet sobs “so close”.
deleted by creator