- cross-posted to:
- localllama@sh.itjust.works
further details:
Meanwhile, the French financing will include commitments from the United Arab Emirates, American and Canadian investment funds, and French companies like telecommunications firms Iliad and Orange, and aerospace and defense group Thales.
A few days before France’s AI Action Summit, which kicked off on Monday, the UAE said it would invest between 30 billion euros and 50 billion euros in the construction of a one-gigawatt AI data center in France as part of a campus focused on the technology’s development.
https://www.cnbc.com/2025/02/10/frances-answer-to-stargate-macron-announces-ai-investment.html
My taxes at work funding shitty generators. People around me will cheer for this since they don’t work in the programming industry and don’t know it’s yet another way to lower the quality of everything we’ve built so far. We’re fucked.
It will be private investment, venture capitalists, that provide the money and extract the data and wealth. All Macron is doing is giving it the official seal of approval from the government. He might relax some rules and make it easier to invest but your tax Euros are safe.
I don’t know how you can say this when programming is one of the best uses for AI
As a senior dev, I have no use for it in my workflow. The only purpose it would serve for me is to reduce the amount of typing I do. I spend about 5-10% of my time actually writing code. The rest of my dev time is spent architecting, debugging, testing, or documenting. LLMs aren’t really good at most of those things once you move past the most superficial levels of complexity.

Besides, I don’t actually want something to reduce the amount I’m typing. If I’m typing too much and getting annoyed, then it’s a sure sign that I’ve done something bad. If I’m writing boilerplate, then it’s time to write an abstraction to eliminate that. If I’m writing repetitive tests, then it’s a sign I need to move to a property-based testing framework like Hypothesis. If the LLM spits all of this out for me, I will end up with code that is harder to understand and maintain.
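For anyone who hasn’t seen property-based testing: here’s a minimal Hypothesis sketch (the encode/decode pair is a made-up example, not anything from the comment above). You state one property and the framework generates the repetitive cases for you.

from hypothesis import given, strategies as st

# Hypothetical functions under test: a trivial encode/decode pair.
def encode(items):
    return ",".join(items)

def decode(text):
    return text.split(",") if text else []

# The alphabet avoids commas so the round-trip property actually holds.
@given(st.lists(st.text(alphabet="abc", min_size=1)))
def test_roundtrip(items):
    # Hypothesis generates many input lists, so you don't hand-write each case.
    assert decode(encode(items)) == items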
LLMs are fine for learning and junior positions where you’ll have more experienced folks reviewing code, but they just aren’t that helpful past a certain point.
Also, this is probably a small thing, but I have yet to find an LLM that writes anything other than shitty, terrible shell scripts. Please for the love of God don’t use an LLM to write shell scripts. If you must, then please pass the results through shellcheck and fix all of the issues there.
I’ve seen it mainly used to assist with Python scripts, which works well. Not sure how well it does shell scripts.
Python is my primary language. For the way I write code and solve problems, it’s the language where I need the least help from an LLM. Python lets you write code that is incredibly concise while still being easy to read. There’s more of a case to be made for something like Go, since it seems like every single god damned function call ends up being
variable, err := someFuckingShit()

and then

if err != nil { ... }

and manually handling it instead of having nice exception handling. Even there, my IDE does that for me without requiring a computationally expensive LLM to do the work. Like, some people have a more conversational development style and I guess LLMs work well for them. I end up constantly context switching between code review mode and writing code mode, which is incredibly disruptive.
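For contrast, here’s a minimal sketch of the exception-handling style being described, with hypothetical helper and file names: one try/except covers a whole chain of calls instead of an err check after each one.

import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger(__name__)

def load_config(path):
    # Hypothetical helper: any step in the chain may raise an OSError.
    with open(path) as f:
        return f.read()

try:
    data = load_config("app.conf")  # hypothetical file name
    print(f"loaded {len(data)} bytes")
except OSError as e:
    # One handler covers every call in the block, not one check per call.
    log.error("config load failed: %s", e)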
Eh, copilot is still more miss than hit whenever I use it. It’s probably equally dogshit for other uses as well unless your goal is to just generate bullshit and you need to hit a specific word count.
Heh, for me even the newest models like the new Claude are only really useful when I did the thinking and the initial code writing, and I ask it to simplify the code or make it use more efficient libraries/features. When I ask it to do my work, it produces shit, and I’m very junior level.
yeah it’s definitely an assistant not a cheap developer… or is it :O
Devin just came to take your software job… will code for $8/hr https://www.youtube.com/watch?v=GhIm-Dk1pzk /s
I’m learning JavaScript and love it. It’s so much easier to query Mistral/Qwen/Deepseek Distilled than to scroll through endless search results hoping someone ran into the same problem I did.
I also run the AI models in LM Studio on my own machine, so I’m happy with that as well. I try to self-host where I can.
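For anyone curious, here’s a minimal sketch of querying a locally hosted model, assuming LM Studio’s OpenAI-compatible local server is enabled on its default port (1234); the model name and prompt are placeholders.

import json
import urllib.request

payload = {
    "model": "local-model",  # LM Studio answers with whichever model is loaded
    "messages": [{"role": "user", "content": "Explain JavaScript closures briefly."}],
}
req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])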
I know it can’t take my job because I tried to make it do my job. Spoiler: it can’t. And that’s because most jobs aren’t doing things that have been done so often that Claude has an example in its training data set. If your job is that basic, then yes, an AI will take it from you. Most of the programming job is actually solving a problem within the context of the codebase, not the coding itself. I am working with old and archaic technology from the 60s to the 90s, and let me tell you, using the official docs is way more factual than asking any AI model for information, because it will start spewing bullshit after the second prompt.
A lot of comments in that YouTube thread for Devin are not positive.
Every time I try to use it, it hallucinates bugs. It’s a waste of time for me.
deleted by creator
Sorry but no.
It’s good when what you’re trying to do has been done in the past by thousands of people (thanks to the free training data), but it’s really bad for new use cases. After all, it’s a glorified and expensive auto-complete tool trained on the code they parsed. It’s not magic, it’s math.

But you don’t get intelligence or creativity from these tools. It’s math! Math is the least creative domain on earth. Since when is being a programmer just typing out portions of code from boilerplate / examples from the internet?

It’s the logical thinking: taking into account all the parameters and constraints, breaking problems into pieces of code, checking it, testing it, deploying it, supporting it.

OK, programming’s goal is to solve a problem. But usually not all the parameters of the problem can be reduced to a mathematical form.

AI is far from being able to do that, and the gain/cost ratio is not proven at all. These companies are so committed to AI (in terms of money invested) that THEY MUST make you use their AI products, whatever their quality. They even use a marketing term to hide their products’ bad answers: hallucinations. Hallucination is just a fancy word for: totally wrong.

Do you find it normal to buy a solution that never produces 100% good results (more like a 20% failure rate)?

In my industry, this AI trend (pushed mainly by managers who don’t know what programming, or “AI” for that matter, really is) generates a lot of bad-quality code from our junior devs. And it’s not something I want to push to production.

In fact, a lot of PoCs around ML never make it from the R&D phase to real production. It’s too risky for the business (as human lives could be impacted).