

that’s how you get committed
it’s insufferable sometimes how easy it is to scare the shit out of these professional-managerial types. i might do that just to mess with them
seems like it was unintentional probably. elon’s gonna freak
no i mean is the ceo of alibaba referring to llms
Sure, but that also shows that you don’t need to train models from scratch going forward. The work has already been done and now it can be leveraged to make better models on top of it.
yeah but you gotta count the emissions from the datacenters running the old models. i don’t think that accounting is being done by openai, and i don’t think it’s possible for deepseek. actually, i don’t think openai is doing any accounting.
is this the same kind of ai as above? idk, the unqualified term “ai” is kind of ambiguous to me.
when you use the wrong formula and get the right answer
yeah, i guess it’s fair to do vibe coding or whatever. idk, when it comes to existing codebases i hate the thought of having ai contributions mixed in with real contributions. but i guess realistically, if there are no developers anyway, and if the model is running locally, none of my hangups apply
it seems dubious to claim that large language models write more efficient code. the popularity of node.js alone makes me doubt there’s all that much efficient code out there for it to train on, at least percentage-wise. i mean, the most popular app for hardcore gamers to run in the background packages and runs its own copy of google chrome. add to that hallucinations and code quality and whatever else, and i doubt its code achieves the efficiency of a high school coding class, at least in the general case
have you tried giving them tiny ant-sized balls
Nobody is advocating for the service model companies like openai use here. I think this tech should be done using open source models that can be run locally.
this is definitely fair. i think my big issue with it is the inordinate amount of capital (land, carbon emissions, water) that goes into it. maybe i’ve unfairly associated all ai with openai and gemini and meta.
Now compare that with DeepSeek.
my understanding of deepseek is that most of their models are trained by engaging in dialogue with existing models. the cost of training and running those models should be taken into account in that case. if it is from scratch that might change things, if the carbon and water numbers are good.
In the future, he said, 98% of AI applications in the market will serve industrial and agricultural needs while only 2% will serve consumers directly.
i think that’s a problem with the definition of ai. it’s not clear to me what tim huawei defines ai as. i’m not arguing against the concept of machine learning, to be clear. i thought we were talking specifically about language models and photo and video generation and whatnot
It’s easy to dismiss this stuff when you already have a bias against it and don’t want it to work, but the reality is that it’s already a useful tool once you learn where and when to use it.
yeah that’s fair enough. i didn’t mean to get into a huge discussion over llms because there’s definitely an element of that in my head. idk, i guess my point in saying that was that you can shit out a more-or-less working piece of code in any language pretty quickly, if you don’t need it to be idiomatic or maintainable. my understanding was ai was kind of the same in that regard.
i guess if training large language models can be done with negligible emissions and cooled with gray or black water, i can’t be against it. programming is definitely the one field where you can’t argue that llms aren’t useful at all. i’m still unconvinced that’s what’s happening, even with deepseek, but if they’re putting their datacenters on 3-mile island and using sewage to cool their processors, i guess that would assuage my concerns.
LLMs get the brunt of it because alfalfa has more uses than chatgpt. maybe it’s the result of my own bias but i would consider golf courses more useful than chatgpt. LLMs aren’t even close to the sole contributor to climate change, but they are emblematic of venture capitalists more than i think anything else. but it’s hard for me to justify the creation and use of these things when they have very narrow use cases, often create as much work as they save, and suck down clean drinking water like i suck down whiskey sours
was that comment written in 1996
the IEA report was made in mid-2023, and i would imagine ai electricity usage has skyrocketed since then. as mentioned in the mit source, dating to may 2025, electricity usage by ai is 48% dirtier than the us average. my problem with ai isn’t that it violates intellectual property rights, it’s that llms are a net-negative to society because of their climate effects. if ai datacenters were built using clean energy and cooled using dirty water, it would likely be little more than a mild annoyance for me. as it stands, we are putting the global south underwater so that people who are surrounded by yes-men can have yes-robots too.
are there quintillions of states
The efficiency has already improved dramatically
the mit article was written this may, and as it notes, ai datacenters still use much more electricity than other datacenters, and that electricity is generated through less environmentally-friendly methods. openai, if it is solvent long enough to count, will
build as many as 10 data centers (each of which could require five gigawatts, more than the total power demand from the state of New Hampshire)
even the most efficient models take several orders of magnitude more energy to create than to use:
it’s estimated that training OpenAI’s GPT-4 took over $100 million and consumed 50 gigawatt-hours of energy
and overall, ai datacenters use
millions of gallons of water (often fresh, potable water) per day
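to put “orders of magnitude” in perspective: taking the 50 GWh training figure above and assuming something like 0.3 Wh per chat query (a commonly-cited ballpark, not from the article), you’d need on the order of a hundred billion queries before inference energy even catches up to training:

```shell
# back-of-envelope: queries needed before inference energy matches training
# 50 GWh training figure is from the quote above; 0.3 Wh/query is an
# assumed ballpark, NOT from the source
awk 'BEGIN {
  training_wh = 50e9        # 50 GWh expressed in watt-hours
  per_query_wh = 0.3        # assumed energy per query
  printf "%.1e queries\n", training_wh / per_query_wh
}'
```

obviously the per-query number is the squishy part, but even if it’s off by 10x in either direction the gap stays enormous.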
i’m doubtful that the uses of llms justify the energy cost for training, especially when you consider that the speed at which they are attempting to create these “tools” requires that they use fossil fuels to do it. i’m not gonna make the argument that aggregate demand is growing, because i believe that the uses of llms are rather narrow, and if ai is being used more, it’s because it is being forced on the consumer in order for tech companies to post the growth numbers necessary to keep the line going up. i know that i don’t want gemini giving me some inane answer every time i google something. maybe you do.
if you use a pretrained model running locally, you know the energy costs of your queries better than me. if you use an online model running in a large datacenter, i’m sorry but doubting the environmental costs of making queries seems to be treatler cope more than anything else. even if you do use a pretrained model, the cost of creation likely eclipses the benefit to society of its existence.
EDIT: to your first point, it takes a bit to learn how to write idiomatic code in a new paradigm. but if you’re super concerned about code quality you’re not using an llm anyway. at least unless they’ve made large strides since i last used one.
you’ll never believe this
i don’t use ubiquiti, but the only thing you need to do with your firewall to get better-than-NAT security is allow only outgoing connections/disallow incoming connections. usually on consumer routers that’s the default setting anyway or there’s a checkbox to that effect.
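for reference, the “outgoing only” policy looks roughly like this as an nftables ruleset on linux (table/chain names here are just the usual convention, not anything ubiquiti-specific — adjust for your setup):

```
# minimal "allow only outgoing" firewall, nftables syntax
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;  # drop unsolicited incoming
    ct state established,related accept   # allow replies to outgoing connections
    iif lo accept                         # allow loopback
    meta l4proto { icmp, ipv6-icmp } accept  # ping etc; icmpv6 is required for ipv6 to work
  }
  chain output {
    type filter hook output priority 0; policy accept;  # all outgoing allowed
  }
}
```

the `ct state established,related accept` line is the whole trick: the conntrack table remembers which connections you opened, so replies get in and everything else hits the drop policy. that’s the same stateful behavior NAT gives you incidentally, made explicit.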
i gotta disagree. kids need their unsupervised play time or else they grow up to be depressed pieces of shit like me. source