- cross-posted to:
- fuck_ai@lemmy.world
The paper said that after an AI tool was implemented at a large materials-science lab, researchers discovered significantly more materials—a result that suggested that, in certain settings, AI could substantially improve worker productivity. That paper, by Aidan Toner-Rodgers, was covered by The Wall Street Journal and other media outlets.
The paper was championed by MIT economists Daron Acemoglu, who won the 2024 economics Nobel, and David Autor.
In a press release, MIT said it “has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper.”
The university said the author of the paper is no longer at MIT.
MIT standing up to the pro-AI momentum feels kinda odd, but I’ll accept it.
The paper must be really fucking inaccurate for MIT to make this move.
Here’s the thing: They’re actually a natural fit for it, because if anyone ought to understand the use cases, strengths, weaknesses, and implications of a technology, it would be a university that’s centered around research on technology.
So they looked carefully at this guy’s paper, realized he was making outrageous and unsupportable claims about what AI could do, failed to reproduce his results, and concluded he was full of shit. That’s what we really should be able to expect from MIT.
I feel 50% of research funding should be for reproduction studies.
reproduction studies.
I volunteer as a tribute😏
Absolutely. It might be the janitorial work of “the academy” but that work is important.
I’m actually not sure if the problem right now is funding that work or the unfortunate fact that there are rarely any accolades for it. And “publish or perish” is still too true.
Especially in medicine when it comes to wet-lab stuff.
It would also be a great way to fund early labs trying to get on their feet.
I guess most universities and big labs would be very opposed to this, thanks to all the skeletons in their closets.
It wasn’t peer reviewed and shouldn’t be treated as anything but an opinion piece?
The guy fabricated it completely. Just made the experiment and data up, and got caught when the company he mentioned in the paper sued him. What a waste of an MIT PhD.
Exactly, this has nothing to do with MIT being anti-AI.
A student made up a research paper and was kicked out. The fact that the topic of the research paper was AI is largely irrelevant.
Here’s a story of a behavioral science professor (who, ironically, studies dishonesty) at Harvard who was caught making up results: https://www.npr.org/2023/06/26/1184289296/harvard-professor-dishonesty-francesca-gino
You wouldn’t look at that article and come to the conclusion that “Harvard is standing up to the pro-Behavioral Science momentum”, because fake research has always been against the rules.
Not just inaccurate; the fact that the author is “no longer at MIT” is a soft implication that they were kicked out (quite possibly for fraud).
Or graduated and moved on.
A quick search says he was a second-year PhD student in 2024-2025, so graduation seems doubtful.
“The paper was championed by MIT economists Daron Acemoglu, who won the 2024 economics Nobel, and David Autor. The two said they were approached in January by a computer scientist with experience in materials science who questioned how the technology worked, and how a lab that he wasn’t aware of had experienced gains in innovation.”
It sounds like this hypothetical materials-science lab may not have actually existed. An actual materials scientist reached out and went “Hey, I’ve never heard of that lab, who are they and how did they use AI?” Oh… THAT lab? Yeah, it’s in Canada, you don’t know it…
Or just the average AI: hallucinations galore. If you can’t trust the output it confidently gives you, what’s even the fucking point!?
For LLMs, yes.
But, in theory, AI should be extremely good at sifting through mountains of data, much faster than any other method we have, and identifying which data a human should take a closer look at. That’s what I presume this paper supposedly demonstrated.
My guess here is that a lazy student decided to take the easy path and faked data to “demonstrate” results that nobody would be surprised by, figuring nobody would want to look closer at the data. But somebody looked anyway, probably because the student was a known slacker, and it wasn’t the results of the research that surprised them, just that the student did the research at all.
For LLMs, yes.
Thank you. As useful as LLMs can be under certain circumstances, they are not the only type of AI.
You could boil these material scientists’ jobs down to two things: discovering materials and documenting them. If AI takes over the documentation, that leaves more time for discovery.
Of course AI won’t take over documentation. It turns out that precise and good communication works better if you understand what you’re writing about.
Only if they don’t spend more time reviewing and fixing errors in the generated documentation than they would have just writing it in the first place.
“If AI does the menial jobs, that leaves more time for humans to pursue the arts!”
Reality: LLMs creating AI slop and putting artists out of work.
I wouldn’t be too hopeful that the humans get to do the enjoyable work.
AI’s motto: Confidently Incorrect
Am I correct? The paper itself was not written using AI tools. It covered the use of AI tools. MIT let go of the student who wrote it.
Yes. The article provides more details. I’m not sure if the paper’s data was fabricated or obtained unethically or both. It’s not terribly clear.
I think that ambiguity is deliberate. They’re trying to get exposure using “student used AI” ragebait by implying he did it. I’ve never been a reader of The Wall Street Journal, but it seems like a horseshit publication.