Scientists warn against reading too much into a small experiment generating a lot of buzz.
It definitely does in my experience. I have intentionally used it for specific tasks for defined periods of time, and then stopped and used only my normal online search tools and a text editor without AI assistance. My projects were written concept development, plus some light coding to create utility scripts.
From my own experience alone, there is definitely a real cognitive hazard associated with using LLMs for anything but the most specialized tasks where an LLM is really warranted.
The scripts worked fine, as they were quite simple Python utilities for some data cleaning, so I see a use there. But I found that the concepts never caught fire in my imagination, whereas usually a good share of concepts developed manually turn into something that gets a deeper treatment, or at least a prototype design.
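(For a sense of scale, this is roughly the kind of simple data-cleaning utility being described; the file names and cleaning rules here are hypothetical illustrations, not the commenter's actual script.)

    # Sketch of a simple CSV-cleaning utility: trims whitespace and drops empty rows.
    # File names and rules are made-up examples for illustration only.
    import csv

    def clean_rows(rows):
        """Strip whitespace from every field and skip rows that are entirely empty."""
        for row in rows:
            cleaned = [field.strip() for field in row]
            if any(cleaned):
                yield cleaned

    def clean_csv(in_path, out_path):
        with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
            writer = csv.writer(dst)
            writer.writerows(clean_rows(csv.reader(src)))

    if __name__ == "__main__":
        clean_csv("raw.csv", "cleaned.csv")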
I find myself thinking harder and learning more when I use AI. I'm constantly thinking about what I can do to double-check it. I constantly look at what it writes and consider whether it did the task I asked it to do or the task I need done.
I’m on track to rewrite 25,000 lines of code from one testing framework to another in 3 days, and I started out not knowing either framework and not having really written TypeScript in years. And I’m pretty sure I could write the tests from scratch in my primary project, which is just getting started.
This one anecdote doesn’t disprove a study, of course, but it seems to me that the findings are not universally true for some reason. Whether it’s a matter of technique or brain chemistry, I don’t know. Ideally, people could be taught to use AI to improve their thinking rather than supplant it.
Does walking change your brain activity?
There seem to be hundreds of studies on that, and the answer is a fairly uniform “Yes” and “More than you would guess,” etc.
Here is one: https://journals.sagepub.com/doi/10.3233/ADR-220062
Yeah, it pumps more blood. You don’t need more of an explanation than that.