Prompted by Pete from DragonChasers, Nait recently wrote about AI (mostly LLMs, though the terms get used interchangeably these days), so I figured I’d dig up an old draft of mine that talks about LLM Dependency, something that Pete calls “LLM Brain”.
For Context
Over on Mastodon, Pete got inspired by a post from a user named Jonny who was talking about how one of his programming students puts every single problem into ChatGPT without ever thinking for themselves. He describes this as “LLM Brain”.
Pete then decided to talk about the term “LLM Brain”, which is used pejoratively to describe someone who habitually feeds every error or coding problem into ChatGPT without stopping to think independently.
Pete rejects the idea that it’s a bad thing; to him, it’s merely a tool that aids you. He compares LLMs and AI to calculators, personal computers, and cell phones, notes that he’s still OK after having used all of those, and concludes that LLMs are just another tool in that sense.
Naithin, however, dives deeper into the original post that prompted Pete’s: the issue wasn’t merely a programming student solving problems with ChatGPT, but rather that the student was there to learn how to program in the first place, and ChatGPT was keeping them from actually studying. You know, it’s in the name of “student”, innit?
And well, Naithin takes a more nuanced stance here, agreeing that AI provides benefits while also warning of its dangers.
The Kool-Aid
So, this is something that Nait touches upon as well, but comparing LLMs to calculators, personal computers, and mobile phones doesn’t inherently work, because you typically use them in very different ways.
Let’s take an abacus, for example: It helps you count things. That doesn’t mean that you forget how to count through its use but rather that it aids you in counting. It’s a tool. An aid.
A calculator basically just speeds things up by helping you calculate (duh) – but to use it, you need to know how math works and where the numbers go in your formulas. You can program calculators to do all the annoying stuff, but you still have to use your brain to understand the results, the units, and where to go next with said results.
Let me link a video right below here where a Verizon rep has all the numbers necessary for a calculation and proceeds to work out a customer’s bill using a calculator, only to hand them a bill for 71 dollars instead of 71 cents. They did use a calculator, and they did nothing wrong with that calculator. They still got the wrong result.
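To make that units mix-up concrete, here’s a minimal sketch of the arithmetic. The exact rate and usage figures are my assumptions based on the widely circulated “Verizon math” call (a quoted rate of 0.002 cents per kilobyte), not numbers taken from the video itself:

```python
# The calculator's one job here: multiply rate by usage.
# Both figures below are assumptions, not from the video.
rate = 0.002        # quoted as 0.002 *cents* per kilobyte
usage_kb = 35_893   # kilobytes of data used

result = rate * usage_kb
print(result)  # 71.786 ... but 71.786 *what*?

# Read as cents (correct):  71.786 cents, i.e. about 0.72 dollars.
# Read as dollars (billed): 71.79 dollars, off by a factor of 100.
# The calculator did its job perfectly; keeping track of the units
# was entirely the human's responsibility.
```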
So, similar to an abacus, a calculator helps with math, but it doesn’t replace your brain completely, as you still have to manage the units and think about what calculations you have to make in the first place. Wanna calculate the third side of a right triangle when only the other two sides are given? You can use the calculator for that, but the calculator won’t just tell you how to calculate it. You still need the fundamental knowledge of how math is mathing.
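For that triangle, the fundamental knowledge the calculator won’t hand you is the Pythagorean theorem. Here’s a minimal sketch (assuming a right triangle, with side lengths picked purely for illustration):

```python
import math

# Pythagorean theorem: for a right triangle with legs a and b,
# the hypotenuse c satisfies c**2 == a**2 + b**2.
a, b = 3.0, 4.0       # the two known sides (made-up values)
c = math.hypot(a, b)  # same as math.sqrt(a**2 + b**2)
print(c)              # 5.0

# The calculator happily takes the square root for you; knowing that
# this is the right formula (and when it applies) is still on you.
```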
Large Language Models, however, don’t require that fundamental mathematical knowledge. If you describe the problem with the two sides of a triangle and the third side as an unknown, the LLM will immediately explain how to solve it and then give you the answer. The issue is that most people who use LLMs these days will ignore the “why” and the “how” and just take the answer.
Jonny, the original poster, says in regard to this: “The problems I’m seeing from someone I am currently teaching are indistinguishable from illiteracy…”
As such, let’s dive a bit deeper into other dangers of overreliance on tools.
Cognitive Offloading Overuse and LLM Dependency
For starters, I will call this LLM Dependency, but in the scientific literature, it is often referred to as Cognitive Offloading Overuse. COO isn’t an LLM-specific term, btw. Cognitive offloading describes the phenomenon where the use of external tools reduces mental effort (like using a calculator or writing notes). Cognitive Offloading Overuse, however, is when folks offload too much, leading to an erosion of internal problem-solving or writing skills.
To use another example: when people use Google Maps or other navigation apps too much, they offload spatial memory to GPS and, as a result of COO, might not be able to get around without it. These days, people in my city don’t know where any of the streets in their neighbourhoods are because they rely so much on GPS. I get not knowing all streets by heart, but the neighbourhood should be the bare minimum, no?
Similarly, scientists have observed that people who overuse calendars and reminders have a harder time remembering dates. Cognitive Offloading Overuse is a serious problem.
And as such, we are seeing very similar results with LLMs. People don’t just look up how to do something; instead, they jump straight to the solution so they don’t have to bother with the path to it in the first place.
In this study here, MIT researchers examined the cognitive cost of using ChatGPT while writing essays.
To investigate this, they essentially split 54 people (ages 18-39) into three groups:
- LLM Group: People used ChatGPT to “assist” in essay writing
- Search Engine Group: People used Google or similar tools for their writing
- Brain-Only Group: No external tools were used here.
So, they had these folks write essays in three sessions with consistent tools, and then in the 4th session, they made LLM users switch to Brain-Only, and vice versa.
Anyway, the key findings, measured via EEG, were essentially that the Brain-Only writers showed the most active and widespread neural connectivity, with Search Engine users ranking intermediate and ChatGPT users showing the weakest engagement, particularly in alpha- and theta-band brainwave connectivity, which is linked to problem-solving and creativity.
The study also goes into detail on how LLM users reported low ownership of their essays and how they struggled to quote their own writing, far more so than the other groups. At the same time, the essays from the LLM group were a lot more uniform and less original. There were also other findings, but those are less important for this post, I think.
As a side note, the study is not peer-reviewed yet, the sample size is pretty darn small, and the participants were mostly students from nearby universities… which means it shouldn’t be used to overgeneralize or anything.
That said, it does show trends in line with observations made in other studies. The “Your Brain on ChatGPT” study reveals reduced neural engagement, poorer memory retention, and diminished creative originality when using these tools, at least amongst the participants in the study and compared only to other participants.
There’s a wealth of other research into how reminders, cell phones, smart watches, calendars, and navigation services impair a lot of cognitive skills. Yes, people can still write by hand and calculate things even after phones came into existence and went mainstream, but there are a lot of other dangers, including shrinking gray matter in the hippocampus.

A Cautionary Tale
In my own bubble, I’ve heard stories similar to what Jonny talked about. People who are otherwise great programmers and very talented individuals nowadays just put everything into ChatGPT to fix whatever coding errors they have. It will spit out bad or inefficient solutions, often even contradicting itself, and these people would notice… if only they didn’t trust ye ol’ machine as much as they do. And that’s a shame.
Yes, of course, one shouldn’t rely on it completely, and as Pete states, using it simply to avoid having to Google some technical problem you have no idea about is definitely an option… but if you work as a programmer, or if you’re trying to learn how to program, then you should put at least a little effort into it, no?
Similarly, I study English Studies at university, and I wasn’t sure if it’s cautionary or cautionairy. I always get confused by that, so my instinct was to google it… but in line with this very post, I decided to instead pull out the big guns… a dictionary (the word, btw, is spelt without the “i”). I found the word very quickly and also found out that caustic is pronounced differently from how I thought it was pronounced…
ANYWAY, it wasn’t that hard to just look up a word in a book, and while it may have been impractical and definitely unnecessary… I don’t regret doing it for the bit. I needed the screen break anyway and got to get up and walk to the bookshelf. I also saw other words, and I’m pretty darn sure that I’ll remember looking up this stupid word in the future.
cautionary
huh.
Long story short, LLM Brain, or COO, is a phenomenon we can definitely see. Shrugging it off with “I used all these other tools and I’m still okay” is possibly a little dangerous, especially since we already have decreased memory capacity, lower attention spans, and, in many ways, other problems with our thinking skills because of how much we offload. Kids these days, eh?
Sometimes I don’t know how to google something when I’m looking stuff up, especially with English not being my first or second language, so I use ye ol’ GPT to find the term in question, which I can then use to do my own research. In the same way, I use it on the German used in paperwork to get a quick explanation that even a five-year-old kid could understand.
But I wouldn’t rely on it more than that, really, because it can put out a lot of false information.
LLMs predict, they don’t think.
That’s our job, after all. We should do the thinking because LLMs cannot do that for us – and sadly, there are a lot of people who don’t think at all and just use LLMs. They’ve become slaves to the machine.
This post was originally written by Dan Dicere from Indiecator.
If you see this article anywhere other than Indiecator.org then this article has been scraped. Please let me know about this via E-Mail.

Comments

Is this assumption true? How would we know? (I don’t disagree, by the way; it is frightening how much some people accept without questioning.)
How can we teach them better so that they don’t “just take the answer”? Why do they just take the answer?
—
I was fortunate enough to have an education where teachers did instill critical thinking skills and taught us to question text (or other media) presented to us. The bias and slant on US news channels is mind-boggling, for example.
It’s not just an AI/LLM-specific issue. People are falling afoul of scams (some of them prettified by AI used for nefarious means). People get sucked into predatory microtransaction mobile games or choose to play gacha games. Students are arriving at higher education institutes lacking skills needed for university-level courses (writing, math, knowing what files and folder structures are).
[And are some of those things I listed above really an issue, if the times have moved on, or if the person walks into it with eyes open? Should we prescribe or legislate that they need protection from themselves?]
There are a bunch of studies showing that most users (students especially) tend to use it almost exclusively to summarise text, get ready-made answers, and even get essay/homework drafts. Part of the reason is apparently the satisfaction you get from it, since ChatGPT and other LLMs are made to give out answers with a servant-esque attitude, although that is just speculation on a bunch of researchers’ parts. That said, there are people who use it to learn, and they tend to use it unquestioningly. Media literacy is dead. Apparently, people also just trust the bot even when it contradicts itself.
As for the “how can we teach them better”: we can… using human tutors. In studies comparing human tutors with ChatGPT as a tutor, learning gains are a lot higher with human tutors, and material taught by human tutors/teachers tends to stick around far longer than anything the machine puts out. I’ve noticed that myself while trying to learn some programming again using the machine. I don’t learn any of it. So, I’m now instead using free resources online to teach myself how to do what I want to do, and it’s been much better. The stuff I learn from is still online, but it is curated by humans rather than predicted by machines.
I definitely don’t think that ChatGPT in its entirety is a bad thing. As I mentioned, there are cases where I use it… but I don’t use it for everything, because of how much it can impede us humans.
—
Yeah, critical thinking is important but most students have unsupervised and unrestricted access to the internet which results in them getting fucked up. When I tutored a kid who was on the brink of failing school because of her bad grades, I literally told her parents to take away her phone after our lessons were over because she’d just drown out everything she learned with YouTube, social media and the like… and it worked. Tech’s not great for us.
Heck, when my phone was broken and I had to wait for a replacement, life felt a lot worse for me. Having to look up streets at home before heading out, not being able to reach family at all times, not knowing the time… stuff like that. I felt lost.
Yeah, society as a whole is struggling. AI is also being used to fake people’s voices, so banks over here don’t permit any actions to be taken via phone anymore. I don’t think it’s people’s fault for falling for predatory microtransactions in games; rather, those games are specifically designed to not let you go once you’ve tried spending even a minuscule amount on them. Those kinds of games prey on a specific type of person rather than all people. Also, gacha games don’t have to be bad. It’s just that most of them tend to be.
And yes, way too many of my fellow students just don’t know how to speak English fluently, despite majoring in English.
“Times moving on” is an odd way of describing this affair. For one thing, I don’t think students should be allowed to use this while unsupervised. The internet as a whole, while we’re at it. Parents are at fault for their kids turning out this way, plain and simple. Beyond that, though, companies should also not be allowed to use people’s data, blog posts, artwork, etc. to train AI without any consent and/or repercussions. It’s a huge breach of both copyright and privacy. Somehow, however, all these companies are getting away with it, and that’s really bad. It doesn’t bode well for humanity.