The Cognitive Debt of Using LLMs
The hidden trade-off every time you reach for ChatGPT
Something weird is happening. Ever since I started using LLMs, I have felt more productive, yet I’m not retaining information nearly as well. It has become harder to sit down, focus, and actually learn things, even as my output keeps going up. Are these separate problems, or are they connected? Are AI tools actually harming our capacity for learning?
“AI tools, while valuable for supporting performance, may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material. If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it.”
This is a direct quote from an MIT research paper1 published in June 2025 on the cognitive effects of using LLMs. Participants were asked to write an essay and were split into three groups: a brain-only group, a search-engine-only group, and an LLM-only group. Afterwards, the participants were interviewed about their experience, and the essays were scored on a range of metrics and also reviewed by human teachers.
The Cognitive Effects of using LLMs
The study has five key outcomes, and they are all connected. Most likely you already sense them intuitively, but it is important to put them into words explicitly:
By using an LLM instead of writing a text yourself, you massively decrease the ownership you feel over the result. That is understandable, but it also decreases your retention: you can barely quote anything from the text and even find it hard to remember its core thesis.
The ideas expressed in LLM-dominant essays are also less diverse and more homogeneous than those in authentically written texts. On top of that, readers exposed to your LLM-written text can recognise that it was not written by a human, even if they cannot explain exactly why. And perhaps the most important result of all: almost no long-term learning is achieved by using an LLM to write a text, compared to writing it yourself.
The Implications
What are the implications of these results when you use LLMs to do (part of) your job? For whatever part of the work you hand to the LLM, you incur a cognitive debt: a debt of foregone effort on the way to the output. The next time you have to produce the same output without an LLM, it will be cognitively just as hard as if you had never done it before.
Of course, LLMs are not going away, so you can simply use the LLM again. But producing a presentation, writing a report, or doing an analysis is rarely the goal in and of itself; it is quite often a means to an end. Next week you need to present the next step of the project, or follow up last week’s analysis with a deep dive into a specific topic. That’s where we start running into issues. If we can’t remember the previous LLM-generated text that well, don’t actually feel ownership of it, and haven’t learned from last week’s output, how can we progress? Does that mean we should stop using LLMs altogether? But by not using LLMs, don’t I put myself in an unfavourable position with my manager versus the colleague who does use LLMs and seems so much more productive?
Luckily, the MIT study suggests a great, yet surprising, solution. If we come up with the ideas or write the first draft ourselves, we have already put in the cognitive effort. If we then use an LLM for editing, sparring, or extending the idea or text, we retain most of the benefits of the brain-only group.
This lines up with a recent Economist article on the impact of AI on different white-collar jobs2. We already see this playing out: “Roles that combine technical expertise with oversight and co-ordination have enjoyed the biggest gains. [...] Other occupations which combine deep expertise in maths-related fields with problem-solving are also thriving.”
Here we see the core of the idea: roles combining deep expertise with oversight and co-ordination have seen the biggest gains since the advent of LLMs. Firstly, you can only oversee work that you understand, so the quality of the output you can get out of an LLM is capped by your own expertise; this is why investing in your skills pays off long-term. Secondly, oversight and co-ordination become the main task, rather than full automation of your own work.
How to Decide When to Use AI
So how do we know which tasks can bear the cognitive debt of using an LLM because they aren’t that interesting, and which tasks cannot, because we need the cognitive investment to build on that knowledge next week, month, or year? The secret to investing, and to knowledge and skill building in general, is compound interest. Getting 1% better every week adds up in the long run, so putting the effort where it counts is the most important skill. It will make you more capable of overseeing and leveraging AI in that domain than someone who does not have the same level of skill.
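The compounding claim is easy to verify with quick arithmetic. A minimal sketch in Python (the 1% weekly figure is the illustrative number from the paragraph above, not a measured value):

```python
# Quick check of the "1% better every week" compounding claim.
# Small weekly gains multiply rather than add up.
weekly_gain = 0.01          # 1% improvement per week (illustrative)
weeks = 52                  # roughly one year

compounded = (1 + weekly_gain) ** weeks   # multiplicative growth
linear = 1 + weekly_gain * weeks          # what naive addition would give

print(f"Compounded after a year: {compounded:.2f}x")  # ~1.68x
print(f"Linear after a year:     {linear:.2f}x")      # 1.52x
```

The gap between the two numbers only widens with time, which is why consistently investing effort in the skills you keep reusing pays off disproportionately.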
In other words: let’s use a physical analogy. You have a heavy weight that you need to lift. The important thing to know is: are you in the gym or in a warehouse? In the gym, the effort is the point; that’s how you get stronger. In a warehouse, the effort is just an obstacle, so use a forklift.
This is the question we constantly need to keep balancing in our minds when using LLMs. Are we being rewarded only for the output, or is there an intrinsic long-term personal benefit in going through the process of generating it? That essay you had to write in high school on penguins wasn’t assigned because the teacher wanted to learn more about penguins (they didn’t care about penguins); it was assigned to teach you the process of reading, organising information, and presenting it in a coherent manner.
So yes, by using LLMs we are becoming more productive and harming our capacity to learn at the same time. That is the debt we are incurring, and every debt comes due eventually. But instead of incurring it unconsciously, you can now make the trade-off deliberately every time you decide to reach for an LLM.
1. N. Kosmyna et al. | 10 Jun. 2025 | Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
2. The Economist | 26 Jan. 2026 | Why AI won’t wipe out white-collar jobs
