Is using AI making us dumber? And can we do something about it?
Decoding MIT's research on the cognitive debt built up by using LLMs, and practical strategies to combat it.
What did the MIT research actually say?
The MIT Media Lab released a research report on how using LLMs impacts cognitive processes. They asked students to write essays and divided them into three groups for the study. The first group was brain-only: students couldn't use any tools, not even Google. The second group was allowed to use search engines, and the third group was allowed to use LLMs. After three sessions of essay writing, the brain-only group was reassigned to the LLM group and vice versa. The researchers used EEG scans to record neural activity, conducted interviews with the students, and had the essays scored by evaluators.
The researchers found that students in the LLM group showed less neural activity and connectivity than students in the other two groups. The LLM group also reported a lower sense of ownership of the work they had produced and were less able to quote from the essay they had written only minutes earlier. Previous research has likewise shown a decline in critical thinking and problem-solving skills over time in people who use AI to offload cognitive tasks.
Is the research relevant for me if I am not a student writing an essay?
The short answer is yes.
Essay writing typically involves research, analysis, summarization and synthesis of information. It offers practice in critical thinking and decision making, life skills that apply across tasks and jobs. We have all been folding AI into our day-to-day personal tasks as well as our work, so it is worth understanding the long-term impact on our ability to think independently and creatively.
From oral to paper to digital to search to AI: Isn’t this just a fear of technology?
When search engines became ubiquitous, there were similar concerns about declining cognitive skills, especially memory. And those concerns have proven true in some sense: with the widespread use of search engines, people no longer remember the information itself but remember how to find it instead. This is called the Google Effect.
“We remember less through knowing information itself than by knowing where the information can be found.” Betsy Sparrow, Columbia University psychologist, in a study published in Science.
There is a difference between using search engines and LLMs. With a search engine, a user looks at divergent views on a topic, does their own reading and understanding, and forms their own summary and opinion. With an LLM, the information arrives already analyzed, categorized and summarized. The effort required to find, read, understand and reconcile different sources is outsourced to the LLM. And because LLMs are trained to stay on topic and be efficient, the opportunity to wander off on tangents and broaden our understanding gets limited.
Research has also shown that dopamine pathways are activated during web searches, as users enjoy the feeling of finding the information they are looking for.
A personal confession: I find it hard to read content I generated entirely with AI; I skim over it. Whereas an email I believe I have written particularly well, I go back and re-read multiple times, congratulating myself.
What happens if we keep using AI for all cognitive work?
Taking a black-and-white view to illustrate the extreme consequence: there can be no new research if we keep leaning on AI and never learn how to investigate something new, something AI doesn't have the answers to. But that kind of total domination of AI over our thinking is neither near nor inescapable.
A more important aspect to consider is self-efficacy beliefs. Self-efficacy is an important component of motivation: it determines whether people feel confident in their knowledge and their ability to act. The LLM group in the study did not feel confident in their ability to do the work, because they had not actually done it; they had gotten AI to do it. This has implications for how we build skills in the future.
Writing something on your own provides clarity. Once you put it down on paper, it frees up cognitive resources and the thoughts stop spiraling inside your mind.
“I write entirely to find out what I am thinking, what I’m looking at, what I see and what it means.” Joan Didion in Why I Write.
Cognitive Load Theory
The main theory the researchers used to connect the EEG results to cognitive decline was Cognitive Load Theory, which says there are three types of cognitive load during learning and problem solving:
1. Intrinsic cognitive load, which depends on the complexity of the topic and the learner's prior knowledge of it
2. Extraneous cognitive load, which depends on how the information is presented
3. Germane cognitive load, which depends on how information is digested and stored in the mind as a schema, integrated with what we already know
High intrinsic and extraneous cognitive loads can worsen learning, while a higher germane load improves it. The MIT study found that using LLMs reduced all three types of load, and it is the reduction in germane cognitive load that points to the possibility of cognitive decline in the future.
Why are these skills important?
Reading, deep thinking and critical reasoning are at the core of how we learn new things. The only way to survive and grow in today's world is to stay adaptive and keep learning. If AI takes away our ability to learn new things effectively, any edge we gain from using AI will not be relevant within a year anyway.
Simple principles to counteract the negative impacts of using LLMs
Another interesting finding of the study was that higher-competence users use LLMs differently: they stay actively engaged in learning even while using an LLM for their task. Lower-competence users use it in an ad hoc manner, typically outsourcing every step of learning and problem solving to the LLM and just taking the final output. This, I believe, is the key to continuing to use LLMs without suffering the negative cognitive impacts.
After reading this research paper, the first thing I did was switch off AI Overviews in my Chrome browser. I followed the step-by-step guide on androidauthority.com here.
Over time, I have committed to making small, practical changes to how I use AI. Of course, many times I am just lazy and fire off a high-level three-word prompt, but it is practice over perfection.
Note: I am using ChatGPT here, but substitute whichever LLM you use.
Get ChatGPT to help you build a better prompt
Before giving ChatGPT the task, describe the context and objective, then instruct it to ask you at least 10 questions so it has all the information it needs to produce a high-quality output. While answering those questions, you will get clarity on the dicey issues (who is the audience, what is the objective, what constraints do we need to work with), which will spur your own ideation.
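If you interact with an LLM through its API rather than the chat window, the same habit can be baked into the call itself. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name, prompt wording and example task are illustrative assumptions, not anything prescribed by the MIT study.

```python
# Minimal sketch: make the model interview you before it produces anything.
# Assumptions: OpenAI Python SDK (pip install openai), OPENAI_API_KEY set in the
# environment, and an illustrative model name; substitute whichever LLM you use.
from openai import OpenAI

client = OpenAI()

task = "Help me draft an email announcing a change to our team's on-call rotation."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; swap in your model of choice
    messages=[
        {
            "role": "system",
            "content": (
                "Do not produce the deliverable yet. First ask the user at least "
                "10 clarifying questions about the audience, objective, tone and "
                "constraints, then wait for their answers."
            ),
        },
        {"role": "user", "content": task},
    ],
)

# You then answer these questions yourself before any drafting happens,
# which is where the thinking (and the germane cognitive load) stays with you.
print(response.choices[0].message.content)
```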
Read the answer from ChatGPT and write it in your own words
If you use AI to give you a first draft, read it thoroughly and then write your own version of it, instead of copy-pasting the draft directly.
Instead of asking for the output, ask for questions about the topic
Use ChatGPT to brainstorm questions, perspectives and frameworks to consider, so that you can generate new ideas on your own.
Look into a diverse set of views on the topic
Ask ChatGPT to list controversial opinions on the topic, hot takes, lesser-known facts and alternate theories, so you avoid falling into an echo chamber.
Ask ChatGPT to explain best practices
Instead of asking for the output directly, ask it for the best practices for doing the task and take a shot at it yourself.
Get ChatGPT to critique, not rewrite
Ask ChatGPT to critique your draft and give suggestions for improvement, and specifically ask it not to give examples or do any rewriting.
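The same constraint can be expressed programmatically. Here is a minimal sketch under the same assumptions as the earlier one (OpenAI Python SDK, illustrative model name and wording); the draft file name is hypothetical.

```python
# Minimal sketch: request critique only, with rewriting explicitly disallowed.
# Same assumptions as before: OpenAI Python SDK, API key in the environment,
# illustrative model name; "my_draft.txt" is a hypothetical file you wrote yourself.
from openai import OpenAI

client = OpenAI()

with open("my_draft.txt", encoding="utf-8") as f:
    draft = f.read()  # a draft written by you, not by the model

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; swap in your model of choice
    messages=[
        {
            "role": "system",
            "content": (
                "Critique the user's draft: point out weak arguments, unclear "
                "passages and structural problems, and say what to improve. "
                "Do not rewrite any sentence and do not provide example text."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

# Suggestions only; the rewriting stays with you.
print(response.choices[0].message.content)
```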
No screenshots, write the ask in your own words
Instead of copy-pasting a screenshot and expecting ChatGPT to figure out the context, write out your exact query, and don't hand it your own view or assumptions.
Everyone wants to use AI, no one wants to sound like AI
If you need motivation to start using these principles, think about how you can tell, almost viscerally, when something has been written by AI and how quickly you lose interest. There is now a whole subcategory of content detailing the ‘AI tells’ in your writing and how to fix them. Ask yourself this question: if you didn't put any effort into writing it, why should anyone make an effort to read it?
Thumbnail Photo by Cash Macanaya on Unsplash