
Are Chatbots Eroding Our Thinking Abilities?

Chatbots can certainly make getting answers easier, but at what cost to our brains?

Last year, researchers at the MIT Media Lab shared preliminary research investigating the cognitive costs of using a chatbot to write essays. Though the study hadn’t yet gone through peer review – where other scientists check the quality of the work and help spot errors – it still made headlines in major news outlets like TIME Magazine.

At first glance, the study appeared to vindicate an idea that felt right: generative AI applications, which include large language models like ChatGPT, are rotting the brain. It’s certainly depressing to look under any mundane, asinine post on the social media site formerly known as Twitter and see an army of users outsourcing their critical thinking by asking, “@Grok, is this true?”

But is it actually damaging our brains? The story is more complicated and nuanced. The authors of the MIT study themselves point to several caveats: the study was small, the conclusions only apply to the essay-writing task and might not generalize to other tasks in the real world, and more studies are needed to track the long-term impact of AI use.

Curiously, the MIT Media Lab is no longer taking part in media interviews about the study after a change in internal policy. Sarah Beckmann, who runs communications at MIT Media Lab, told Refractor that “we don’t do media interviews or promote research that hasn’t yet been peer-reviewed or formally published. This helps us ensure the integrity and accuracy of the science we share with the public.”

So we reached out to a range of psychology and social science experts to put the research into context.

How are chatbots affecting learning?

Web search first became widely available in the 1990s, fundamentally changing the way people found information. Rather than going to a library and searching for information in books manually, anyone with an internet connection could get a curated list of websites containing an appreciable percentage of the entire world’s knowledge.

Psychologists discovered some people wouldn’t remember simple facts that were easily searchable and became reliant on search engines to draw out this knowledge. In 2011, psychologists coined a name for the phenomenon: the Google Effect.

With many students, teachers, and professors using generative AI, does it have a similar impact on the way we learn and remember information?

Like Google, chatbots facilitate cognitive offloading – outsourcing memory and thinking to digital tools. Where Google mostly offloads memory and still requires active interaction from users to find information, chatbots provide a far more passive user experience.

If I ask ChatGPT to explain how brain cells work, instead of providing a list of sources, it will write out a short, structured explanation almost instantaneously. Anecdotally, and judging by click-through traffic from ChatGPT and other popular LLMs, most people don’t ask these chatbots for sources, or check that the information they provide is accurate.

In Google’s heyday, teachers taught students how to check that the sources they found online were reliable. The large language models that power chatbots produce output that’s harder to assess: they compare the patterns of words in a query to their training data, using statistical models to predict the most likely text to answer with.
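That prediction step can be pictured with a toy sketch. This is a deliberately simplified illustration with made-up scores for a handful of candidate words; real models use neural networks to score tens of thousands of possible tokens at every step.

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates and scores for continuing the phrase
# "Brain cells communicate using..." — the numbers are invented.
candidates = ["signals", "bananas", "chemicals"]
scores = [4.0, 0.1, 2.5]
probs = softmax(scores)

# Greedy decoding: pick the highest-probability continuation.
best = candidates[probs.index(max(probs))]
print(best)  # -> signals
```

The point of the sketch is that the model never "looks up" an answer; it emits whichever continuation its statistics rank as most likely, which is why fluent-sounding output can still be wrong.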

The way large language models work makes them prone to homogenization, meaning the answers they provide tend to become more similar over time. Chatbots are also designed to build a rapport with you, and are often sycophantic to keep you using them longer.

Because of the way they work, Dirk Lindebaum, professor at the University of Bath, told Refractor that large language models fundamentally undermine our epistemic agency: the ability to acquire, evaluate, synthesize, and take responsibility for information. Rather than epistemic agents, he argues, chatbots transform us into passive consumers of knowledge.

Lindebaum also cautions that LLMs will have a negative impact on the ability to think critically about our thought processes, an ability called metacognition.

“There is emerging empirical evidence that suggests that the use of AI for learning and task completion purposes undermines our metacognitive skills,” said Lindebaum. “It may lead to a short-term boost on specific tasks,” but in the long run undermines the ability to ask the kinds of important questions that provide context to information.

Still, many university professors are incorporating this form of AI into the classroom. Many argue that its expansion into everyday life is inevitable, that the workforce of tomorrow will need AI skills, and that it’s therefore imperative to incorporate it into tertiary education.

“I am very skeptical if I hear esteemed academic colleagues using the same mantra, because it brings marketing claims of AI companies into academia and into academic debates,” said Lindebaum. “If I were to accept the inevitability of AI, I would at the same time abdicate my epistemic responsibility as a social scientist.”

Lindebaum has applied to trademark “AI-free scholarship” as part of his commitment to ensuring the research and education he provides is free of AI.

The cost of offloading cognitive tasks

As early as 2023, professor Mathias Stadler of LMU in Munich, Germany, started wondering about the impact of large language models on his students. He conducted one of the earliest studies of chatbots before they became ubiquitous. Stadler recalls having to leave instructions so that the study participants understood how to use ChatGPT.

The study involved giving 91 students a task to research nanoparticles in sunscreen, something they weren’t very knowledgeable about, and afterward, without notes, write an explanation. Half used Google (before it integrated AI answers) and the other half used ChatGPT. While the students who used the chatbot felt the task was very easy, “they gave surprisingly bad answers,” Stadler told Refractor.

Their poor performance, he believes, reflects the cost of cognitive offloading to chatbots. Now, with new versions of chatbots that provide more detailed answers, he worries that people may have got better at offloading work to these chatbots, and they might also believe the information they’re getting is even more accurate.

The effects of chatbots on cognitive offloading and critical thinking might also depend on individual factors. A study published this year in Academy of Management Learning & Education, a journal Lindebaum edits, found exactly that.

Researchers recruited 150 business school students. First, all of them completed a case study without any AI. Then they were given a second case study, but this time half of the students were allowed to use ChatGPT. Those who did poorly on the first task benefited from using the chatbot, while those who performed well showed diminished performance with the chatbot.

In this case, Lindebaum believes that chatbots speed up task completion while reducing knowledge translation, explaining why poor students performed better while better students performed worse with ChatGPT’s assistance.

Cognitive offloading may have an impact on critical thinking as well. A 2025 study surveyed more than 600 individuals about their AI usage and also tested their critical thinking skills. The participants who used AI the most reported higher levels of cognitive offloading and scored lowest on critical thinking.

Another randomized, controlled experiment with a cohort of 160 students is in the works to understand the impact of using generative AI on cognitive effort in a writing task. The study includes eye-tracking and brain imaging to collect objective data on which parts of the brain are activated while thinking through the task.

“It’s fast food,” said Stadler. In a pinch, fast food is better than nothing. Eating nothing but fast food, on the other hand, is far from healthy.

Do we need to get used to AI?

Many of the experts who spoke with Refractor believe that people need to learn to use AI – they believe it's here to stay.

Barbara Larson, a professor at Northeastern University, teaches her students critical thinking because she said it may help them learn to use AI wisely. Learning about the limitations of LLMs, pushing back on their responses, and asking them to “explain” and source their output could minimize some of the negative outcomes.

Not everyone buys into the hype. Although generative AI is near-ubiquitous in 2026, companies like OpenAI and Anthropic are burning through billions of dollars year over year. Despite these concerns, many universities around the world are signing contracts with AI companies to provide access to the chatbots to students and faculty.

Some faculty members are pushing back against their university administrators, hoping to stop them from renewing contracts with AI companies. “Many academics seem to confuse marketing claims of AI companies with scientific facts,” said Lindebaum. “That is something I find rather bewildering.”

Without a clear understanding of the risks, some may default to using chatbots because they are easier. Many students and researchers are using these models without proper disclosure. The matter is complicated by the lack of reliable AI detection tools.

For USC associate professor and researcher Cheryl Wakslak, the opportunity to study the impacts of generative AI could help the social science field find redemption after failing to proactively study the impact of social media.

“We need, as a social science community, to do this better this time,” Wakslak told Refractor.

So far, explained Wakslak, social scientists and psychologists have captured the effects of chatbots in a few very specific moments in time. She and other scientists are still gathering data to understand what happens with continuous AI use and how it impacts the brain.

Another reason there’s no clear answer to the question is the pace at which large language models are updated and released. By the time well-conducted research is published, the AI chatbots it used are already obsolete, explained Stadler. But if researchers rush a study, it won’t be rigorous.

The slowly evolving scientific verdict on generative AI is not yet satisfying and doesn’t lend itself to catchy headlines. But while there aren’t many rigorously designed, peer-reviewed studies on the long-term effects these chatbots have on the brain, there’s more than enough reason for experts to raise concerns.
