
Is ChatGPT actually making us less intelligent and more complacent?
Since the launch of ChatGPT in 2022, generative AI has rapidly integrated itself into our work, studies, and personal lives, accelerating research, content creation, and much more at an unprecedented pace. Interest in generative AI tools has understandably surged, with adoption outpacing that of the internet or the personal computer. However, experts caution that we should proceed with care. Like any new technology, generative AI has the potential to propel society forward, but it can also cause real harm if not properly managed.
Natasha Govender-Ropert, Head of AI for Financial Crimes at Rabobank, is one of those voices. She joined TNW founder Boris Veldhuijzen van Zanten in a recent episode of “Kia’s Next Big Drive” to discuss AI ethics, bias, and the potential of outsourcing our cognitive abilities to machines.
Consider this question: as we increasingly turn to generative AI for answers, what effect could that dependence have on our intelligence? A recent MIT study examining the use of ChatGPT for essay writing spawned a string of sensational headlines, such as “Researchers claim using ChatGPT can damage your brain” and “ChatGPT might be making you lazy and less intelligent.” But is that actually accurate?
Here’s what the research actually found: 54 students from the Boston area were given an essay-writing task. One group used ChatGPT, another used Google (without AI assistance), and the third relied solely on their own thinking. Their brain activity was monitored with electrodes while they wrote. After three sessions, the group that wrote without any tools showed the highest levels of neural connectivity, while the ChatGPT users showed the lowest. Those using AI seemed to be operating on autopilot, whereas the other participants had to exert more mental effort to compose their essays.
In the fourth session, the groups swapped roles: the brain-only group could now use ChatGPT, while the AI group had to rely on their own abilities. The outcome? The former improved their essays, while the latter struggled to recall what they had originally written. Overall, across the four months of the study, participants who wrote without any tools outperformed the others on neural, linguistic, and behavioral metrics, while those using ChatGPT spent less time writing and often resorted to copy-pasting.
Teachers who assessed the essays noted that they lacked originality and “soul.” That may sound alarming, but the reality is more nuanced than the sensational headlines suggest. The findings point less to brain deterioration and more to mental shortcuts: over-reliance on large language models (LLMs) can reduce engagement, but careful, deliberate use may mitigate these risks. The researchers also stressed that while the study raises intriguing questions for future research, it was too small in scale and too simple in design to support definitive conclusions.
The decline of critical thinking?
The results (which still await peer review) do call for more extensive research and reflection on how we should use this tool in educational, professional, and personal contexts. But it may be the misleading, clickbait headlines that are doing the real damage to our critical thinking skills.
The researchers share these concerns: they set up a website with an FAQ section urging journalists to avoid inaccurate and sensationalized language when reporting on their findings.
Ironically, they attributed much of the ensuing “noise” to reporters using LLMs to summarize the study, adding: “Your HUMAN feedback is very welcome if you read the paper or parts of it. Also, as a reminder, the research includes a list of limitations that we clearly outline in both the paper and on the website.”
Two conclusions can safely be drawn from this study:
1. There is a clear need for further research into how LLMs should be used in educational settings.
2. Students, journalists, and the general public must stay critical of the information they receive, whether it comes from media outlets or from generative AI.
Researchers from Vrije Universiteit Amsterdam worry that as our dependence on LLMs grows, critical thinking (our capacity and willingness to question and change social norms) could be at risk. They note that students may become less inclined to conduct thorough searches of their own, relying instead on the authoritative, informed tone of generative AI output. They may also be less likely to question, or even recognize, the unspoken perspectives latent in that output, neglecting to consider which viewpoints are overlooked and which assumptions are taken for granted.
These risks hint at a deeper issue with AI. When we accept its outputs uncritically, we can miss embedded biases and unquestioned assumptions. Tackling these challenges requires not only technical solutions but also critical reflection on what we consider bias in the first place.
These issues are central to Natasha Govender-Ropert’s work at Rabobank, where she focuses on building responsible, trustworthy AI by rooting out bias. However, as she pointed out in her conversation with Boris Veldhuijzen van Zanten, bias is a subjective concept that must be defined individually and contextually.
