A new study from the MIT Media Lab has raised concerns about the long-term effects of artificial intelligence tools on human cognition. The research, released this month, suggests that relying on large language models (LLMs) such as ChatGPT could impair an individual’s critical thinking abilities, especially with prolonged use.
Researchers observed participants over several months as they completed SAT-style essay assignments. The subjects were divided into three groups: one using ChatGPT, another using Google’s search engine, and a third group relying solely on their own thinking—dubbed the “brain-only” group.
To analyze brain activity during the writing tasks, researchers used electroencephalography (EEG) to monitor neural engagement across different regions of the brain. The results showed a stark difference in cognitive involvement among the groups.
According to the study, those using ChatGPT demonstrated the lowest level of brain engagement. Over time, these participants began to rely more heavily on the AI, eventually moving from asking structural questions to simply copying and pasting complete essays. The researchers noted that this group “consistently underperformed at neural, linguistic, and behavioral levels.”
Participants who used Google showed moderate brain activity, while the “brain-only” group displayed the strongest and most widespread neural activity, indicating deeper cognitive involvement throughout the writing process.
The study’s lead author, Nataliya Kosmyna, emphasized the urgency of the findings, particularly as AI tools become more integrated into education.
“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6–8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” Kosmyna told Time magazine. “Developing brains are at the highest risk.”
The study highlights growing concerns among educators about how easily accessible AI tools are enabling academic dishonesty and changing how students learn. Despite these concerns, AI integration in classrooms appears to be accelerating.
In April, President Donald Trump signed an executive order promoting the use of AI in American schools. The policy aims to prepare young students for a future economy shaped by AI advancements.
“The basic idea of this executive order is to ensure that we properly train the workforce of the future by ensuring that school children, young Americans, are adequately trained in AI tools, so that they can be competitive in the economy years from now into the future, as AI becomes a bigger and bigger deal,” White House staff secretary Will Scharf said at the time.
As the debate over AI’s role in education continues, this new research may fuel broader discussions on how to balance technological innovation with cognitive development—especially for younger generations.
SOURCE: MIT RESEARCH