The term “Luddite” is used to describe persons opposed to new technology. In 1800s England, the first Luddites opposed new automated machinery being introduced into textile mills for fear of job losses and reduced worker pay. The last few hundred years of technological development have been positive, with economic growth lifting billions out of poverty worldwide. However, the advent of AI poses ethical challenges that may prove present-day Luddites partly correct in their fears.
Growing up as millennials, many of us were told by our parents that video games would rot our brains and turn us into zombies. Research has since challenged that belief, as shown in the literature review “Using Video Games to Improve Capabilities in Decision Making and Cognitive Skill: A Literature Review” by Reynaldo et al.
Video games can train persons in specialised skills, such as flying aircraft in Microsoft Flight Simulator, and they engage the brain in problem-solving and pattern recognition; the brain is not switched off during a game session.
However, with the advent of AI large language models, a new technology has arrived that, if misused, can contribute to cognitive decline. Persons who offload all of their writing and critical thinking to AI models like ChatGPT are at risk, since this dependence disengages the brain.
The misuse of ChatGPT and other AI large language models over long periods can contribute to cognitive decline, according to a shocking new study based on brain data from EEG tests.
In a mammoth 200-page paper entitled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” by Nataliya Kosmyna et al, researchers at MIT published data showing that, over a four-month period, persons who relied heavily on ChatGPT for writing and reasoning tasks experienced cognitive decline, as measured by weaker neural connectivity.
Persons who relied heavily on ChatGPT were compared against a “Brain-only” group, whose members were forbidden from using AI for writing and reasoning tasks over the four-month period.
“As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study. The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring,” the study said.
The researchers used EEG (electroencephalography), a test that measures and records the electrical activity of the brain, and found that the brains of persons who overused ChatGPT showed weaker neural activity than those of persons who did not use it at all.
In their conclusion, the researchers warned that overuse of ChatGPT reduced users’ inclination to critically evaluate the AI’s output. ChatGPT’s outputs are shaped by curated training data, and its answers arguably carry the biases of that data.
Persons who used ChatGPT to write essays felt less of a connection to the work and were far less able to quote from their own essays afterwards. This arguably points to worse learning outcomes for persons who overuse AI to do schoolwork.
This study should inform educational policy in T&T: students should be taught to use AI responsibly so that they do not fall victim to cognitive decline at a young age through over-reliance on ChatGPT.
Recommendations for ethical AI use
Despite the dangers that this MIT study highlighted, it is still critical for everyone to know how to use AI responsibly to keep up in the modern workplace.
As a writer and an attorney at law, I am keenly aware that ChatGPT should be used with caution. I usually write important letters myself and let ChatGPT refine the tone, making the letters more diplomatic than my natural writing style might be.
When using ChatGPT for research, I always ask it to provide links to the research papers or cases it is suggesting so that I can read those sources myself. Without strict instructions, large language models have a habit of hallucinating books and authors that do not exist.
ChatGPT is particularly good at critiquing your existing writing and suggesting improvements or perspectives you did not consider at first. This critique can be used to improve essays or work assignments. However, I strongly recommend rewriting ChatGPT’s critique in your own words and not copying and pasting wholesale for several reasons.
It is important to be critical of whatever an AI outputs, as it may be factually incorrect. Furthermore, I believe that rewriting whatever good ideas ChatGPT produces in your own words has much the same effect as reading a top author on a subject and adapting their ideas to suit your needs. It should improve memory, spark some degree of critical thinking, and thereby help avoid the dangerous outcomes found in the MIT study on AI and cognitive decline.