Is ChatGPT Use in the Classroom Truly Inevitable?

AI-powered tools in education are not a foregone conclusion, but it’s time for educators and policymakers to proactively imagine how they want AI to be used with students — before it’s too late.

With the release of ChatGPT in November 2022, the general buzz about artificial intelligence (AI) went overnight from a gentle simmer to a rolling boil. Educators are both excited and frightened by the implications of easy-to-use AI-powered tools readily accessible to students. Teachers most excited about ChatGPT think it will soon make personalized learning a reality, quickly and easily differentiating instruction for all learning styles and learning differences. Many think ChatGPT could also help students generate ideas or inspiration for assignments, provided they properly cite any content the tool gins up. Others fear that ChatGPT, Microsoft’s Bing Chat, and other tools of this ilk will help students cheat on assessments or generate content for papers and essays.

Education researchers and policymakers are fond of stating that schooling and education are contested spaces; opinions differ widely on how best to design and deliver content and on how much emphasis to place on any given subject. The contested nature of schooling means that nothing in education is immutable. Despite its supposed inevitability, how and when we use AI in education remains undecided. Like any powerful digital tool, AI has the potential to be an instrument of liberatory education or an insidious toolset for monitoring and monetizing student and teacher performance data.

AI research has continued apace for decades, particularly the quest for Artificial General Intelligence (AGI). AGI is what many people imagine when they think of AI: a system that could theoretically perform any mental task a human can, with complete autonomy. Notions of AGI have filled the pages and screens of science fiction novels and films. Some portrayals are benign agents who placidly serve humans, such as the ship’s computer in Star Trek: The Next Generation. Perhaps more often than not, depictions of AGI are not so benign: think of the time-traveling assassins in the Terminator film series or the murderously condescending shipboard computer HAL 9000 in 2001: A Space Odyssey. And there are valid fears beyond fictitious imaginings. In fact, many executives at the very companies driving AGI development have beseeched policymakers to take action to slow, or at least limit, AGI research.

ChatGPT is far from AGI, but its abilities have captivated and terrified the public since its release. ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot interface to the proprietary large language models (LLMs) created by OpenAI, initially GPT-3.5 and later GPT-4. Built on neural networks and trained on enormous data sets, tools like ChatGPT and Microsoft’s Bing Chat generate text in response to a user’s query mainly by predicting, one word at a time, which word is most likely to come next. However, there is a crucial issue with this method: LLM chatbots like ChatGPT cannot determine whether their answers to users’ queries are factual or even make sense. When they produce falsehoods or nonsense, AI researchers affectionately call these episodes “hallucinations.” In response to this flaw, many see an “opportunity” for educators using ChatGPT: ask students to apply critical thinking to determine whether the chatbot’s results are factual.
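To make the next-word-prediction mechanism concrete, here is a deliberately toy sketch in Python (the corpus, the followers table, and the generate function are inventions for illustration, not anything OpenAI ships). It builds a tiny bigram model that records which word follows which in a sample text, then generates new text by repeatedly sampling a plausible next word. Real LLMs use deep neural networks with billions of parameters rather than word counts, but the core loop of “predict the next word” is the same.

```python
import random
from collections import defaultdict

# A toy bigram "language model": learn which word tends to follow
# which, then generate text purely by sampling a plausible next word.
# This illustrates only the next-word-prediction idea behind LLMs;
# real models like GPT-4 use deep neural networks, not word counts.

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Record which words follow each word in the corpus. Repeats in the
# list make frequent followers proportionally more likely to be drawn.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = followers.get(words[-1])
        if not candidates:
            break  # no observed follower; stop generating
        # The model picks what is *plausible*, with no notion of truth.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# One possible output: "the dog sat on the mat . the cat"
# Fluent-looking, but nothing guarantees it is factual -- the same
# statistical blindness behind LLM "hallucinations."
```

Note that the sketch optimizes only for plausibility: it can stitch together fluent sequences that never appeared in its training text and that no one ever claimed were true, which is, in miniature, why LLM chatbots hallucinate.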

“Definitions are vital starting points for the imagination. What we cannot imagine cannot come into being. A good definition marks our starting point and lets us know where we want to end up.” -bell hooks

The time for educators and policymakers to act is now. If we do not, technology companies will undoubtedly create and deploy these tools as they see fit. It is particularly urgent that educators and policymakers imagine how they would like AI to be used in schools before vendors present AI-powered tools to decision-makers as the latest educational panacea. Once superintendents, principals, and technology directors sign long-term contracts, enter into multiyear subscriptions, and commit professional development hours to train teachers on these tools, it will be difficult to extricate teachers and students from a future technology companies are only too happy to imagine for them. Sadly, many educators have already adopted an “if you can’t beat them, join them” attitude born of the sheer volume of articles and news stories about the inevitability of AI in schooling. To accept AI as inevitable or unstoppable, or the rampant use of AI tools in schools as a foregone conclusion, is to succumb to technological fatalism. Paul Licker differentiates between strong technological fatalism (technology is an unstoppable force with its own momentum, independent of human influence) and weak technological fatalism (humans naturally create tools and technology to improve our lives, make tasks easier, and automate processes). If we want to prevent the former, we must make the latter the basis of our imagining.

Now that AI-driven chatbots like ChatGPT are available to the public, how can educators envision best practices for a tool they might know nothing about? Luckily, many thought experiments concerning humanity’s interactions with AI already exist, readily mineable to help us imagine a future in which such tools extend and enrich teaching and learning. Science fiction has long served either as a cautionary tale of future threats or as a source of utopian visions of futures made better through technology, innovation, or social harmony.

Consider Mary Shelley’s Frankenstein. Shelley penned her most well-known work partly in response to what she observed during a period of technological upheaval, much like the period ushered in by the release of ChatGPT. The Luddites, members of a 19th-century labor movement reacting to technological innovations newly introduced to the textile industry, destroyed automated knitting machines and voiced their opposition to new policies enacted by manufacturers to drive down wages and replace skilled labor. Frankenstein contains a message of caution for AI researchers, particularly about the dangers of developing AI in isolation, without open dialogue among researchers and policymakers. Victor Frankenstein worked alone, did not engage his peers, and ultimately forsook his morals to complete his tragic Gesamtkunstwerk.

Another example is Michael Crichton’s 1969 novel The Andromeda Strain. The harrowing events of the novel take place primarily in a sprawling, top-secret, underground laboratory complex, where a group of scientists races to understand, and find an antidote to, an extraterrestrial pathogen that traveled to Earth aboard a damaged satellite. Having killed the population of a small town, save, inexplicably, for an older man and an infant, the pathogen could potentially wipe out all life on Earth. The influence of The Andromeda Strain, and the film of the same name, lay in its realistic depiction of the dangers of microbial pathogens escaping from a secure laboratory setting or being brought back from space by astronauts. It is not difficult to imagine the analogous danger of an AGI escaping its laboratory setting and entering networked systems worldwide.

Neal Stephenson’s 1995 novel The Diamond Age: Or, A Young Lady’s Illustrated Primer features a far-reaching example of AI/AGI in education. In the book, a young girl named Nell receives a technological marvel in the form of a book. The device, the “Young Lady’s Illustrated Primer: a Propædeutic Enchiridion,” adapts to Nell’s needs and to the evolving environments in which she finds herself, teaching her what she needs to know as she grows up. The Primer becomes Nell’s trusted friend. Interestingly, the Primer in The Diamond Age was one inspiration for the development of the One Laptop Per Child (OLPC) initiative; the narrative learning platform that ran on OLPC devices was named “Nell.”

But the essential lessons in Nell’s education are no more than a scripted curriculum performed by humans. The Primer is a technological marvel, yet its most critical functions are a thin interface behind which a human operator labors. In everything beyond simple computation, the Primer resembles the fraudulent chess-playing automaton constructed in 1770 to delight Empress Maria Theresa of Austria and her court. Named “The Turk,” the device was purported to be a completely autonomous machine capable of playing chess. In reality, hidden inside the base of The Turk was a human operator responsible for all of its logic and action. The Turk is an apt metaphor for the state of AI today: what exists is limited, and the most critical functionalities are merely an interface behind which human labor is at work.

At its core, The Diamond Age: Or, A Young Lady’s Illustrated Primer is a story about class struggle. The Primer was never intended for Nell, who is poor and lacks any social designation; someone steals it and gives it to her. An ultra-wealthy individual commissions the Primer for his granddaughter, hoping it will guide her to what he terms an “interesting life.” Although wealthy and powerful, he sees an interesting life as one that subverts the dominant social order. Perhaps this is the end we must imagine when considering how AI and AGI will eventually integrate with education. The problems that proponents of AI-powered education tools seek to mitigate are, in actuality, caused by hyper-inequality, structural racism, and the privatization of schooling. If we ever develop true AGI for students, the ideal function of such tools may be helping students discover that interesting life, disrupting the human-made issues plaguing education and subverting the inequality that causes us to dream of a technological panacea.