Last week, I wrote two related posts, here and at between drafts respectively, on the (self-)deceptive belief that LLM chat windows are wellsprings of “ideas and inspiration.” But that’s just one problem with LLM use in writing and education.
Josh Nudell, in his update on LLMs/ChatGPT in education (referencing in the following passage a paywalled article from The Intelligencer):
[T]he student who worries me most in the article is not the one from the lead anecdote, but Wendy, a young woman profiled in the article who uses ChatGPT even though it impedes her learning because it improves her grades. Warner describes this as having “internalized a transactional model of education that has turned students into customers” and relegated the education itself to second place.
The “customer” view of education has been a pet peeve of mine for years, and the use of ChatGPT and its ilk now feeds right into it. Nudell goes on to explain how he has dealt with ChatGPT by requiring his students to disclose which tech tools they’ve used in their written assignments, which is exactly what we do at the university where I teach.
Which works reasonably well, but only up to a point:
The most interesting thing about these disclosures is that not a single student said they used an LLM to generate a meaningful amount of text, perhaps believing that doing so would constitute a clear-cut case of academic dishonesty according to university regulations. Instead, their AI use generally fell into an ethical gray area—extensive use of shortcuts and cognitive offloading that I personally believe are the true dangers of AI, but about which students are receiving mixed messages not only from their professors but also from society at large.
Add to that the basket of problems I have written about repeatedly, from generating seemingly innocuous outlines to having ChatGPT summarize sources instead of actually reading them:
There were also a growing number of students who found the texts I assigned (both translations of ancient texts and modern readings) too difficult and so ran them through LLM programs to rephrase the readings in easier-to-understand language and/or to summarize the key ideas. The effect, I found, was broadly similar to having not done the reading or having turned to SparkNotes. The students had ostensibly completed the assignment, but they struggled to formulate questions about the reading or participate in discussions. Readings were simply a vehicle for some essence to be extracted and mechanically replicated rather than a necessary foundation for inquiry.
Nudell’s term for all this, “cognitive offloading,” captures the issue precisely.
But banning LLM use in education is not a solution. We all know that. It would create more problems than it could possibly solve, and amount to shutting the barn door after the horses have left. What’s more, using LLMs is well on its way to becoming virtually impossible to avoid, as so-called “AI assistance,” together with its griftslop entourage, is being force-fed with reckless abandon into every piece of “standard” software used in education and elsewhere.
Instead, following Nudell, a solution must consist of “fundamental cultural changes,” which in turn necessitates addressing AI “head-on”:
This fall, I will be updating and clarifying my disclosure guidelines as well as adding to my kit a discussion about cognitive off-loading and the reality that tech companies don’t want most students to learn foundational skills that threaten their business model. In my first-year seminar on speculative fiction, I plan to complement this discussion by having the students read Plato’s Allegory of the Cave from The Republic as a way to talk about the way in which digital worlds mediate reality.
Music to my ears.