
GPT: Like Crocodile Tears in Rain

If you don’t live under a rock, you might have noticed a few remarkable—but not altogether unpredictable—advances in natural language processing and reinforcement learning, most prominently text-to-image models and ChatGPT. And now every concerned pundit comes out of the woodwork, decrying how terrible these developments are for our educational system and for our starving artists.

Is this deluge of AI-generated images and texts terrible? Of course it is. But don’t let them tell you that this is the problem. It’s only a symptom of the problem.

Let’s start with all those people suddenly feeling deeply concerned about the death of the college essay. Education, if you think about it, should do three things: make children and students curious about as many subjects as possible; give them the tools to develop interests around these subjects; and facilitate the acquisition of skills, knowledge, and understanding along these interests. Toward these ends, virtually every new technology would be useful one way or another. Our educational systems’ priority, however, is feeding children and students standardizable packets of information—many with very short best-before dates stamped on them—for evaluation needs and immediate workplace fitness. Just think of it: the world wide web became accessible for general use around 1994! In all that time, almost thirty years, the bulk of written and oral exams haven’t adapted to integrate the internet but have been kept meticulously isolated from it. Nor, for that matter, has the underlying system changed that keeps information expensive and disinformation free, an infrastructure into which AI-generated nonsense can now be fed with abandon. And all this gatekeeping for what? When there’s a potential majority in any given country to elect or keep electing fascists into high office, the survival of the college essay probably isn’t the most pressing item on our educational plate.

Then, the exploitation of artists. Could these fucking techbros have trained their fucking models on work that artists consented to, or that is in the public domain? That’s what they should’ve done and of course didn’t, but please spare me your punditry tears. While it is thoroughly reprehensible, it’s only possible because at the intersection where the tech and creative industries meet, a towering exploitation machine has stood all along—co-opting or replacing or straightaway ripping off the work of artists, freelancers, and practically everybody who doesn’t own an army of copyright lawyers, the moment their work becomes even marginally successful.

AI will advance, and everything it can do will be done. Nexus-6 models aren’t right around the corner, but they’re an excellent metaphor. We could try and legislate all those leash-evading applications to death, chasing after them, always a few steps behind and a few years too late, trying to prevent new forms of exploitation while reinforcing old ones. Or we could try and change our educational and creative economies in ways that make these applications actually useful and welcome: for educators and artists in particular, and for humanity and this planet in general.
