
“Thank you, Siri!”

Among the things that bother me about generative AI is how humans will treat it. But not because generative AI is “conscious” or “alive” in any conceivable way.

Before I make my argument, let me back up a little.

As for not being “conscious” and “alive”: it could be argued that something similar was once believed about animals, around Descartes’s time and again in the nineteenth century—that they were mere automata, and that pain for them was merely a mechanical response, not something they actually felt. (What Descartes himself exactly believed is a bit murky once you dig into his writings, but, yeah, that’s probably how he saw it.) This, of course, opened the door for all kinds of animal cruelty that we today, as an even more enlightened society, have left behi—oh wait. Livestock farming, factory farms, battery cages, by-catch, bullfighting, boiling alive, you name it, and that’s just for starters. In our society, animals still suffer even though we know they feel pain, and we inflict it on them on an industrial scale.

So, then, could the argument be made that we only think generative AI doesn’t have feelings, and that we will see the error of our ways a few decades hence? (Without actually changing our ways then, of course?) Absolutely not: making such an argument would be committing category mistakes with reckless abandon. Generative AI models have no interior representations of the world and no sense of self, and they present you not with an answer but with a statement about what an answer to your question would probably look like, which is categorically different from an actual answer. As such, they’re certainly not hurt if you call them names or switch them off. Making such an argument is like saying you should be nice to your car on the off-chance that it turns out to be Christine.

But you should be nice to your car, and to things in general. I’ve always been a great fan of Kondō Marié’s, particularly her philosophy that you should thank an item before discarding it. Why are we repelled by Sid torturing toys in Toy Story? Is it only because toys are sentient in that Pixar universe? Or do we feel that something is terribly off with Sid, and that he won’t stop at torturing toys as a grown-up?

Things (including generative AI), animals, and other human beings have one crucial aspect in common: for every single one of us, they are, as a matter of objective fact, subjectively “not I.” From that perspective, it is plausible that things, just like animals, can become a “training ground” for how we treat other human beings, and that cruelty toward things and animals will sooner or later spill over—call it habituation, virtue ethics, the Overton window, whatever you like—into how we treat other human beings. That isn’t to say that being nice to other human beings is the only thing that counts at the end of the day. But the effect is observable. There is a good deal of scientific evidence, and numerous papers, on the link between violence toward animals and violence toward humans, and there is the 2009 study “Slaughterhouses and Increased Crime Rates: An Empirical Analysis of the Spillover From ‘The Jungle’ Into the Surrounding Community,” whose title speaks for itself. (“The Jungle” here refers to Upton Sinclair’s 1906 novel about the Chicago stockyards.)

So, yeah. Maybe I read too much Aristotle and think too hard about habit formation and personality, and maybe I’m too sentimental and nostalgic in general. But I do thank Siri for answering my questions, and I do thank things when I part with them, be it the pair of running shoes that finally fell apart, the terminally worn-out quilt, or the obsolete computer. Am I always nice to people? Well, I try not to be callous, but that doesn’t mean I always know what other people perceive as callous or rude, particularly since I’m notoriously sarcastic by nature and enjoy a good argumentative fight among friends. However, being callous or rude to things or animals, let alone cruel, would certainly not help me become more considerate of humans.

For all that, I’m bothered by how people will abuse generative AI, especially given the tendency of many to perceive it as a living thing rather than the thing it actually is. I’m bothered by what they are already doing to it, including creating, controlling, and abusing “virtual girlfriends.” I’m bothered by the thought that all this will backfire terribly and spill over and make the world, bad as it already is, even worse, and that in the foreseeable future every year will be like 2014 and 2025 combined.
