Why Eliezer Yudkowsky’s Time Op-Ed on How Current AI Systems Will Kill Us All Is Even More Unhinged than You Think

Once upon a time, for more than a decade, I had a Time magazine print subscription; it was a pretty decent publication back then, especially if you consider that the only competing weekly was Newsweek. However, the fact that Time has now gone and published Eliezer Yudkowsky’s “Pausing AI Developments Isn’t Enough: We Need to Shut It All Down” doesn’t imply that Yudkowsky isn’t a shrieking dangerous crackpot; it merely implies that Time magazine, being no longer the magazine it once was, is making use of the riches it found at the bottom of the barrel.

The article immediately sets your brain on fire before it even starts, with the ridiculously self-aggrandizing assertion that he’s “widely regarded as a founder of the field,” and that he’s been “aligning” “Artificial General Intelligence” since 2001.

Which doesn’t come as a surprise. The reason that Eliezer Yudkowsky is also widely known for his stupendous intelligence and monumental expertise is that he rarely fails to tell you so, and I can see no reason why we shouldn’t believe him.

Teaser (not from the article):

I am a human, and an educated citizen, and an adult, and an expert, and a genius… but if there is even one more gap of similar magnitude remaining between myself and the Singularity, then my speculations will be no better than those of an eighteenth-century scientist.

His Machine Intelligence Research Institute—the successor of the Singularity Institute—is associated with Effective Altruism and LessWrong, which in turn are military-grade magnets for longtermists, about whom Émile P. Torres wrote last year and again more recently. Yudkowsky, to sum it up, is a vibrant member of TESCREAL—Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism—a secular cult whose two-decade-old AGI Is Just Around the Corner! sales pitch perpetually oscillates between AGI utopia and AGI apocalypse. Why do so many people fall for it? There’s an unsound fascination attached to both utopian and apocalyptic scenarios in the context of personal existence, fed by the human desire to either live forever or die together with everyone else. I get that, certainly. But it shouldn’t be the lens through which we look at AI and other things in the world.

Now, that article. As a reminder, we’re talking about large language models here: text synthesis machines, trained with well-understood technology on ginormous piles of undisclosed scraped data and detoxified by traumatized, exploited workers. These machines, as has been pointed out, present you with statements about what answers to your questions would probably look like, which is categorically different from answers in the proper sense. All that, however, keeps being blissfully overlooked, proactively forgotten, or dismissed through armchair expertise in neuroscience.
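
To make that point concrete, here is a minimal sketch (my own illustration, not from the article; the public gpt2 checkpoint merely stands in for any causal language model): the model scores candidate next tokens and samples a plausible-looking continuation, and whether that continuation is true never enters the computation.

```python
# Minimal sketch: a causal language model only extends text with likely
# tokens; nothing in this loop checks whether the continuation is true.
# "gpt2" is an illustrative stand-in for any such model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: What is the safest dose of this medication? A:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a plausible-sounding continuation, token by token.
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Whatever comes out will read like an answer; likelihood, not truth, is the only criterion it was optimized for.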

Against this backdrop, you can now fully savor the delicacies from the menu of urgent demands in Yudkowsky’s article:

  • Enact a comprehensive indefinite worldwide moratorium on LLM training.
  • Shut down all the large GPU clusters for LLM training.
  • Cap the allowed computing power to train AI systems.
  • Track all GPUs sold.
  • Destroy rogue data centers by airstrike and be prepared for war to enforce the moratorium.

If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

  • Be prepared to launch a full-scale nuclear conflict to prevent AI extinction scenarios.

Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.

Bonus demand on Twitter (screenshot):

  • Send in nanobots to switch off all large GPU clusters.

Sounds wild? Here’s his rationale (if you can call it that):

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

On top of that, he added a bizarre story about his daughter losing a tooth “on the same day” that “ChatGPT blew away those standardized tests” (it didn’t), and his thoughts that “she’s not going to get a chance to grow up.” But don’t let that fool you. Here are two Twitter gems that put his sentiments in perspective: one stunner about when the “right to life” sets in (screenshot | tweet), and another one he later deleted, answering a question about nuclear war and how many people he thinks are allowed to die in order to prevent AGI (screenshot | delete notice):

There should be enough survivors on Earth in close contact to form a viable reproduction population, with room to spare, and they should have a sustainable food supply. So long as that’s true, there’s still a chance of reaching the stars someday.

Well, that’s longtermism for you! And the follow-up problem with longtermism is that we all know what kind of “population” will most likely survive, and which, in certain circles, is even supposed and preferred to survive.

Unhinged soapbox meltdowns like this, about how LLM systems are on the verge of becoming alive and will inevitably annihilate humanity in a global extinction event, do us no favors. All they do is distract us from the actual risks and dangers of AI and from what we really should do: research the crap out of LLMs and put them to use for everything that makes sense, but also implement strict protocols; enforce AI liability, transparency, and Explainable AI where needed; hold manufacturers and deployers of AI systems responsible and accountable; and severely discourage the unfettered distribution of untested models based on undisclosed data sets, theft, and exploitation (besides the posts on this here blog, I wrote a lengthy illustrated essay on these topics over at Medium).

Personally, I’ve always been, and still am, very excited about the development and promises of advanced AI systems (with AlphaGo and “move 37” in particular as a beloved milestone back in 2016), and I will be happily engaging my game design students with LLM concepts and applications this term. How does this excitement square with my positions on “intelligent” AI and Explainable AI? Well, it’s a matter of how you align creativity and accountability!

With regard to the latter, it’s perfectly fine when nobody’s able to explain how a creative idea or move came about; but it’s not fine at all if nobody’s able to explain why your credit score dropped, why your social welfare benefits were denied, or why you’ve been flagged by the CPS as a potential abuser.

And with regard to the former, I don’t think “intelligence,” whatever it is, is a precondition for being creative at all.
