
The day before yesterday, I worked my way through this terrible “Pause Giant AI Experiments” open letter, but didn’t get around to commenting on it. Luckily, I don’t have to! Emily Bender has since torn into it, into the institution* that published it, and into the letter’s footnotes and what they refer to:

Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the “Sparks paper” and OpenAI’s non-technical ad copy for GPT4. ROFLMAO.

What it boils down to is this. On the one hand, one can and should agree with this open letter that the way LLM development is handled right now is really bad for everybody. On the other, this open letter advocates stepping on the brake while actually stepping on the gas of AI hype, which is entirely counterproductive:

I mean, I’m glad that the letter authors & signatories are asking “Should we let machines flood our information channels with propaganda and untruth?” but the questions after that are just unhinged #AIhype, helping those building this stuff sell it.

Go read the whole thing.

*Addendum: If you want to learn more about longtermism—both the Future of Life Institute and all the people cited in footnote #1 except the Stochastic Parrots authors are longtermists—here’s an excellent article by Émile P. Torres on “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’” (h/t Timnit Gebru).