Last week, I linked to and commented on reactions and responses to the Future of Life Institute’s open letter “Pause Giant AI Experiments.” Sayash Kapoor and Arvind Narayanan have since written another excellent response, “A Misleading Open Letter About Sci-fi AI Dangers Ignores the Real Risks”:
We agree that misinformation, impact on labor, and safety are three of the main risks of AI. Unfortunately, in each case, the letter presents a speculative, futuristic risk, ignoring the version of the problem that is already harming people. It distracts from the real issues and makes it harder to address them. The letter has a containment mindset analogous to nuclear risk, but that’s a poor fit for AI. It plays right into the hands of the companies it seeks to regulate.
Here’s the overview risk matrix they created for the occasion; their item-by-item explanations provide the depth behind it.