Turns out, when I wrote about Geoffrey Hinton’s CBS interview, I was still too generous on several counts. In the Guardian last week, his position not only degraded into the usual “AI will take over humanity” tech bro nonsense, but proceeded into the bonkers territory of “maybe these big models are actually much better than the brain.”
And then he goes full Skynet from there:
You need to imagine something that is more intelligent than us by the same degree that we are more intelligent than a frog. It’s all very well to say: “Well, don’t connect them to the internet,” but as long as they’re talking to us, they can make us do things.
In another article, in the New York Times, he makes a big fuss about leaving Google “because of the dangers ahead,” except that these “dangers” are the exact same smoke grenades all the other tech bros are throwing left and right with abandon. And to add insult to injury, he happily dismisses the actual threats and throws under the bus the whistleblowers who have been warning about these real threats for years, and who were actually fired from Google for it.
What a trash human. And then there’s this stupendous gem:
I’m not a policy guy. I’m just someone who’s suddenly become aware that there’s a danger of something really bad happening. I wish I had a nice solution, like: “Just stop burning carbon, and you’ll be OK.” But I can’t see a simple solution like that.
If you listen to Hinton long enough, you begin to wish Skynet would become a reality. Why, using conventional notions of intelligence for a moment, must the people who currently build our most intelligent systems be the least intelligent? I wonder.