
Keep Your Fire Extinguisher Handy for This Interview with Geoffrey Hinton on ChatGPT

Barely one minute into this CBS interview, Geoffrey Hinton, the “Godfather of AI,” as CBS puts it, has already set my brain on fire with this claim about text synthesis machines:

You tell it a joke and, not for all jokes, but for quite a few, it can tell you why it’s funny. It seems very hard to say it doesn’t understand when it can tell you why a joke is funny.

High time we reassessed how smart these “legendary” AI researchers really are and removed them from the pedestals we’ve built.

Certainly, everything he goes on to say about the neural net approach is true: it became the dominant approach and eventually led us to where we are at this moment in time. And, to his credit, he doesn’t think that we’re actually creating a brain:

I think there’s currently a divergence between the artificial neural networks that are the basis of all this new AI and how the brain actually works. I think they’re going different routes now. […] All the big models now use the technique called back propagation […] and I don’t think that’s what the brain is doing.

But that doesn’t keep him from proposing that LLMs are becoming equivalent to the brain.

To start with, he “lowered his expectations” for the time it would take to create general purpose AI/AGI from “20–50 years” to “20 years” and wouldn’t completely rule out that it “could happen in 5.” Then, explaining the difference between LLM technology and the brain as revolving around quantities of communication and data, he stresses how much ChatGPT “knows” and “understands.” And he even claims that, yes, you can say an LLM is a sophisticated auto-complete system, “but so is the brain,” as both systems have to “understand the sentence” to do all the things they do. And he supports that claim with a translation example that completely erases any possible difference between “understanding” and “computing probabilities.”
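To make concrete what “computing probabilities” means at its most stripped-down, here is a toy sketch of my own (nothing like a transformer, and not anything Hinton describes): a bigram auto-complete that predicts the next word from nothing but co-occurrence counts in a tiny, made-up corpus. Real LLMs replace the counting with learned neural representations at enormous scale, but the end product is still a probability distribution over next tokens.

```python
# A toy "auto-complete": a bigram model that predicts the next word purely
# from co-occurrence counts in a made-up corpus. Everything here (the corpus,
# the names) is hypothetical and only illustrates what a probability
# distribution over next tokens looks like.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_distribution(word):
    """Return the estimated distribution P(next word | word)."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))   # {'cat': 0.5, 'mat': 0.5}
```

Whether you want to call the scaled-up version of this “understanding” is exactly the question Hinton’s translation example glosses over.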

Of course, I can see how you get there, by switching from using a model of the brain to build an LLM to taking that LLM and how it works to create a model of the brain. It’s a trick, a sleight of hand! Because now you can claim that LLMs will develop into General Purpose Intelligence/AGI because they effectively work like the brain, which is like claiming you’re close to creating an antigravity device by recreating ever more perfect states of weightlessness through parabolic flights. I can’t be kind about this: if you really believe that text synthesis machines will become self-aware and develop into entities like Skynet or Bishop, then congratulations, here’s your one-and-twenty Cargo Cult membership card.

There is an interesting bit around the 20-minute mark when Hinton talks about how the LLM of the future might be able to adapt its output to “different world views,” which is a fancy way of saying that it might become capable of generating output that is context-sensitive with regard to the beliefs of the recipient. All the ramifications such developments would entail merit their own blog post. Other interesting points this interview touches upon are autonomous lethal weapons and the alignment problem.

But just when my brain had cooled off a bit, Hinton lobs the next incendiary grenade at it by creating the most idiotic straw man imaginable around the concept of “sentience”:

When it comes to sentience, I’m amazed that people can confidently pronounce that these things are not sentient, and when you ask them what they mean by sentient, they say, well, they don’t really know. So how can you be confident they’re not sentient if you don’t know what sentience means?

I need a drink.

Finally, there’s this gem:

To their credit, the people who’ve been really staunch critics of neural nets and said these things are never going to work—when they worked, they did something that scientists don’t normally do, which is “oh, it worked, we’ll do that.”

Jebus.

I’ve read my Kuhn and my Feyerabend very thoroughly. The neural net vs. symbolic logic debate is about competing approaches to AI, not about paradigm change, and the actual problem with paradigm change is that, for an interim period, the established paradigm is likely to have as much or even more explanatory power, and indeed to work better, than the new one.

We really should do something about these pedestals.
