In a recent episode of the Mindscape podcast, François Chollet talks, among other things, about Artificial General Intelligence and how we might get there (all emphases mine):
If you want to test actual intelligence, you need novel problems, problems where the test-taking system or human being cannot have memorized the solution. I actually released a benchmark of machine intelligence a few years back, in 2019, that’s all about this idea. It’s called ARC, ARC-AGI in the long form: the Abstraction and Reasoning Corpus for Artificial General Intelligence. […]
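As an aside, for a concrete sense of what such a novel problem looks like: each task in the public ARC dataset is a small JSON file with a handful of demonstration pairs and one or more test pairs, each pair a small grid of integers standing for colors. A minimal sketch for loading and inspecting one (the file name here is hypothetical):

```python
import json

# Each ARC task file holds "train" pairs (the demonstrations) and "test"
# pairs. Grids are lists of lists of integers 0-9, each standing for a color.
with open("0a1d4ef5.json") as f:  # hypothetical file name from the ARC dataset
    task = json.load(f)

for pair in task["train"]:
    print("demo input: ", pair["input"])
    print("demo output:", pair["output"])

# A solver must infer the transformation from the demonstrations alone
# and then apply it to each test input.
for pair in task["test"]:
    print("test input:", pair["input"])
```

Back to Chollet: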
The claim that we’re already there [at creating AGI], or that LLMs have high-schooler-level intelligence, is kind of absurd. I can’t even fathom how one can make such claims. It makes zero sense to me. I don’t even understand how you can be so deluded as to claim that.
If you want to ask, “when is AGI coming?”, it’s very difficult to answer, because the situation we’re in is that we have no technology today that is on the path to AGI. There is nothing that, if you just scale it, gives you intelligence.
[E]very time we get new results, they are along the line of showing that LLMs are actually just pattern-matching engines. They are not intelligent. They are interpolative databases of programs. Again, the big difference between intelligence and a program database is this: the program database is like GitHub, and intelligence is like the programmer. The programmer individually knows dramatically less than what’s in the database, but the database cannot adapt. It’s only that fixed set of programs. You can maybe recombine some programs, but you have limited ability to recombine them. The programmer can actually invent anything, adapt to anything, because they have general intelligence. That’s really the difference. And people are like, yeah, so if we just scale GitHub to a thousand times more programs, then it’s going to be AGI. But no, it’s just a bigger GitHub. It’s just a more general GitHub. It is still not a programmer. There is no amount of stored, memorized programs at which you suddenly develop the ability to synthesize your own programs on the fly. That’s just not how it works.
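To make the GitHub-versus-programmer analogy concrete, here is a toy sketch of my own (not Chollet’s code, and every name in it is made up): a fixed program database can only return programs it already stores, while even a crude synthesizer can compose a new program from primitives on the fly when the stored ones don’t fit.

```python
from itertools import product

# Primitive operations a "programmer" can compose.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

# A program database: a fixed set of memorized programs, like GitHub.
DATABASE = {
    "add_two": lambda x: x + 2,
    "quadruple": lambda x: x * 4,
}

def database_lookup(examples):
    """Return a stored program consistent with the examples, or None.
    The database cannot adapt: if nothing fits, it simply fails."""
    for name, prog in DATABASE.items():
        if all(prog(x) == y for x, y in examples):
            return name
    return None

def synthesize(examples, max_depth=3):
    """Enumerate compositions of primitives until one fits the examples.
    This is what the database cannot do: build a new program on the fly."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def prog(x, names=names):
                for n in names:
                    x = PRIMITIVES[n](x)
                return x
            if all(prog(x) == y for x, y in examples):
                return " . ".join(names)
    return None

# Target behavior: f(x) = -(2x + 2), which no stored program computes.
examples = [(1, -4), (2, -6), (3, -8)]
print(database_lookup(examples))  # None: nothing in the database fits
print(synthesize(examples))       # "inc . dbl . neg", composed on the fly
```

Scaling up the `DATABASE` dict makes more lookups succeed, but only `synthesize` ever produces a program that wasn’t already there, which is the whole point of the analogy.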
And here’s the gist:
But that said, that does not necessarily mean that AGI is very, very far away. Rather, what it means is that you cannot predict when it will arrive, because you need to invent something new. But maybe we’ll invent it next year; maybe the ARC competition will actually trigger someone into inventing it, you know? So maybe it arrives next year. It’s possible, but it’s unpredictable, because it doesn’t exist yet.
Chollet cuts through all that LLM-AGI nonsense in ways that can still leave you excited, without making you check your brain at the coatroom before entering the discussion.
Addendum
Commenting on this blog post over at Mastodon, @FeralRobots made this astute observation:
My own thought is that we should be prepared for AGI to be small & boring before it’s big & impressive. If it seems impressive to a layperson looking at it casually, it’s probably not AGI. I need to look at his spec, but his ARC might be superficially mundane, yet very impressive if one thinks it through.
Yup, that may well be.