I am reading Gödel, Escher, Bach. Mostly because I have been intrigued by Gödel’s incompleteness proof for a few years but have been far too dim-witted to sort it out myself. Hofstadter’s 700+ page tome walks you through it one tiny bit at a time. And it works; I get it now. Sort of.
At any rate, I’m on page 473, shortly after the story climaxes with the demonstration that no sufficiently powerful formal system can be ‘complete’; that is, there will always be some ‘ideas’ (i.e., expressions) within such a system that are undecidable – neither provable nor disprovable from inside the system – particularly when you throw in self-reference in a tricky way. I won’t try to make sense of this and foist upon you a long tedious discussion of some of the most abstruse ideas imaginable. Partly because I’d make a wreck of it and partly because it’s an acquired taste, kinda like grain alcohol flavored with hot sauce.
There is one idea that’s cropped up, though, that I can’t help but comment on. Apparently one J. R. Lucas has argued that Gödel’s demonstration of incompleteness shows that machines can never be intelligent. In a nutshell: machines, because they operate on a formal system, could never arrive at Gödel’s conclusions. Ergo, they lack intelligence. That’s a gross simplification of the argument, but let’s go with it.
There are plenty of criticisms of this argument, but I am interested in a really superficial one (I’ll leave the deep thinking to the logicians). Namely, that most humans cannot arrive at Gödel’s conclusions either. Or for that matter, even understand them. Lucas’s argument seems to rest implicitly on the premise that because one human, Gödel, was able to sort this all out . . . and because subsequently a few humans (myself not really among them) were able to follow Gödel’s logic and arrive at the same conclusions . . . then in principle all humans (and the human brain generally) are capable of such a thing, while machines are not. For a group of people obsessively worried about truth, provable assertions and decidability, this strikes me as a remarkably unwarranted premise. In fact, I’d say your average human brain is no more likely than a Commodore 64 to understand Gödel’s logic. What is remarkable, and important to note, is that somehow humanity manages to get by. Yes, that’s right. The majority of instances of human intelligence are not that bothered by incompleteness and undecidability. Hell, just make a decision and be done with it.
Outside the circles of rigorous, formal argumentation, one might view the entire endeavor of trying to devise a truth-telling formal system that is complete and free of error as, well, a strange sort of fetish. Perhaps what is central to human intelligence – the type we see every day in billions of humans going about their human business – is not the ability to ferret out ever more abstruse formal truths, but the remarkable ability to move about within uncertainty, where undecidability and incompleteness are the fundamental conditions of existence.
Perhaps not being able to arrive at Gödel’s theorem is not such a handicap to machine intelligence. Building machines out of formal systems and then claiming they can never achieve intelligence may be like pushing your little brother into the mud and then telling on him for getting dirty.