It is sometimes claimed that Gödel's incompleteness theorem proves that humans cannot be machines, or cannot be described in purely formal terms, or cannot have their behaviour predicted by machines. This claim is false; I'd like to explain why.
I should remark that it is no part of my purpose to claim that humans can be machines, or be imitated by them, or have their behaviour predicted by them. That's an interesting question, and I don't know the answer. What I do know is that Gödel's theorem doesn't prove that they can't be, and it's against this abuse of mathematics that I'm arguing here.
The presentation of the argument that has prompted me to write this may be found at http://www.starcourse.org/lucas.htm, where it appears to be part of a debate about the existence of God. When I first wrote this, I was on the "pro" side of that debate; I'm now on the other side; regardless, I don't think anyone is well served by invalid arguments. That page claims that my comments (below) have led to improvements in the argument or its presentation or something, but all my criticisms remain valid (with one minor exception which I have fixed below).
Anyway, briefly, the argument goes like this. Suppose you've got a machine that predicts all my actions. Its behaviour can be described by some formal system, call it X, and since I can do mathematics the system must be powerful enough to "contain all of elementary arithmetic". Therefore, by Gödel's theorem, there is some sentence in whatever language we used for describing the system that can be considered to say "System X cannot derive this sentence". And, provided the system is consistent, this sentence is in fact true.
But, since as well as being able to do mathematics I am able to understand Gödel's theorem, I can follow the steps of the proof and show that the sentence is true. But that means I can derive the sentence, and the machine can't. Contradiction. Allegedly.
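For readers who want the standard shape of the construction the argument relies on, here is a sketch. (The notation is mine, not the page's; this is just the usual statement of Gödel's first incompleteness theorem via the diagonal lemma.)

```latex
% Let Prov_X be a provability predicate for X and \ulcorner . \urcorner
% a Gödel numbering. The diagonal lemma gives a sentence G_X with
\[
  X \vdash \; G_X \leftrightarrow \neg\,\mathrm{Prov}_X(\ulcorner G_X \urcorner)
\]
% i.e. G_X "says" that X cannot derive G_X. First incompleteness
% theorem: if X is consistent, then X \nvdash G_X; and since that is
% exactly what G_X asserts, G_X is true (read at the meta-level).
```

Note that the "is true" step at the end is a meta-level judgement, and it depends on knowing that X is consistent; that dependence is where the trouble starts.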
Why the argument is broken
(What follows may be easier to understand if you first read the presentation of Lucas's argument at the URL given above. It is the text of an e-mail message I sent to the folks who run the website, with a few minor changes in presentation and a couple of corrections.)
The alleged abilities of human mathematicians
Actually, the most formal version of the proof there is sort of OK apart from some parenthetical remarks, but it fails to observe that human mathematicians are not, in the sense of that proof, "Mathematical Logicians Capable of Understanding Gödel's Theorem".
Specifically, the condition for being an MLCUGT is the ability to construct, when given any logical system at all, a "Gödel proposition" for that system, and to know that it is true. I don't think there's the slightest reason to suppose that any human is capable of doing this, with or without electronic help. (And note that the complexity of the system they would have to do this for increases with the sophistication of the electronic help available to them.)
Note: the email message that turned into this page was commenting on an earlier version of the Star Course's Lucasian argument, whose definition of an MLCUGT was more stringent: it required an MLCUGT to be able to recognize all Gödel propositions for any logical system, not merely to construct one. They've backed off a bit from that, but that doesn't make it any more obvious that humans are MLCUGTs, because the real sticking point is the ability to see that the system is consistent. I've had some correspondence with the author of the page, in which (as it seems to me) his account of what is meant by consistency oscillates between a notion of consistency that we could indeed verify but to which Gödel's theorem no longer applies, and one to which Gödel's theorem might apply but for which I see no grounds for thinking that a system simulating a human being is likely to be consistent.
The reasoning (presented in the less formal version of the proof) by which human logicians are supposed to be able to construct Gödel sentences and know that they are true begins with the human "observing" that the system must be consistent. But there's just no reason at all to think that, when presented with a system complicated enough to be a description of a human's entire thinking process, any human would be able to look at it and just see that it's consistent. Or even to prove after heroic labours that it's consistent. The system would have to be *immensely* complicated, hugely more so than the largest systems of computer software in existence; and some measure of humans' abilities to detect subtle flaws in the behaviour of such complicated systems may be obtained by considering the unreliability of most computer software, even after it has been examined and tested carefully by experts.
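It is worth recording one standard fact here (again, my gloss, not something argued on the page under discussion): Gödel's second incompleteness theorem shows that the consistency question is hard even for the system itself.

```latex
% Second incompleteness theorem (standard form; X is any consistent,
% effectively axiomatized system containing elementary arithmetic):
\[
  X \nvdash \mathrm{Con}(X)
\]
% So "observing" that X is consistent is precisely the step that
% provably outruns X's own resources; crediting a human with that
% insight, for a system as complicated as a whole-human simulation,
% is the very claim at issue.
```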
(And arguments like "Well, this system is just imitating my behaviour, so of course it must be consistent" are no good, because (1) proving that the system imitates the human's behaviour would be far beyond the abilities of any human mathematician, and (2) no one with any sense or humility believes himself to be perfectly consistent.)
This observation is not new. It may be found, for instance, in a paper by Hilary Putnam entitled "Minds and Machines", first published in 1960: the year before Lucas's paper "Minds, Machines and Gödel". Lucas's argument had been refuted before it was even published.
The alleged abilities of simulated mathematicians
There is another, related, problem with the "proof". The rather glib statement is made that the system must "contain elementary arithmetic" because it is able to predict the behaviour of a mathematician. I don't think this is right, because mathematicians are not in fact able to derive with perfect reliability every statement of elementary arithmetic (especially in the relevant, somewhat technical, sense of that phrase). It certainly can't be done by the obvious means of adopting a rule of inference like "If the machine says that the mathematician will believe X to be true, then infer X"; for mathematicians make mistakes.
In fact, it seems clear to me that a system whose abilities are limited to the emulation, or prediction, of a human mathematician will not have the necessary properties for Gödel's theorem to apply to it. Since there are limits to the complexity (and, even more obviously, the mere length) of the propositions a human mathematician can understand, such a system will not be closed under logical inference: there will be a certain threshold of complexity beyond which it is unable to go.
In summary: the formal systems considered in the "proof" may well not actually be powerful enough for Gödel's theorem to apply to them; but despite this, they are still probably much too complicated for any human mathematician to be able to apply Gödel's theorem to them in the relevant way.