In his article “Present State of Research into the Foundations of Mathematics,” Gentzen briefly talks about Gödel’s incompleteness results. He says that it is not an alarming result because it says “that for number theory no adequate system of forms of inference can be specified once and for all, but that, on the contrary, new theorems can always be found whose proof requires new forms of inference.” This is interesting because Gentzen worked with Hilbert on his proof-theoretic projects and created two of the three main proof-theoretic frameworks, natural deduction and the sequent calculus. The incompleteness theorems are often taken as stating a sort of limit on proof-theoretic means. (I’m treading on shaky ground here, so correct me if my story goes astray.) That is to say, any sufficiently strong proof system will be unable to prove certain true statements expressible in its language. Adding more rules in an attempt to fix this can result in being able to prove some of the old unprovable statements, but new unprovable statements (maybe just more of them?) will arise.
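For reference, here is a standard modern statement of Gödel’s first incompleteness theorem (my gloss, not Gentzen’s wording; the “weak fragment” clause is the usual hypothesis, not something from his article):

```latex
% First incompleteness theorem, standard modern form (not a quotation
% from Gentzen). T ranges over theories in the language of arithmetic.
\textbf{Theorem (G\"odel, 1931).}
If $T$ is a consistent, recursively axiomatizable theory extending a
weak fragment of arithmetic, then there is a sentence $G_T$ in the
language of $T$ such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T .
\]
% Adding G_T as a new axiom yields a stronger theory T' = T + G_T,
% which (if consistent) has its own undecidable sentence G_{T'}:
% this is the "catch-up" dynamic described above.
```

The comment in the last lines is the point Gentzen is reacting to: extending the system never closes the gap, it only relocates it.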

Gentzen’s reaction to this is to shrug his philosophical shoulders. There may be consequences that we can’t capture using our current forms of inference, but that just means we need new forms of inference. To these we will need to add further forms of inference. And so on.

I thought this was, by itself, an interesting position whose foundational worth I haven’t figured out. But, in reading Logical Syntax of Language, I found some things that surprised me. First, Carnap read Gentzen. He cites the consistency-of-arithmetic article. I had thought that Gentzen was mostly unknown or ignored in Europe, sort of like Frege. This doesn’t dispel that possibility though. The other thing was that Carnap has a similar reaction to Gödel’s incompleteness theorems. At least, he has a similar reaction to their consequences for his work. He says, “[E]verything mathematical can be formalized, but mathematics cannot be exhausted by one system; it requires an infinite series of ever richer languages.” (LSL 60d)

The parallel with Gentzen should be clear. The reaction seems to come from a common philosophical position with respect to mathematics. I’d like to call that an inferentialist one, although I’m a little hesitant. I’m unclear what Carnap’s views about inference are exactly, but it seems like it may be apt. It will depend on whether the inference forms are part of the richer language for Carnap. If that turns out not to work, the common thread might be the syntactic orientation of the two logicians. I think they have different philosophical views about syntax though, so that might not be entirely happy either. I’m not familiar enough with Gentzen to say how Carnap’s view of syntax’s role in science would strike him. In Carnap there doesn’t seem to be the talk of defining the logical constants syntactically or inferentially in the way that is alluded to in Gentzen’s papers. Still, it seems like the right sort of response from a certain orientation, one I want to call inferentialist. What Goedel’s results show is that the inferentialist can’t be satisfied with a static or final system for mathematics. An interesting question would be whether the same sort of “dynamic” would carry over to the non-mathematical. I have no clue if there is a lesson for inferentialist semantics generally lying in there.

A separate question is whether the above view is satisfactory as a foundational position. One reason to think that it isn’t is that there seems to be some slack between an inferential system at any one time and the consequences of the axioms. The latter is always outrunning the former, which tries to play catch-up. One might want their foundational position to explain what is going on with the latter, or at least the slack between the two. Although, that might stack the deck in favor of classical logic against something like intuitionism or constructivist positions.

## 12 comments


February 11, 2008 at 7:49 am

Ole Thomassen Hjortland: One idea proposed by Stephen Read and Alan Weir (among others) is to enrich the proof-system with non-recursive rules (specifically, the omega-rule), so as to avoid Gödel’s result. However, it strikes me as unclear what the consequences of adding infinitary rules will be for inferentialist semantics.
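For readers who haven’t seen it, the ω-rule mentioned here is the infinitary rule (standard formulation; the notation is mine, not Read’s or Weir’s):

```latex
% The omega-rule: from proofs of phi(n-bar) for every numeral n-bar,
% conclude the universal generalization. Because it has infinitely
% many premises, derivations are no longer finite objects and the set
% of theorems need not be recursively enumerable -- which is why
% Goedel's theorems, as usually stated, no longer apply.
\[
  \frac{\vdash \varphi(\bar{0}) \qquad \vdash \varphi(\bar{1}) \qquad
        \vdash \varphi(\bar{2}) \qquad \cdots}
       {\vdash \forall x\, \varphi(x)}
  \;(\omega)
\]
```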

February 11, 2008 at 3:49 pm

Shawn: Carnap adds an ω-rule, but it is one of his consequence rules, not his derivation rules. They are in good company I guess.

How does that get around Gödel’s results? Is the idea that the rule is non-recursive, so we are abandoning the attempt to recursively generate the theorems of arithmetic? I had thought that there would be Gödel sentences for the system with the ω-rule added in. Where are my notes on this… (alternately: why doesn’t the library have Peter Smith’s book…)

February 11, 2008 at 5:20 pm

Ole Thomassen Hjortland: Yes, I think the idea is to abandon the notion of formal proof to which Gödel’s theorems apply. Whether or not there are interesting limiting results for systems with an omega-rule, I do not know. I’ll check out the Peter Smith book and see what I find.

February 11, 2008 at 5:23 pm

Ole Thomassen Hjortland: And, yes, Noam at CMU sent me this link which might be useful:

http://www.cs.chalmers.se/~coquand/proof.html

They are lecture notes on the omega-rule (among other things).

February 12, 2008 at 4:29 pm

Noam: Very interesting post! I did not know that Gentzen and Carnap held this position, though I have seen the same position in Girard’s “Locus Solum” (Appendix A, entry on “Truth”):

> In the twenties, people had a nice definition of truth, namely A is true when it is formally provable. And when Goedel realised that he could formalise this notion in arithmetic, no doubt that he attempted at proving a contradiction, by means of a mere imitation of Cantor’s diagonal argument. By the way he succeeded, but this was not a contradiction, just the fact that truth could not be defined that way, in other terms incompleteness. However I think that this idea of the twenties is correct, provided we replace “formal provability” which refers to something too narrow—syntax—with provability, but not in a given system, but in a system that we don’t yet know, i.e., in an expanding formalism.

February 12, 2008 at 4:40 pm

Shawn: I have not read the Girard stuff. I printed it off but couldn’t make any headway so quit. Is it good?

That is pretty neat that Girard holds a similar position. Apparently some of Feferman’s recent work has been on formalisms that can change and develop. I haven’t read it, but I’m curious if it is with similar considerations in mind.

February 12, 2008 at 5:03 pm

Greg: Interesting post. A couple of quick things:

1. You are certainly right to identify a type of similarity between Carnap’s (1930s) reaction to the incompleteness results and Gentzen’s. Nonetheless, I at least would not want to call Carnap’s reaction a philosophical shoulder shrug. A, if not the, central aim of Logical Syntax is to save the notion of analyticity (esp. of number theory) in the face of the incompleteness phenomena. The definition of ‘analytic in language II’ is so convoluted precisely because Carnap is trying to find a syntactic notion of analyticity that keeps things like a Gödel sentence analytic.

2. You write: “In Carnap there doesn’t seem to be the talk of defining the logical constants syntactically or inferentially in the way that is alluded to in Gentzen’s papers.” If all you mean is that Carnap does not use either the sequent calculus or natural deduction in LSL, then you’re of course right. I was just wondering whether you meant something else/something further, for Carnap does, I think I’d say, attempt to ‘define the logical constants syntactically’ in LSL.

cheers

Greg

February 12, 2008 at 5:23 pm

Shawn: Re 1: Fair enough. The shoulder shrug comment was a bit of hyperbole. It probably was not fair to Carnap.

Re 2: I didn’t just mean that Carnap did not use natural deduction or sequent calculus. In the prose sections, Gentzen talks about the (intro-)rules defining the logical constants. I didn’t notice this when I read through LSL, but that sort of thing seems to be missing. There is a question of whether what Carnap does can be interpreted as providing an inferentialist definition of the logical constants. This wasn’t evident to me, although I need to go through the relevant parts of LSL to verify. This might be compounded by my confusing issues surrounding analyticity later in the book, things Coffa, in From Kant to Carnap, points to in comparing Carnap’s work with Tarski’s on truth.

There is another question of whether Carnap takes himself to be defining the logical constants with his D-rules. The answer to this seems to be negative after a first read through. I’d love to come across passages to the contrary though.

I think these two questions got conflated in my post, where I indicated negative answers to both. I’m inclined to say “I’m not sure” to the first and no to the second.

February 12, 2008 at 8:31 pm

Greg: You’re right that Carnap does not take the intro rules as defining the logical constants (the L-terms, in Carnap’s lingo). I think LSL section 50 contains Carnap’s characterization of the logical constants. Again, neat post.

February 12, 2008 at 9:15 pm

Noam:

> I have not read the Girard stuff. I printed it off but couldn’t make any headway so quit. Is it good?

It is very good, but it is also incredibly opaque because of Girard’s writing style. I’ve sort of been reading it sporadically over the last three or four years, picking up bits and pieces.

February 14, 2008 at 12:13 am

Shawn: Greg,

Shortly after I wrote my last response to you, we talked about LSL in the Carnap/Quine class, in particular section 50. It seems like Carnap takes the transformation rules (C-rules?) as fixing the meaning of the logical and mathematical vocabulary, at least in the purely logical and mathematical parts. The text is still a little opaque so I’m not sure how to fit that into the picture, but it looks like my answers to both questions I distinguished were a little bit hasty.

February 15, 2008 at 5:18 pm

Greg: Hello again. I was a bit cryptic/terse in my last comment. You are definitely right to say: “Carnap takes the transformation rules (C-rules?) as fixing the meaning of the logical and mathematical vocabulary.” But (part of) the point of LSL 50 and thereabouts, if I’m remembering correctly, is to show how to take an arbitrary set of formation and transformation rules (i.e. an arbitrary language) and return the set of logical terms proper to that pair of rules.

Carnap’s actual recipe is a bit convoluted and hairy; I think there is a good discussion of it in a paper online by Denis Bonnay called “Carnap’s Criterion of Logicality” (a google search should turn it up).