Michael Kremer does something interesting in his “Kripke and the Logic of Truth.” He presents a series of consequence relations, \models_{(V,K)}, where V is a valuation scheme and K is a class of models. He provides a cut-free sequent axiomatization of the consequence relation \models_{(V,M)}, where V is the strong Kleene scheme and M is the class of all fixed points. He proves the soundness and completeness of the axiomatization with respect to the class of all fixed points, and then proves the admissibility of cut. I wanted to note a couple of things about this proof.
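
To fix ideas, these consequence relations are, roughly, relations of truth preservation over a class of models. Glossing over the details of Kremer’s definitions, the shape is something like this: Γ \models_{(V,K)} Δ iff, for every model in K, if every member of Γ is true in that model under the valuation scheme V, then some member of Δ is true in it.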

As Kremer reminds us, standard proofs of cut elimination (or admissibility) proceed by a double induction: the outer inductive hypothesis is on the complexity of the cut formula, and the inner one is on the rank of the cut, roughly, how long the cut formula has been hanging around in the derivations of the premises. Kremer notes that this will not work for his logic, since the axioms for T, the truth predicate, reduce complexity. (I will follow the convention that ‘A’ is the quote name of the sentence A.) T’A’ is atomic no matter how complex A is, and ∃xTx is more complex than T’∃xTx’ despite the latter being an instance of the former. Woe for the standard method of proving cut elimination.
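
To see the problem concretely, here is the sort of principle that does the damage, writing sequents as Γ ⇒ Δ. This is a sketch of intersubstitutivity-style rules, not a verbatim transcription of Kremer’s axioms: from Γ ⇒ Δ, A infer Γ ⇒ Δ, T’A’, and from A, Γ ⇒ Δ infer T’A’, Γ ⇒ Δ. A cut on the atomic formula T’A’ would have to be traded for a cut on A, which can be arbitrarily more complex, so the outer induction on the complexity of the cut formula gets no traction.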

Kremer’s method of proving the admissibility of cut is to use the soundness and completeness results: cut is a sound rule, so by completeness it is admissible. Given the complexity of the completeness proof, I’m not sure this saves on work, per se.
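
Spelled out, the detour runs roughly as follows, taking cut in its context-sharing form for simplicity. Suppose the premises Γ ⇒ Δ, A and A, Γ ⇒ Δ are both derivable. By soundness, both are valid: every fixed point making all of Γ true makes A or some member of Δ true, and every fixed point making A and all of Γ true makes some member of Δ true. Chaining these together, every fixed point making all of Γ true makes some member of Δ true, i.e. Γ \models_{(V,M)} Δ. By completeness, Γ ⇒ Δ is derivable. So the conclusion of any cut whose premises are derivable is itself derivable, which is exactly what admissibility requires, even though cut is not a rule of the calculus.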

Now that the background is in place, I can make my comments. First, he proves the admissibility of cut rather than of mix. The standard method proves the admissibility of mix, which is like cut applied to every occurrence of the cut formula at once, since the induction would otherwise run into problems with the contraction rule. Of course, the technique Kremer uses applies equally well to mix, but the main reason we care about mix is that showing it admissible suffices for the admissibility of cut. Going the semantic route lets us sidestep the structural rules, or at least contraction.
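
For reference, Gentzen’s mix rule has roughly this shape: from Γ ⇒ Δ and Σ ⇒ Π, where the mix formula A occurs in both Δ and Σ, infer Γ, Σ* ⇒ Δ*, Π, where Σ* and Δ* result from deleting every occurrence of A. Cut can be recovered from mix, with weakening to restore any extra occurrences of A that get deleted, which is why the admissibility of mix suffices for the admissibility of cut. Contraction is what pushes the standard induction toward mix in the first place: since it can multiply occurrences of the cut formula, the proof goes more smoothly if all of them are removed at once.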

Next, it seems significant that Kremer uses the soundness and completeness results to prove cut admissible. He describes this as “a detour through semantics.” He doesn’t show that purely proof-theoretic methods are unable to establish the result, just that the standard method won’t deliver it. Is this an indication that proof theory cannot adequately deal with semantic concepts like truth? That makes it sound like something one might have expected from Gödel’s incompleteness results, but there are some differences. Gödel’s results are about systems much stronger than the one Kremer is working with. Also, one doesn’t get cut elimination for the standard sequent calculus formulation of arithmetic either.

Lastly, the method of proving cut elimination seems somewhat at odds with the philosophical conclusions he wants to draw from his results. He cites his results in support of the view that the meaning of logical vocabulary is given by its inferential role. This is because he uses the admissibility of cut to show that the truth predicate for a three-valued language is conservative over the base language and is eliminable. These are the standard criteria for definitions (stated below), so the rules can be seen as defining truth. Usually proponents of a view like this stick solely to the proof theory for support. I’m not sure what to make of the fact that the Kripke constructions used in the models for the soundness and completeness results, and hence for cut elimination, do not seem to fit into this picture neatly. That being said, it isn’t entirely clear that appealing to the model theory as part of the groundwork for the argument that the inference rules define truth causes any problems. I don’t have an argument that it does; it just seems out of step with the general framework. I think Greg Restall has a paper on second-order logic’s proof theory and model theory that might be helpful…
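
For what it’s worth, the two criteria in their textbook form, stated for an arbitrary piece of defined vocabulary rather than for T in particular, run roughly as follows. Conservativeness: adding the rules for the new vocabulary proves no new sequents in the old vocabulary, so if Γ ⇒ Δ is derivable in the extended system and neither Γ nor Δ contains the new vocabulary, then Γ ⇒ Δ was already derivable without it. Eliminability: anything sayable with the new vocabulary is provably equivalent to something sayable without it. How exactly these get cashed out for the truth predicate in Kremer’s three-valued setting is part of what his results are supposed to establish.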
