
There is an interesting post on Brandom’s discussion of singular terms over at Jon Cogburn’s blog. It makes some good points about one of the hardest arguments in Articulating Reasons. Jon points out that indefinite descriptions don’t seem to fit the pattern Brandom argues singular terms must fit. I don’t think I’ve come across that before. A quick glance at the responses to objections to the corresponding argument in MIE doesn’t reveal any standing response to it either.

I’ve been a bit negligent in my posting. My apologies. Things got terribly hectic around here and it sapped all my energy for writing.

I was talking to a colleague today and was asked what to read over the summer to get a handle on Brandomian inferentialism. One answer is to read all of Making It Explicit, but that is a bit daunting. I’ve compiled the following list, which is much more manageable and, I think, hits all of the essential points. The abbreviations are: MIE for Making It Explicit and AR for Articulating Reasons. Read the rest of this entry »

There is a review of Jeremy Wanderer’s Robert Brandom on NDPR. The reviewer is Christopher Gauker. It seems like a good review that is generally positive about the book and fairly critical of Brandom’s philosophy. Gauker thinks that Wanderer’s exposition, while accurate and generally good, doesn’t settle the question of the status of objectivity in Brandom’s view. There are scattered critical comments on Brandom’s views, particularly on conditionals and linguistic scorekeeping.

This is an attempt to think through some topics from the philosophy of math seminar I’m attending.

Read the rest of this entry »

Reading the Revision Theory of Truth has given me an idea that I’m trying to work out. This post is a sketch of a rough version of it. The idea is that circular definitions motivate the need for a sharper conception of content on an inferentialist picture, possibly other pictures of content too. It might have a connection to harmony as well, although that thread sort of drops out. The conclusion is somewhat programmatic, owing to the holidays hitting. Read the rest of this entry »

Michael Kremer does something interesting in his “Kripke and the Logic of Truth.” He presents a series of consequence relations, \models_{(V,K)}, where V is a valuation scheme and K is a class of models. He provides a cut-free sequent axiomatization of the consequence relation \models_{(V,M)}, where V is the strong Kleene scheme and M is the class of all fixed points. He proves the soundness and completeness of the axiomatization with respect to the class of all fixed points. After this, he proves the admissibility of cut. I wanted to note a couple of things about this proof.

As Kremer reminds us, standard proofs of cut elimination (or admissibility) proceed by a double induction. The outer induction is on the complexity of the cut formula, and the inner induction is on how long the cut formula has been around in the proof. Kremer notes that this will not work for his logic, since the axioms for T, the truth predicate, reduce complexity. (I will follow the convention that ‘A’ is the quote name of the sentence A.) T’A’ is atomic no matter how complex A is. ∃xTx is more complex than T’∃xTx’, despite the latter being an instance of the former. Woe for the standard method of proving cut elimination.
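To make the point concrete, here is a toy encoding of the complexity measure; the datatypes and names are my own illustration, not Kremer’s formalism.

```haskell
-- A toy encoding of why the usual complexity measure gets no grip: the truth
-- predicate T applies to terms, and terms include quote names of whole sentences.
data Term
  = Var String          -- a variable, e.g. x
  | Quote Sentence      -- the quote name 'A' of a sentence A
  deriving (Show, Eq)

data Sentence
  = Atom String
  | T Term
  | Neg Sentence
  | Exists String Sentence
  deriving (Show, Eq)

-- Logical complexity: any T-sentence counts as atomic, however complex a
-- sentence its argument may name.
complexity :: Sentence -> Int
complexity (Atom _)     = 0
complexity (T _)        = 0
complexity (Neg a)      = 1 + complexity a
complexity (Exists _ a) = 1 + complexity a

-- ∃xTx has complexity 1 ...
existsTx :: Sentence
existsTx = Exists "x" (T (Var "x"))

-- ... but its instance T'∃xTx' has complexity 0, so the T rules can take us
-- from an atomic formula to an arbitrarily complex one.
instanceOfIt :: Sentence
instanceOfIt = T (Quote existsTx)
```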

Kremer’s method of proving cut elimination is to use the completeness proof. He notes that cut is a sound rule, so by completeness it is admissible. Given the complexity of the completeness proof, I’m not sure this saves on work, per se.

Now that the background is in place, I can make my comments. First, he proves the admissibility of cut rather than mix. The standard method proves the admissibility of mix, which is like cut on multiple occurrences of a formula, since proving cut admissible directly would otherwise run into problems with the contraction rule. Of course, the technique Kremer uses is equally applicable to mix, but the main reason we care about mix is that showing it admissible suffices for the admissibility of cut. Going the semantic route lets us ignore the structural rules, or at least contraction.
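For reference, here are the two rules in roughly their Gentzen-style forms, where \Delta^{*} and \Sigma^{*} are \Delta and \Sigma with every occurrence of the mix formula A deleted:

\frac{\Gamma \Rightarrow \Delta, A \qquad A, \Sigma \Rightarrow \Pi}{\Gamma, \Sigma \Rightarrow \Delta, \Pi}\ (\text{Cut}) \qquad\qquad \frac{\Gamma \Rightarrow \Delta \qquad \Sigma \Rightarrow \Pi}{\Gamma, \Sigma^{*} \Rightarrow \Delta^{*}, \Pi}\ (\text{Mix})

Every cut can be simulated by a mix together with structural rules, which is why showing mix admissible is enough for cut.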

Next, it seems significant that Kremer used the soundness and completeness results to prove cut admissible. He describes this as “a detour through semantics.” He doesn’t show that a proof theory alone will be unable to prove the result, just that the standard method won’t do it. Is this an indication that proof theory cannot adequately deal with semantic concepts like truth? This makes it sound like something one might have expected from Goedel’s incompleteness results. There are some differences though. Goedel’s results are for systems much stronger than what Kremer is working with. Also, one doesn’t get cut elimination for the sequent calculus formulation of arithmetic.

Lastly, the method of proving cut elimination seems somewhat at odds with the philosophical conclusions he wants to draw from his results. He cites his results in support of the view that the inferential role of logical vocabulary gives its meaning. This is because he uses the admissibility of cut to show that the truth predicate for a three-valued language is conservative over the base language and eliminable. These are the standard criteria for definitions, so the rules can be seen as defining truth. Usually proponents of a view like this stick solely to the proof theory for support. I’m not sure what to make of the fact that the Kripke constructions used in the models for the soundness and completeness results, and so for cut elimination, do not seem to fit neatly into this picture. That being said, it isn’t entirely clear that appealing to the model theory as part of the groundwork for the argument that the inference rules define truth causes problems. I don’t have any argument that it does; it just seems out of step with the general framework. I think Greg Restall has a paper on second-order logic’s proof theory and model theory that might be helpful…
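For reference, the two criteria just mentioned, stated roughly and for a defined expression in general rather than in Kremer’s exact setting:

Conservativeness: if \Gamma \vdash A holds in the expanded system and neither \Gamma nor A contains the new vocabulary, then \Gamma \vdash A already holds in the base system.

Eliminability: every formula of the expanded language is provably equivalent to some formula free of the new vocabulary.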

In the proof theory class I’m taking, Belnap introduced several different axiomatic systems, their natural deduction counterparts, and deduction theorems linking them. We started with the Heyting axiomatization for intuitionistic logic and the Fitch formulation of natural deduction for it.

The neat thing was the explanation of how to turn the intuitionistic natural deduction system into one for relevance logic. To do this, we attach sets of indices to formulas. When a formula is assumed, it receives a fresh singleton index set, one whose index is not attached to any other formula. The rule for →-In still discharges assumptions, but it is changed so that the set of indices attached to A→B is the set attached to B minus the set attached to A, and A’s index must be among B’s indices. This enforces non-vacuous discharge, and it restricts what can be discharged. The gloss was that A must actually be used in the derivation of B.

From what I’ve said so far there isn’t any way for a set of indices to grow; the rule for →-Elim is what does that. When B is obtained from A and A→B, B’s indices are the union of A’s and A→B’s. This builds up indices on formulas in a derivation, creating a record of what was used to get what. Only the indices of formulas used in instances of →-Elim make it into the index set of the conclusion, so superfluous assumptions can’t sneak in and appear to be relevant to the conclusion.
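Here is a minimal sketch of just this index bookkeeping; the encoding and names are mine, not Belnap’s presentation.

```haskell
import qualified Data.Set as Set

-- Index sets attached to formulas in a derivation.
type Indices = Set.Set Int

-- Assuming a formula gives it a fresh singleton index set.
assume :: Int -> Indices
assume i = Set.singleton i

-- →-Elim: the conclusion B records everything used to get it, taking the
-- union of the indices on A and on A→B.
elimArrow :: Indices -> Indices -> Indices
elimArrow idxA idxArrow = Set.union idxA idxArrow

-- →-In: discharging the assumption A, whose index is i, to conclude A→B.
-- The discharge is allowed only if i is among B's indices (so A was actually
-- used); the conditional then carries B's indices minus {i}.
introArrow :: Int -> Indices -> Maybe Indices
introArrow i idxB
  | i `Set.member` idxB = Just (Set.delete i idxB)
  | otherwise           = Nothing   -- vacuous discharge is blocked
```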

This doesn’t on the face of it seem like a large change: just the addition of some indices, with a minor change to the assumption rule and the intro and elim rules. The rule for reiterating isn’t changed; indices don’t change under it. Reiterating a formula into a subproof puts it under the assumption of the subproof in the sense of appearing below it in the Fitch proof, but not in the sense of dependence. The indices and the changes in the rules induce new structural restrictions, as others have noted. We haven’t gotten to sequent calculi or display logic, so I’m not going to go into what the characterization of relevance logic would look like in those. Given my recent excursion into Howard-Curry land, I do want to mention what must be done to get relevance logic in the λ-calculus. A restriction has to be placed on the abstraction rule: no vacuous abstractions are allowed. This is roughly what one would expect. Given that a proof of a conditional corresponds to a function from proofs of its antecedent to proofs of its consequent, i.e. to a λ-abstraction, a restriction on the former should translate into a restriction on the latter, which it does.
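A small sketch of that restriction on λ-terms (my own illustration, not something from the course): a term passes the check only if every abstraction binds a variable that occurs free in its body.

```haskell
-- Untyped λ-terms.
data Term = Var String | App Term Term | Lam String Term

-- Does variable x occur free in a term?
freeIn :: String -> Term -> Bool
freeIn x (Var y)   = x == y
freeIn x (App s t) = freeIn x s || freeIn x t
freeIn x (Lam y t) = x /= y && freeIn x t

-- True iff the term contains no vacuous abstractions.
relevant :: Term -> Bool
relevant (Var _)   = True
relevant (App s t) = relevant s && relevant t
relevant (Lam x t) = freeIn x t && relevant t

-- Example: the K combinator λx.λy.x fails the check, matching the fact that
-- A→(B→A) is not a theorem of the relevant logic R.
k :: Term
k = Lam "x" (Lam "y" (Var "x"))   -- relevant k == False
```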

The first thing to note about the incompatibility semantics in the earlier post is that it is for a logic that is monotonic in side formulas, as well as in the antecedents of conditionals. (Is there a term for the latter? I.e. if p→q then p&r→q.) This is because of the way incompatibility entailment is defined: X entails Y just in case ∩_{p∈Y} I(p) ⊆ I(X). Since anything incompatible with X is incompatible with any Z⊇X, we have I(X) ⊆ I(Z), so ∩_{p∈Y} I(p) ⊆ I(Z) and Z entails Y as well. This wouldn’t be all that interesting to note, since usually non-monotonicity is the interesting property, except that Brandom is big on material inference, which is non-monotonic. The incompatibility semantics as given in the Locke Lectures is then not a semantics for material inference. This is not to say that it can’t be augmented in some way to come up with an incompatibility semantics for a non-monotonic logic. There is a bit of a gap between the project in MIE and the incompatibility semantics. Read the rest of this entry »
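Here is a small toy model of that definition, just to see the monotonicity at work; the vocabulary, the incoherence predicate, and all the names are my own illustrative choices, not Brandom’s.

```haskell
import Data.List (subsequences, union, intersect)

type Sentence = String
type Claims   = [Sentence]   -- a set of claims, represented as a list

vocab :: [Sentence]
vocab = ["p", "q", "r"]

-- Say a set of claims is incoherent just when it contains both p and q.
incoherent :: Claims -> Bool
incoherent x = "p" `elem` x && "q" `elem` x

-- All candidate sets of claims over the vocabulary.
allSets :: [Claims]
allSets = subsequences vocab

-- I(X): the sets incompatible with X, i.e. whose union with X is incoherent.
incompat :: Claims -> [Claims]
incompat x = [ z | z <- allSets, incoherent (x `union` z) ]

-- X entails Y iff ∩_{p∈Y} I(p) ⊆ I(X).
entails :: Claims -> Claims -> Bool
entails x y = all (`elem` incompat x) common
  where common = foldr intersect allSets [ incompat [p] | p <- y ]

-- Since I(X) ⊆ I(Z) whenever X ⊆ Z, whatever X entails its supersets entail:
-- entails ["p"] ["p"] and entails ["p", "r"] ["p"] both come out True.
```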

Like I said, the Pitt-CMU conference has come and gone. I said before that if my comments on Ole’s paper went over well, I’d put them up here. The comments seem to have gone well, so I’m going to put them up. The comments won’t make much sense without having read the paper, which is on proof-theoretic harmony.  Read the rest of this entry »

In his article “Present State of Research into the Foundations of Mathematics,” Gentzen briefly talks about Goedel’s incompleteness results. He says that it is not an alarming result because it says “that for number theory no adequate system of forms of inference can be specified once and for all, but that, on the contrary, new theorems can always be found whose proof requires new forms of inference.” This is interesting because Gentzen worked with Hilbert on his proof-theoretic projects and created two of the three main proof-theoretic frameworks, natural deduction and the sequent calculus. The incompleteness theorems are often taken as stating a sort of limit on proof-theoretic means. (I’m treading on shaky ground here, so correct me if my story goes astray.) That is to say, any sufficiently strong proof system will be unable to prove certain true statements expressible in its language. Adding more rules in an attempt to fix this can make some of the old unprovable statements provable, but new (or maybe just more?) unprovable statements will arise. Read the rest of this entry »
