
There is an interesting post on Brandom’s discussion of singular terms over at Jon Cogburn’s blog. It makes some good points about one of the hardest arguments in Articulating Reasons. Jon points out that indefinite descriptions don’t seem to fit the pattern Brandom argues singular terms must fit. I don’t think I’ve come across that before. A quick glance at the responses to objections to the corresponding argument in MIE doesn’t reveal any standing response to it either.

I’ve been a bit negligent in my posting. My apologies. Things got terribly hectic around here and it sapped all my energy for writing.

I was talking to a colleague today and was asked what to read over the summer to get a handle on Brandomian inferentialism. One answer is to read all of Making It Explicit, but that is a bit daunting. I’ve compiled the following list, which is much more manageable and, I think, hits all of the essential points. The abbreviations are: MIE for Making It Explicit and AR for Articulating Reasons.

This is an attempt to think through some topics from the philosophy of math seminar I’m attending.


Reading the Revision Theory of Truth has given me an idea that I’m trying to work out. This post is a sketch of a rough version of it. The idea is that circular definitions motivate the need for a sharper conception of content on an inferentialist picture, and possibly on other pictures of content too. It might have a connection to harmony as well, although that thread sort of drops out. The conclusion is somewhat programmatic, owing to the holidays.

In the proof theory class I’m taking, Belnap introduced several different axiomatic systems, their natural deduction counterparts, and deduction theorems linking them. We started with the Heyting axiomatization for intuitionistic logic and the Fitch formulation of natural deduction for it.

The neat thing was the explanation of how to turn the intuitionistic natural deduction system into relevance logic. To do this, we add indices to attach to formulas. When a formula is assumed, it receives a singleton set containing a fresh index, one not attached to any other formula. The rule for →-In still discharges assumptions, but it is changed so that the set of indices attached to A→B is the set attached to B minus the set attached to A, and A’s index must be among B’s indices. This enforces non-vacuous discharge and restricts what can be discharged. The way it was glossed was that A must actually be used in the derivation of B.

From what I’ve said, there isn’t any way for a set of indices to get bigger. The rule for →-Elim does just that. When B is obtained from A and A→B, B’s indices will be the union of A’s and A→B’s. This builds up indices on formulas in a derivation, creating a record of what was used to get what. Only the indices of formulas used in an instance of →-Elim make it into the set of indices for the conclusion, so superfluous assumptions can’t sneak in and appear to be relevant to a conclusion.
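To make the bookkeeping concrete, here is a small worked derivation (my own reconstruction, with index sets written as superscripts) of A→((A→B)→B):

```latex
\begin{align*}
1.\;& A^{\{1\}} && \text{assumption, fresh index } 1\\
2.\;& (A \to B)^{\{2\}} && \text{assumption, fresh index } 2\\
3.\;& B^{\{1,2\}} && \to\text{-Elim on 1, 2: union of the index sets}\\
4.\;& ((A \to B) \to B)^{\{1\}} && \to\text{-In, discharging 2: } \{1,2\} \setminus \{2\}\\
5.\;& (A \to ((A \to B) \to B))^{\emptyset} && \to\text{-In, discharging 1: } \{1\} \setminus \{1\}
\end{align*}
```

Both discharges satisfy the side condition, since the discharged assumption’s index appears in the index set of the formula below it. Had we assumed some unused C and tried to discharge it to get C→B, C’s fresh index would not be among B’s indices, and →-In would be blocked.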

This doesn’t on the face of it seem like a large change: just the addition of some indices, with minor changes to the assumption rule and the intro and elim rules. The rule for reiteration isn’t changed; indices don’t change for it. Reiterating a formula into a subproof puts it under the assumption of the subproof in the sense of appearing below it in the Fitch proof, but not in the sense of dependence. The indices and the changes in the rules induce new structural restrictions, as others have noted. We haven’t gotten to sequent calculi or display logic, so I’m not going to go into what the characterization of relevance logic would look like in those. Given my recent excursion into Curry-Howard land, I do want to mention what must be done to get relevance logic in the λ-calculus: a restriction has to be placed on the abstraction rule, i.e. no vacuous abstractions are allowed. This is roughly what one would expect. Given the connection between conditionals being functions from proofs of their antecedents to proofs of their consequents and λ-abstraction forming functions, putting a restriction on the former should translate to a restriction on the latter, which it does.
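As a sketch of what the restriction amounts to (hypothetical code of my own, not from the course), here is a checker that rejects vacuous abstractions, i.e. λx.M where x is not free in M:

```haskell
import Data.List (delete, nub)

-- Untyped lambda terms.
data Term = Var String
          | App Term Term
          | Lam String Term
  deriving Show

-- The free variables of a term.
freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (App m n) = nub (freeVars m ++ freeVars n)
freeVars (Lam x m) = delete x (freeVars m)

-- A term respects the relevance restriction when every abstraction
-- binds a variable that actually occurs free in its body.
relevant :: Term -> Bool
relevant (Var _)   = True
relevant (App m n) = relevant m && relevant n
relevant (Lam x m) = x `elem` freeVars m && relevant m
```

The identity combinator λx.x passes the check, while λx.λy.x fails at the inner abstraction. That failure is just what one wants: the type of λx.λy.x is A→(B→A), the weakening axiom that relevance logic rejects.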

Like I said, the Pitt-CMU conference has come and gone. I said before that if my comments on Ole’s paper went over well, I’d put them up here. They seem to have gone well, so here they are. They won’t make much sense without having read the paper, which is on proof-theoretic harmony.

In his article "Present State of Research into the Foundations of Mathematics," Gentzen briefly talks about Gödel’s incompleteness results. He says that it is not an alarming result because it says "that for number theory no adequate system of forms of inference can be specified once and for all, but that, on the contrary, new theorems can always be found whose proof requires new forms of inference." This is interesting because Gentzen worked with Hilbert on his proof-theoretic projects and created two of the three main proof-theoretic frameworks, natural deduction and the sequent calculus. The incompleteness theorems are often taken as stating a sort of limit on proof-theoretic means. (I’m treading on shaky ground here, so correct me if my story goes astray.) That is to say, any consistent, sufficiently strong proof system will be unable to prove certain true statements expressible in its language. Adding more rules in an attempt to fix this can result in being able to prove some of the old unprovable statements, but new unprovable statements will arise.

Before getting to the post proper, it will help to lay out a distinction drawn, I believe, by Sellars. The distinction is between three sorts of transitions one could make in relation to propositions, for example if one is playing a language game of some sort. They are language-entry moves, language-language moves, and language-exit moves. The first is made through perception and conceptualization. Perceiving the crumb cake entitles me to say that there is crumb cake there. The second is paradigmatic inferential or consequential relations among propositions. Inferring from p&q to p is a language-language move. The third is moving from a practical commitment or explicit desire to action. Borrowing Perry’s example, it is the move from thinking that I have to be at the meeting and that the meeting is starting now to me getting up and rushing off to the meeting.

In Making It Explicit, Brandom distinguishes three things that could be meant by inferentialism: the necessity of inferential relations, the sufficiency of inferential relations, and hyperinferentialism. The first is the claim that inferential articulation is necessary for meaning. Representation might also be necessary, but at the least inference is necessary. The second is the claim that inferential articulation is sufficient for meaning. In both of these, inference is taken broadly, so as not to collapse into hyperinferentialism, which is the thesis that inference narrowly construed is sufficient for meaning. The narrow construal is that inferences are language-language moves. What does this make the broad construal? According to Brandom, it includes the language-entry and -exit moves. In MIE, Brandom defends, I believe, the necessity of inferential relations, although he says some things that sound like he likes the idea of the sufficiency claim. He doesn’t think that hyperinferentialism will work, because he thinks that for some words the content depends on causal/perceptual connections; color terms are examples, I think. Additionally, the content of some words exhibits itself in the practical consequences it has for our actions, and this exhibition is an essential part of the meaning of the word. My beliefs about crumb cake will influence how I act around crumb cake. Hyperinferentialism cannot account for these because the language-entry and -exit moves essential to their meaning are not things hyperinferentialism has access to.

Brandom’s claim, then, once things have been unpacked a bit, amounts to saying that narrowly inferential connections, perceptual input, and practical output are necessary for meaning. This seems to undercut the charge that inferentialism loses the world in a froth of words, a charge mentioned at the end of ch. 4 of MIE, I think. It is also a somewhat looser version of inferentialism, since things that are not traditionally inferential get counted as inferential. The inferentialist could probably make a case that the language-language moves are particularly important to meaning, but I think Brandom’s inferentialism stretches the bounds of inference a bit. I’m not sure an inferentialist of the Prawitz-Dummett sort would be entirely comfortable with the Brandomian version of it. By the end of MIE, Brandom’s broad notion of inference encompasses a lot. Granted, it is fairly plausible that much of that is important to or essential for meaning. However, I wonder if it doesn’t move a bit away from the motivating idea of inferentialism, namely that inference is what is central.

[Edit: I’m trying out using the html for the math symbols and Greek letters since Blogger won’t do LaTeX markup. Let me know if they are not rendering correctly in your browser.]
Sometimes you read that natural deduction systems are more appropriate for intuitionistic logic while sequent calculi are more appropriate for classical logic, e.g. in Hacking’s article "What Is Logic?", whence the title. While reading a defense by Peregrin of why intuitionistic logic best characterizes the logic of inference, I came across a line that caught my eye. Peregrin comes to the conclusion that intuitionistic logic is the logic of inference as long as we restrict ourselves to single conclusion inferences. He goes on to say that he has written elsewhere about why we should restrict ourselves in that way. The interesting bit is the restriction to single conclusion inference.

One of the first lessons learned when studying the sequent calculus is that you get classical logic from the intuitionistic rules by allowing multiple conclusions. Going back over my notes from proof theory last year, it seems that you can also get classical logic while keeping single conclusions by adding a reductio rule: from Γ, ∼φ ⇒ ⊥, infer Γ ⇒ φ. This requires giving up the subformula property, so the multiple conclusion formulation is usually opted for.
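To see what the multiple conclusions buy, here is the standard derivation of excluded middle, which is intuitionistically unprovable; the second line parks two formulas on the right:

```latex
\begin{align*}
& \varphi \Rightarrow \varphi && \text{axiom}\\
& \Rightarrow \varphi,\ \sim\!\varphi && \sim\text{-R}\\
& \Rightarrow \varphi,\ \varphi \lor \sim\!\varphi && \lor\text{-R}\\
& \Rightarrow \varphi \lor \sim\!\varphi,\ \varphi \lor \sim\!\varphi && \lor\text{-R}\\
& \Rightarrow \varphi \lor \sim\!\varphi && \text{contraction-R}
\end{align*}
```

With only one formula allowed on the right, the ∼-R step would have to throw φ away, and the derivation dies there.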

Natural deduction systems only allow one conclusion for each inference. This doesn’t mean we don’t have natural deduction systems for classical logic. We do, but they involve a reductio rule, from ∼∼φ to φ. How does this square with natural deduction being more appropriate to intuitionistic logic? There seems to be a sense in which the reductio rule is really a multiple conclusion rule. In a standard multiple conclusion sequent calculus formulation, proving the reductio rule requires using multiple conclusions. Alternatively, we can prove that a formula is provable in a classical natural deduction system with reductio iff it is provable in the multiple conclusion sequent calculus. Given that this is "iff", why should the sequent calculus version be privileged? The sequent calculus version has the subformula property and cut elimination. This puts really strong restrictions on the structure of proofs. In particular, the subformula property requires that every formula appearing in the proof be a subformula of something in the bottom line of the proof. This restriction is strong enough to simulate, in a way, semantic effects (see Jeremy Avigad’s work for more on this). Put in a more philosophical way, it requires us to make explicit everything that goes directly into the proof.
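The point about the reductio rule can be made concrete. In the classical sequent calculus its proof takes three lines, and the middle line indispensably carries two conclusions:

```latex
\begin{align*}
& \varphi \Rightarrow \varphi && \text{axiom}\\
& \Rightarrow \varphi,\ \sim\!\varphi && \sim\text{-R}\\
& \sim\!\sim\!\varphi \Rightarrow \varphi && \sim\text{-L}
\end{align*}
```

Intuitionistically the ∼-R step is blocked: with at most one formula on the right, there is no way to introduce ∼φ while keeping φ around.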

The explicitness requirement is what does the work: the sequent calculus version shows us that the reductio rule is really a multiple conclusion wolf in single conclusion sheep’s clothing. It is apparent that none of the other rules require multiple conclusions. These correspond to the intuitionistic sequent calculus rules, and they go over directly to natural deduction systems. Classical logic with its reductio rule can go over too, but along the way the reductio rule assumes the appearance of a single conclusion rule. Natural deduction is natural for single conclusion inference and the sequent calculus is natural for multiple conclusion inference (I haven’t really defended the latter claim here). There is a sense, then, in which natural deduction is more natural for intuitionistic logic.

I don’t expect that this idea is original to me, but I was thinking today about Hacking’s view of logic in his "What Is Logic?" There are some conditions that sequent calculus introduction rules must satisfy in order to be properly logical, including being conservative over the base language. There are also conditions that the deducibility relation must satisfy, including admitting cut elimination and elimination of the structural rules. In particular, the system must admit dilution elimination. This is why modal logic doesn’t count as logic for Hacking. It (S4 in particular, since there are lots of modal logics) doesn’t admit of general dilution (weakening) elimination. Hacking sees this as a feature, not a bug. What I am now wondering about are substructural logics, like relevance logic. Relevance logic (if I am remembering this rightly) doesn’t have weakening in it to begin with, so one can’t prove an elimination theorem for it. Requiring the deducibility relation to admit weakening seems to put a strong constraint on what counts as logic at the outset. What is so special about weakening that it gets picked out? The other rules that get singled out as important are cut and the basic axiom, or identity, rule. It is hard to argue with those. Granted, Hacking says those are sufficient conditions and makes no effort to give necessary ones, but I am now puzzled.
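For reference, the dilution (weakening) rules at issue are the structural rules that let extra formulas enter a sequent for free:

```latex
\[
\frac{\Gamma \Rightarrow \Delta}{\Gamma,\ A \Rightarrow \Delta}\ \text{(dilution-L)}
\qquad
\frac{\Gamma \Rightarrow \Delta}{\Gamma \Rightarrow \Delta,\ A}\ \text{(dilution-R)}
\]
```

Dilution-L is exactly what relevance logic refuses: it would let an unused A count among the premises, the sort of superfluous assumption the indices discussed above were designed to block.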
