One of the novel features of MIE is Brandom’s philosophy of logic, which he calls the expressive theory of logic. On this view, the primary purpose of logic is to express certain things, and it privileges the conditional and negation. The conditional expresses the acceptance of an inference from the premises, which form the antecedent, to the conclusion, which forms the consequent; the conditional lets you say that a certain inference is acceptable. Of course, conditionals in different logical systems express different sorts of acceptance. Classical conditionals express a weak form of acceptance; intuitionistic conditionals express a stronger acceptance, namely that there is a general method for transforming justification for the antecedent into justification for the consequent; and so on.

Negation expresses incompatibilities, generally in the presence of a conditional, so as to allow one to say that certain inferences are not kosher. Incompatibility can be used to create an entailment relation, defined as inclusion on sets of incompatibles. Brandom suggests taking conjunction and disjunction as set operations on those sets of incompatibilities. This would work for languages with a conditional and negation. If one does not have negation, then, I suppose, one could still assign the sets of incompatibilities, although one couldn’t say in the language that they were incompatibilities. Barring defining conjunction and disjunction in terms of incompatibilities, I’m not sure exactly what they would express. Conjunction might express the acceptance of both conjuncts. Intuitionistic disjunction might express the acceptance of, or the ability to demonstrate, one of the disjuncts, where you know which one. I’m not sure what classical disjunction would express exactly; possibly that one accepts one of the disjuncts, although no further information is given about which.
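To make the shape of the incompatibility idea concrete, here is a toy sketch in Python. The incompatibility assignments are invented for illustration, and this is not Brandom’s official formalism; it just shows entailment as inclusion on sets of incompatibles, with conjunction and disjunction as set operations on them.

```python
# Toy sketch of incompatibility entailment. INC maps a claim to the set of
# claims incompatible with it; the assignments below are made up.
INC = {
    "red": {"green", "blue"},
    "scarlet": {"green", "blue", "crimson"},
    "green": {"red", "scarlet"},
}

def entails(a: str, b: str) -> bool:
    """a entails b iff everything incompatible with b is incompatible with a."""
    return INC[b] <= INC[a]

def inc_conj(a: str, b: str) -> set:
    """Incompatibles of (a and b) as a set operation: the union."""
    return INC[a] | INC[b]

def inc_disj(a: str, b: str) -> set:
    """Incompatibles of (a or b) as a set operation: the intersection."""
    return INC[a] & INC[b]

print(entails("scarlet", "red"))   # True: scarlet rules out all that red does
print(entails("red", "scarlet"))   # False
```

On this sketch, being scarlet entails being red because scarlet’s incompatibles include all of red’s, which is at least suggestive of how the material inferential relations are supposed to fall out of the incompatibility sets.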

There is nothing in Ch. 2, where logical expressivism is introduced, about quantifiers. At this point in the book, nothing has been said about objects, so I’m not sure what the quantifiers express, since that is likely to be tied in with theoretical claims about objects. Modal operators aren’t addressed in MIE, although they are tackled in the Locke Lectures. I don’t remember those terribly well, although I think for the most part Brandom sticks to alethic modalities in S4 and S5. I could be wrong on this. I am almost positive he doesn’t get to non-normal modal operators (to use Restall’s terminology) such as the Kleene star. I currently have no idea what the Kleene star would express. I’m similarly unsure about intensional connectives such as relevant logic’s fusion. One possibility is an interpretation similar to Restall’s, on which fusion expresses the application of data, in the form of propositions, to other data, although this is really speculative. There might be something in this.

Something that I’m a little more immediately curious about is the status of translations of formulas. Certain systems of logic can be translated into others, e.g. intuitionistic logic into S4. The conditional in S4 is classical, but the translation (in Goedel’s translation at least) of the intuitionistic A->B looks like [](A’->B’), where A’ and B’ are the translations of A and B. The classical conditional will then express a weaker acceptance of an inference, but it will be modified in some way by the necessity operator in front. If the expressive view of logic is right, one would expect the translation to preserve what is expressed in some form. I will have to track down the relevant part of the Locke Lectures in order to test this idea further.
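For concreteness, here is a sketch of one common variant of the Goedel translation into S4 (variants in the literature differ on how they treat atoms and disjunction, so take the exact clauses with a grain of salt; the tuple encoding of formulas is just an illustrative choice):

```python
# One variant of the Goedel-McKinsey-Tarski translation of intuitionistic
# formulas into S4. Formulas are nested tuples; "box" is S4 necessity.
def translate(f):
    op = f[0]
    if op == "atom":
        return ("box", f)                      # p  ~>  []p
    if op in ("and", "or"):                    # conjunction/disjunction pass through
        return (op, translate(f[1]), translate(f[2]))
    if op == "imp":                            # A->B  ~>  [](A'->B')
        return ("box", ("imp", translate(f[1]), translate(f[2])))
    if op == "not":                            # ~A  ~>  []~A'
        return ("box", ("not", translate(f[1])))
    raise ValueError(f"unknown connective: {op}")

# A -> B becomes [](A' -> B'), as in the post:
a_imp_b = ("imp", ("atom", "A"), ("atom", "B"))
print(translate(a_imp_b))
# ('box', ('imp', ('box', ('atom', 'A')), ('box', ('atom', 'B'))))
```

The question for expressivism, then, is whether what [](A’->B’) expresses in S4 lines up with the stronger acceptance that the intuitionistic conditional expressed before translation.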

## 7 comments


September 14, 2007 at 11:21 pm

Brad: Do you know if Brandom has ever talked about the interpretation of the resource-sensitive conditional used in linear logic?

September 14, 2007 at 11:29 pm

Shawn: Brad,

Not to my knowledge. I’ll ask him about it in the near future. I almost stuck it in the post after the intensional fusion but decided against it. In general, I’m not sure what expressivism says about the connectives of substructural logics with less structure than intuitionistic logic.

September 14, 2007 at 11:49 pm

Brad: If you haven’t yet, you should take a look at Girard’s paper in Theoretical Computer Science from ’87. The entire issue was devoted to the paper (no other papers were published in it), and there is a note at the beginning that says, roughly, that this paper is so innovative and unique that we couldn’t find anyone to referee it, so the editors have decided to take full responsibility for any criticism that might eventually be made of this work.

Anyways, Girard claims a lot of crazy stuff about linear logic and its relation to mathematics and computation (he also says a lot of crazy stuff about constructive and classical logic), and I haven’t seen many philosophers talk about it. (I know you know some formal grammar, so you might also know that glue semantics is based on linear logic, and a lot of the stuff that goes on in type-logical grammar nowadays takes big advantage of structural modalities, proof nets, and other linear-logic-type stuff.)

September 15, 2007 at 12:08 am

Shawn: Are you talking about the paper called “Linear Logic”? That paper is on my to-read list along with Girard’s paper on the logic of rules (“Locus Solum” or “Ludics” is the title, I think) and Troelstra’s lectures on linear logic. The local proof theorists are not that interested in linear logic for the most part. Belnap says some things about it, I think, in his paper on the relation of display logic to linear logic. But I agree that philosophers don’t seem to be interested in it. I don’t know why. I hope to have some grasp of the area by the end of spring term.

I have a little familiarity with the connection between linear logic and glue semantics. One of my TAs for a semantics class as an undergrad was a researcher on the applications of glue semantics to computer science and had published a primer on it. It seemed interesting, but I lacked some background at the time and it didn’t stick. I think I still have it around here somewhere. It should be available online, although a quick glance at where I think it should be doesn’t reveal its location.

September 17, 2007 at 2:44 am

BradYeah `Linear Logic’ by Girard.

I think the fact that (Anglo-American) philosophers haven’t caught on to the importance of linear logic is indicative of the way logic is taught in philosophy departments in America.

The usual “soundness, completeness, compactness, Löwenheim-Skolem” track for a grad-level introduction to logic in a philosophy department leaves out so much interesting stuff!

For example:

Most would be surprised that type theory wasn’t just some Byzantine Rube Goldberg machine introduced by Russell to avoid set-theoretic paradoxes. Most would be surprised that intuitionistic logic is still interesting even if one doesn’t accept the philosophy of mathematics that normally goes with it. (Isn’t it interesting to find out how *difficult* it is to extend the Curry-Howard correspondence to classical logic?)
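Just to illustrate the correspondence for anyone following along: under Curry-Howard, a total program of a type is a proof of the corresponding implicational formula. Here is a sketch using Python’s type-hint notation (the hints aren’t checked at runtime, so this is purely illustrative):

```python
from typing import Callable, TypeVar

A = TypeVar("A"); B = TypeVar("B"); C = TypeVar("C")

# A proof of (A -> B) -> ((B -> C) -> (A -> C)): transitivity of implication,
# witnessed by function composition.
def trans(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    return lambda g: lambda a: g(f(a))

# A proof of A -> (B -> A): weakening, witnessed by the constant function.
def k(a: A) -> Callable[[B], A]:
    return lambda b: a

# By contrast, Peirce's law ((A -> B) -> A) -> A is classically valid but has
# no such total functional witness -- one symptom of why extending the
# correspondence to classical logic takes extra machinery (control operators,
# continuations, or the sequent-calculus route discussed below in the thread).

inc = lambda n: n + 1
dbl = lambda n: n * 2
print(trans(inc)(dbl)(3))  # (3 + 1) * 2 = 8
```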

Most would be surprised that the most basic logic possible (the Lambek calculus) turns out to have important applications to the syntax of natural languages.
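Something like the following toy fragment shows the idea (the lexicon and the tuple encoding of slash categories are made up for illustration, and the greedy reduction strategy is far too naive for real grammar):

```python
# Toy Lambek-style categorial grammar. ("\\", "np", "s") encodes np\s
# (takes an np on its left, yields an s); ("/", X, Y) encodes X/Y
# (takes a Y on its right, yields an X). Lexicon is invented.
LEXICON = {
    "John": "np",
    "Mary": "np",
    "sleeps": ("\\", "np", "s"),              # np\s: intransitive verb
    "likes": ("/", ("\\", "np", "s"), "np"),  # (np\s)/np: transitive verb
}

def step(cats):
    """Perform one forward or backward application on adjacent categories."""
    for i in range(len(cats) - 1):
        l, r = cats[i], cats[i + 1]
        if isinstance(l, tuple) and l[0] == "/" and l[2] == r:
            return cats[:i] + [l[1]] + cats[i + 2:]   # X/Y, Y => X
        if isinstance(r, tuple) and r[0] == "\\" and r[1] == l:
            return cats[:i] + [r[2]] + cats[i + 2:]   # Y, Y\X => X
    return None

def parses_as_sentence(words):
    cats = [LEXICON[w] for w in words]
    while len(cats) > 1:
        cats = step(cats)
        if cats is None:
            return False
    return cats == ["s"]

print(parses_as_sentence(["John", "likes", "Mary"]))   # True
print(parses_as_sentence(["sleeps", "John"]))          # False: wrong word order
```

The point being that directional implication alone, with no structural rules, already tracks word order.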

Obviously it would be nearly impossible to get into any of this stuff in any detail in a graduate-level intro logic class, especially since most philosophers might not have an interest in logic and are just forced by their department to take the class. But I wish people did a better job of at least mentioning all the sweet applications logic has had *outside* the classical foundations of mathematics. (Gödel was great, but so was Gentzen. Tarski was super, but so were Curry *and* Howard. Church was awesome, but so was Church… alright, maybe Church is a bad example.)

But maybe this is just my experience; have you had a different one? (I’m thinking the Pitt-CM scene might be *much* more cosmopolitan…)

September 17, 2007 at 3:46 am

Shawn: I agree. It is somewhat difficult to connect the usual track through the basic model-theoretic metatheorems to other philosophical interests. I think my FOL professor did a good job of showing how the things we were covering were important and relevant, though.

I think part of the problem you’re raising is that the normal track often covers just very basic proof theory and the basics of model theory. The interesting, and possibly more philosophically relevant, stuff now comes in other places. Is there a good reason that the basic track in logic should follow this traditional pattern now that logic doesn’t have quite its traditional role in American departments? Why not introduce model-theoretic notions, do proof theory, and then lots of modal logic, for example?

It would have been really neat to see Gentzen earlier than the grad-level proof theory class. Tarski is important, but it is sad that others don’t get any attention at the basic level.

Completeness is neat, and the usual method of proof, the Henkin construction, is important. Its importance is often lost, though, since you don’t normally get to see other uses of that sort of construction or its variants in an intro class.

I actually didn’t know that fact about the Curry-Howard isomorphism. I should get to that paper on it that is sitting on my desk. That is also a good point about type theory, since it is probably most often introduced in the context of Russell or the paradoxes and then, for most people, passes into the dark recesses of history. Of course, I think it’d be neat if recursion theory were done more in lower-level classes. I imagine there are simply time barriers to this…

I don’t know exactly how the intro grad courses go around here since I didn’t take them. I know that Belnap has a hefty dose of proof theory in his and I think they talk about Gentzen some.

September 17, 2007 at 5:24 am

Brad: As far as I can tell, getting a Curry-Howard correspondence for classical logic requires getting around to thinking about how to state the correspondence for a multiple-conclusion sequent calculus instead of a natural deduction system.

I think that the paper “The Duality of Computation” by Curien and Herbelin is the most important paper in this area of research (I might be wrong though, and perhaps someone who knows more about this stuff reads this blog and can set us straight).

I first heard about this stuff at ESSLLI this year in Dublin, where Moortgat (who was van Benthem’s student in the late 80s, and who (literally) wrote the book on categorial grammar) and Bernardi gave a class called “Symmetric Categorial Grammar,” in which they presented new work on “classicalizing” standard type-logical frameworks, which are intuitionistic (permutations of the Lambek calculus). Since the Curry-Howard correspondence is so important to these people for getting a tight interface between syntax and semantics, this required that they introduce the “lambda-bar mu mu-tilde” calculus, which I guess is what is required to get the correspondence for classical sequent calculi. Anyways, the Curien and Herbelin paper is where this calculus was introduced, so it must be important. (Maybe you can read the paper and tell me what is going on!)