I’m working my way through some papers by Rajeev Gore and Greg Restall, connecting some algebraic logic stuff, gaggles, to some proof theory stuff, display logic. A post on gaggles is forthcoming. I need to figure out something to say first. That won’t stop me with display logic though. I’ll shoot for a little bit of exposition. A little rough exposition.

Display logic is a sequent proof system for various logics. The innovation in it is the introduction of structure connectives to augment the logical connectives. Two of the main ones in Belnap’s original presentation were º and ∗. The former is an associative, binary structure connective that acts sort of like conjunction, and the latter is a unary structure connective that acts sort of like negation. Together with these, one distinguishes two sorts of context, antecedent and consequent: antecedent context is to the left of the turnstile and consequent context is to the right. Being within the scope of an odd number of ∗s reverses the polarity, to use a leading term. The º becomes a conjunction in antecedent position and a disjunction in consequent position; the ∗ becomes a negation in either. (I say “becomes,” but this is spelled out more in the introduction rules for the connectives.)

There are some basic rules relating structures that allow one to prove some structures to be display equivalent. For example, X ⇒ Y and X∗∗ ⇒ Y are equivalent.
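A toy encoding may help fix ideas. The sketch below (my own illustrative encoding, not anything from Belnap’s paper) represents structures as nested tuples and shows the ∗∗-cancellation equivalence just mentioned:

```python
# Toy model of display-logic structures (illustrative only).
# Formulas are strings; ('o', X, Y) stands for X º Y; ('*', X) for X∗.

def star(x):
    """Form X∗, cancelling a double star: X∗∗ normalises to X."""
    if isinstance(x, tuple) and x[0] == '*':
        return x[1]
    return ('*', x)

def comp(x, y):
    """Form the structure X º Y."""
    return ('o', x, y)

# The display equivalence between X∗∗ ⇒ Y and X ⇒ Y shows up here
# as the fact that starring twice gives back the original structure.
X = comp('p', 'q')
print(star(star(X)) == X)  # True: the double star cancels
```

This is only the bookkeeping for structures, of course; the display postulates themselves would be rewrite rules on whole sequents.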

Display logic gets its name from (or lends its name to, I forget) a distinctive property it has: the display property. The display property says that when one has a structure Z as a part of a structure X, e.g. X(Z) ⇒ Y (I am using ⇒ for the turnstile since I can’t find a turnstile in HTML), it is possible, using display equivalences alone, to isolate that structure in antecedent or consequent position, depending on its initial polarity, e.g. to obtain from X(Z) ⇒ Y something like X(Y) ⇒ Z. [Edit: This isn’t quite right. It should say: if Z is a consequent structure, then from X(Z) ⇒ Y one can obtain W ⇒ Z, where W is a structure resulting solely from structural manipulations of X, Y, and Z. If Z is an antecedent structure, then from X(Z) ⇒ Y one can obtain Z ⇒ U, where U is likewise a structure resulting solely from structural manipulations of X, Y, and Z.] This allows us to display the structure Z all by its lonesome. This is important because we can then give introduction rules for logical connectives that do not involve structure in or around the formula containing the logical connective. For example, this gives us a way to formulate a single left- and right-introduction rule for → across a range of logical systems that differ only in structural assumptions. To put things another, slightly more tendentious way: making the structural assumptions explicit in the form of operators allows us to get a better grip on the introduction rules.
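A small worked example may help, though a caveat: the exact forms of the postulates below are my reconstruction of the classical-logic case, roughly as in Belnap’s setting. Suppose Z occurs in antecedent position in X º Z ⇒ Y. Then Z can be displayed by structural moves alone:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
X \circ Z &\Rightarrow Y \\
Z \circ X &\Rightarrow Y
  && \text{(commutativity of } \circ \text{)} \\
Z &\Rightarrow {*}X \circ Y
  && \text{(residuation-style display postulate)}
\end{align*}
Here $Z$ ends up as the whole antecedent; everything from $X$ and
$Y$ has been moved into the consequent by display equivalences.
\end{document}
```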

By some straightforward inductions, we can get the following. For any formula M, M ⇒ M; and for any structures X, Y and formula M, if one has X ⇒ M and M ⇒ Y, then one can infer X ⇒ Y. The latter is the cut rule. Indeed, part of the appeal of display logic is that it provides a general, easy (?) way of showing cut elimination. Belnap takes the preceding to show that the left and right introduction rules are in harmony, or have the same meaning. I don’t think I can rehearse that argument in detail here. How cut is used in the argument seems fairly clear, but I’m not sure about the use of the identity sequents.
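In rule form (my typesetting; the formulations themselves are the standard ones just stated):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
M \Rightarrow M \ \ (\text{Id, for any formula } M)
\qquad
\frac{X \Rightarrow M \qquad M \Rightarrow Y}{X \Rightarrow Y}
\ (\text{Cut})
\]
\end{document}
```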

Of course, once one has introduced some structure connectives, there is no reason to stop. For example, instead of having an empty antecedent or consequent, which operates differently depending on polarity, one could introduce a structural constant to mark it. Or, instead of side conditions, one could introduce a structure connective, I, to mark modal formulae. With I in place, the right-introduction rule for [] is: from X º I ⇒ M, infer X ⇒ []M. I like this because it turns out that when the cut formula is []M, there must be additional structure on X, namely the structure marking the modal context. This seems to me to capture the idea of side conditions on derivations involving modality, which is what ‘I’ was introduced to do.
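The rule just described, typeset (with I as the modal structure marker, as above):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\frac{X \circ I \Rightarrow M}{X \Rightarrow \Box M}
\ (\Box\text{R})
\]
A cut whose cut formula is $\Box M$ must therefore have a left
premise whose antecedent carries the extra structure $X \circ I$.
\end{document}
```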