
I’m trying to work out some thoughts on the topic of semantic self-sufficiency. My jumping off point for this is the exchange between McGee, Martin and Gupta on the Revision Theory of Truth. My post was too long, even with part of it incomplete, so I’m going to post part of it, mostly expository, today. The rest I hope to finish up tomorrow. I’m also fairly unread in the literature on this topic. I know Tarski was doubtful about self-sufficiency and Fitch thought Tarski was overly doubtful. Are there any particular contemporary discussions of these issues that come highly recommended? Read the rest of this entry »

This term I've spent some time studying nonmonotonic logics. This led me to look at David Makinson's work. Makinson has done a lot of work in this area, and he has a nice selection of articles available on his website. One unexpected find on his page was a paper called "Completeness Theorems, Representation Theorems: What's the Difference?" A while back I had posted a question about representation theorems, and in the comments Greg Restall answered in detail. Makinson's paper elaborates on this some. He says that representation theorems are a generalization of completeness theorems, although I don't remember why they were billed as such. There are several papers on nonmonotonic logic available there. "Bridges between classical and nonmonotonic logic" is a short paper demystifying some of the main ideas behind nonmonotonic logic. "How to go nonmonotonic" is a handbook article that goes into more detail and develops the nonmonotonic ideas further. Makinson has a new book on nonmonotonic logic, but it looked like most of the content, minus exercises, is already available in the handbook article online.

I just finished reading Badesa's Birth of Model Theory. It places Löwenheim's proof of his major result in its historical setting and defends what is, according to the author, a new interpretation of it. This book was interesting on a few levels. First, it placed Löwenheim in the algebraic tradition of logic. Part of what this involved was spending a chapter elaborating the logical and methodological views of the major figures in that tradition: Boole, Schröder, and Peirce. Badesa says that this tradition in logic hasn't received much attention from philosophers and historians. There is a book, From Peirce to Skolem, that investigates it more and that I want to read. I don't have much to say about the views of each of those logicians, but it does seem like there is something distinctive about the algebraic tradition in logic. I don't have a pithy way of putting it though, which kind of bugs me. Looking at Dunn's book on the technical details of the topic confirms that impression. From Badesa, it seems that none of the early algebraic logicians saw a distinction between syntax and semantics, i.e. between a formal language and its interpretation, nor much of a need for one. Not seeing the distinction was apparently the norm, and it was really with Löwenheim's proof that the distinction came to the forefront in logic. A large part of the book attempts to make Löwenheim's proof clearer by separating the syntactic and semantic elements of the proof.

The second interesting thing is how much better modern notation is than what Löwenheim and his contemporaries were using. I'm biased of course, but they wrote a_{x,y} for what we'd write A(x,y). That isn't terrible, but for various reasons the subscripts on the 'a' would sometimes carry superscripts of their own and such. That quickly becomes horrendous.

The third interesting thing is that it made clear how murky some of the key ideas of modern logic were in the early part of the 20th century. Richard Zach gave a talk at CMU recently about how work on the decision problem cleared up (or possibly helped isolate, I'm not sure where the discussion ended up on that) several key semantic concepts. Löwenheim apparently singled out the first-order fragment of logic as important. As mentioned, his work made the distinction between syntax and semantics prominent. Badesa made some further claims about how Löwenheim gave the first proof that involved explicit recursion, or some such. I was a little less clear on that, although it seems rather important. Seeing Gödel's remarks on the importance of Skolem's work following Löwenheim's, quoted in footnotes near the end of the book, was especially interesting. Badesa's conclusion was that one of Gödel's big contributions to logic was bringing extreme clarity to the notions involved in the completeness proof of his dissertation.

I’m not sure the book as a whole is worth reading though. I hadn’t read Löwenheim’s original paper or any of the commentaries on it, which a lot of the book was directed against. The first two chapters were really interesting and there are sections of the later chapters that are good in isolation, mainly where Badesa is commenting on sundry interesting features of the proof or his reconstruction. These are usually set off in separate numbered sections. I expect the book is much more engaging if you are familiar with the commentaries on Löwenheim’s paper or are working in the history of logic. That said, there are parts of it that are quite neat. Jeremy Avigad has a review on his website that sums things up pretty well also.

In chapter 3 of the Revision Theory of Truth (RTT), Gupta and Belnap argue against the fixed point theory of truth. They say that fixed points can't represent truth, generally, because languages with fixed point models are expressively incomplete. This means that there are truth-functions in a logical scheme, say a strong Kleene language, that cannot be expressed in the language on pain of eliminating fixed points in the Kripke construction. An example of this is the Łukasiewicz biconditional. Another example is the exclusion negation. The exclusion negation of A, ¬A, is false when A is true, and true otherwise. The Łukasiewicz biconditional, A≡B, is true when A and B agree in truth value, false when they differ classically, and n otherwise. Read the rest of this entry »
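To make the two connectives concrete, here is how the clauses above play out over the three values t, n, f (the tabulation is mine, not a quotation from RTT): for exclusion negation, ¬t = f, ¬n = t, ¬f = t; for the Łukasiewicz biconditional, t≡t = n≡n = f≡f = t, t≡f = f≡t = f, and every other combination involving n gets the value n (so t≡n = n≡t = n≡f = f≡n = n). Both connectives take the middle value n to a classical value, which, if I have the standard story right, is what makes them non-monotone in the information ordering that the Kripke construction relies on.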

In the Revision Theory of Truth, Gupta says (p. 125) that a circular definition does not give an extension for a concept. It gives a rule of revision that yields better candidate hypotheses when given a hypothesis. More and more revisions via this rule are supposed to yield better and better hypotheses for the extension of the concept. This sounds like there is, in some way, some monotonicity given by the revision rule. What this amounts to is unclear though.

For a contrast case, consider Kripke's fixed point theory of truth. It builds a fixed point interpretation of truth by adding more sentences to the extension and anti-extension of truth at each stage. This process is monotonic in an obvious way: the extension and anti-extension only get bigger. If we look at the hypotheses generated by the revision rules, they do not solely expand. They can shrink. They also do not solely shrink. Revision rules are non-monotonic functions. They are eventually well-behaved, but that doesn't amount to monotonicity. One idea would be that the set of elements that are stable under revision monotonically increases. This isn't the case either. Elements can be stable in initial segments of the revision sequence and then become unstable once a limit stage has been passed. This isn't the case for all sorts of revision sequences, but the claim in RTT seemed to be for all revision sequences. Eventually hypotheses settle down and some become periodic, but it is hard to say that periodicity indicates that the revisions result in better hypotheses.
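A minimal illustration of the failure of monotonicity, using the standard liar example rather than anything specific to RTT's presentation: let λ be a sentence equivalent to ¬Tλ, and let δ be the revision rule for truth. Then for any hypothesis h, λ ∈ δ(h) iff λ ∉ h, so revision adds λ to any hypothesis that omits it and drops λ from any hypothesis that contains it. A rule that both adds and removes elements in this way is neither inflationary nor monotone with respect to ⊆, which is the contrast with the Kripke construction.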

The claim that a rule of revision gives better candidate extensions for a concept is used primarily for motivating the idea of circular definitions. It doesn’t seem to figure in the subsequent development. The theory of circular definitions is nice enough that it can stand without that motivation. Nothing important hinges on the claim that revision rules yield better definitions, so abandoning it doesn’t seem like a problem. I’d like to make sense of it though.

Michael Kremer does something interesting in his "Kripke and the Logic of Truth." He presents a series of consequence relations, \models_{(V,K)}, where V is a valuation scheme and K is a class of models. He provides a cut-free sequent axiomatization of the consequence relation \models_{(V,M)}, where V is the strong Kleene scheme and M is the class of all fixed points. He proves the soundness and completeness of the axiomatization with respect to the class of all fixed points. After this, he proves the admissibility of cut. I wanted to note a couple of things about this proof.

As Kremer reminds us, standard proofs of cut elimination (or admissibility) proceed by a double induction. The outer inductive hypothesis is on the complexity of the cut formula, and the inner inductive hypothesis is on how long the cut formula has been around in the proof. Kremer notes that this will not work for his logic, since the axioms for T, the truth predicate, reduce complexity. (I will follow the convention that 'A' is the quote name of the sentence A.) T'A' is atomic no matter how complex A is. ∃xTx is more complex than T'∃xTx', despite the latter being an instance of the former. Woe for the standard method of proving cut elimination.
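A hedged sketch of where the standard induction breaks, using generic truth rules rather than Kremer's exact formulation: suppose the system lets us pass from Γ ⊢ Δ, A to Γ ⊢ Δ, T'A', and from Γ, A ⊢ Δ to Γ, T'A' ⊢ Δ. Pushing a cut on T'A' above these rules turns it into a cut on A. If A is ∃xTx, the new cut formula is strictly more complex than the old one, which was atomic, so the outer induction on complexity has nothing to grab onto.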

Kremer’s method of proving cut elimination is to use the completeness proof. He notes that cut is a sound rule, so by completeness it is admissible. Given the complexity of the completeness proof, I’m not sure this saves on work, per se.

Now that the background is in place, I can make my comments. First, he proves the admissibility of cut, rather than mix. The standard method proves the admissibility of mix, which is like cut on multiple formulas, since it would otherwise run into problems with the contraction rule. Of course, the technique Kremer used is equally applicable to mix, but the main reason we care about mix is because showing it admissible is sufficient for the admissibility of cut. Going the semantic route lets us ignore the structural rules, at least contraction.
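For reference, and this is just the textbook Gentzen formulation rather than anything specific to Kremer's paper: cut takes Γ ⊢ Δ, A and A, Σ ⊢ Π to Γ, Σ ⊢ Δ, Π, removing a single occurrence of A, whereas mix takes Γ ⊢ Δ and Σ ⊢ Π to Γ, Σ* ⊢ Δ*, Π, where Δ* and Σ* result from deleting every occurrence of the mix formula A. Deleting all the occurrences at once is what absorbs the contractions on A that would otherwise block the induction.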

Next, it seems significant that Kremer used the soundness and completeness results to prove cut admissible. He describes this as "a detour through semantics." He doesn't show that a proof theory alone will be unable to prove the result, just that the standard method won't do it. Is this an indication that proof theory cannot adequately deal with semantic concepts like truth? This makes it sound like something one might have expected from Gödel's incompleteness results. There are some differences though. Gödel's results are for systems much stronger than what Kremer is working with. Also, one doesn't get cut elimination for the sequent calculus formulation of arithmetic.

Lastly, the method of proving cut elimination seems somewhat at odds with the philosophical conclusions he wants to draw from his results. He cites his results in support of the view that the inferential role of logical vocabulary gives its meaning. This is because he uses the admissibility of cut to show that the truth predicate for a three-valued language is conservative over the base language and is eliminable. These are the standard criteria for definitions, so the rules can be seen as defining truth. Usually proponents of a view like this stick solely to the proof theory for support. I'm not sure what to make of the fact that the Kripke constructions used in the models for the soundness and completeness results, and so for cut elimination, do not seem to fit neatly into this picture. That being said, it isn't entirely clear that appealing to the model theory as part of the groundwork for the argument that the inference rules define truth does cause problems. I don't have any argument that it does. It just seems out of step with the general framework. I think Greg Restall has a paper on second-order logic's proof theory and model theory that might be helpful…

While doing some reading on relevance logic, I came across a review of Entailment vol. 2 by Gerard Allwein. It ends by noting that the volume doesn't include recent work on substructural logics and that there is an article by Avron that gives details of the relationship between linear logic and relevance logic. Allwein follows this up by saying, "It is noted by this reviewer that linear logic is intellectually anaemic compared to relevance logic in that relevance logic comes with a coherent and encompassing philosophy whereas linear logic has no such pretensions." This was written in 1994. I have no idea to what degree this is still true since I have no firm ideas about linear logic. I'm just discovering the "coherent and encompassing philosophy" that relevance logic comes with. I keep meaning to follow up on this stuff about linear logic at some point…

Since I’m talking about linear logic, I may as well link a big bibliography on linear logic.

In the proof theory class I’m taking, Belnap introduced several different axiomatic systems, their natural deduction counterparts, and deduction theorems linking them. We started with the Heyting axiomatization for intuitionistic logic and the Fitch formulation of natural deduction for it.

The neat thing was the explanation of how to turn the intuitionistic natural deduction system into relevance logic. To do this, we add a set of indices to attach to formulas. When a formula is assumed, it receives exactly one index (a singleton set of indices), which is not attached to any other formula. The rule for →-In still discharges assumptions, but it is changed so that the set of indices attached to A→B is equal to the set attached to B minus the set attached to A, and A's index must be among B's indices. This enforces non-vacuous discharge. It also restricts what things can be discharged. The way it was glossed was that A must be used in the derivation of B.

From what I've said, there isn't any way for a set of indices to get bigger. The rule for →-Elim does just that. When B is obtained from A and A→B, B's indices will be equal to the union of A's and A→B's. This builds up indices on formulas in a derivation, creating a record of what was used to get what. Only the indices of formulas used in an instance of →-Elim make it into the set of indices for the conclusion, so superfluous assumptions can't sneak in and appear to be relevant to a conclusion.
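Here is a small worked example under the rules as described, with the layout mine, so take the details as a sketch rather than the official formulation from class:

1. A, indices {1}, assumption
2. | A→B, indices {2}, assumption
3. | A, indices {1}, reiteration of 1 (indices unchanged)
4. | B, indices {1,2}, →-Elim from 2 and 3 (union of the indices)
5. (A→B)→B, indices {1}, →-In discharging 2: {1,2} minus {2} is {1}, and 2 is among B's indices
6. A→((A→B)→B), indices { }, →-In discharging 1

By contrast, A→(B→A) is blocked: after assuming A with index {1} and B with index {2}, the only occurrence of A under the assumption of B comes by reiteration, so its indices stay {1}; since 2 is not among them, →-In cannot discharge the assumption of B.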

This doesn't on the face of it seem like a large change: just the addition of some indices with a minor change to the assumption rule and the intro and elim rules. The rule for reiterating isn't changed; indices don't change for it. Reiterating a formula into a subproof puts it under the assumption of the subproof in the sense of appearing below it in the Fitch proof, but not in the sense of dependence. The indices and the changes in the rules induce new structural restrictions, as others have noted. We haven't gotten to sequent calculi or display logic, so I'm not going to go into what the characterization of relevance logic would look like in those. Given my recent excursion into Howard-Curry land, I do want to mention what must be done to get relevance logic in the λ-calculus. A restriction has to be placed on the abstraction rule, i.e. no vacuous abstractions are allowed. This is roughly what one would expect. Given the connection between conditionals being functions from proofs of their antecedents to proofs of their consequents and λ-abstraction forming functions, putting a restriction on the former should translate to a restriction on the latter, which it does.
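For a concrete case of the restriction on abstraction, and this is just the standard Howard-Curry picture rather than anything from the course: the term λx.λy.x (the K combinator) has type A→(B→A), and the inner abstraction λy is vacuous because y does not occur in the body x. Barring vacuous abstraction rules this term out, matching the fact that A→(B→A) is exactly the sort of conditional the indexed →-In rule blocks. A term like λx.λf.fx, of type A→((A→B)→B), is still allowed, since both bound variables get used, just as in the indexed derivation above.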

I've been reading Lectures on the Curry-Howard Isomorphism by Sørensen and Urzyczyn recently. I wanted to comment on one of the interesting things in it. Read the rest of this entry »

A while ago, Ole mentioned a presentation on adaptive logic by some logicians from Ghent. It sounded pretty interesting. Apparently one of the Ghent logicians, Rafal Urbaniak, has started a blog, the first posts of which are on that very topic. Rafal was nice enough to link to a long introduction to adaptive logic. I haven’t had the chance to go through it all yet, but it looks solid. How am I supposed to narrow my interests when neat stuff like this keeps popping up?
