
A topic that recurs in the interviews with Quine is his view of analyticity and how it has changed since “Two Dogmas”.



Reading the Revision Theory of Truth has given me an idea that I’m trying to work out. This post is a sketch of a rough version of it. The idea is that circular definitions motivate the need for a sharper conception of content on an inferentialist picture, and possibly on other pictures of content too. It might have a connection to harmony as well, although that thread sort of drops out. The conclusion is somewhat programmatic, owing to the holidays.

Does anyone know of any proof systems in which some but not all contents have negations? I’m looking for examples for a developing project.

I’m trying to work out some thoughts on the topic of semantic self-sufficiency. My jumping-off point is the exchange between McGee, Martin, and Gupta on the Revision Theory of Truth. My post was too long, even with part of it incomplete, so I’m going to post part of it, mostly expository, today. The rest I hope to finish up tomorrow. I’m also fairly unread in the literature on this topic. I know Tarski was doubtful about self-sufficiency and Fitch thought Tarski was overly doubtful. Are there any particular contemporary discussions of these issues that come highly recommended?

In chapter 3 of the Revision Theory of Truth (RTT), Gupta and Belnap argue against the fixed point theory of truth. They say that fixed points can’t represent truth, generally, because languages with fixed point models are expressively incomplete. This means that there are truth functions in a logical scheme, say a strong Kleene language, that cannot be expressed in the language on pain of eliminating fixed points in the Kripke construction. An example of this is the Lukasiewicz biconditional. Another example is the exclusion negation. The exclusion negation of A, ¬A, is false when A is true, and true otherwise. The Lukasiewicz biconditional, A≡B, is true when A and B agree in truth value, false when they differ classically, and n otherwise.
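To make the two truth functions concrete, here is a quick check in Python (my own encoding, not anything from RTT), using 1 for true, 0 for false, and 0.5 for n:

```python
# Strong Kleene truth values: 1 = true, 0.5 = n, 0 = false.
VALUES = (1, 0.5, 0)

def exclusion_neg(a):
    """Exclusion negation: false when A is true, and true otherwise,
    including when A is n."""
    return 0 if a == 1 else 1

def luk_bicond(a, b):
    """Lukasiewicz biconditional, 1 - |v(A) - v(B)|: true when A and B
    agree in value, false when they differ classically, n otherwise."""
    return 1 - abs(a - b)

for a in VALUES:
    print('exclusion negation of', a, 'is', exclusion_neg(a))
for a in VALUES:
    for b in VALUES:
        print(a, '<->', b, '=', luk_bicond(a, b))
```

The usual diagnosis is that neither function is monotone with respect to the information ordering that places n below both classical values (exclusion negation, for instance, maps n to true while mapping true to false), and that monotonicity is exactly what Kripke’s construction needs to reach a fixed point.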

In the Revision Theory of Truth, Gupta says (p. 125) that a circular definition does not give an extension for a concept. It gives a rule of revision that yields better candidate hypotheses when given a hypothesis. More and more revisions via this rule are supposed to yield better and better hypotheses for the extension of the concept. This sounds like there is, in some way, some monotonicity given by the revision rule. What this amounts to is unclear, though.
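Here is a toy version of the revision process in Python (my own encoding, not Gupta and Belnap’s formalism). A hypothesis is a set of sentences hypothesized to be true, and the rule of revision re-evaluates every sentence classically with the truth predicate interpreted by the current hypothesis:

```python
LIAR = 'L'            # L says: 'L' is not true
TRUTHTELLER = 'K'     # K says: 'K' is true
SENTENCES = {LIAR, TRUTHTELLER}

def true_under(s, hypothesis):
    """Classical evaluation of s, with T interpreted by the hypothesis."""
    if s == LIAR:
        return LIAR not in hypothesis
    if s == TRUTHTELLER:
        return TRUTHTELLER in hypothesis

def revise(hypothesis):
    """The rule of revision: the next candidate extension for truth."""
    return {s for s in SENTENCES if true_under(s, hypothesis)}

h = set()             # start from the empty hypothesis
for stage in range(5):
    print(stage, sorted(h))
    h = revise(h)
# 0 []        -- the liar comes out true, so it enters the next hypothesis
# 1 ['L']     -- now it comes out false, so it drops back out
# 2 []        -- and so on: the liar oscillates forever
```

Even in this tiny case it is hard to say in what sense the later hypotheses are better than the earlier ones.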

For a contrast case, consider Kripke’s fixed point theory of truth. It builds a fixed point interpretation of truth by placing more sentences in the extension and anti-extension of truth at each stage. This process is monotonic in an obvious way: the extension and anti-extension only get bigger. The hypotheses generated by revision rules, by contrast, do not solely expand. They can shrink. They also do not solely shrink. Revision rules are non-monotonic functions. They are eventually well-behaved, but that does not amount to monotonicity. One idea would be that the set of elements that are stable under revision monotonically increases. This isn’t the case either. Elements can be stable in initial segments of the revision sequence and then become unstable once a limit stage has been passed. This isn’t the case for all sorts of revision sequences, but the claim in RTT seemed to be for all revision sequences. Eventually hypotheses settle down and some become periodic, but it is hard to say that periodicity indicates that the revisions result in better hypotheses.
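The contrast is easy to see concretely. Below is a minimal sketch of Kripke’s jump (again my own toy encoding, over a few grounded sentences rather than anything paradoxical); the assertion checks that, unlike the revision sequence above, the extension and anti-extension only grow from stage to stage:

```python
# Sentences are nested tuples: ('atom', name) for base-language atoms,
# ('not', s) for strong Kleene negation, and ('T', s) for "s is true".
FACTS = {'p': True, 'q': False}    # the base model

P, Q = ('atom', 'p'), ('atom', 'q')
SENTENCES = [P, Q, ('not', Q), ('T', P), ('T', ('T', P)), ('T', Q)]

def val(s, ext, anti):
    """Strong Kleene value of s relative to a partial interpretation of
    truth: True, False, or None (for n)."""
    if s[0] == 'atom':
        return FACTS[s[1]]
    if s[0] == 'not':
        v = val(s[1], ext, anti)
        return None if v is None else not v
    if s[0] == 'T':
        if s[1] in ext:
            return True
        if s[1] in anti:
            return False
        return None

def jump(ext, anti):
    """Kripke's jump: promote whatever now comes out true (false)."""
    return ({s for s in SENTENCES if val(s, ext, anti) is True},
            {s for s in SENTENCES if val(s, ext, anti) is False})

ext, anti = set(), set()
for stage in range(4):
    new_ext, new_anti = jump(ext, anti)
    assert ext <= new_ext and anti <= new_anti   # the stages only grow
    ext, anti = new_ext, new_anti
print(len(ext), len(anti))   # 4 and 2: a fixed point has been reached
```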

The claim that a rule of revision gives better candidate extensions for a concept is used primarily for motivating the idea of circular definitions. It doesn’t seem to figure in the subsequent development. The theory of circular definitions is nice enough that it can stand without that motivation. Nothing important hinges on the claim that revision rules yield better definitions, so abandoning it doesn’t seem like a problem. I’d like to make sense of it though.

The first thing to note about the incompatibility semantics in the earlier post is that it is for a logic that is monotonic in side formulas, as well as in the antecedents of conditionals. (Is there a term for the latter? I.e., if p→q then p&r→q.) This is because of the way incompatibility entailment is defined. If X entails Y, then ∩_{p∈Y} I(p) ⊆ I(X). Since I(X) ⊆ I(Z) for all Z ⊇ X, the entailment is preserved, i.e., ∩_{p∈Y} I(p) ⊆ I(Z). This wouldn’t be all that interesting to note, since usually non-monotonicity is the interesting property, except that Brandom is big on material inference, which is non-monotonic. The incompatibility semantics as given in the Locke Lectures is then not a semantics for material inference. This is not to say that it can’t be augmented in some way to come up with an incompatibility semantics for a non-monotonic logic. There is a bit of a gap between the project in MIE and the incompatibility semantics.
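The monotonicity is easy to verify in a finite toy model. The sketch below (my own encoding, following the definition of entailment as glossed above) builds a frame over three atoms, closes the incoherent sets under supersets, and checks that adding a side formula cannot break an entailment, precisely because I(X) ⊆ I(Z) whenever X ⊆ Z:

```python
from itertools import chain, combinations

ATOMS = frozenset({'p', 'q', 'r'})

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, n) for n in range(len(s) + 1))]

# Take p and q to be materially incompatible, and close the incoherent
# sets under supersets (persistence).
BASE = {frozenset({'p', 'q'})}
INC = {X for X in powerset(ATOMS) if any(b <= X for b in BASE)}

def I(X):
    """The incompatibility set of X: everything incoherent with X."""
    return {Y for Y in powerset(ATOMS) if (X | Y) in INC}

def entails(X, Y):
    """X entails Y iff the intersection of the I(p) for p in Y is
    contained in I(X)."""
    common = set(powerset(ATOMS))
    for p in Y:
        common &= I(frozenset({p}))
    return common <= I(X)

X = frozenset({'p'})
assert entails(X, {'p'})
assert entails(X | {'r'}, {'p'})   # side formulas cannot break entailment
```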

I’ve been spending some time learning the incompatibility semantics in the appendices to the fifth of Brandom’s Locke Lectures. The book version of the lectures just came out, but the text is still available on Brandom’s website. I don’t think the incompatibility semantics is that well known, so I’ll present the basics. This will be a book report on the relevant appendices. A more original post will follow later.

MacFarlane’s project in his dissertation requires that he make sense of quantifiers in terms of his presemantics. The initial suggestion is to assign quantifiers the type ((O => V) => V), where O is the basic object type and V is the truth value type. Two problems arise for this. First, the quantifiers could receive interpretations that are sensitive to the domain of objects; in any case, the quantifiers receive different interpretations as the domains vary. This leads to the second problem. What do the variable domains represent? Why do we use them? MacFarlane follows Etchemendy here. Etchemendy says that there are two competing ways of understanding the variable domains, neither of which is fully satisfactory. The first is understanding the variable domains as representing the things that exist at each possible world, with models representing worlds. Three objections to this are given, only two of which I will mention. One is that it seems to make the strong metaphysical claim that for any set of objects at all, the world could have contained just those objects. There might be ways to respond to this; MacFarlane cites a couple of attempts, one of which appeals to “subworlds.” The other objection, which seems promising, is that this is hard to square with the use of frames in modal logic. If the various domains are parts of worlds in different frames, then we must make sense of ways the very structure of possibility could have been, in MacFarlane’s phrase. This seems like a problem. I think some people have objected along these lines to David Lewis’s modal realism. Making sense of the moving parts of modal logic is hard.
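The type ((O => V) => V) is easy to render concretely. Here is a small illustration in Python (my own, not MacFarlane’s presemantics): a quantifier interpretation takes a predicate, i.e., a function from objects to truth values, and returns a truth value, and the first problem shows up in the fact that each choice of domain yields a different such function:

```python
from typing import Callable, Iterable, TypeVar

O = TypeVar('O')                 # the basic object type
Pred = Callable[[O], bool]       # the type O => V, with bool playing V

def every(domain: Iterable[O]) -> Callable[[Pred], bool]:
    """The universal quantifier, type (O => V) => V, relative to a domain."""
    items = list(domain)
    return lambda pred: all(pred(x) for x in items)

def some(domain: Iterable[O]) -> Callable[[Pred], bool]:
    """The existential quantifier, relative to a domain."""
    items = list(domain)
    return lambda pred: any(pred(x) for x in items)

is_even: Pred = lambda n: n % 2 == 0

# The same quantifier expression gets different interpretations as the
# domain varies: every([2, 4, 6]) and every([2, 3, 4]) are different
# functions of type (O => V) => V.
print(every([2, 4, 6])(is_even))   # True
print(every([2, 3, 4])(is_even))   # False
print(some([1, 3, 5])(is_even))    # False
```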

I just came across something on YouTube that I feel I must share. Someone put up a seven-part clip of Sellars giving a talk on meaning and language. Thank you, unknown stranger. Part of the audio on the first clip is missing, unfortunately. I’m kind of amazed that this exists. It is unfortunate, however, that it doesn’t use a more flattering picture of Sellars.
