A while ago, I wrote a post on monotonicity. A large part of it concerned what I called semantic monotonicity: if some formula \phi(P) is true, and in the model the set assigned to P is a subset of the set assigned to Q, then \phi(Q) is true as well. This needs to be adjusted slightly for inferences: if from premise P you can infer Q, and the set assigned to Q is a subset of the set assigned to R, then from P you can infer R. Intuitively, the extensions of the predicates or concepts in the premise and conclusion are ordered by the subset relation.

This is different from monotonicity in side formulas, which says that if you can derive a conclusion from some premises, you still get that conclusion no matter what further premises are added to the argument. Brandom's picture in Articulating Reasons rejects monotonicity in side formulas; that rejection is one of the defining properties of material inferences. They can be turned from good to bad by adding more premises.

Some material inferences are nonetheless semantically monotonic. To be a bit more exact, I should say that some concepts or predicates are. For example, the inference from Madrone is a tabby to Madrone is a cat, and from that to Madrone is a mammal. What role do these sorts of inferences play in the inferentialist picture? They happen to be a subset of the commitment-preserving inferences, but there is more to the story. Brandom distinguishes three sorts of inferential relations, which I went over in a previous post. These relations are ordered so that incompatibility entailment implies commitment preservation, which in turn implies entitlement preservation, but not conversely. Based on what Brandom says about incompatibility entailments, I think the semantically monotonic inferences are also a subset of the incompatibility-entailing inferences: everything incompatible with the conclusion of a semantically monotonic inference will be incompatible with its premise. From what I can tell, this works for single-premise inferences.
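As a toy illustration (my own sketch, not from the original post), the tabby/cat/mammal chain can be modeled by treating predicate extensions as sets. The single-premise inference counts as semantically monotonic in the above sense just in case the premise predicate's extension is a subset of the conclusion predicate's:

```python
# Toy model: predicate extensions as sets. The individuals other than
# Madrone are invented just to make the subset ordering visible.
tabby = {"Madrone"}
cat = {"Madrone", "Felix"}
mammal = {"Madrone", "Felix", "Rex"}

def semantically_monotonic(premise_ext, conclusion_ext):
    """A single-premise inference from P(a) to Q(a) is semantically
    monotonic (in the post's sense) when the extension of P is a
    subset of the extension of Q."""
    return premise_ext <= conclusion_ext  # set subset test

# Madrone is a tabby -> Madrone is a cat -> Madrone is a mammal
assert semantically_monotonic(tabby, cat)
assert semantically_monotonic(cat, mammal)
# Subset ordering is transitive, so the chained inference holds too.
assert semantically_monotonic(tabby, mammal)
```

Since the subset relation is transitive, chains of such inferences compose, which is why the tabby-to-mammal inference goes through in one step.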
I speculate that multi-premise inferences are where one finds commitment preserving inferences that are not incompatibility entailing.
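The earlier point, that material inferences can be turned from good to bad by adding premises, can be sketched the same way. The bird/penguin case below is my own stock illustration of defeasibility, not an example from Brandom:

```python
# Toy defeasible consequence relation: premises are a set of sentences,
# and 'Tweety is a bird' licenses 'Tweety flies' unless a defeating
# premise is also present. The sentences are invented for illustration.
def infers_flies(premises):
    """Return True if this toy consequence relation licenses the
    conclusion 'Tweety flies' from the given premise set."""
    return ("Tweety is a bird" in premises
            and "Tweety is a penguin" not in premises)

# A good material inference from the single premise...
assert infers_flies({"Tweety is a bird"})
# ...turned bad by adding a side premise: monotonicity in side
# formulas fails for this relation.
assert not infers_flies({"Tweety is a bird", "Tweety is a penguin"})
```

A relation that is monotonic in side formulas would never behave this way: once `infers_flies` returned True for a premise set, it would have to return True for every superset of it.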
