
In chapter 4 of Articulating Reasons, Brandom argues that singular terms and predicates have a certain sort of inferential behavior. "Predicate" isn't quite the right label. Rather, he concerns himself with sentence frames, which include predicates (Fx) but also frames obtained from logically complex sentences by removing a singular term (e.g., from Fa->Pa to Fx->Px). Singular terms are substituted for symmetrically, and sentence frames are substituted in asymmetrically. There are three rival views to argue against: terms and frames are both symmetric, terms are asymmetric while frames are symmetric, and both are asymmetric. The first option is dismissed as wrong since both being symmetric would be structurally deficient. The other two are argued against together; the strategy is to show that terms can't behave asymmetrically.

The argument begins by noting that if the inference from Pa to Pb, for terms a and b, is good, then the inference from Pb to Pa will not generally be good. If, for an arbitrary frame Px, there is a way to construct a frame P'x that is inferentially complementary to Px, this will show that there is no way for terms to behave asymmetrically. P'x is inferentially complementary to Px if, when the inference from Pa to Pb is good but not conversely, the inference from P'b to P'a is good but not conversely. Such complements can be constructed given either a conditional or negation. If the language has a conditional, then P'x is Px->S for some sentence S. If the language has negation, then P'x is ~Px. Now, given terms a and b with a inferentially stronger than b, so that the inference from Pa to Pb is good but not conversely, and given that the language under consideration has negation or a conditional, a complementary P'x can be constructed such that the inference from P'b to P'a is good but not conversely. But a was inferentially stronger than b, so the inference from P'a to P'b should be good. The inferential strength ordering breaks down, so terms can't behave asymmetrically given those expressive resources.
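
To spell the construction out, here is my reconstruction of the two cases, with S an arbitrary sentence:

Given: the inference from Pa to Pb is good, but not conversely.
Conditional case: let P'x be Px -> S. Suppose P'b, i.e., Pb -> S. Since Pa yields Pb, and Pb together with Pb -> S yields S, Pa yields S; so P'a, i.e., Pa -> S, holds. Thus the inference from P'b to P'a is good. The converse fails: if Pa -> S yielded Pb -> S for arbitrary S, then taking S to be Pa would make the inference from Pb to Pa good, contrary to assumption.
Negation case: let P'x be ~Px. Contraposition turns the good inference from Pa to Pb into a good inference from ~Pb to ~Pa, i.e., from P'b to P'a, and the failure of the converse is likewise preserved.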

The argument depends on the notion of asymmetry Brandom uses. One way of understanding the asymmetry of substitution takes it frame by frame: if you can infer from a to b in a frame, then you cannot infer from b to a in that frame, and vice versa. Brandom's idea of asymmetry is different. He seems to use a partial ordering of the inferential strength of terms such that, for all frames, if you can infer from a to b, then you can't infer from b to a. I'll label the former weakly asymmetric and the latter strongly asymmetric.
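
Put a bit more formally (these formulations are mine, not Brandom's):

Weak asymmetry: frame by frame, if the inference from Pa to Pb is good in a frame Px, then the inference from Pb to Pa is not good in that frame; the direction of the asymmetry may vary from frame to frame.
Strong asymmetry: the terms carry a single, frame-independent ordering of inferential strength, and in every frame the good substitution inferences run from the stronger term to the weaker, never the reverse.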

The argument rests on setting up inferentially complementary sentence frames, but the inferential behavior of these is still weakly asymmetric. The asymmetry is simply reversed from the normal direction; in terms of the example I've been using, instead of being asymmetric going from a to b, it is asymmetric going from b to a. Weak asymmetry is not eliminated. This does contradict strong asymmetry, though: in terms of an ordering, the complementary frames reverse the ordering.

For the following point, say the ordering of the terms in the non-complementary frame setting has positive polarity and the ordering in the complementary frame setting has negative polarity. For the argument to work, it needs strong asymmetry as a premise, so that the positive polarity ordering is preserved in the negative polarity setting; in other words, so that the positive polarity ordering is the only ordering. Since the complementary frames are constructed using bits of logical vocabulary, it is possible to discriminate them syntactically. Brandom seems to try to get around this by introducing new syntactically simple signs for the complementary frames, defined using the conditional or negation. I think this is smoke and mirrors. The crux of the argument is the use of strong asymmetry rather than weak. I find strong asymmetry a little weird. Since it is stronger than weak asymmetry (not just in name), Brandom should supply some argument or explanation for why strong asymmetry is a property of term substitution. We can always pick out the frames in which the polarity of the ordering has been reversed, but strong asymmetry's appeal seems to rely on our not being able to do that. The weak point of Brandom's argument is the strong asymmetry premise. Given that this argument is the linchpin of a foundational chapter of Articulating Reasons, this looks bad for Brandom.


In chapter 4 of Articulating Reasons, Brandom argues that singular terms must exhibit a certain sort of inferential behavior, namely symmetric. He argues that asymmetric substitution inferences for terms are impossible. I will present that argument in detail soon, in another post, but I wanted to comment on an odd thing about the conclusion. Assuming the argument works, we find that if the language has certain expressive resources, namely either a conditional or a negation, then terms must license symmetric substitution. That is a fairly minimal condition, but it is still non-trivial. If we have an impoverished language without either of those bits of logical vocabulary, the argument doesn't get off the ground. Why does Brandom try to draw the stronger conclusion that singular terms must behave symmetrically with respect to substitution? Is there supposed to be something about inferentially articulated languages that requires them to have at least one of those bits of logical vocabulary? Based on what he says in chapter 1, there doesn't seem to be any such requirement. Is there some further condition that any language that would be used by a group must have such logical vocabulary? Again, there is no indication from any of the chapters in Articulating Reasons that there is such a condition. I'm not sure why the expressively impoverished languages drop out of the picture. One guess is that any inferentially articulated language can always be enriched to include conditionals. Then the argument in ch. 4 would apply. However, this seems to leave open the possibility that adding such vocabulary changes the singular terms so that they act symmetrically rather than asymmetrically. Were this the case, then new conditional/negation-free conclusions could be drawn, meaning the conditional/negation were not actually conservative over the previous inferences. If the claim from the first chapter is correct, that conditional/negation are conservative over any field of material inferences (Brandom keeps calling collections of material inferences "fields"; I don't know why, nor do I know why they aren't just sets), then the result from ch. 4 would be that singular terms must behave symmetrically. Offhand, I don't know what exactly the status of the conservativity claim is. This line of thought might call it into question, or, if it is solidly established, it might go a ways towards explaining why Brandom draws the conclusion that he does.

A while ago, I wrote a post on monotonicity. A large part of it was on what I called semantic monotonicity: if some formula \phi(P) is true, and in the model the set assigned to P is a subset of that assigned to Q, then \phi(Q) is true as well. This needs to be slightly changed for inferences: if from premise P you can infer Q, and the extension of Q is a subset of that of R, then from P you can infer R. Intuitively, the extensions of the predicates/concepts in the premise and conclusion are ordered by the subset relation. This is different from monotonicity in side formulas, which says that if you can derive a conclusion, then you get the conclusion no matter what further premises are added to the argument. Brandom's picture in Articulating Reasons rejects monotonicity in side formulas. This is one of the properties of material inferences: they can be turned from good to bad by adding more premises. Some material inferences are semantically monotonic. To be a bit more exact, I should say some concepts or predicates are. For example, the inference from "Madrone is a tabby" to "Madrone is a cat", and from that to "Madrone is a mammal". What role do these sorts of inferences play in the inferentialist picture? They happen to be a subset of the commitment-preserving inferences. There is more to the story. Brandom distinguishes three sorts of inferential relations, which I went over in a previous post. These relations are ordered such that incompatibility entailments -> commitment preserving -> entitlement preserving, but not conversely. Based on what is said about incompatibility entailments, I think the semantically monotonic inferences are also a subset of the incompatibility entailing inferences: everything incompatible with the conclusion of a semantically monotonic inference will be incompatible with its premise. From what I can tell, this works for single-premise inferences. I speculate that multi-premise inferences are where one finds commitment-preserving inferences that are not incompatibility entailing.
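
A toy model may help fix the idea, treating predicates as nothing more than their extensions, so that semantic monotonicity is just the subset test. The sketch and its names are mine, purely illustrative:

    # Toy model: predicates identified with their extensions (sets of individuals).
    tabby = {"Madrone"}
    cat = {"Madrone", "Whiskers"}
    mammal = {"Madrone", "Whiskers", "Rex"}

    def semantically_monotonic(premise_ext, conclusion_ext):
        # The one-premise inference from P(t) to Q(t) is semantically
        # monotonic when the extension of P is contained in that of Q.
        return premise_ext <= conclusion_ext

    # "Madrone is a tabby" to "Madrone is a cat" to "Madrone is a mammal".
    assert semantically_monotonic(tabby, cat)
    assert semantically_monotonic(cat, mammal)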

This post is my stab at understanding the last section of the last chapter of Articulating Reasons.

In the last section of chapter 6 of Articulating Reasons, Brandom tackles the problem of making room for a notion of objectivity. This is important because he is trying to give an assertibility-conditional semantics. Truth-conditional semantics has a clear route to objectivity, since what is true is independent of any agent's attitudes in a reasonable sense. The primitives for the inferentialist are going to be commitment and entitlement, so the question is how to come up with, in Brandom's phrase, an attitude-transcendent notion of objectivity. This is one of the more interesting and important parts of the book, but it is also one of the more poorly written parts. Alas!

To make room for objectivity, there needs to be some way to cash out the difference in truth-conditions, without invoking truth, between two sorts of sentences: a claim and its meta-claims, where meta-claims involve the attitude or ascription of an agent. For example, "I am in Pittsburgh" is a claim, and some of its meta-claims include "I am committed to the claim that I am in Pittsburgh" and "I assert that I am in Pittsburgh". There is a difference in truth-conditions between claims and their meta-claims. (The claim/meta-claim terminology is mine; it should be useful for explaining what is going on.) The primitives of evaluation for the assertibilist are entitlement and commitment. There are instances where one can be committed and entitled to both a claim and a meta-claim, e.g., to use Brandom's example: (a) "I will write a book" and (b) "I foresee I will write a book". Commitment to both can be secured by, say, resolute avowals of your plans to write. Entitlement to both will be secured by roughly the same considerations: what can be said in defense of the former can be said in defense of the latter. The point isn't that commitment and entitlement to both necessarily go together, just that they possibly can. So it is possible that the assertibilist cannot distinguish the two in terms of commitment and entitlement alone.

Commitment alone and entitlement alone cannot account for the difference. However, incompatibility, which was introduced earlier in chapter 6 and discussed briefly a few posts ago, can. To recap, two claims are incompatible just in case commitment to one precludes entitlement to the other. The interaction of the two primitive notions gives us the derived notion that will be used here. This is sort of the philosophy analog of Chekhov's gun principle: a concept introduced early on gets used in an argument by the end of the book. Here is where it happens. The incompatibilities associated with (a) differ from those associated with (b), i.e., Inc(a) != Inc(b). To take Brandom's example again, "I will die in 10 minutes" is incompatible with (a), but not with (b), taking "foresee" in a non-omniscient, slightly weak way.
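
In Inc-set notation the point is just that the two sets differ; the toy sets below are stipulated from Brandom's example:

    # (a) "I will write a book"  vs  (b) "I foresee I will write a book"
    inc_a = {"I will die in 10 minutes"}  # incompatible with (a)
    inc_b = set()                         # (b) tolerates that interruption
    assert inc_a != inc_b                 # incompatibilities separate (a) from (b)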

This gives the assertibilist a way to insert a gap between a claim and its meta-claims, reflecting the gap that the proponent of truth-conditions claims is there. This should show that there is space in the inferentialist picture for objectivity, in the sense of attitude-transcendence. While the assertibility conditions on claims and meta-claims might be the same, they can be distinguished in terms of incompatibilities. This doesn't go the extra mile toward an account of objectivity; if memory serves, there is a stab at that near the end of Making It Explicit. This move just shows there is room for the notion of objectivity.

Suppose one asks if there is a difference between the incompatibilities of a proposition p and "it is true that p". It seems straightforward that anything incompatible with the former is incompatible with the latter and vice versa. Assume that q is incompatible with p. Then commitment to q precludes entitlement to p. But whatever would entitle one to "it is true that p" would entitle one to p, so commitment to q precludes entitlement to "it is true that p" as well. So q is incompatible with "it is true that p". Similarly for the converse. Incompatibilities do not distinguish between p and "it is true that p", which seems desirable. Brandom thinks he can exploit the notion of incompatibility to define a predicate "it is assertible that ..." which is disquotational in the same way the truth predicate is. In his jargon, this would be a predicate such that "it is assertible that p" has the same incompatibilities as p, and so the same incompatibilities as "it is true that p". This would allow the assertibility-condition proponent to say that everything the truth predicate does can be simulated by a predicate defined out of only her primitive notions. I do not know what I think of this last move, defining this assertibility predicate; I'm much less comfortable with it than with other parts of the argument. It doesn't seem to get all the way to an account of objectivity, and it doesn't provide much beyond the demonstration that there is room for objectivity in the Brandomian picture. "It is assertible that..." also seems to introduce another modality which is lacking in the truth predicate (as pointed out to me by another Pitt grad student). Incompatibility is a modal notion, but, outside of incompatibilities, the two predicates, truth and assertibility, differ as to modality, which is problematic for claiming an equivalence.

In chapter 6 of Articulating Reasons, Brandom makes some distinctions among the proprieties of assertions in his sketch of how to make a semantics based on assertibility conditions work. The two main ones are commitments and entitlements. Commitments are the claims one is committed to by an assertion; these give inferential relations among contents. Asserting "This is red" commits one to "This is colored", that is, the latter should be added to one's assertion score if the former is. Entitlements are the subset of commitments for which one can supply justifying reasons. These two notions come together to define incompatibility, which is a relation between contents: A is incompatible with B iff commitment to A precludes entitlement to B. This gives us two basic notions and one derived from their interplay. Now there are a few things to note about these.

Commitment is generally asymmetric: "A commits one to B" would not seem to entail "B commits one to A" for many A and B. Incompatibility is a modal notion on, as far as I can tell, two counts. The first is that "precludes" seems to be modal: commitment to A means that one cannot be entitled to B. There is, I suppose, a non-modal reading of "precludes", on which commitment to A means that one is not entitled to B. Even in that case, incompatibility is a modal notion because entitlement is. To be entitled to A is to be able to give reasons in support of A, so entitlement is, it would seem, a modal notion. Incompatibility is a relation between contents, and a modal one at that. Incompatibility seems like it should be a symmetric relation, but the definition doesn't seem to guarantee this: commitment to A may preclude entitlement to B while commitment to B does not preclude entitlement to A. At the least, a proof of symmetry would be needed, and I don't have one. At a minimum, incompatibility should be symmetric for A and ~A, since '~' is supposed to express incompatibilities.
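
A toy model makes the worry vivid. Nothing in the bare definition forces "precludes" to be symmetric; here is a stipulated (and entirely artificial) case in which incompatibility fails to be symmetric:

    # incompatible(X, Y) iff commitment to X precludes entitlement to Y.
    # "Precludes" is just a stipulated table here, so symmetry is not forced.
    precludes = {("A", "B")}  # commitment to A precludes entitlement to B

    def incompatible(x, y):
        return (x, y) in precludes

    # A is incompatible with B, but B is not incompatible with A.
    assert incompatible("A", "B") and not incompatible("B", "A")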

From here, Brandom defines three sorts of relations. (He describes them as three sorts of inferential relations, but one is not a relation on inferences. This is primarily on pp. 193-4 for the interested reader.) These relations are: committive inferences, permissive inferences, and incompatibility entailment. Committive inferences are those that preserve commitment; similarly, permissive inferences preserve entitlement. One gripe here is that there is no explanation of what preservation is supposed to be. Most non-stuttering inferences (i.e. inferences not of the form A, therefore A) will lose commitments. Suppose we make the inference: that is vermillion, therefore that is red (to stick with the easy example). The conclusion has different commitments than the premiss; at the least, the conclusion does not commit me to the premiss. There's an idea there that one can grab hold of, but we're not given a clear picture of it. Incompatibility entailment (I-entailment) is better defined, but it is surprisingly difficult to ferret out the definition. By "surprisingly" I mean that it requires any work at all; it should be clearer. (No philosophical gripe here, just a stylistic and pedagogical one.) Let's say that Inc(A) is the set of contents that are incompatible with A, i.e. {C : A is incompatible with C, for C in the appropriate language} (pardon the hand-waving). Then A I-entails B iff Inc(B)\subseteq Inc(A). The example from the book is "The swatch is vermillion" I-entails "The swatch is red" because everything incompatible with the latter is incompatible with the former. Do we have to restrict ourselves to single premises and conclusions? I think, with reservation, that all three relations can be generalized to hold between sets of contents, with no restriction on the set of premises and with the restriction that the set of conclusions be a singleton. For I-entailment, this seems straightforward: for a set of contents H, Inc(H) equals the union of Inc(B) for all B\in H. I'm hesitant about extending this straightforwardly to committive and permissive inferences since I'm unclear about what the preservation in their description is.
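
Here is a small sketch of I-entailment and the multi-premise generalization in the obvious set-theoretic encoding. The Inc table is made up for the color example, and the encoding is mine, not Brandom's:

    # Inc maps a content to the set of contents incompatible with it.
    Inc = {
        "The swatch is vermillion": {"The swatch is blue", "The swatch is green",
                                     "The swatch is crimson"},
        "The swatch is red": {"The swatch is blue", "The swatch is green"},
    }

    def i_entails(a, b):
        # A I-entails B iff everything incompatible with B is incompatible with A.
        return Inc[b] <= Inc[a]

    def inc_of_set(contents):
        # Generalization to sets of premises: Inc(H) is the union of Inc(B) for B in H.
        return set().union(*(Inc[b] for b in contents))

    assert i_entails("The swatch is vermillion", "The swatch is red")
    assert not i_entails("The swatch is red", "The swatch is vermillion")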

Why are these three notions important? I will get back to the relations at a later time. The two notions of commitment and entitlement are important because of how they figure in Brandom's game of giving and asking for reasons. Brandom argues that there are two necessary conditions for any game to count as a game of reasons for assertions. One necessary condition is that the moves be evaluable in terms of commitments. The short reason for this is that assertions must express conceptual content; such content is inferentially expressed, so assertions must fit into inferential networks. Making a move must commit you to inferentially articulated consequences in order for it to count as an assertion. The other necessary condition is that there be a distinguished class of assertions to which one is entitled. The short argument for this is that undertaking commitments implicitly acknowledges that justification for the commitment can be requested, at least in some circumstances. In separating out the moves that are justified or justifiable from those that aren't, one creates a distinguished class of entitled assertions. Therefore, commitment and entitlement structures are necessary conditions for a game to count as a game of giving and asking for reasons for assertions. No sufficiency claim is being made. The notion of incompatibility, which links the other two normative notions, is important for what Brandom has to say about objectivity, which I will post on later.

A few weeks ago the Formal Epistemology Workshop (FEW) was held at CMU. I attended a few sessions, and they were pretty good. In addition to a few good talks, I got to meet Kenny, who also commented on one of the talks I liked. Writing about the talks, even though it's kind of belated, is a great way to procrastinate on this paper I have hanging around, so I figure I'll do that.

I went to four talks: one on diagrams in proofs, one by Kevin Kelly on his stuff, one incomprehensible one by Isaac Levi, and one neat one on philosophical issues in AI and information theory. I'm going to talk about the first one. It was by John Mumma of CMU and it was on his thesis work. He was trying to offer a way of reconstructing Euclid's proofs in the Elements that gives the diagrams an important role. The traditional story, since Hilbert, is that diagrams don't have any actual role in the proof. Proofs are sequences of propositions or text, so diagrams, being neither propositional nor textual, play no role in proof. At best, they gesture towards how to fill in the missing steps in a proof. Mumma's idea was to come up with a way of using the diagrams in proofs. He distinguished between two sorts of diagrammatic properties, whose names I cannot remember. The rough idea was that one was metric (how long) and the other topological (what is between or inside what). Based on this distinction, he noted that Euclid only invoked the latter when his diagrams played a role in the proofs. The proofs contain instructions for constructing the diagrams, and, when the constructions are done out of order, the diagrams can lead us astray. He gave some examples and took the moral to be that the order of the constructions is important, that it creates dependencies among the parts and properties of the diagrams. From my point of view, the neat thing was what he did next. In order to talk about proofs that actually use diagrams in a central way, he defined a proof system that includes syntactic diagram objects. These diagrams were (roughly) n x n grids with lines between various points (I think they ended up being 4-tuples encoding this information). Akin to inference rules, diagrams get operations, or constructions, on them. (He might have ended up calling them inference rules in the end.) The proof system is a sequent calculus in which each sequent A, B -> A', B' consists of (sets of) diagrams A, A' and (sets of) formulas B, B'. This is all a bit rough since my memory of these details is hazy after the few weeks that have passed. Mumma had some nice result about this proof system being able to reconstruct all of the proofs from the first four or so books of the Elements. Well done! It looks like a neat project.
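
To make the shape of these syntactic objects a little more concrete, here is a guess at how the diagrams and sequents might be encoded. Every name and detail below is my speculation from the talk, not Mumma's actual definitions:

    from typing import NamedTuple, FrozenSet, Tuple

    Point = Tuple[int, int]  # a position on the n x n grid

    class Diagram(NamedTuple):
        size: int                                 # the n of the n x n grid
        segments: FrozenSet[Tuple[Point, Point]]  # lines between grid points

    class Sequent(NamedTuple):
        # A, B -> A', B': (sets of) diagrams and formulas on each side.
        left_diagrams: FrozenSet[Diagram]
        left_formulas: FrozenSet[str]
        right_diagrams: FrozenSet[Diagram]
        right_formulas: FrozenSet[str]

    # A diagonal drawn on a 4 x 4 grid.
    d = Diagram(size=4, segments=frozenset({((0, 0), (3, 3))}))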

I said the new proof system was the interesting thing from my point of view. I will explain. The traditional picture says that proofs are sequences of propositions or text related in a suitable way. Mumma's move was to expand the idea of a proof-theoretic language to include diagrammatic objects and operations on them. There is no reason to be overly strict with the notion of language for proofs. He did not go all the way to using full diagrams, it seems: the diagrammatic objects are encoded in 4-tuples (with some further abstracting into equivalence classes, I think). This means that some things are getting left out, but things must be left out to get the right level of generality needed for proofs, I suppose. If Mumma's idea pans out (must read papers…), it seems like it could have an application to inferentialist semantics of the Brandomian sort. Certain sorts of concepts, e.g. geometrical ones, seem to have natural inferential roles bound up with diagrams or pictures. It would be nice to be able to take advantage of that. But this is speculation. Something else I'm wondering is to what degree Mumma's work can hook up with Etchemendy and Barwise's project of investigating hybrid forms of information, which are forms of information that depend on narrowly linguistic forms of information as well as traditionally non-propositional forms, such as maps and diagrams. Maybe some day I will get to come back to this.

In the second chapter of Articulating Reasons, Brandom makes a connection between logical and normative vocabulary. He says: "[N]ormative vocabulary (including expressions of preference) makes explicit the endorsement (attributed or acknowledged) of material proprieties of practical reasoning. Normative vocabulary plays the same expressive role on the practical side that conditionals do on the theoretical side."
In the first chapter he says that for logical vocabulary to play its expressive role, its introduction must be conservative over the old inferences. The question is: does normative vocabulary have to satisfy the same criterion to play its expressive role? I do not think that the distinction between theoretical and practical reasoning is going to influence this. The expressive role is to make commitments and endorsements explicit. A case is made that in order to do this, the explicitating vocabulary (what an awful phrase) must not add additional conceptual material to the mix, that is, must not make good new conclusions involving only old vocabulary. That just is to be conservative over the old inferences. This means that normative vocabulary is not conceptually contentful, even expressions of preference, which is mildly surprising. My next question is what makes normative vocabulary normative, if not conceptual content, since it has none. Is it just a sui generis property of that sort of vocabulary? We don't get any indication, though, of what makes some bit of vocabulary normative, so I am hesitant to endorse that sort of strategy. I'm not sure what to make of this. Is logical vocabulary similarly normative? This might be where the theoretical/practical divide comes in, if the answer is no. One might say the practical dimension somehow confers normativity on the vocabulary. Although, since the inferentialist project in AR and Making It Explicit sees beliefs as normative (or whatever is discussed instead of belief), it is not unlikely that logical vocabulary has a normative element to it. This element is not conceptual, so I'm unsure what it would be. To back up a little, the question of whether the addition of normative vocabulary is always conservative is an interesting one. The idea is that normative vocabulary is supposed to codify certain practical inferences that one makes, e.g., I am a banker going to work, so I shall wear a tie. The normative vocabulary codifies this inference, filling the role of the pro-attitude or missing premise in an enthymeme. So, would adding, say, 'should' allow inferences to conclusions not containing 'should' that were not previously allowed? Intuitively, no, although there isn't really anything more than that intuition given in the book. I'm not sure how one would argue this point generally. While one might let 'should' slip by as conservative, some of the other normative vocabulary is less compelling, e.g., expressions of preference.

I’m doing a reading group this summer with some of the other Pitt grads on Brandom’s Articulating Reasons and (later) McDowell’s Mind and World. I finally got my copy of Articulating Reasons (AR) back, so I figure I will do a few posts on issues we’ve covered in discussion. The first is going to be Brandom’s discussion of harmony in the final sections of chapter 1. For some good background on harmony, check out the presentation slides that Ole put up. They are quite helpful.

On the inferentialist picture in AR, the meaning of a word or concept is given by the inferences in which it figures. Brandom discusses Prior's tonk objection to this idea for the case of logical constants, and then he goes on to state Belnap's reply that logical constants should be conservative. This will not work for placing restrictions on the introduction of new vocabulary into a language for an inferentialist (understanding the inferentialist project as trying to give the meanings for the whole language, not just the logical constants), since conservativeness is too strong. To be conservative is not to license any new inferences to conclusions that are free of the new vocabulary. This, however, is not to add any new conceptual content, i.e. any new material inferences. Adding new conceptual content would mean licensing new material inferences which would interact with the old vocabulary and conceptual content to yield new conclusions. Brandom sums this up by saying, "the expressive account of what distinguishes logical vocabulary shows us a deep reason for this demand; it is needed not only to avoid horrible consequences but also because otherwise logical vocabulary cannot perform its expressive function." I take this to mean that logical vocabulary can make something explicit because it is not adding any content to the mix, not muddying the conceptual waters so to speak (or, to throw in another metaphor, not creating the danger of crossing the beams of the conceptual). I will say more about this in another post.
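
Prior's tonk makes a nice concrete test. Here is a minimal forward-chaining sketch, with sentences as strings and single-premise rules as pairs (the toy rules and names are mine), showing tonk failing conservativeness: a tonk-free conclusion becomes derivable that wasn't before.

    # Derivability closure under single-premise rules (premise, conclusion).
    def closure(premises, rules):
        derived = set(premises)
        changed = True
        while changed:
            changed = False
            for prem, concl in rules:
                if prem in derived and concl not in derived:
                    derived.add(concl)
                    changed = True
        return derived

    old_rules = [("that is vermillion", "that is red")]
    # tonk: from A infer "A tonk B"; from "A tonk B" infer B.
    tonk_rules = old_rules + [
        ("that is vermillion", "that is vermillion tonk pigs fly"),
        ("that is vermillion tonk pigs fly", "pigs fly"),
    ]

    old = closure({"that is vermillion"}, old_rules)
    new = closure({"that is vermillion"}, tonk_rules)
    # Non-conservative: "pigs fly" is free of "tonk" but newly derivable.
    assert "pigs fly" in new and "pigs fly" not in old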

Brandom proceeds to the next obvious constraint on inferences, Dummett's notion of harmony, which is supposed to generalize Belnap's conservativeness. Brandom doesn't say much to shed light on what precisely harmony is. The idea, presented roughly, is to find some sort of nice relationship between the introduction rules (circumstances of application; right rules, in sequent calculus terms) and the elimination rules (consequences of application; left rules). Brandom reads Dummett as hoping there will be a tight connection between the consequences of a statement and the criteria of its truth (p. 74 in AR, p. 358 in Dummett's Frege: Philosophy of Language). Dummett also says that conservativeness is a necessary but not sufficient condition on harmony. Brandom thinks there is reason to doubt the latter claim, since new content can be added harmoniously to an existing set of material inferences, which conservativeness does not allow. Following Wittgenstein, Brandom thinks there is reason to doubt the former as well; I won't go into that though. He sees Dummett as wanting a theory of meaning to provide an account of harmony between circumstances and consequences of application.
Brandom says that this "presupposes that the circumstances and consequences of application of the concept of harmony do not themselves stand in a substantive material inferential relation." Instead, Brandom thinks the idea of harmony only makes sense as the process of harmonizing, repairing, and elucidating concepts. He seems to take it as requiring an investigation into what sorts of inferences ought to be endorsed; the project is normative rather than descriptive. This process, on the Brandomian picture, is carried out by making inferential connections explicit through codifying them in conditionals and then criticizing and defending those commitments. Sounds a wee bit Hegelian (I say, as if I know anything about Hegel…).

Brandom's idea here seems to be that conservativeness works for characterizing or constraining what counts as a logical constant. I have some qualms about this, since conservativeness will be relative to the initial vocabulary and set of material inferences. Harmony will not serve as a constraint on the wider field of all material inferences, since the concept of harmony itself must stand in inferential relations, so it is subject to change over time, revised in light of other commitments. If this is so, I don't see why that consideration wouldn't also apply to conservativeness, unless conservativeness somehow can be added conservatively to all languages. I'm skeptical that that is the case. If that is right, I'm not sure how Brandom can maintain that harmony cannot function as a semantic constraint while conservativeness can. It seems like he wants to have his cake and eat it too.

[This post is rather speculative and the waters may get a little choppy. Please be gentle.] It occurred to me that there are some further consequences of the criticism of Brandom that MacFarlane puts forward. In his Locke Lectures, "Between Saying and Doing" (BSD), Brandom talks about some of the consequences of inference being a kind of doing. In particular, he highlights the relationship between the conditional and inference, the former saying what one does in the latter. This idea is generalized to a discussion of what expressive tools one needs to be able to say explicitly what is left implicit in practice. This is a theme from Making It Explicit (MIE) that sees a formal working through in BSD.

It has been too long since I read either BSD or MIE, and I only read each once through, so some (lots) of the details are getting fuzzy. But the role of score-keeping is toned down in BSD. The emphasis is placed on the deployment of different vocabularies to express what is done in practice. This is tightly connected to inference, but if MacFarlane is right, then inference is not the proper concept to bridge the practice and its semantic expression; that concept is deontic score-keeping. I don't remember any discussion tying the use of different vocabularies together with deontic score-keeping. That could be an aspect of BSD that needs to be worked out. The role of inference in BSD is a bit different from that in MIE, so MacFarlane's point is not directly applicable. BSD does build on the MIE account though, and inference is the paradigmatic kind of doing in both MIE and BSD. Probably the clearest place at which the semantic role of inference comes into play is when Brandom discusses modal notions and how certain inferences are counterfactually robust.

It seems a little odd that there wasn't any talk (to my memory at least) of the explicitating-explicating (technical Brandomese, left unexplained here; see BSD lectures 1 & 2) role of score-keeping vocabulary, namely commitment and the various kinds of entitlement. I don't think that was mentioned at all, when it is starting to seem to me like it should have been. [edit: I have been corrected in the comments. Brandom does talk about the vocabulary of score-keeping. He says in lecture 4 that it is explicitating-explicating for all vocabularies. That roughly vindicates the intuition behind this paragraph. It also seems to support some of MacFarlane's ideas. Tip of my hat to eccecattus for pointing this out.] This could turn into a summer project…

John MacFarlane's "Pragmatism and Inferentialism" is, in large part, about Brandom's claim that inferring is a kind of doing and about how that claim fits into the larger picture of Brandom's project. Brandom also thinks that semantics must answer to pragmatics, that the meaning of a term is its role in a practice. This commitment means that representation can't be one of the semantic primitives, since representation is not an action, whereas inference is. Norms and proprieties can be brought to bear on actions and are less naturally brought to bear on states. But, as MacFarlane points out, even if representation isn't an action, asserting is, and we can see truth as a norm of assertion. This is not a completely unproblematic response (have there been any in philosophy?), but it is a line that Davidson, whom MacFarlane characterizes as a pragmatist in the sense of emphasizing use, takes. Davidson is a pragmatist in this sense since the truth-conditional theories he promotes are tested in the field by how well they allow us to interpret agents as rational beings.

Brandom sees inference as bridging the gap between pragmatics and semantics. The notion of (material and formal) validity supplies proprieties for inferences. MacFarlane asks what makes validity an unproblematic norm where truth fails the test. He doesn't give an explicit answer to that question, but he points to analogs of the reasons why truth fails: there are times when evidence supports asserting something that is false, and there are times when it is improper to assert something that is true, e.g. due to redundancy or to lack of evidence. Going a bit further, when we look at Brandom's primary semantic notions (incompatibility, entitlement-preservation, commitment-preservation), we find that they do not have anything to do with validity. They are all norms for deontic score-keeping. Deontic score-keeping is what connects use to meaning. (This is really what makes MacFarlane's paper interesting.) The score-keeping is a kind of Davidsonian interpretation, so it looks like this lends some support to someone who wants to be a representationalist, i.e. use truth-conditional semantics, while being a pragmatist. Additionally, it leaves somewhat open what the role of inference in meaning is. Inferential role is how Brandom cashes out meaning, but it is no longer the concept bringing together meaning and use. That concept is score-keeping, and inference shows up there in the inferences the score-keepers are disposed to make, not the ones the asserting agent is disposed to make. Also, it opens the door as to what sorts of use meaning could consist in (if it so consists). The candidates on the table are inference and score-keeping (maybe assertion?), but there is no argument (that I know of) to the effect that these exhaust the possibilities. I don't have anything to offer as a candidate, although it seems like there could be other possibilities out there.
