
MacFarlane presents a trichotomy within the discipline of semantics: presemantics, semantics proper, and postsemantics. MacFarlane understands presemantics as Belnap presented it in his article “Under Carnap’s Lamp.” Presemantics is a theory of the available semantic values and their relations. Linguistic expressions do not enter into presemantics, unless they are themselves some of the objects under consideration as possible values. In a phrase, the point of presemantics is to make it clear upon what truth depends. Semantics proper, in MacFarlane’s words, “brings together grammatical and presemantic concepts to give an account of how the semantic values of expressions depend on the semantic values of their parts.” (p. 187) Semantics proper is the more or less familiar enterprise of computing semantic values of large expressions or sentences from the values of their atomic parts.
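As a toy illustration of that compositional enterprise (my own sketch for the propositional case, not MacFarlane's or Belnap's formalism), the value of a complex expression can be computed recursively from the values assigned to its atomic parts:

# A minimal sketch of "semantics proper": the semantic value of a complex
# expression is computed from the values of its parts. The two-valued value
# space and the choice of connectives are assumptions made for the example.

def evaluate(expr, valuation):
    """Recursively compute the truth value of expr.

    expr is an atom (a string) or a tuple such as ('not', p), ('and', p, q),
    or ('or', p, q); valuation maps atoms to True/False.
    """
    if isinstance(expr, str):          # atomic part: look up its assigned value
        return valuation[expr]
    op, *parts = expr
    if op == 'not':
        return not evaluate(parts[0], valuation)
    if op == 'and':
        return evaluate(parts[0], valuation) and evaluate(parts[1], valuation)
    if op == 'or':
        return evaluate(parts[0], valuation) or evaluate(parts[1], valuation)
    raise ValueError(f"unknown operator: {op}")

# The value of "p and not q" depends only on the values of its parts p and q.
print(evaluate(('and', 'p', ('not', 'q')), {'p': True, 'q': False}))   # True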


I haven’t been doing much in the way of natural language semantics or pragmatics lately. It is something that I hope to return to, especially after seeing my inbox this morning. I know some of my readers are actively working in those areas, so they are likely to appreciate this. I just got an email from Amazon saying that David Beaver’s Sense and Sensitivity: How Focus Determines Meaning is coming out soon. It is a book on the semantics and pragmatics of focus. From the blurb it looks to have a good overview of the field of formal pragmatics. I took a class on semantics and pragmatics from David that was quite good. Near the end we covered focus some, including David’s work, and if that material, or some version of it, is in the book, it is well worth a read.

There is a review of Situating Semantics up at Notre Dame Philosophical Reviews. The main critical part looks to be a discussion of various criticisms and defenses of Perry’s notion of an unarticulated constituent. The author doesn’t say much about the non-language-oriented essays in the volume, though. The review ends with a cute anecdote too.

This post probably just indicates that I am missing some key bits of knowledge. Why are all metalanguages classical (at least the ones I’ve seen)? Are there not any non-classical ones? Are there reasons for not using a non-classical metalanguage, or for there not being any, apart from making things more complicated? Surely there are discussions of these things somewhere…

On a different note, googling “non-classical metalanguage” yields 5 results. After this post it will probably yield 6.

This might be common knowledge amongst the people who are in the know, but I just found out about this. Paul Spade at Indiana has a website full of stuff on medieval logic and philosophy of language. This includes a full manuscript he wrote on the late medieval views on these things, including material on Buridan and Occam. He also has a lot of translations of relevant material up. I’ve read through the first two chapters of his book and it looks like it will be informative and interesting. Chapter two has a brief overview of the development of logic from Aristotle and the Stoics up to the 13th century. It also has a cute picture of the dragon of supposition.

In an introductory article on forcing, Timothy Chow mentions something he calls “exposition problems,” the problems of presenting some material in such a way that it is perspicuous, clear, well explained, and learnable. He thinks that forcing presents an open exposition problem. I just read through Ramberg’s Donald Davidson’s Philosophy of Language, and it goes some way towards answering the exposition problem for Davidsonian philosophy of language. With the exception of the incommensurability chapter towards the end, it is remarkably clear and quite helpful. I’m not sure if it would be perspicuous to someone coming to it without having read at least some of the Davidson articles. If you have read them, it does a good job of displaying the unity of Davidson’s thought on language, which is not always apparent when, say, one juxtaposes “Truth and Meaning” and “A Nice Derangement of Epitaphs”. Ramberg isn’t doing straight Davidson exposition, though, and the volume of quotation is rather meager. He does succeed in presenting Davidson’s ideas in a coherent, unified, perspicuous manner that, at least for me, made things gel. One of the things he emphasizes is that interpretation is a process that is supposed to issue continuously in revised theories of truth rather than in a single theory. This is maybe easier to see in “Nice Derangement” than in the early stuff. I don’t know if Davidson made this explicit anywhere, though. Anecdotally, I heard someone say that Davidson endorsed this book as a better explanation of his theory than he ever gave.

I came across something while reading this that reminded me of a claim Davidson makes which I’ve never quite gotten. He claims that in order to interpret someone you have to treat their beliefs as mostly true. Since beliefs are mostly true, there isn’t the possibility of systematic error of the kind skepticism points to. Ramberg didn’t say much about this that clarified why this is so. He may have said some things in relation to the principle of charity that are relevant, and I suspect there is a connection to his rejection of the principle of humanity (aim to maximize intelligibility rather than agreement). However, it seems like if I ran into a modern Don Quixote, who took cars to be metal horses and who took my apartment to be a castle and me to be a coffee bean, I could interpret his (bizarre) behavior even though it seems like most everything he says is false. It may take a little while for enough of his knight-errant tale to come out, but it seems like his speech would be interpretable. Despite the fact that most of what he says is false, one would be able to work out the ways in which it is false, thereby making sense of him. Maybe the idea is supposed to be that there is a lot more that he believes that is true, or at least that you take to be true, that is semantically connected to what he says, though not made explicit in his speech behavior (possibly implicit in his nonverbal behavior). This other stuff must, for the most part, be true in order for us to make sense of him. But if my Don is under the impression that he is floating above the surface of Mars, many of these background beliefs go false too. It seems like I’d be able to interpret him, with some difficulty, yet his beliefs are systematically mistaken. I don’t think I could interpret him if I didn’t take him as treating most of his beliefs as true. This, however, isn’t what Davidson claims. He thinks that it would be impossible to interpret someone unless you treated them as having mostly true beliefs. So, I am stuck.

Before getting to the post proper, it will help to lay out a distinction drawn, I believe, by Sellars. The distinction is between three sorts of transitions one could make in relation to propositions, for example if one is playing a language game of some sort. They are language-entry moves, language-language moves, and language-exit moves. The first is made through perception and conceptualization: perceiving the crumb cake entitles me to say that there is crumb cake there. The second covers paradigmatic inferential or consequential relations among propositions: inferring from p&q to p is a language-language move. The third is the move from a practical commitment or explicit desire to action. Borrowing Perry’s example, it is the move from thinking that I have to be at the meeting and that the meeting is starting now to my getting up and rushing off to the meeting.

In Making It Explicit, Brandom distinguishes three things that could be meant by inferentialism. These are the necessity of inferential relations, the sufficiency of inferential relations, and hyperinferentialism. The first is the claim that inferential articulation is necessary for meaning. Representation might also be necessary, but at the least inference is necessary. The second is the claim that inferential articulation is sufficient for meaning. In both of these, inference is taken broadly so as not to collapse into hyperinferentialism, which is the thesis that inference narrowly construed is sufficient for meaning. The narrow construal is that inferences are language-language moves. What does this make the broad construal? According to Brandom, it includes the language-entry and -exit moves. In MIE, Brandom defends, I believe, the necessity of inferential relations, although he says some things that sound like he likes the idea of the sufficiency claim. He doesn’t think that hyperinferentialism will work. This is because he thinks that for some words, the content of the word depends on causal/perceptual connections. I think that color terms are examples. Additionally, the content of some words exhibits itself in the practical consequences it has for our action, and this exhibition is an essential part of their meaning. My beliefs about crumb cake will influence how I act around crumb cake. Hyperinferentialism cannot account for these words because the language-entry and -exit moves essential to their meaning are not something hyperinferentialism has access to.

Brandom’s claim then, once things have been unpacked a bit, amounts to saying that narrowly inferential connections, perceptual input, and practical output are necessary for meaning. This seems to undercut the charge that inferentialism loses the world in a froth of words, a charge mentioned at the end of ch. 4 of MIE, I think. It is also a somewhat looser version of inferentialism, since things that are not traditionally inferential get counted as inferential. The inferentialist could probably make a case that the language-language moves are particularly important to meaning, but I think Brandom’s inferentialism stretches the bounds of inference a bit. I’m not sure an inferentialist of the Prawitz-Dummett sort would be entirely comfortable with the Brandomian version of it. By the end of MIE, Brandom’s broad notion of inference encompasses a lot. Granted, it is fairly plausible that much of that is important to or essential for meaning. However, I wonder if it doesn’t move a bit away from the motivating idea of inferentialism, namely that inference is what is central.

This may be quite naive and rambling, but I’ll go ahead. There is a difference in views of language that I’ve been somewhat puzzled by for a while. One approach takes linguistics fairly seriously and focuses more on speaker intuitions. These two parts may not be intimately related. Philosophers falling into this camp are, say, Jason Stanley and Francois Recanati. Another camp tends not to pay that much attention to linguistics. There is, maybe, a tendency to view language through the lens of first-order logic, broken into terms and predicates. Philosophers in this camp are, say, Sellars and Dummett. I think Davidson might be read as being on each side at different points in his writings. As it stands, I’ve drawn nothing resembling a clear line. This may even play out in different approaches to semantics: why one might concentrate on, say, meaning in terms of reference or meaning in terms of inference. As an aside, I once asked one of my linguistics professors at Stanford what linguists thought of something like conceptual role semantics. He said that they didn’t really think about it because it was a different sort of project than what they were interested in. He pointed me to this little explanatory piece by Ned Block. That helped put things into perspective, but I digress… I almost want to say that the integration of philosophy of language into other areas of philosophy also cuts along these lines. However, I am fairly sure that is false and more a product of selective memory. People on both sides are interested in integrating philosophy of language into other areas, such as philosophy of mind or of action. In any case, I came across a footnote in van Fraassen’s essay on Putnam’s paradox which exemplifies one side (it is left as an exercise to the reader to guess which; the context is a discussion of translation and knowledge of meaning):
“For the sake of example I am here pretending that Dutch and English are two separate languages in actu. I usually think of one’s language as everything one has learned to speak, and of natural language as consisting in all the resources we have for speaking and writing.”
This, by itself, probably does not force van Fraassen into one or the other grouping, although it seems to fit one group rather better than the other. The idea, it seems, lends itself to viewing language more expansively than is usual for linguistic semantics. Interestingly, I think that Lewis in “Languages and Language” could be on board with this completely, even though he is someone I would usually situate in the first group. I must go ask some linguists what they think of this…

This was a little rambling, but I am currently stuck on a paper idea. What better way to pass the time than to ramble on about approaches to the philosophy of language? Probably doing those model theory proofs, now that I think about it… But I’m pretty sure people who would naturally place themselves in one or the other camp read this, so hopefully someone is interested.

In the Topics, Aristotle describes accidents as “now belonging, now not belonging.” This is an interesting thing to note since, at least as the translation renders it, it looks like it is part of one sentence. Clearly, if you are saying this, it would be spread out over time and would be evaluated in slightly different contexts; the time parameter would have shifted. Considered as written and as a type, rather than a token, it seems reasonable to deny that there is a principled reason to shift the context. The whole thing would be evaluated relative to one context. In that case, though, what Aristotle says is not just false but cannot be true, since it would require something to be both P and not P.

I mention this because one of the features of Kaplan’s setup for indexicals is that he evaluates types in context. This is because types, not being spread out in time, can be evaluated relative to a single context. This lets us get at the logic of the terms rather than getting bogged down in details about tokenings. Aristotle’s example seems to be one place where this supposed virtue breaks down. In order to understand it and properly evaluate it, we must consider not just the type but the tokenings, which are temporally spread out.
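Here is a toy model of the point, a sketch of my own rather than Kaplan's formalism, with the time of the context as the only contextual parameter:

# Toy Kaplan-style evaluation: a context fixes a time, and a sentence type
# is evaluated relative to a single context. (Illustrative sketch only;
# the data and the names are made up for the example.)

belongs_at = {1: True, 2: False}   # whether the accident belongs at times 1 and 2

def now_belongs(context_time):
    # the indexical "now" picks up the time supplied by the context
    return belongs_at[context_time]

# The single type "it now belongs and it now does not belong", evaluated at
# one context, is false at every context: the time cannot shift mid-sentence.
for t in (1, 2):
    print(now_belongs(t) and not now_belongs(t))   # False, False

# Two separate tokenings at different contexts can each come out true:
print(now_belongs(1))       # True  -- "it now belongs", tokened at time 1
print(not now_belongs(2))   # True  -- "it now does not belong", tokened at time 2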

I’m doing a reading group this summer with some of the other Pitt grads on Brandom’s Articulating Reasons and (later) McDowell’s Mind and World. I finally got my copy of Articulating Reasons (AR) back, so I figure I will do a few posts on issues we’ve covered in discussion. The first is going to be Brandom’s discussion of harmony in the final sections of chapter 1. For some good background on harmony, check out the presentation slides that Ole put up. They are quite helpful.

On the inferentialist picture in AR, the meaning of a word or concept is given by the inferences in which it figures. Brandom discusses Prior’s tonk objection to this idea for the case of logical constants, and then he goes on to state Belnap’s reply that the rules for logical constants should yield a conservative extension. This will not work as a restriction on the introduction of new vocabulary into a language generally for an inferentialist (understanding the inferentialist project as trying to give the meanings for the whole language, not just the logical constants), since conservativeness is too strong. To be conservative is to license no new inferences whose premises and conclusions are free of the new vocabulary. This, however, is to add no new conceptual content, i.e. no new material inferences. Adding new conceptual content would mean licensing new material inferences, which would interact with the old vocabulary and conceptual content to yield new conclusions. Brandom sums this up by saying, “the expressive account of what distinguishes logical vocabulary shows us a deep reason for this demand; it is needed not only to avoid horrible consequences but also because otherwise logical vocabulary cannot perform its expressive function.” I take this to mean that logical vocabulary can make something explicit because it is not adding any content to the mix, not muddying the conceptual waters so to speak (or, to throw in another metaphor, not creating the danger of crossing the beams of the conceptual). I will say more about this in another post.
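For readers who haven't seen it, Prior's connective is standardly presented with the following pair of rules (the usual presentation in the literature, not Brandom's own notation):

\[
\frac{A}{A \ \mathrm{tonk}\ B}\ (\mathrm{tonk\text{-}I})
\qquad\qquad
\frac{A \ \mathrm{tonk}\ B}{B}\ (\mathrm{tonk\text{-}E})
\]

Chaining the two rules yields \(A \vdash A \ \mathrm{tonk}\ B \vdash B\) for arbitrary \(A\) and \(B\), so adding tonk licenses new inferences, stated entirely in the old vocabulary, from any sentence to any other. That is precisely the sort of non-conservativeness Belnap's requirement rules out.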

Brandom proceeds to the next obvious constraint on inferences, Dummett’s notion of harmony, which is supposed to generalize Belnap’s conservativeness. Brandom doesn’t say much to shed light on what precisely harmony is. The idea, presented roughly, is to find some sort of nice relationship between the introduction rules (circumstances of application) and the elimination rules (consequences of application). Brandom reads Dummett as hoping there will be a tight connection between the consequences of a statement and the criteria of its truth (p. 74 in AR, p. 358 in Dummett’s Frege: Philosophy of Language). Dummett also says that conservativeness is a necessary but not sufficient condition on harmony. Brandom thinks there is reason to doubt the latter claim, that conservativeness is necessary for harmony, since new content can be added harmoniously to an existing set of material inferences even when the addition is not conservative. Following Wittgenstein, Brandom thinks there is reason to doubt the former hope of a tight connection between consequences and criteria of truth; I won’t go into that, though. He sees Dummett as wanting a theory of meaning to provide an account of harmony between circumstances and consequences of application.
Brandom says, “that presupposes that the circumstances and consequences of application of the concept of harmony do not themselves stand in a substantive material inferential relation.” Instead, Brandom thinks the idea of harmony only makes sense as the process of harmonizing, repairing, and elucidating concepts. He seems to take it as requiring an investigation into what sorts of inferences ought to be endorsed, a normative rather than a descriptive inquiry. This process, on the Brandomian picture, is carried out by making inferential connections explicit through their codification in conditionals and then criticizing and defending those commitments. Sounds a wee bit Hegelian (I say, as if I know anything about Hegel…).
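For a concrete picture of the “nice relationship” harmony asks for, the standard Prawitz-style example (my illustration, not Brandom's or Dummett's own) is conjunction, whose elimination rules give back no more than its introduction rule puts in. This is witnessed by a local reduction step:

\[
\frac{\dfrac{A \qquad B}{A \wedge B}\ (\wedge\text{-I})}{A}\ (\wedge\text{-E})
\quad\rightsquigarrow\quad
A
\]

A derivation that introduces \(A \wedge B\) from derivations of \(A\) and \(B\) and then immediately eliminates it can be replaced by the derivation of \(A\) already in hand. Tonk fails any such test: a tonk-I followed immediately by tonk-E concludes \(B\), for which no derivation need have been given.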

Brandom’s idea here seems to be that conservativeness works for characterizing or constraining what counts as a logical constant. I have some qualms about this, since conservativeness will be relative to the initial vocabulary and set of material inferences. Harmony will not serve as a constraint on the wider field of all material inferences, since the concept of harmony itself must stand in inferential relations and so is subject to change over time, revised in light of other commitments. If this is so, I don’t see why that consideration wouldn’t also apply to conservativeness, unless the concept of conservativeness can somehow be added conservatively to every language. I’m skeptical that that is the case. If that is right, I’m not sure how Brandom can maintain that harmony cannot function as a semantic constraint while conservativeness can. It seems like he wants to have his cake and eat it too.

