Why has standard first-order logic occupied philosophers so much? In particular, why take it as the basis for forays into the foundations of mathematics, the nature of language, or metaphysics? It has its good points, to be sure: it is complete and compact. But it also has a non-trivial drawback. First-order logic (with identity) has some pretty large expressive limitations. In particular, it cannot express the general concept of finitude. It can express particular finiteness conditions, e.g. “there are exactly n things that are such and such” or “there are at least m things that are so and so”, for each fixed n or m. But it cannot express “there are finitely many things that are so and so”, and hence it is likewise incapable of expressing “there are infinitely many things that are so and so”. Of course, if we want, we can introduce new quantifiers that express infinitude. This comes at the expense of some of the niceties of first-order logic, in this case the Löwenheim–Skolem property. If finite sentences are too limiting, we can always add infinitary conjunctions and disjunctions, but these force us to give up standard forms of compactness. Still, these seem like good moves to make in many applications. Why would we saddle ourselves with expressive limitations from the outset?
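The inexpressibility claim can be made precise with a standard compactness argument. Here is a sketch (my reconstruction, not from the text above), using a one-place predicate F for “is so and so”:

```latex
% Claim: no first-order sentence \varphi expresses
% "there are finitely many x such that F(x)".
%
% For each n, let \psi_n say "there are at least n F's":
\psi_n \;:=\; \exists x_1 \cdots \exists x_n
  \Big( \bigwedge_{1 \le i < j \le n} x_i \neq x_j
        \;\wedge\; \bigwedge_{1 \le i \le n} F(x_i) \Big)
%
% Suppose \varphi expressed finitude of F. Consider the set
%   \Gamma = \{\varphi\} \cup \{\psi_n : n \in \mathbb{N}\}.
% Every finite subset of \Gamma mentions only finitely many \psi_n,
% so a model with a large enough finite extension of F satisfies it.
% By compactness, \Gamma itself has a model M. But M satisfies every
% \psi_n, so F has infinitely many instances in M, while M also
% satisfies \varphi, i.e. F is finite in M. Contradiction.
% Hence no such \varphi exists; dually, "there are infinitely many F's"
% is not expressible by a single first-order sentence either.
```

The same style of argument shows why adding a quantifier Qx (“there are infinitely many x”) must break something: a logic that can express infinitude in a single sentence cannot have both compactness and the full Löwenheim–Skolem property in the way first-order logic does.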