There are several conceptions of the lexicon. The two I will be concerned with here are the generative lexicon (GL) and the sense-enumeration lexicon (SEL). SELs are like dictionaries in that they list each individual sense of a word: ‘bank’ has an entry for a building, an entry for an institution, and so on for any other senses. These senses are usually distinct entries. This leads to a lot of entries, but each entry doesn’t need much structure, since it isn’t storing much information. GLs posit fewer entries with richer structure: entries together with rules and mechanisms for generating other senses, such as the building/institution senses we see in the case of ‘bank’.

Polysemy seems to be one of the biggest considerations (perhaps the biggest) in favor of GLs. SELs account for polysemy by including more entries. This looks bad, since polysemy is systematic: an SEL ends up with lots of entries for lots of words that are systematically similar, and it treats each of these as an unrelated fact. GLs are supposed to be better than SELs because they capture the overarching generality and uniformity that SELs miss. The different entries for ‘bank’ miss the uniformity given by the overall meaning of ‘bank’.

The main arguments for GLs over SELs that I’ve seen run as follows: there is systematic polysemy, and language is generative; SELs can account for the former only by a proliferation of entries; accounting for the latter would require an infinite number of entries, by the meaning of ‘generative’; SELs are by definition finite; therefore SELs cannot account for generativity. Additionally, SELs cannot capture the overarching uniformity in polysemous words. The responses I’ve seen to this say: SELs can capture polysemy through more entries, because the systematicity and uniformity aren’t that big a deal; and generativity can be captured through enough entries, because there will only ever be a finite number of generated senses.
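The architectural contrast can be sketched as toy data structures. This is only an illustrative analogy, not anything from the GL or SEL literature: the entries, the sense glosses, and the single generative rule below are all my own hypothetical choices.

```python
# Sense-enumeration lexicon (SEL): every sense of a word is a separately
# stored entry, with no structure relating the entries to one another.
SEL = {
    "bank": [
        "financial institution",
        "building housing such an institution",
        "river edge",
    ],
}

# Generative lexicon (GL): fewer stored entries, plus rules that derive
# further senses from them.
GL_CORE = {"bank": ["financial institution", "river edge"]}

def institution_to_building(sense):
    """Toy generative rule: an institution sense licenses a building sense.

    (Hypothetical rule, standing in for the richer mechanisms a real GL posits.)
    """
    if "institution" in sense:
        return ["building housing such an institution"]
    return []

RULES = [institution_to_building]

def gl_senses(word):
    """Stored senses plus everything the rules generate from them."""
    stored = GL_CORE.get(word, [])
    derived = [s for sense in stored for rule in RULES for s in rule(sense)]
    return stored + derived

# Both lexicons cover the same three senses of 'bank', but the GL stores
# two and derives the third, capturing the institution/building pattern
# as a single rule rather than an extra unrelated entry.
print(sorted(gl_senses("bank")) == sorted(SEL["bank"]))
```

The point of the toy is the shape of the trade-off: the SEL grows by adding entries, while the GL grows by adding rules that apply uniformly across systematically polysemous words.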
As far as I can tell, there aren’t particularly good arguments on either side, although the pro-GL arguments look better.