In the last chapter of Lexical Competence, Marconi proposes a potentially fruitful thought experiment for probing the bounds of linguistic competence. Consider a natural language processing system that does not seem to understand language. Then ask what would need to be added to it before you would attribute competence to it. The answer ‘understanding’ is ruled out from the start, since it would get us nowhere. Could anything be added, say, to Searle’s Chinese room to bring it to the point where we would attribute competence to it? Marconi’s own answer is that a visual perceptual system of sufficient intricacy would help it achieve a minimal competence. This still faces problems, e.g. in cases where the concepts involved don’t rely on visual or other perceptual cues. It is an interesting start. To get the philosophical gears going, we can vary the systems away from the Chinese room or a computer in a box. How about an android whose perceptual systems cover all five senses? How about a thermometer? How about a robot with no apparent linguistic output? These are a few cases it would be fun to kick around.