Notes to The Computational Theory of Mind
1. There is an alternative usage of “analog,” on which analog computation exploits some kind of structural analogy between computational states and a represented domain (Ulmann 2023, p. 2). Analog computation in this sense need not be continuous. For example, a Turing machine might use stroke marks on the machine tape to count the number of words in an input text. If the Turing machine appropriately exploits the analogy between number of strokes and number of words, then its computations could count as analog according to the alternative usage. Lewis (1971) offers an influential explication of this alternative usage; Maley (2011, 2023) develops a similar account.
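To make the stroke-mark example concrete, here is a minimal sketch (in Python, with illustrative names not drawn from the source) of a routine that counts words by writing one stroke per word, so that the number of strokes on the “tape” is structurally analogous to the number of words in the input:

```python
# Minimal sketch (illustrative only): a Turing-machine-style routine
# that counts words by writing one stroke mark per word. The exploited
# analogy: number of strokes on the tape = number of words in the input.

def count_words_with_strokes(text: str) -> str:
    """Return a tape region of stroke marks, one per input word."""
    tape = []                   # the "counter" region of the machine tape
    for word in text.split():   # the head scans each word in turn
        tape.append("|")        # write one stroke per word scanned
    return "".join(tape)

# The stroke count mirrors the word count:
assert count_words_with_strokes("the cat sat") == "|||"
```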
2. The label “classical” is sometimes taken to include additional doctrines beyond the core thesis that mental activity is Turing-style computation: e.g., that mental computation manipulates symbols with representational content; or that mental computation manipulates mental representations with part/whole constituency structure; or that mental computation instantiates something like the von Neumann architecture for digital computers. Note also that the abbreviation “CCTM” is sometimes instead used as shorthand for the connectionist computational theory of mind.
3. Computer science offers several techniques for implementing read/write memory in neural networks. For example, if we use a suitable analog recurrent neural network, then we can encode the contents of the memory tape in the activation values of nodes (Siegelmann and Sontag 1995). However, implementationist connectionists do not propose that memory in biological systems actually works this way, perhaps because they regard the implementation as biologically implausible (Hadley 2000).
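As a rough illustration of the encoding idea (a simplified sketch, not Siegelmann and Sontag’s exact construction): an unbounded stack of bits can be packed into a single rational activation value in [0,1], with push and pop realized by affine updates of the kind that analog recurrent network units can compute:

```python
# Simplified sketch of a Siegelmann-Sontag-style encoding (illustrative,
# not their exact construction): a stack of bits is packed base-4 into
# one rational activation value, using digit 2b+1 for bit b. Push and
# pop are affine updates on that value.
from fractions import Fraction

def push(a: Fraction, bit: int) -> Fraction:
    """Prepend a bit: a' = (a + 2*bit + 1) / 4."""
    return (a + 2 * bit + 1) / 4

def pop(a: Fraction) -> tuple[int, Fraction]:
    """Read and remove the top bit via the leading base-4 digit."""
    digit = int(4 * a)           # 1 encodes bit 0, 3 encodes bit 1
    bit = (digit - 1) // 2
    return bit, 4 * a - digit

a = Fraction(0)                  # empty stack
a = push(a, 1)
a = push(a, 0)
bit, a = pop(a)
assert bit == 0
bit, a = pop(a)
assert bit == 1 and a == 0       # stack empty again
```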
4. A related argument claims only that internalist explanation offers certain advantages over externalist explanation (Block 1986; Chalmers 2002; Lewis 1994). This argument does not attempt to expunge wide content from psychological explanation. It simply maintains that we gain explanatory benefits by citing narrow content.
5. Fodor’s early (1980) discussion suggests a similar view: two tokens of a single Mentalese syntactic type share the same narrow content but not necessarily the same wide content. For example, there is a Mentalese syntactic type WATER that could denote either H2O or XYZ but that necessarily expresses a single fixed narrow content. Mental computation is formal (because insensitive to externally determined semantic properties) and narrow-content-involving (because Mentalese syntactic types have their narrow contents essentially). Fodor’s later work (from the early 1990s onwards) abandons narrow content, along with any leanings towards content-involving computation.
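On one common formal gloss of such a view (a hypothetical sketch, not Fodor’s own formulation), narrow content can be modeled as a function from an agent’s environment to a wide content; the names below (“Earth”, “Twin Earth”, etc.) are illustrative:

```python
# Hypothetical sketch: the narrow content of the Mentalese type WATER
# modeled as a function from environments to referents (wide contents).

def WATER_narrow_content(environment: str) -> str:
    """Fixed narrow content: maps each environment to a referent."""
    return {"Earth": "H2O", "Twin Earth": "XYZ"}[environment]

# Two tokens of the type share the same narrow content (the same
# function), yet their wide contents differ across environments:
assert WATER_narrow_content("Earth") == "H2O"
assert WATER_narrow_content("Twin Earth") == "XYZ"
```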
6. Horowitz (2007), Bontly (1998), and Shea (2013) likewise favor externalist individuation of computational vehicles, albeit for somewhat different reasons than those considered here.
7. As discussed below in §7.1, the machine may implement a trivial computational model (e.g., a one-state finite automaton). However, a trivial model along these lines sheds little if any light on the machine’s operations.
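For illustration (a minimal sketch with hypothetical names), here is a one-state finite automaton; since every input leaves the machine in its single state, attributing this model to a physical system reveals almost nothing about the system’s operations:

```python
# Minimal sketch (illustrative only): a one-state finite automaton.
# Every transition loops back to the single state, so the model
# applies trivially and uninformatively.

def one_state_automaton(inputs) -> str:
    state = "q0"                 # the only state
    for _ in inputs:
        state = "q0"             # every input loops back to q0
    return state

assert one_state_automaton("any input at all") == "q0"
```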
8. Naturalistically-minded philosophers often try to reduce representational content to Dretske-style information (Dretske 1981; Fodor 1990). This reductive project is controversial.