Notes to The Neuroscience of Consciousness
1. If one had to pick a rubric for the type of neuroscience discussed here, it would be cognitive neuroscience, a label that both psychologists and neuroscientists gravitate toward. It refers to the task of identifying the neural basis of states that have some connection to cognition broadly construed, with much recent focus on perception, decision making, and memory.
2. Information as used in neuroscience is typically not a semantic notion (see the entry on information). The relevant notion is a statistical one, concerning the reduction of uncertainty about a random variable, not a notion of semantic content (see the entry on the contents of perception). For an accessible deployment of similar notions of information in philosophy of mind as a basis for semantic content, see Dretske 1981.
3. Evans wrote:
[A] subject can gain knowledge of his internal informational states [his ‘perceptual experiences’] in a very simple way: by re-using precisely those skills of conceptualization that he uses to make judgements about the world. Here is how he can do it. He goes through exactly the same procedure as he would go through if he were trying to make a judgement about how it is at this place now … he may prefix this result with the operator ‘It seems to me as though…’. (Evans 1982: 227–28)
4. The issues become even more complicated depending on what form of attention we are focusing on (see Chapter 5 of Wu 2014b for detailed discussion). For a theorist who takes attention to be necessary and sufficient for consciousness, see Prinz 2012.
5. Experimental work attempting to dissociate attention from consciousness often fails to be sufficiently specific about what attention is. For example, some recent work attempts to show consciousness in the “near absence” of attention (Li et al. 2002). This work, while interesting, would not address the central issue of the necessity of attention for consciousness (for discussion, see Wu 2014b: chap. 5).
6. Colin Klein (2017) has argued that unresponsive wakefulness syndrome patients lack a capacity to form intentions. It seems plausible that their normal capacities for forming intentions are defective. It is less clear whether their responses to commands fail to count as intentional actions: perhaps they involve acquiring an intention due to “exogenous” stimuli (the command), or perhaps intentional action does not require an intention (it could involve a simpler, action-oriented state). There are also broader issues about “levels of consciousness” that we shall not address here (for discussion, see Bayne, Hohwy, & Owen 2016).
7. To motivate IIT, Tononi (2004) contrasts systems that can carry large amounts of information but are not conscious with those that are. For example, he contrasts a large array of photodiodes with the human visual system. Both systems can convey a large amount of information, but only one is conscious. As he notes, a crucial difference between the two systems is that in the human visual system, information is integrated via connections between visual neurons, while the photodiode array, with its disconnected diodes, lacks the capacity to integrate information. Still, this contrast suggests only a necessary condition for consciousness, namely that the absence of integration disrupts what is needed for consciousness. First, it is not clear that the case motivates a sufficiency claim, nor that it provides an informative necessary condition (e.g., breathing is an uninformative necessary condition for human consciousness). Second, why should this thought experiment motivate IIT over other theories? While it might be true that the photodiode array lacks integration, it also lacks many other capacities present in a human system.
Tononi (2008) and colleagues (Albantakis et al. 2023) also motivate IIT by appeal to “axioms” about consciousness that are taken to be self-evident, irrefutably true, and immediately given through introspection and reason. This raises earlier concerns about an introspective channel that can reliably deliver such truths (section 2.1). The key axiom that Tononi adduces is integration: consciousness is unified [in that] “each experience is irreducible to noninterdependent subsets of phenomenal distinctions”. So, the experience of the Italian verb “sono” is not the conjunction of separable experiences of “so” and of “no”. Thus, rather than thinking of the axiom of integration as entailing information integration (Φ > 0), we can see IIT as something like a best explanation of the axiom (for criticisms of IIT’s axiomatic approach, see Bayne 2018). Taken this way, the challenge will be to test this specific claim, that is, to show that concrete cases of conscious integration are best explained by informational integration. The question remains whether we have reason to endorse IIT over other theories.
8. According to critics, IIT has failed to find empirical neuroscientific support. IIT proponents have developed a measure called the Perturbational Complexity Index or PCI (Massimini et al. 2005). PCI measures complexity changes in EEG signals produced by transcranial magnetic stimulation (TMS) perturbations. During non–rapid eye movement (presumably dreamless) sleep, the initial response to a TMS pulse is stronger, but it is rapidly extinguished, and the activity elicited by the pulse typically does not travel beyond the stimulation site. This breakdown in cortical effective connectivity has been taken by proponents as evidence in favor of IIT: when there is no awareness, there is less integrated information, which is reflected in the reduction of cortical connectivity. However, PCI is at best a measure modestly related to IIT’s Φ. For instance, EEG measures are several orders of magnitude coarser than anything IIT postulates as the relevant causal structures capable of integrating information (Sitt et al. 2013). Similarly, neuroimaging predictions made by IIT proponents have not been in any way derived from the theory itself (see Fleming 2023b in Other Internet Resources). More generally, estimating Φ for any system of interest, let alone a human brain, is not feasible, as recognized by IIT theorists themselves (Albantakis et al. 2023: 39). To put it in perspective, finding the bipartition of a 128-channel EEG with the maximally irreducible integrated information requires searching just under 2 × 10^38 possible bipartitions (that is, a 2 followed by 38 zeros) (H. Kim et al. 2018). As acknowledged by proponents, “for the 302 neurons that make up the nervous system of C. elegans, the number of ways that this network can be cut into parts is the hyperastronomical 10 followed by 467 zeros” (Koch 2012: 128). The human brain has 86 billion neurons, which makes computing Φ effectively intractable.
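The scale of this intractability is easy to check directly. The sketch below (illustrative Python; the function names are ours, not IIT’s, and it counts partitions of an abstract n-element system rather than anything neurally realistic) computes the number of bipartitions of an n-channel system and, via the Bell-triangle recurrence, the total number of set partitions:

```python
def bipartitions(n):
    """Ways to split an n-element system into two non-empty parts:
    2**(n-1) - 1 (assign each element to one of two sides, halve by
    symmetry, then drop the trivial everything-on-one-side split)."""
    return 2 ** (n - 1) - 1

def bell(n):
    """Total number of set partitions of n elements (the Bell number),
    computed exactly with the Bell-triangle recurrence."""
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]          # each row starts with the previous row's last entry
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[-1]

# A 128-channel EEG: ~1.7 x 10^38 candidate bipartitions to search.
print(len(str(bipartitions(128))))   # 39 digits
# C. elegans' 302 neurons: the count of all partitions runs to hundreds of digits.
print(len(str(bell(302))))
```

Exhaustively scoring even one candidate per nanosecond would take vastly longer than the age of the universe for the EEG case alone, which is the point of the “hyperastronomical” remark above.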
It has also been pointed out that Φ is not well-defined for general physical systems like the human brain (Barrett & Mediano 2019); existing IIT-inspired measures such as PCI do not provide specific tests of IIT, leaving it effectively untested (Mediano et al. 2022) and risking the conflation of evidence about an empirical measure of cortical connectivity (PCI) with evidence for the axiomatic, metaphysical claims of IIT (Michel & Lau 2020; Sitt et al. 2013).
From a purely theoretical perspective, “the axiomatic foundations of IIT are shaky” (Bayne 2018: 7) because the axioms either are not self-evident, or they fail to provide substantive constraints on a theory of consciousness. Moreover, there is no sense in which the postulates describing the implementation of IIT in physical systems are “derived” from the axioms (Merker, Williford, & Rudrauf 2022), and even if one accepted the postulates, it is unclear, “How does one deduce that the ‘amount of consciousness’ should be measured by Φ, rather than by some other quantity?” (Aaronson 2014).
Hanson & Walker (2019) are concerned about the unfalsifiability of IIT because estimating Φ faces a non-uniqueness problem: in certain conditions, a system can be attributed several Φ values, including Φ = 0 and Φ > 0, which would mean that the computation of Φ indicates both that the system is conscious and that it is not. Relatedly, Doerig et al.’s “unfolding argument” holds that IIT is either false or unfalsifiable: the problem is that “every system for which we can measure non-zero Φ allows a feed-forward decomposition with Φ = 0” (Doerig et al. 2019: 6). For these and other reasons, there are concerns among some neuroscientists of consciousness about the unfalsifiability of IIT (see also Bartlett 2022) and, in consequence, about its scientific status (IIT-Concerned et al. 2023 [Other Internet Resources]; also see De Brigard 2023 in Other Internet Resources). For useful comments on the role of theories of consciousness in science, with a focus on IIT, see Lau 2023 (also see Lau 2017 in Other Internet Resources).
9. Another important set of experiments taken to support the ventral-conscious/dorsal-unconscious dichotomy provides evidence for seemingly different effects of visual illusions on each stream: the ventral stream is subject to them, as evidenced by report, while the dorsal stream is impervious to them, as evidenced by action. Subjects report seeing an illusory stimulus as being a certain way when it is not, yet motor action toward the stimulus does not show itself to be subject to the illusion. For important early work on this, see Aglioti, DeSouza, & Goodale 1995; Haffenden & Goodale 1998; Haffenden, Schiff, & Goodale 2001; for criticism, see Smeets & Brenner 2006; Franz & Gegenfurtner 2008; Franz 2001. For an argument from these results to “zombie” action in normal individuals, see Wu 2013.
10. Campion et al. (1983) raised alternative explanations of the data, one to be discussed in the text. The others were: (1) perhaps information reaches spared residual cortex in V1, or (2) vision informs behavior due to light scattering from the stimulus onto a part of the retina corresponding to spared V1. Not every blindsight patient has been tested to rule out these alternatives, though in the case of the well-studied patient GY, imaging suggests that no cortex is spared in his V1 lesion. Light-scattering-mediated behavior has been demonstrated in some blindsight patients (King et al. 1996), though the effect cannot explain all blindsight cases. For example, in a clever control, a stimulus that elicits blindsight behavior fails to do so when projected to the blind spot (e.g., Stoerig, Hübner, & Pöppel 1985). It seems plausible that there are cases where (1) and (2) do not hold.
11. Binocular rivalry might seem visually idiosyncratic, a product of special laboratory conditions, but it might be common. Consider looking into the distance when an occluding tree in one’s right visual field blocks the right eye’s view more than the left’s. There is a substantial difference between the images in each eye, and rivalry might obtain in such natural viewing conditions (Arnold 2011a, 2011b; O’Shea 2011).
12. The connection between the studies discussed in this section and the neural substrate of consciousness, although illuminating, is not always straightforward. Stimulation of sensory and motor cortex can produce conscious experiences with completely endogenously generated contents (that is, without external sensory stimulation) (Raccah, Block, & Fox 2021). However, one must be careful not to rush to conclusions on two fronts. First, one must not infer that the stimulated areas are sufficient for producing conscious experiences. For example, direct stimulation of the fusiform face area (FFA) (Parvizi et al. 2012) produces conscious experiences with face-altering contents (“You just turned into somebody else. Your face metamorphosed. Your nose got saggy, went to the left.”; see Movie S3 in Other Internet Resources). Microstimulation of FFA can also produce experiences of face-like features such as eyes and mouths superimposed on other objects (e.g., a basketball; Schalk et al. 2017). However, the likelihood that other regions are indirectly activated and partially drive the conscious experience should not be underestimated (in fact, it is predicted by some theories, such as Global Neuronal Workspace Theory and Higher-Order Theory; see sections 3.1 and 3.3). Yet it is rare to record from the whole brain during microstimulation, so we do not know what other areas become active. Moreover, direct stimulation of the right ventrolateral prefrontal cortex made a patient experience a rapid succession of faces when staring at a blank background, along with modifications to a real face they were concurrently perceiving (Vignal, Chauvel, & Halgren 2000). Thus, (direct) stimulation of sensory cortex is not necessary, and it may not be sufficient, for consciousness. This leads to the second front of caution: the distributed nature of neural codes.
While theorists of consciousness often talk about single (admittedly large) regions as responsible for sustaining experiences, consciousness likely emerges as the outcome of interactions across several regions and timescales, rather than from a single hotspot. For recent discussion of distributed neural coding in general, see Pessoa 2022, 2023; Westlin et al. 2023; Noble et al. 2023. Direct electrical stimulation of small neural populations opens an exciting window for studying consciousness, but the fact that it is a limited window should be kept in mind.
13. Consider Fetsch et al.’s (2014) attempt to have monkeys report their confidence about their perception. Confidence is not necessarily a phenomenal property, but we can still use it in consciousness research; the point here is to get the animals to turn their attention “inward”. Using a paradigm in which the direction of apparent motion (here, left or right) is reported by an eye movement, the researchers also provided the animals on some trials with a “sure bet” choice, a third target to which the animal could move its eyes. When uncertain of the stimulus’s motion, the animal can opt out by taking the sure (smaller) reward. The authors found that when animals were confident, they tended to report the motion direction and reject the sure bet; when not confident, they opted for the sure bet. The researchers also microstimulated on some trials and observed a shift in the psychometric function, one they interpreted as a microstimulation-based increase in confidence.
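The notion of a shift in the psychometric function can be pictured with a toy model (illustrative Python; the logistic form, the function name, and the parameter values are our assumptions, not Fetsch et al.’s fitted model): microstimulation acts like a horizontal displacement of the choice curve, raising the probability of one report at every motion strength.

```python
import math

def p_rightward(coherence, shift=0.0, slope=8.0):
    """Toy logistic psychometric curve: probability of a 'rightward'
    report as a function of signed motion coherence. 'shift' moves the
    curve horizontally; 'slope' sets its steepness (illustrative values)."""
    return 1.0 / (1.0 + math.exp(-slope * (coherence - shift)))

# Without stimulation, a fully ambiguous stimulus (zero coherence)
# yields a 50/50 choice; a leftward shift of the curve makes
# 'rightward' reports more likely for that same ambiguous stimulus.
baseline = p_rightward(0.0)              # 0.5
stimulated = p_rightward(0.0, shift=-0.05)
print(baseline, stimulated)
```

On this picture, the interpretive question in the next paragraph is whether such a shift reflects a change in an internal confidence signal or merely a change in how the stimulus itself is categorized.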
Does the animal in fact turn attention inward, assessing its confidence? One challenge is the meaning of the sure-bet stimulus. When the animal moves its eyes to it, what exactly is it reporting? The authors suggest that the animal indicates lower confidence, but given that “the monkey accepted the sure bet most often for the stimulus conditions that led to the most equivocal choice proportions” (2014: 798; our emphasis), the monkey might in fact be reporting that the stimulus was neither left nor right. That is, the report remains externally directed, a claim about the stimulus category and not about confidence. The report can thus indicate uncertainty without being a report of it.
14. Interestingly, microstimulation had to be delivered at a certain intensity to be detected: (a) at low amplitudes (<40 µA), animals did not detect the microstimulation and appeared to wait for onset of the test stimulus as if they had not noticed anything; (b) at moderate amplitudes (40–65 µA), the animals could detect the stimulation and so attempted the task, but their performance fell to chance levels; (c) at higher amplitudes (>65 µA), animals were able to discriminate as well as when the stimulation was mechanical. Clearly, mere stimulation is not sufficient to trigger behavior; action is engaged only when stronger stimulation is applied.
15. Similar work has recently been done using optogenetics to manipulate gustatory guided behavior in rodents (Peng et al. 2015).
16. We set aside issues of distributed representations. The simple code noted here is a version of the grandmother-cell hypothesis, which maps semantic values one-to-one onto neurons. A different view takes representations to be distributed rather than local. Our question is not about the vehicle of the representation but about its content.