Empirical Approaches to Moral Responsibility

First published Wed Jun 11, 2025

What are the conditions under which an agent is morally responsible for some action that they have performed? Put another way, and acknowledging that this rephrasing might be contentious, what are the conditions under which it would be appropriate to praise or blame the agent for something they have done? (Strawson 1962; Wallace 1998; Coates & Tognazzini 2013). An account of moral responsibility supplies answers to these questions. (See the entry on “Moral Responsibility”).

Most theorists agree that moral responsibility requires satisfying at least two core conditions. The first is a control condition: the agent must have the right sort of control over what they do (Dennett 1984; Fischer & Ravizza 1998; Shepherd 2014). Such control would be lacking in cases involving force or coercion. The second is an epistemic condition: the agent must know certain things, such as what they are doing and the moral reasons that bear on their actions (Wieland & Robichaud 2017; Zimmerman 1997). The epistemic condition would be violated if a person does something out of ignorance, especially if their ignorance is itself non-culpable. See the entry by Fernando Rudy-Hiller on “The Epistemic Condition for Moral Responsibility” in this encyclopedia.

Some of the work in constructing an account of moral responsibility is conducted from the “armchair” (Daniels 1979; Williamson 2007). Theorists put forward plausible principles for when an agent is morally responsible for what they do. They check these principles for consistency with other principles and with reactions to specific hypothetical cases. They revise these principles and reactions to cases in the direction of greater coherence, simplicity, explanatoriness, elegance, and so forth.

Yet, consistent with the view that there is no sharp demarcation between philosophical and scientific inquiry (Quine 1957; Stich 1996), there are many ways in which empirical evidence might be relevant to the task of constructing an account of moral responsibility. For example, we might believe that ordinary adults have the kind of control that is needed to satisfy the control condition for moral responsibility, while individuals with certain mental disorders, such as addiction, lack this kind of control. Empirical investigations might help identify what is possessed by the former and lacked by the latter.

This example, and others discussed below, highlights that findings from the natural sciences, especially about reasoning, deliberation, belief formation, action selection, and self-control, can potentially inform accounts of moral responsibility and the application of these accounts to difficult real-world cases.

This entry examines four areas in which empirical insights have been fruitful aids in constructing and applying theories of moral responsibility. The focus is on work in empirical fields examining the mechanisms of agency, especially psychology, neuroscience, computational cognitive science, and artificial intelligence.

Of note, this entry does not take up findings from experimental philosophy, understood as the study of folk judgments about philosophically relevant cases (Mallon 2016). Experimental philosophy understood this way is about ordinary attributions of moral responsibility, especially ordinary intuitive judgments. The focus of this entry is, instead, on actual mechanisms of mind and agency, as revealed by empirical sciences, that have relevance for constructing and applying theories of moral responsibility.

Also, two topics are not discussed here: situationism and implicit bias. These topics fall within the scope of this entry, but they are covered at length in the entries in this encyclopedia by Christian Miller on “Empirical Approaches to Moral Character” and by Michael Brownstein on “Implicit Bias”.

1. Addiction and Impaired Control

Addiction figures frequently in contemporary works on moral responsibility because it is supposed to illustrate a canonical way that agency goes awry. We are usually authors of our actions; what we do is up to us. Many philosophers and other theorists claim that this is not so for individuals addicted to drugs. These theorists claim that these individuals are not morally responsible for their drug-directed actions, or else responsibility is to an important degree mitigated.

To better understand and assess these claims, we must get clear on a number of empirical questions about the nature of addiction and the ways it impacts agency—specifically addiction to drugs such as alcohol or opiates, which have been the focus of philosophical interest.

1.1 What is Addiction?

Addiction is a complex and heterogeneous phenomenon (Glackin et al. 2021), and it is best conceptualized as encompassing several importantly distinct components (Griffiths 2005; Sussman & Sussman 2011; Sinnott-Armstrong & Pickard 2013). These components include excessive use despite deleterious consequences, physiological changes including tolerance and withdrawal, and excessive time spent on drug-related activities, which crowds out other aspects of life. These components are reflected in the criteria for substance use disorders in the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association & DSM-5 Task Force 2013), a widely used text for psychiatric diagnosis and classification.

Impaired control is another important component of addiction that is conceptually distinct from the previous three (Heather 1998, 2020; Sripada 2022). The hallmark of impaired control is that a person with (sincerely held) goals to cut down or quit is unable to get themselves to reach or maintain these goals. As noted earlier, control is a core condition for moral responsibility. Naturally, then, this fourth component of addiction, impaired control, has garnered the most philosophical attention, and it shall be our focus. What is the nature of impaired control in addiction?

1.2 Addiction Involves Irresistible Desires

One possibility is that impaired control in addiction arises from irresistible desires. Very roughly, a desire is irresistible if it is so strong that the person cannot resist it no matter how hard they try, though pinning down the idea with more precision is challenging (Mele 1990; Pickard 2015). A particularly vivid depiction of irresistible addictive desires comes from Harry Frankfurt, who gives us the haunting figure of the unwilling addict:

[The man] hates his addiction and always struggles desperately, although to no avail, against its thrust. He tries everything that he thinks might enable him to overcome his desires for the drug. But these desires are too powerful for him to withstand, and invariably, in the end, they conquer him. He is an unwilling addict, helplessly violated by his own desires. (Frankfurt 1971, p. 12)

The idea that addiction involves irresistible desires was famously endorsed by William James (James 1890) and has been widespread in philosophy, for example, in Fischer (2012, Section 2.2), Fischer and Ravizza (1998, p. 82), Wolf (1993, Chapter 2), Scanlon (1998, p. 290), and Wallace (1998, Chapter 5), among others. But is the idea accurate? Does impaired control in addiction typically take the form of desires too powerful for a person to withstand?

The answer appears to be, fairly clearly, no. Hanna Pickard provides an influential critique of the idea that conditions such as addiction involve motives so strong that the person “cannot do otherwise” (Pickard 2015). A key observation is that individuals with addictions display substantial “incentive sensitivity”: they both act and withhold actions related to drug use based on incentives and disincentives for doing so (Pickard 2012; Hart 2013; Husak 1992; Heyman 2009; Sripada 2022). So, for example, they avoid drug use when negative consequences are clearly and saliently present, for example when a police officer is “at the elbow” (Morse 2000; Caplan 2006), and they choose to forsake using drugs if offered only modest-sized incentives (Hart et al. 2000; Higgins & Petry 1999). Additionally, many individuals with addictions routinely attempt to quit, and typically manage to maintain sobriety for days and weeks, something that would not be possible if drug-directed desires were literally irresistible (Pickard 2015).

1.3 Addiction Involves Desires That Are Very Hard to Resist

Faced with the preceding observations, many theorists retreat to a weaker claim: In addiction, drug-directed desires are not literally irresistible, but they are somehow uniquely hard or difficult to resist (claims along these lines are found in Watson 1999; Wallace 1999; Kennett 2013; Levy 2006; Henden, Melberg, and Rogeberg 2013; Morse 2002; Holton and Berridge 2013; and Burdman 2022). There are several potential problems for these “hard to resist” views.

First of all, there are active philosophical disagreements about how best to understand the notion of “hard” or “difficult” (Bradford 2015). One sense of something’s being hard emphasizes subjective effort (Bermúdez & Massin 2023). This sense seems to be at work when it is said that it is hard to lift a hefty but ultimately manageable weight, say 50 pounds. The weight is not unliftable, but it is not easy to lift—it takes some effort and feels somewhat aversive to do it.

But if this is the sense of “hard” at issue, it is not clear that something’s being hard or difficult counts as a form of impaired control that contributes to an excuse from moral responsibility. For example, here are some things that are hard in this effortful sense: going to work early in the morning, grading poorly written student papers, and cleaning up a baby’s messy diaper. Though these things are hard to do, it is a leap to say that we have impaired control over these things, or that we are excused from moral responsibility if we fail to do these things.

There is, in addition, an empirical problem for the “hard to resist” view under consideration. A number of studies have asked people to rate the strength of their drug-directed desires. These studies typically find that these desires are rated fairly moderately; notably, people almost never rate them at the highest rating the survey allows (Hofmann, Baumeister, et al. 2012; Hofmann, Vohs, et al. 2012; Preston et al. 2009).

Perhaps, then, one might offer an alternative analysis of what it means for one’s drug-directed desires to be “hard” to resist or otherwise control. Drawing on a picture rooted in cognitive behavioral therapy, Sripada (2021) proposes that many individuals with addiction experience distorted automatic impressions and evaluations. They inaccurately “see” their self as inadequate, the world as harsh, their future as bleak, their ability to cope without drugs as limited, their relief from using drugs as substantial, and so on. As a result, they engage in drug use (see Pickard (2016) and Flanagan (2013) for additional perspectives on distortions in addiction). These distortions that lie at the root of drug seeking are “hard” to control in the sense that they are difficult for the person to recognize and correct. More specifically, people are highly unreliable, in a statistical sense, at recognizing and correcting distorted impressions—they succeed sometimes but many times they don’t. On this model, the person has impaired control not over their drug-directed desires directly, but rather over their distorted impressions and evaluations, which in turn are the source of their seeking to use drugs.

1.4 People Who Use Drugs Do Not Have Impaired Control

Thus far we have been assuming that addiction involves some kind of impaired control over use of drugs and have examined several accounts of what this impaired control amounts to. Not everyone, however, subscribes to this assumption. Some theorists have been attracted to the position that people who use drugs heavily (and meet many of the conventional criteria for the diagnosis of addiction) do not genuinely have impaired control at all (Foddy & Savulescu 2007, 2010; Hart 2013; Heyman 2009). See also Pickard 2012 for a nuanced related position.

On these views, choices to use drugs, even substantial quantities of drugs, are made freely, rationally, and with full control. We may disagree morally with the choices these individuals make. We may not be privy to some of the instrumental goals that they are trying to achieve. But they choose rationally and their control is unimpaired.

There are, however, some serious problems with these “purposive choice” views. A typical pattern in addiction is that the person attempts to cut back or quit and has some temporary success (i.e., often days to weeks), but eventually resumes regular patterns of use, with this cycle typically repeating multiple times (Dennis et al. 2007; Hunt et al. 1971; Kirshenbaum et al. 2009; McLellan et al. 2000). Also, in order to maintain sobriety, people with addiction frequently undertake interventions that are costly in terms of time, money, and other burdens and risks. For example, they join therapeutic communities (requiring meetings multiple times a week), attend counseling sessions, and undertake onerous drug treatments that require close clinical supervision. These observations seem inconsistent with the claim that drug use is chosen freely, purposively, and with full control.

1.5 What Is at Stake in the Debate?

Philosophers frequently assert that addiction is a paradigmatic real-world case in which individuals fail to meet the control condition for moral responsibility. But the empirical literature is far more tentative and unsettled. Claims that addiction involves impaired control face serious challenges, and theorists have struggled to specify in detail the nature of the supposed control impairments. But claims that addiction does not involve impaired control also face serious difficulties. Work on the issue is ongoing, but meanwhile questions of moral responsibility for individuals with addiction hang in the balance. Whether and to what degree individuals with addiction are morally responsible depends importantly on exactly how these empirical questions are settled.

2. Responsibility for Spontaneous Conduct

Suppose a person does something morally criticizable. They deliberate carefully. They reflect on and endorse their relevant (morally flawed) motives. They form a judgment about what to do and follow through on it. Putting global forms of skepticism aside, assessing moral responsibility is relatively straightforward in cases like these, and most theories deliver similar verdicts. Assuming no defeaters are present, the person is morally responsible for their morally criticizable actions.

But many of our actions, in fact most, do not arise from deliberation; they arise spontaneously and unreflectively. A number of authors, including George Sher, Tim Scanlon, Angela Smith, and Santiago Amaya, have pointed out that we can be morally responsible for various kinds of spontaneous conduct, including how we direct attention, what we notice and fail to notice, what we remember and forget, what we dwell on or ignore, our spur-of-the-moment actions (e.g., blurting something out), our momentary slips or lapses, our unwitting omissions, and so forth (Sher 2006, 2009; Scanlon 1998; A. Smith 2005, 2008; Amaya 2013; Amaya & Doris 2015; Nelkin & Rickless 2017).

2.1 A Case of Forgetting

To make matters more concrete, consider a case from Samuel Murray and Manuel Vargas that they call “Bourbon.”

As Randy is about to leave his home for the office, Al calls to tell him that they are out of bourbon. His regular route to the office takes him right by a liquor store, and Randy tells Al he’ll buy some. Between his home and the liquor store, Randy starts thinking about a paper he is writing on omissions. He continues thinking about his work until he arrives at the department, where he realizes that he has forgotten the bourbon. (Murray & Vargas 2020, p. 826)

Randy did not step back, reflect, and then deliberately violate his promise to bring the bourbon. Yet he is, it seems, morally responsible for what he did, or in this case, what he failed to do (Nelkin & Rickless 2017; Clarke 2014; Amaya & Doris 2015; Murray & Vargas 2020). Notice that it seems appropriate for Randy to apologize to Al, and that he might perhaps be forgiven, both of which further support this impression.

Yet, even if we are inclined to say that Randy is morally responsible, it is not at all clear how he could be, because it is hard to see how he can satisfy the control and epistemic conditions for moral responsibility. After all, it seems odd to say that Randy was in control of his forgetting or that he knew he was forgetting (Murray & Vargas 2020; Nelkin & Rickless 2017).

Notice further that whatever the psychological processes that led to Randy’s forgetting, they don’t seem to resemble very much the processes at work in producing deliberative actions, the more familiar case studied in the moral responsibility literature. As Murray and Vargas put the point:

The problem is that in Bourbon—and relevantly similar cases—the offending agent’s conduct lacks all familiar actional and valuative antecedents that might ground responsibility. There is no decision, volition, intention, belief, desire, choice, or judgment (among other things). (Murray & Vargas 2020, p. 826)

When confronted with cases such as Bourbon, an important yet underexplored next question is this: What exactly are the processes that generate spontaneous conduct? Are these processes simple and reflex-like? Or do they reflect more sophisticated forms of agency?

In asking these questions, we are acknowledging that it is implausible that we can simply treat the mechanisms that produce spontaneous conduct as a black box. Instead, it is likely that we need to get clear on the mechanistic underpinnings of why Randy did what he did before we can decide if he is morally responsible for his conduct.

2.2 Opening Up the Black Box

A first step is to map the language of the Bourbon case into more precise, empirically tractable constructs. We are told that Randy “starts thinking” about omissions during the trip. This in turn might plausibly be understood in terms of the allocation of attention: Randy attends to an internally generated stream of thought that pertains to omissions rather than other things that he could be attending to, such as thoughts about getting the bourbon. His allocation of attention to omissions is harmless until he is near the store. Unfortunately, he continues to allocate attention to omissions rather than the goal of getting bourbon, and so he passes by the store and arrives at his destination empty-handed.

Having reconstructed the Bourbon case in terms of attention allocation, we can next turn to the substantial body of empirical work that illuminates how attention allocation works. Much of this work specifically studies allocation of visual attention to spatially arrayed external targets. Nonetheless, it is widely thought that allocation of attention to candidate internal targets—for example, thoughts or memory items—follows similar principles (Chun et al. 2011; Kiyonaga & Egner 2013).

According to a leading model, choices to allocate attention to candidate targets arise from integrated priority maps (Fecteau & Munoz 2006; Zelinsky & Bisley 2015; Theeuwes et al. 2022). These are maps that assign to each candidate attentional target a scalar score that represents the expected value of attending to that target. On the basis of these priority maps, the person chooses to attend to the target with highest value. Of course, due to the selectivity of attention, this implies the person is not attending (or perhaps not attending sufficiently) to the other candidate targets of attention.
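
To make the model concrete, consider a minimal sketch in Python (the candidate targets and their scores here are invented purely for illustration; real priority maps integrate many cues, as discussed below):

```python
# A minimal sketch of the priority map model: each candidate attentional
# target carries a scalar expected-value score, and attention goes to the
# highest-scoring target. All targets and values are hypothetical.

def allocate_attention(priority_map):
    """Select the candidate target with the highest expected value."""
    return max(priority_map, key=priority_map.get)

priority_map = {
    "external_scene": 0.3,
    "omission_thoughts": 0.8,  # an internally generated stream of thought
    "bourbon_goal": 0.4,
}

print(allocate_attention(priority_map))  # -> "omission_thoughts"
```

Given the selectivity of attention, whichever target wins the comparison crowds out the others; nothing further needs to go wrong for the losing targets simply not to be attended to.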

The priority map model of attention allocation fits nicely with a much broader picture of the etiology of spontaneous conduct that has emerged in computational cognitive science (Rangel et al. 2008; Rangel & Hare 2010; Busemeyer & Townsend 1993; Carruthers 2018; Railton 2017a; Haas 2022, forthcoming; Sripada 2025). According to this model, the mind houses an extensive set of algorithms for the ongoing calculation of the expected value of one’s actional options, where the value at issue is instrumental value relative to a person’s more basic aims, goals, and priorities. On this picture, one’s spontaneous conduct, which can sometimes feel reflexive or automatic, is actually the product of rapid decisions made on the basis of representations of the expected value of the options.

We can distinguish two kinds of processes operative in the priority map model of attention allocation. First, there is a set of processes that leads to the construction of priority maps. These are subpersonal processes that integrate a number of cues into an overall expected value representation that attaches to each attentional target (Anderson & Kim 2018; Failing & Theeuwes 2018). Second, there is a set of rapid decision processes that translate priority map value representations into actual choices about the target to which attention is in fact allocated (Forstmann et al. 2016; Ratcliff & McKoon 2007). Importantly, the rapid decisions implemented by this second set of processes are person-level events. This accords well with first-person phenomenology. Allocations of attention aren’t simply happenings within one’s psychology; they are something that the person does (Watzl 2017; Wu 2023).
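
These two stages can be rendered in a toy model (the cues, weights, and the noisy race-to-threshold rule below are illustrative stand-ins for the subpersonal integration and sequential sampling processes cited above, not reconstructions of any particular published model):

```python
import random

# Stage 1 (subpersonal): integrate weighted cues into a single
# expected-value score per candidate target, yielding a priority map.
def build_priority_map(cues, weights):
    return {target: sum(weights[c] * v for c, v in cue_set.items())
            for target, cue_set in cues.items()}

# Stage 2 (person-level): a noisy race among targets; the first
# accumulator to reach threshold settles where attention goes.
def race_to_threshold(priority_map, threshold=10.0, noise=0.5):
    evidence = {t: 0.0 for t in priority_map}
    while True:
        for target, value in priority_map.items():
            evidence[target] += value + random.gauss(0.0, noise)
            if evidence[target] >= threshold:
                return target

cues = {"omission_thoughts": {"interest": 0.9, "goal_relevance": 0.2},
        "bourbon_goal":      {"interest": 0.2, "goal_relevance": 0.6}}
weights = {"interest": 1.0, "goal_relevance": 1.0}
print(race_to_threshold(build_priority_map(cues, weights)))
```

The noise in the second stage captures why higher-valued targets usually, though not invariably, win the competition for attention.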

Adopting this overall priority map model, we get the following detailed account of what unfolds in the Bourbon case. At the start of the trip, Randy allocates attention to omission thoughts. He does so based on an attentional priority map that assigns omission thoughts a higher value than alternatives including bourbon thoughts. This makes sense—Randy is highly interested in omissions, so it is natural that his attentional priority map would reflect this. As he passes the store, however, there appears to be a missed opportunity: the subpersonal processes that construct and update priority maps could have been sensitive to his current store-adjacent location and shifted the priority map to place higher value on bourbon thoughts rather than omission thoughts. But this shift did not occur—his priority map continued to place higher value on omission thoughts rather than bourbon thoughts, and thus Randy continued to choose to attend to omission thoughts. And due to the limited capacity of attention, in attending to omission thoughts, he was not attending to bourbon thoughts—that is, he forgot about the bourbon.
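
On this rendering, Randy’s lapse can be pictured as a missed context-driven update to the map (again a purely illustrative toy; the numbers and the update rule are assumptions, not findings from the empirical literature):

```python
# Toy rendering of the Bourbon case: the subpersonal update that should
# re-weight the priority map near the store fails to fire.

priority_map = {"omission_thoughts": 0.8, "bourbon_goal": 0.4}

def update_map(pmap, location, update_fires):
    """Being near the store should boost the value of bourbon thoughts;
    in Randy's case, this context-sensitive update does not occur."""
    pmap = dict(pmap)
    if location == "near_store" and update_fires:
        pmap["bourbon_goal"] += 0.6  # hypothetical context boost
    return pmap

for fires in (True, False):
    updated = update_map(priority_map, "near_store", fires)
    print(fires, "->", max(updated, key=updated.get))
# True  -> bourbon_goal       (the map updates; Randy remembers)
# False -> omission_thoughts  (no update; the bourbon is "forgotten")
```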

2.3 How Findings from Cognitive Science Inform Assessments of Moral Responsibility

With these empirical details filled in, what then about moral responsibility?

One approach holds that the kinds of mechanisms that underwrite attention allocation in our empirically-detailed reconstruction of the Bourbon case do in fact confer the kind of control that is required for moral responsibility. This is broadly the tack taken by Murray and Vargas (2020). A hallmark of mechanisms that confer control is that they are reasons-responsive—they flexibly issue in different responses when there are sufficient reasons to do so (Vargas 2013; Fischer & Ravizza 1998; Dennett 1984; Shepherd 2014). The priority map-based mechanisms that underpin attention allocation appear to be reasons-responsive in just this way—a suite of learning algorithms and other mechanisms help to make sure priority maps track the things that promote one’s aims and priorities (Haas 2022; Railton 2017a). On this approach[1], since Randy satisfies the conditions for control, he is morally responsible for forgetting the bourbon, vindicating ordinary opinion.

Another position focuses on the epistemic condition for moral responsibility. Recall that priority maps are generated by subpersonal algorithms that are sensitive to the agent’s more basic aims and interests. Now, these algorithms work well most of the time, but they aren’t perfect. Like everything else in our psychology, they are only boundedly rational (Simon 1990; Lewis et al. 2014; Lieder & Griffiths 2020), and thus, inevitably, there will be many situations in which priority maps associated with candidate attentional targets are inaccurate. That is, there are many situations in which these maps will ascribe higher value to some candidate attentional target A rather than some target B, when, in fact, the actual value, relative to the agent’s own aims and priorities, favors B over A.

Suppose that, at the point when he is near the store, Randy’s priority map is in fact inaccurate. That is, the map assigns higher value to omission thoughts rather than bourbon thoughts when in fact the reverse should be the case, given Randy’s own more basic aims and priorities. If Randy allocates his attention on the basis of an inaccurate priority map and this is why he forgets to bring the bourbon, then he may have done what he did under ignorance. That is, he may fail the epistemic condition for moral responsibility, and thus he would not be morally responsible for his conduct (assuming, that is, that his ignorance is itself non-culpable—see the following section). This position revises folk opinion in the Bourbon case, identifying a potential excusing condition, i.e., ignorance, that might have otherwise gone unnoticed or been unappreciated.[2]

Stepping back a bit, the larger point is that we appear to have made some progress. At the start, it was noted that most theorists treat the mechanisms that produce spontaneous conduct in cases like Bourbon as a black box. By consulting the empirical literature, we have been able to offer up one version of what happens in the case with the box meaningfully filled in. We can next ask, given this picture of the mechanisms that operate to produce the relevant instance of spontaneous conduct, whether the control condition is satisfied, whether the epistemic condition is satisfied, and so on for other conditions relevant for moral responsibility. Empirical considerations have not settled the question of whether and why agents are morally responsible for spontaneous conduct. But they have brought to bear additional resources that clarify the debate and move it forward.

3. Moral Responsibility and the Etiology of Ignorance

3.1 The Epistemic Condition and Culpable Ignorance

When assessing the epistemic condition for moral responsibility, most theorists agree that it is not enough to check whether the agent acts under ignorance. We must further assess whether the agent is culpable for bringing about their ignorance—whether they did something that led to their own “benighting” (H. Smith 1983; Zimmerman 1997; Rosen 2004; Wieland & Robichaud 2017).

Consider the case of a doctor who infuses the wrong type of blood into a patient who then has a severe reaction (the case is loosely drawn from Rosen 2004). The doctor does this out of ignorance; she thinks the patient has one blood type when in fact the patient has an incompatible one. She has this mistaken impression, however, because she does not double-check the chart, something all doctors are supposed to do. She does not double-check because she is in a rush to get to her tee time at a golf club, the most exclusive in the city, and chooses to skip portions of the usual protocol. Here, her ignorance plausibly does not excuse moral responsibility as she is culpable for bringing it about.

Now consider a second case in which the doctor does double-check the chart. The chart itself is wrong because of a rare mistake—the laboratory procedure that establishes a patient’s blood type is nearly always accurate, but in this particular case, the procedure has assigned the wrong blood type. The doctor acts under ignorance, but she is plausibly excused from moral responsibility because her ignorance itself arises non-culpably.

In the preceding examples, we were able to stipulate key features of how the respective agents come to falsely believe certain things. In many interesting real-world cases, we cannot rely on stipulation. We must go out and investigate the pathways by which the agent came to be ignorant. Armed with this etiological information, we can make better assessments of whether the agent’s ignorance is culpable or non-culpable and thus whether they might receive an excuse from moral responsibility.

One place where this kind of empirical etiological inquiry is important is in assessing the role of culture and socialization in creating impediments to knowledge, especially moral knowledge, impediments that bear on questions of moral responsibility (Wolf 2012).

One note before proceeding. Our focus in what follows is on empirical questions about how certain forms of ignorance, especially moral ignorance, arise. In putting the focus here, we are passing over important conceptual questions that remain unsettled about such things as whether moral ignorance excuses at all (we are assuming, in agreement with the weight of philosophical opinion, that it does), and if so, precisely which forms of moral ignorance are excusing. Weatherson 2019, Chapter 5 provides a helpful overview of the literature on these questions.

3.2 Culture and Moral Ignorance

Michael Slote observes that though slavery is viewed as morally repugnant today, it was widespread in the ancient world (Slote 1982). So, can we blame Greek slave owners for their wrongdoing or their acts of vice? Slote thinks we cannot, because Greek slave owners were simply unable to see what virtue required regarding slavery. The reason, he believes, is that slavery was too universal a phenomenon, constraining people’s ability to conceive of alternatives. He writes:

Just as the alternative terms used by other languages can seem to make linguistic conventions seem like inevitable facts of nature, so too ignorance of alternatives to a given social arrangement can instill the belief that the arrangement is natural and inevitable and thus beyond the possibility of radical criticism. So, if the ancients were unable to see what virtue required in regard to slavery, that … requires some explanation by social and historical forces, by cultural limitations if you will. (Slote 1982, p. 72)

Michelle Moody-Adams agrees with Slote that whether culturally acquired beliefs defeat moral responsibility depends importantly on the empirical details of how they were acquired and sustained (Moody-Adams 1994). But she disagrees with Slote’s particular etiological story: “I challenge the empirical credentials of those views which attempt to exempt historical agents from responsibility on the grounds that they suffer from some presumed culturally generated inability to avoid wrongdoing” (Moody-Adams 1994, p. 293).

According to Moody-Adams, a better explanation of culturally sanctioned immoral practices and institutions is that individuals choose to participate in these practices and perpetuate them.

Our failure to see this as the correct etiological story owes to underappreciation of two facts about human psychology that are, according to Moody-Adams, well attested in anthropology, social psychology, and case reports. The first fact is the “banality of evil,” the observation that ordinary humans are regularly and routinely able to inflict great cruelty on their fellows if there are incentives to do so. Moody-Adams cites Stanley Milgram’s famous experiments on obedience (Milgram 1965), among other empirical data, in support. The second fact is our proclivity towards “affected ignorance,” in which individuals engage in subtle forms of self-deception and motivated evidence seeking. Slavery was maintained as an institution, Moody-Adams surmises, because people made choices that were in part cruel and in part affectedly ignorant. See also Calhoun 1989 and Mills 2007 for influential related arguments.

3.3 Social Networks and “Bad Beliefs”

Neil Levy offers a contrasting perspective in investigating a related phenomenon, the acquisition of “bad beliefs” among members of certain political or ideological subcultures, for example, communities of climate change deniers, vaccination skeptics, or supporters of demagogic political figures (Levy 2021).

Levy’s argument relies heavily on the ideas of bounded rationality and epistemic deference. He notes that the evidence that bears on most complex scientific matters (climate, evolution) is so extensive and abstruse that it is simply not feasible or rationally advisable for ordinary people to evaluate all this evidence for themselves. Drawing on the work of quantitative biologists and social scientists working in the “dual-inheritance” framework (Boyd & Richerson 1988; Henrich 2016), Levy argues that people instead make heavy use of the strategy of deferring to the beliefs that prevail in their community. This strategy works because there are a number of features of human belief networks that jointly make it the case that the fact that others around you believe that p is in fact, on average, evidence for p. Levy argues that from these observations, it follows that those who hold beliefs that run counter to scientific consensuses are very often just engaging in rational deference—the problem is not in the way they form their beliefs but in the epistemic environment in which they are formed. See also Rini 2017, O’Connor & Weatherall 2019, Nguyen 2020, and Dorst 2023 for importantly different arguments that reach similar conclusions.
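
The evidential claim at the heart of this argument can be illustrated with a toy Bayesian calculation (all numbers are invented): so long as community members are even modestly more likely to believe p when p is true than when it is false, observing widespread belief in p rationally raises one’s credence in p.

```python
# Toy Bayesian illustration of deference: widespread community belief
# that p counts as evidence for p. All probabilities are invented.

prior = 0.5                 # P(p) before consulting one's community
p_belief_if_true = 0.7      # P(community believes p | p is true)
p_belief_if_false = 0.4     # P(community believes p | p is false)

posterior = (p_belief_if_true * prior) / (
    p_belief_if_true * prior + p_belief_if_false * (1 - prior))
print(round(posterior, 3))  # 0.636: credence in p rises above the prior
```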

A striking aspect of Levy’s picture is that those who hold “good” beliefs that are in line with the scientific consensus and those who hold “bad” beliefs that deviate from this consensus typically don’t differ much in terms of their respective methods of belief formation. The latter just happen to be in a less favorable epistemic environment than the former (see Worsnip 2022 for a helpful discussion of this point). Levy’s picture furthermore suggests that the ignorance of those who hold bad beliefs is often non-culpable. A parent who fails to vaccinate their child may not be morally responsible for the damage inflicted because they may be non-culpably ignorant of the balance of risks and benefits. The fault lies not with them but with the unlucky fact that their epistemic environment is polluted.

A key empirical assumption in Levy’s account is that taking on the beliefs of those around you is done solely, or at least mostly, for epistemic reasons. But there is considerable evidence that people often adopt views, especially counternormative ones, that serve as “cultural badges” that identify the members of a moral, political, or ideological community and demarcate the community’s boundaries (Boyd & Richerson 1987; McElreath et al. 2003). These views are held on to tenaciously in the face of strong and persistent counterevidence because the epistemic costs are more than made up by personal, prudential gains. Membership in the group is valuable and deviations from orthodoxy are punished, so the person has strong social incentives to endorse whatever is the “tribal” creed (Williams 2023, 2021; Kahan 2017; Funkhouser 2022).

But notice that at this turn in the discussion, it is not at all clear that we are still talking about the attitude of belief, which was our original topic. The present claim under consideration is that people express support for certain views for prudential reasons: personal gain, social status, avoiding ostracism, and so forth. None of this requires the person actually to believe the relevant claims for which they are expressing support. Recall the original issue that we were aiming to address: When is a person excused from moral responsibility due to ignorance and when are they not excused because they culpably brought about their ignorance? If the claims currently under consideration are correct, then some cases that appear overtly to be ignorance may turn out to be no such thing; they are simply cases of people expressing social support for certain false propositions without actually believing them. Such expressions of social support, it would seem, are not forms of ignorance and are no shield from moral responsibility.

We have considered several alternative explanations for the etiology of ignorance, or related attitudes. It should be emphasized that there is one sense in which they don’t necessarily compete: each explanation might be true of different sets of cases. That is, it might be true that Slote’s explanation invoking cultural inability is applicable to some cases, Levy’s view invoking rational deference is applicable to others, and the views of Moody-Adams and other critics are applicable in still other cases (see Benson 2001, which makes a related point). It is ultimately an empirical question which of the preceding etiological explanations best fits the particular case at hand, if any. It follows that when dealing with complex, real-world cases, as opposed to stipulated hypothetical cases, assessing whether agents exhibit culpable versus non-culpable ignorance cannot generally be accomplished from the armchair alone. Such assessment will additionally require careful empirical inquiry into the etiology of how the relevant agents came to believe what they do.

4. Moral Responsibility and Artificial Agents

Agents, we have been saying, can be morally responsible for what they do. But what is an agent? Conceptual work is certainly needed to answer this question, but empirical work can make key contributions. An empirical field of particular interest is artificial intelligence (AI). As AI gallops forward, questions arise about whether machines of various kinds might be morally responsible for what they do.

4.1 The Agent Model

To get us going in thinking about this issue, a useful starting place is a framework for understanding agency developed in computer science and artificial intelligence called the “agent model” (Russell & Norvig 2020; Sutton & Barto 1998; Haas 2022). A standard formulation of the model considers an agent who interacts with an environment partitioned into a set of states, with various actions available at each state. The agent also has a function that assigns to each state a degree of intrinsic value to the agent; these assignments of intrinsic value capture the agent’s “goals”. Agency involves a loop. In the first step, the agent receives perceptual input regarding the current state and also receives the intrinsic value of that state. Next, the agent makes a selection from the actions available in that state. Based on that action, the agent enters a new state, and the loop starts again. The key task of agency involves learning over time to behave in ways that maximize cumulative, long-term achievement of intrinsic value.
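
A minimal concrete instance of this loop can be sketched as follows (the five-state “corridor” environment, its single valued state, and the random policy are all invented placeholders, not any standard benchmark):

```python
import random

# A toy instance of the agent model: a five-state corridor in which
# reaching state 4 carries intrinsic value 1.0. All details are invented.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # move left or move right

def env_step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    value = 1.0 if next_state == GOAL else 0.0
    return next_state, value, next_state == GOAL  # (state, value, done)

def run_episode(select_action):
    """One pass through the perceive-evaluate-act loop described above."""
    state, done, total = 0, False, 0.0
    while not done:
        action = select_action(state)               # the agent's selection
        state, value, done = env_step(state, action)
        total += value
    return total

# An agent that has learned nothing yet simply acts at random.
print(run_episode(lambda state: random.choice(ACTIONS)))
```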

Much of the excitement in the study of artificial agents is due to the development of a wide assortment of algorithms that explain how agents embedded in the above setup can learn which actions are best (Sutton & Barto 1998; Haas 2022; Solway & Botvinick 2012). These algorithms take a variety of forms. Some learn directly from experience, while others learn a model of the workings of the world and the consequences of the agent’s actions and leverage this information to guide the selection of actions. And though the agent model is relatively spartan in its setup, it has been shown to be extraordinarily powerful. Recent milestones in AI, such as machines that achieve expert-level performance at chess, Go, and Atari video games, have been achieved within the agent model framework (Campbell et al. 2002; Mnih et al. 2015).
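
One of the simplest algorithms of the first, learn-from-experience kind is tabular Q-learning; here is a textbook-style sketch in the spirit of Sutton & Barto 1998, reusing the toy corridor environment from the previous example:

```python
import random

# Tabular Q-learning on the toy corridor: learn Q(state, action), an
# estimate of cumulative long-term value, directly from experience.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)

def env_step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    return next_state, 1.0 if next_state == GOAL else 0.0, next_state == GOAL

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(500):  # episodes of experience
    state, done = 0, False
    while not done:
        # Mostly exploit current value estimates; occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, value, done = env_step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Core update: nudge Q toward observed value plus discounted future.
        Q[(state, action)] += alpha * (value + gamma * best_next - Q[(state, action)])
        state = next_state

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # learned choice at the start: +1
```

A model-based algorithm would instead learn the transition structure implemented by env_step and plan over it; both routes fit within the same agent model framework.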

The agent model might also serve as a tool to probe a space of possible agents, each with varying capacities, some of which might potentially be morally responsible for what they do. Recall the two core conditions for moral responsibility. It seems at least some artificial agents of the type envisioned in the agent model (hereafter just “artificial agent”) might be able to satisfy these two conditions (Nyholm 2018; Menges & Altehenger 2024).

An artificial agent flexibly adjusts its actions based on the current circumstances in order to maximize attainment of its aims. This kind of behavioral flexibility in pursuit of goals is widely thought to be the central element of control (Dennett 1984; Fischer & Ravizza 1998; Shepherd 2014). Moreover, an artificial agent selects its actions based on various kinds of knowledge. For example, it knows what actions are available in the current situation, which of these actions are most likely to achieve its goals, and so forth. Thus, it is at least prima facie plausible that an artificial agent might be able to satisfy moral responsibility’s epistemic condition.

And yet, suppose an Atari game-playing artificial agent is playing the game Space Invaders. It fends off wave after wave of aliens and thus saves the Earth’s cities from destruction. Putting aside the question of moral responsibility in fictional scenarios, it seems deeply implausible that the artificial agent should be considered morally responsible for what it does, or that it should be somehow praised (Nyholm 2022, Chapter 6). So, questions naturally arise about what is missing for moral responsibility.

4.2 What Is Needed for Artificial Agents to Be Morally Responsible for What They Do?

What capacities do ordinary unimpaired adult humans have that are relevant for moral responsibility that seem to be missing in relatively simple artificial agents such as an Atari-playing machine? Here are a few candidates.

4.2.1 Temporally Extended Projects and Plans

The Atari-playing artificial agent has just a single goal that it reaches fairly quickly: win the Atari game. Humans have a much richer, interlocking set of things that they care about—health, success, relationships, cultivating one’s talents, and so forth (Jaworska 2007; Shoemaker 2003). These things can typically be accomplished only through endeavors and projects that unfold over years and even decades. Moral responsibility, one might argue, requires an agent with a rich and complex evaluative point of view and capacities for long-term planning agency (Bratman 2000) to bring about those things the agent cares about.

4.2.2 A Moral Sense and Access to Moral Reasons

The Atari-playing artificial agent has no sense of what is right or wrong, fair or unfair, good or bad, and what kinds of actions manifest vices or virtues. To be morally responsible, one might argue, an agent needs to be outfitted with a moral sense, have access to moral reasons (Wolf 2012; Wallace 1998; Shoemaker 2011), or be able to engage in moral learning (Railton 2017b). Only then can the agent be blamed for failing to do what morality demands.

4.2.3 Consciousness

Suppose a person acts based on motives that are entirely unconscious; they do what they do without awareness of why they are doing it. This is in fact what may happen in, for instance, certain kinds of sleepwalking (Levy 2014). Reflecting on cases such as these, some philosophers have claimed that consciousness is required for moral responsibility (Levy 2014; King & Carruthers 2012).

But why specifically might moral responsibility depend on consciousness? One influential view is that consciousness is required for, or else closely associated with, the widespread sharing of information throughout the agent’s psychology (King & Carruthers 2012; Levy 2014; Nahmias et al. 2020). In the conscious waking state, information is widely broadcast to diverse consumer systems, rendering the agent responsive to a wide range of reasons (Fischer & Ravizza 1998) and rendering their actions reflective of their full evaluative point of view (Wolf 1993). The problem with the sleepwalker, it is argued, is that low-level modules produce their behavior. These modules are unlikely to be flexibly responsive to a broad range of reasons, and the agent’s actions correspondingly won’t (typically) be reflective of their motives and evaluations, thus explaining why the absence of consciousness undermines moral responsibility.

4.2.4 “Ultimate” Control

Our Atari-playing artificial agent comes pre-programmed with its aims, in this case, a single aim to win the game. It learns over time to choose actions that bring it closer and closer to fulfillment of this externally installed aim. More generally, for an artificial agent like the Atari-playing machine, the moral quality of its actions depends strictly on what aims were prespecified. If these aims are morally commendable, the agent will act in the service of the good; if these aims are morally reprehensible, so too will be the agent’s actions.

It might be argued that for genuine moral responsibility, agents cannot simply act on the basis of fixed, externally installed aims. The agents themselves must somehow have control over what they are aiming for, something we might term “ultimate control.”

What ultimate control amounts to is itself controversial. For some theorists, it only requires criticizability and revisability—whatever aims an agent has, there must be ways for the agent to reflectively criticize them (Frankfurt 1971) or revise them (Mele 2001, 2006), or at least change them were they to conflict with certain standards of correctness (Wolf 1980, 1993; Nelkin 2011).

For other theorists, the agent must be the “root” source of their own aims or other elements of their evaluative point of view. For example, one formulation goes like this: To be responsible for what one chooses at some time t, the elements of one’s evaluative point of view that are the basis for what one chooses at t must themselves be the products of the agent’s own prior choices. This formulation appears to set up an infinite regress. Thus, this way of understanding ultimate control may be so demanding that it makes moral responsibility essentially impossible (G. Strawson 1994). But notice that it is impossible irrespective of whether the agent in question is human or artificial.

4.3 The Future of Moral Responsibility

We have identified several gaps that separate a fairly simple artificial agent from typical human agents. Looking to the future, some of these gaps are likely to shrink. For example, there are research programs trying to outfit artificial agents with abilities for long-term planning, leveraging sophisticated causal models of the world (Malinsky & Danks 2018; Steyvers et al. 2003). There is also great interest in building artificial agents that can engage in moral learning (Railton 2017b) and moral reasoning (Awad et al. 2022; Sinnott-Armstrong & Skorburg 2021).

Interestingly, in some cases, gaps between humans and artificial agents may emerge and grow in the opposite direction. That is, artificial agents may exhibit responsibility-relevant abilities that exceed our own. For example, in humans, information sharing is importantly limited by the capacity of working memory (Baddeley 2019; Carruthers 2015; Persuh et al. 2018), which is thought to be limited to roughly seven chunks of information at a time (Miller 1956). In artificial systems, components that play an analogous role to working memory need not be so constrained; dozens, hundreds, or even thousands of pieces of information may be simultaneously activated and available for the purposes of reasoning and inference.

Accounts of moral responsibility have primarily been developed by considering ordinary adult humans as the targets for the theory. In the future, a menagerie of distinctive artificial agents might arise, each outfitted with different sets of abilities and exhibiting different limitations. The targets that a theory of moral responsibility must be sensitive to are likely to expand commensurately. Theories of moral responsibility may need to be amended, refined, or otherwise rethought to accommodate these developments.

Bibliography

  • Amaya, S., 2013, “Slips”, Noûs, 47(3): 559–576. doi:10.1111/j.1468-0068.2011.00838.x
  • Amaya, S., & Doris, J. M., 2015, “No excuses: Performance mistakes in morality”, in J. Clausen & N. Levy (eds.), Handbook of Neuroethics, pp. 253–272, Springer Netherlands.
  • American Psychiatric Association & DSM-5 Task Force, 2013, Diagnostic and statistical manual of mental disorders: DSM-5, American Psychiatric Association.
  • Anderson, B. A., & Kim, H., 2018, “Mechanisms of value-learning in the guidance of spatial attention”, Cognition, 178: 26–36.
  • Awad, E., Levine, S., Anderson, M., Anderson, S. L., Conitzer, V., Crockett, M. J., Everett, J. A., Evgeniou, T., Gopnik, A., & Jamison, J. C., 2022, “Computational ethics”, Trends in Cognitive Sciences, 26(5): 388–405.
  • Baddeley, A., 2019, “Working memory and conscious awareness”, in A.F. Colins, S.E. Gathercole, M.A. Conway, and P.E. Morris (eds.), Theories of memory, pp. 11–28, Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Benson, P., 2001, “Culture and responsibility: A reply to Moody-Adams”, Journal of Social Philosophy, 32(4): 610–620.
  • Bermúdez, J. P., & Massin, O., 2023, “Efforts and their feelings”, Philosophy Compass, 18(1): e12894.
  • Boyd, R., & Richerson, P. J., 1987, “The evolution of ethnic markers”, Cultural Anthropology, 2(1): 65–79.
  • –––, 1988, Culture and the evolutionary process, Chicago: University of Chicago Press.
  • Bradford, G., 2015, Achievement, Oxford: Oxford University Press.
  • Bratman, M. E., 2000, “Reflection, planning, and temporally extended agency”, Philosophical Review, 109(1): 35–61. doi:10.1215/00318108-109-1-35
  • Burdman, F., 2022, “A pluralistic account of degrees of control in addiction”, Philosophical Studies, 179(1): 197–221.
  • Busemeyer, J. R., & Townsend, J. T., 1993, “Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment”, Psychological Review, 100(3): 432–459.
  • Calhoun, C., 1989, “Responsibility and reproach”, Ethics, 99(2): 389–406.
  • Campbell, M., Hoane Jr, A. J., & Hsu, F., 2002, “Deep Blue”, Artificial Intelligence, 134(1–2): 57–83.
  • Caplan, B., 2006, “The economics of Szasz: Preferences, constraints and mental illness”, Rationality and Society, 18(3): 333–366.
  • Carruthers, P., 2015, The centered mind: What the science of working memory shows us about the nature of human thought, Oxford: Oxford University Press.
  • –––, 2018, “Valence and value”, Philosophy and Phenomenological Research, 97(3): 658–680.
  • Chun, M. M., Golomb, J. D., & Turk-Browne, N. B., 2011, “A taxonomy of external and internal attention”, Annual Review of Psychology, 62(1): 73–101.
  • Clarke, R., 2014, Omissions: Agency, metaphysics, and responsibility, Oxford: Oxford University Press.
  • Coates, D. J., & Tognazzini, N. A., 2013, Blame: Its nature and norms, Oxford: Oxford University Press.
  • Daniels, N., 1979, “Wide reflective equilibrium and theory acceptance in ethics”, The Journal of Philosophy, 76(5): 256–282.
  • Dennett, D. C., 1984, Elbow room: The varieties of free will worth wanting, Cambridge: The MIT Press.
  • Dennis, M. L., Foss, M. A., & Scott, C. K., 2007, “An eight-year perspective on the relationship between the duration of abstinence and other aspects of recovery”, Evaluation Review, 31(6): 585–612. doi:10.1177/0193841X07307771
  • Dorst, K., 2023, “Rational polarization”, Philosophical Review, 132(3): 355–458.
  • Failing, M., & Theeuwes, J., 2018, “Selection history: How reward modulates selectivity of visual attention”, Psychonomic Bulletin & Review, 25(2): 514–538.
  • Fecteau, J. H., & Munoz, D. P., 2006, “Salience, relevance, and firing: A priority map for target selection”, Trends in Cognitive Sciences, 10(8): 382–390.
  • Fischer, J. M., 2012, “Semicompatibilism and its rivals”, The Journal of Ethics, 16(2): 117–143. doi:10.1007/s10892-012-9123-9
  • Fischer, J. M., & Ravizza, M., 1998, Responsibility and control: A theory of moral responsibility, Cambridge: Cambridge University Press.
  • Flanagan, O., 2013, “The shame of addiction”, Frontiers in Psychiatry, 4: 120.
  • Foddy, B., & Savulescu, J., 2007, “Addiction is not an affliction: Addictive desires are merely pleasure-oriented desires”, The American Journal of Bioethics, 7(1): 29–32.
  • –––, 2010, “A liberal account of addiction”, Philosophy, Psychiatry, & Psychology: PPP, 17(1): 1–22. doi:10.1353/ppp.0.0282
  • Forstmann, B. U., Ratcliff, R., & Wagenmakers, E.-J., 2016, “Sequential sampling models in cognitive neuroscience: Advantages, applications, and extensions”, Annual Review of Psychology, 67: 641–666.
  • Frankfurt, H., 1971, “Freedom of the will and the concept of a person”, The Journal of Philosophy, 68(1): 5–20. doi:10.2307/2024717
  • Funkhouser, E., 2022, “A tribal mind: Beliefs that signal group identity or commitment”, Mind & Language, 37(3): 444–464.
  • Glackin, S. N., Roberts, T., & Krueger, J., 2021, “Out of our heads: Addiction and psychiatric externalism”, Behavioural Brain Research, 398: 112936.
  • Griffiths, M., 2005, “A ‘components’ model of addiction within a biopsychosocial framework”, Journal of Substance Use, 10(4): 191–197.
  • Haas, J., 2022, “Reinforcement learning: A brief guide for philosophers of mind”, Philosophy Compass, e12865. doi:10.1111/phc3.12865
  • –––, forthcoming, “The evaluative mind”, in Mind Design III, Cambridge: MIT Press. [Haas forthcoming available online]
  • Hart, C., 2013, High price: A neuroscientist’s journey of self-discovery that challenges everything you know about drugs and society, London: Penguin Books.
  • Hart, C., Haney, M., Foltin, R. W., & Fischman, M. W., 2000, “Alternative reinforcers differentially modify cocaine self-administration by humans”, Behavioural Pharmacology, 11(1): 87–91.
  • Heather, N., 1998, “A conceptual framework for explaining drug addiction”, Journal of Psychopharmacology, 12(1): 3–7.
  • –––, 2020, “The concept of akrasia as the foundation for a dual systems theory of addiction”, Behavioural Brain Research, 390: 112666.
  • Henden, E., Melberg, H.-O., & Rogeberg, O., 2013, “Addiction: Choice or compulsion?” Frontiers in Psychiatry, 4. doi:10.3389/fpsyt.2013.00077
  • Henrich, J., 2016, The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter, Princeton: Princeton University Press.
  • Heyman, G. M., 2009, Addiction: A disorder of choice (Reprint edition), Cambridge: Harvard University Press.
  • Higgins, S. T., & Petry, N. M., 1999, “Contingency management. Incentives for sobriety”, Alcohol Research & Health: The Journal of the National Institute on Alcohol Abuse and Alcoholism, 23(2): 122–127.
  • Hofmann, W., Baumeister, R. F., Förster, G., & Vohs, K. D., 2012, “Everyday temptations: An experience sampling study of desire, conflict, and self-control”, Journal of Personality and Social Psychology, 102(6): 1318–1335.
  • Hofmann, W., Vohs, K. D., & Baumeister, R. F., 2012, “What people desire, feel conflicted about, and try to resist in everyday life”, Psychological Science, 23(6): 582–588.
  • Holton, R., & Berridge, K., 2013, “Addiction between compulsion and choice”, in N. Levy (ed.), Addiction and self-control: Perspectives from philosophy, psychology, and neuroscience, pp. 239–268, New York: Oxford University Press.
  • Hunt, W. A., Barnett, L. W., & Branch, L. G., 1971, “Relapse rates in addiction programs”, Journal of Clinical Psychology, 27(4): 455–456. doi:10.1002/1097-4679(197110)27:4<455::AID-JCLP2270270412>3.0.CO;2-R
  • Husak, D. N., 1992, Drugs and rights, Cambridge: Cambridge University Press.
  • James, W., 1890, Principles of psychology, New York: Henry Holt & Company.
  • Jaworska, A., 2007, “Caring and internality”, Philosophy and Phenomenological Research, 74(3): 529–568. doi:10.1111/j.1933-1592.2007.00039.x
  • Kennett, J., 2013, “Addiction, choice, and disease: How voluntary is voluntary action in addiction?” in N. Vincent (ed.), Neuroscience and Legal Responsibility, pp. 257–278, Oxford: Oxford University Press.
  • King, M., & Carruthers, P., 2012, “Moral responsibility and consciousness”, Journal of Moral Philosophy, 9(2): 200–228. doi:10.1163/174552412X625682
  • Kirshenbaum, A. P., Olsen, D. M., & Bickel, W. K., 2009, “A quantitative review of the ubiquitous relapse curve”, Journal of Substance Abuse Treatment, 36(1): 8–17. doi:10.1016/j.jsat.2008.04.001
  • Kiyonaga, A., & Egner, T., 2013, “Working memory as internal attention: Toward an integrative account of internal and external selection processes”, Psychonomic Bulletin & Review, 20: 228–242.
  • Levy, N., 2006, “Addiction, autonomy and ego-depletion: A response to Bennett Foddy and Julian Savulescu”, Bioethics, 20(1): 16–20. doi:10.1111/j.1467-8519.2006.00471.x
  • –––, 2014, Consciousness and moral responsibility, New York: Oxford University Press.
  • –––, 2021, Bad beliefs: Why they happen to good people, Oxford: Oxford University Press. doi:10.1093/oso/9780192895325.001.0001
  • Lewis, R. L., Howes, A., & Singh, S., 2014, “Computational rationality: Linking mechanism and behavior through bounded utility maximization”, Topics in Cognitive Science, 6(2): 279–311.
  • Lieder, F., & Griffiths, T. L., 2020, “Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources”, Behavioral and Brain Sciences, 43: e1.
  • Malinsky, D., & Danks, D., 2018, “Causal discovery algorithms: A practical guide”, Philosophy Compass, 13(1): e12470.
  • Mallon, R., 2016, “Experimental philosophy”, in Herman Cappelen, Tamar Gendler, & John Hawthorne (eds.), Oxford Handbook of Philosophical Methodology, pp. 410–433, Oxford: Oxford University Press.
  • McElreath, R., Boyd, R., & Richerson, P., 2003, “Shared norms and the evolution of ethnic markers”, Current Anthropology, 44(1): 122–130.
  • McLellan, A. T., Lewis, D. C., O’Brien, C. P., & Kleber, H. D., 2000, “Drug dependence, a chronic medical illness: Implications for treatment, insurance, and outcomes evaluation”, JAMA, 284(13): 1689–1695. doi:10.1001/jama.284.13.1689
  • Mele, A., 1990, “Irresistible desires”, Noûs, 24(3): 455–472. doi:10.2307/2215775
  • –––, 2001, Autonomous agents: From self-control to autonomy, New York: Oxford University Press.
  • –––, 2006, Free will and luck, New York: Oxford University Press.
  • Menges, L., & Altehenger, A., 2024, “The point of blaming AI systems”, Journal of Ethics and Social Philosophy, 27(2): 287–314.
  • Milgram, S., 1965, “Some conditions of obedience and disobedience to authority”, Human Relations, 18(1): 57–76.
  • Miller, G. A., 1956, “The magical number seven, plus or minus two: Some limits on our capacity for processing information”, Psychological Review, 63(2): 81–97.
  • Mills, C., 2007, “White ignorance”, in S. Sullivan & N. Tuana (eds.), Race and Epistemologies of Ignorance, pp. 11–38, Albany: SUNY Press.
  • Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., & Ostrovski, G., 2015, “Human-level control through deep reinforcement learning”, Nature, 518(7540): 529–533.
  • Moody-Adams, M. M., 1994, “Culture, responsibility, and affected ignorance”, Ethics, 104(2): 291–309.
  • Morse, S. J., 2000, “Hooked on hype: Addiction and responsibility”, Law and Philosophy, 19: 3–49.
  • –––, 2002, “Uncontrollable urges and irrational people”, Virginia Law Review, 88: 1025–1078.
  • Murray, S., & Vargas, M., 2020, “Vigilance and control”, Philosophical Studies, 177: 825–843.
  • Nahmias, E., Allen, C. H., & Loveall, B., 2020, “When do robots have free will? Exploring the relationships between (attributions of) consciousness and free will”, in B. Feltz, M. Missal, & A. Sims (eds.), Free Will, Causality, and Neuroscience, pp. 57–80, Leiden: Brill.
  • Nelkin, D. K., 2011, Making sense of freedom and responsibility, Oxford: Oxford University Press.
  • Nelkin, D. K., & Rickless, S. C., 2017, “Moral responsibility for unwitting omissions: A new tracing view”, in D. K. Nelkin & S. C. Rickless (eds.), The Ethics and Law of Omissions, pp. 106–129, New York: Oxford University Press.
  • Nguyen, C. T., 2020, “Echo chambers and epistemic bubbles”, Episteme: A Journal of Individual and Social Epistemology, 17(2): 141–161.
  • Nyholm, S., 2018, “Attributing agency to automated systems: Reflections on human–robot collaborations and responsibility-loci”, Science and Engineering Ethics, 24(4): 1201–1219.
  • –––, 2022, This is technology ethics: An introduction, Hoboken, NJ: John Wiley & Sons.
  • O’Connor, C., & Weatherall, J. O., 2019, The misinformation age: How false beliefs spread, New Haven: Yale University Press.
  • Persuh, M., LaRock, E., & Berger, J., 2018, “Working memory and consciousness: The current state of play”, Frontiers in Human Neuroscience, 12: 78.
  • Pickard, H., 2012, “The purpose in chronic addiction”, AJOB Neuroscience, 3(2): 40–49. doi:10.1080/21507740.2012.663058
  • –––, 2015, “Psychopathology and the ability to do otherwise”, Philosophy and Phenomenological Research, 90(1): 135–163. doi:10.1111/phpr.12025
  • –––, 2016, “Denial in addiction”, Mind & Language, 31(3): 277–299.
  • Preston, K. L., Vahabzadeh, M., Schmittner, J., Lin, J.-L., Gorelick, D. A., & Epstein, D. H., 2009, “Cocaine craving and use during daily life”, Psychopharmacology, 207(2): 291. doi:10.1007/s00213-009-1655-8
  • Quine, W. V., 1957, “The scope and language of science”, The British Journal for the Philosophy of Science, 8(29): 1–17.
  • Railton, P., 2017a, “At the core of our capacity to act for a reason: The affective system and evaluative model-based learning and control”, Emotion Review, 9(4): 335–342.
  • –––, 2017b, “Moral learning: Conceptual foundations and normative relevance”, Cognition, 167: 172–190.
  • Rangel, A., Camerer, C., & Montague, P. R., 2008, “A framework for studying the neurobiology of value-based decision making”, Nature Reviews Neuroscience, 9: 545–556. doi:10.1038/nrn2357
  • Rangel, A., & Hare, T., 2010, “Neural computations associated with goal-directed choice”, Current Opinion in Neurobiology, 20(2): 262–270.
  • Ratcliff, R., & McKoon, G., 2008, “The diffusion decision model: Theory and data for two-choice decision tasks”, Neural Computation, 20(4): 873–922. doi:10.1162/neco.2008.12-06-420
  • Rini, R., 2017, “Fake news and partisan epistemology”, Kennedy Institute of Ethics Journal, 27(2): E-43–E-64.
  • Rosen, G., 2004, “Skepticism about moral responsibility”, Philosophical Perspectives, 18: 295–313.
  • Russell, S., & Norvig, P., 2020, Artificial intelligence: A modern approach, 4th Edition, Boston: Pearson Education, Inc.
  • Scanlon, T., 1998, What We Owe to Each Other, Cambridge: Harvard University Press.
  • Shepherd, J., 2014, “The contours of control”, Philosophical Studies, 170: 395–411.
  • Sher, G., 2006, “Out of control”, Ethics, 116(2): 285–301.
  • –––, 2009, Who knew?: Responsibility without awareness, Oxford: Oxford University Press.
  • Shoemaker, D., 2003, “Caring, identification, and agency”, Ethics, 114(1): 88–118. doi:10.1086/376718
  • –––, 2011, “Psychopathy, responsibility, and the moral/conventional distinction”, The Southern Journal of Philosophy, 49: 99–124.
  • Simon, H. A., 1990, “Bounded rationality”, in J. Eatwell, M. Milgate, & P. Newman (eds.), Utility and Probability, pp. 15–18, London: Palgrave Macmillan.
  • Sinnott-Armstrong, W., & Pickard, H., 2013, “What is addiction?” in K.W.M. Fulford et al. (eds.), The Oxford Handbook of Philosophy and Psychiatry, pp. 851–864, Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199579563.013.0050
  • Sinnott-Armstrong, W., & Skorburg, J. A., 2021, “How AI can AID bioethics”, Journal of Practical Ethics, 9(1). doi:10.3998/jpe.1175
  • Slote, M., 1982, “Is virtue possible?” Analysis, 42(2): 70–76.
  • Smith, A., 2005, “Responsibility for attitudes: Activity and passivity in mental life”, Ethics, 115: 236–271.
  • –––, 2008, “Control, responsibility, and moral assessment”, Philosophical Studies, 138: 367–392.
  • Smith, H., 1983, “Culpable ignorance”, The Philosophical Review, 92(4): 543–571.
  • Solway, A., & Botvinick, M. M., 2012, “Goal-directed decision making as probabilistic inference: A computational framework and potential neural correlates”, Psychological Review, 119(1): 120–154.
  • Sripada, C., 2021, “Impaired control in addiction involves cognitive distortions and unreliable self-control, not compulsive desires and overwhelmed self-control”, Behavioural Brain Research, 418: 113639.
  • –––, 2022, “Loss of control in addiction: The search for an adequate theory and the case for intellectual humility”, in M. Vargas & J. M. Doris (eds.), Oxford Handbook of Moral Psychology, Oxford: Oxford University Press.
  • –––, 2025, “The valuationist model of human agent architecture”, Philosophical Psychology, first online 06 April 2025. doi:10.1080/09515089.2025.2485323
  • Steyvers, M., Tenenbaum, J. B., Wagenmakers, E.-J., & Blum, B., 2003, “Inferring causal networks from observations and interventions”, Cognitive Science, 27(3): 453–489.
  • Stich, S., 1996, Deconstructing the mind, New York: Oxford University Press.
  • Strawson, G., 1994, “The impossibility of moral responsibility”, Philosophical Studies, 75: 5–24.
  • Strawson, P., 1962, “Freedom and resentment”, Proceedings of the British Academy, 48: 187–211.
  • Sussman, S., & Sussman, A. N., 2011, “Considering the definition of addiction”, International Journal of Environmental Research and Public Health, 8(10): 4025–4038.
  • Sutton, R. S., & Barto, A. G., 1998, Reinforcement learning: An introduction, Cambridge: MIT Press.
  • Theeuwes, J., Bogaerts, L., & van Moorselaar, D., 2022, “What to expect where and when: How statistical learning drives visual selection”, Trends in Cognitive Sciences, 26(10): 860–872.
  • Vargas, M., 2013, Building better beings: A theory of moral responsibility, Oxford: Oxford University Press.
  • Wallace, R. J., 1998, Responsibility and the moral sentiments, Cambridge: Harvard University Press.
  • –––, 1999, “Addiction as defect of the will: Some philosophical reflections”, Law and Philosophy, 18(6): 621–654. doi:10.1023/A:1006315614953
  • Watson, G., 1999, “Disordered appetites: Addiction, compulsion, and dependence”, in J. Elster (ed.), Addiction: Entries and exits, pp. 3–28, New York: Russell Sage Foundation.
  • Watzl, S., 2017, Structuring mind: The nature of attention and how it shapes consciousness, Oxford: Oxford University Press.
  • Weatherson, B., 2019, Normative externalism, Oxford: Oxford University Press.
  • Wieland, J. W., & Robichaud, P. (eds.), 2017, Responsibility: The epistemic condition, Oxford: Oxford University Press.
  • Williams, D., 2021, “Socially adaptive belief”, Mind & Language, 36(3): 333–354.
  • –––, 2023, “Bad beliefs: Why they happen to highly intelligent, vigilant, devious, self-deceiving, coalitional apes”, Philosophical Psychology, 36(4): 819–833.
  • Williamson, T., 2007, The philosophy of philosophy, Oxford: Blackwell Publishing.
  • Wolf, S., 1980, “Asymmetrical freedom”, The Journal of Philosophy, 77(3): 151–166. doi:10.2307/2025667
  • –––, 1993, Freedom within reason, Oxford: Oxford University Press.
  • –––, 2012, “Sanity and the metaphysics of responsibility”, in R. Shafer-Landau (ed.), Ethical Theory: An Anthology, New York: John Wiley & Sons.
  • Worsnip, A., 2022, “Review of Bad beliefs: Why they happen to good people”, Notre Dame Philosophical Reviews, 2022.11.02. [Worsnip 2022 available online]
  • Wu, W., 2023, Movements of the mind: A theory of attention, intention and action. Oxford: Oxford University Press.
  • Zelinsky, G. J., & Bisley, J. W., 2015, “The what, where, and why of priority maps and their interactions with visual working memory”, Annals of the New York Academy of Sciences, 1339(1): 154–164.
  • Zimmerman, M. J., 1997, “Moral responsibility and ignorance”, Ethics, 107(3): 410–426.

Other Internet Resources

[Please contact the author with suggestions.]

Copyright © 2025 by
Chandra Sripada <sripada@umich.edu>
