## Notes to Bayesian Epistemology

1. For enumerative induction, see Fitelson (2006) and section 3.2.1 of the entry on interpretations of probability. For Ockham’s razor, see Rosenkrantz (1983: sec. 3) and Sprenger & Hartmann (2019: ch. 10). For inference to the best explanation, see sections 3.1 and 4 of the entry on abduction. For statistical inference, see section 4 of the entry on philosophy of statistics. For causal inference, see Howson & Urbach (2006: sec. 8.e) and Heckerman (1996 [2008]). For Bayesian replies to Hume’s argument for inductive skepticism (the view that there is no good argument for any kind of induction), see section 3.2.2 of the entry on the problem of induction. For Bayesians’ contributions to the controversies about predictivism (the thesis that predictions are superior to accommodations in the assessment of scientific theories), see sections 5, 6.1, and 7 of the entry on prediction versus accommodation. There is also the Duhem-Quine problem (which concerns when a body of evidence tells against a theory rather than one of its auxiliary hypotheses, as explained in sections 1 and 2 of the entry on underdetermination of scientific theory); for Bayesians’ attempts to solve that problem, see Dorling (1979), Earman (1992: sec. 3.7), Strevens (2001), Fitelson & Waterman (2005), and the survey by Ivanova (2021: ch. 4).

2. A less general but quite common setting also requires that $$\cal A$$ be closed under countable unions; in that case, $$\cal A$$ is called a σ-algebra.

3. If the Ratio Formula is taken as a definition, we don’t really need to assume that it holds—it does automatically. But, as mentioned earlier, it is debatable whether the Ratio Formula should be taken as a definition or a normative constraint.

4. Argue as follows that $$c=0$$. If $$c > 0$$, then by finite additivity the credence of the disjunction of sufficiently many of the $$A_n$$’s equals a large multiple of $$c$$ and so exceeds $$1$$, which violates Probabilism. If $$c < 0$$, Probabilism is violated immediately, since credences must be nonnegative. So, by Probabilism, the only remaining option is $$c = 0$$.
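The argument in note 4 can be illustrated numerically. The sketch below (with an arbitrarily chosen value of $$c$$, not from the source) shows that if every proposition in a countably infinite collection received the same positive credence $$c$$, finite additivity would force the credence of a long enough finite disjunction above $$1$$:

```python
# Illustrative sketch of the argument in note 4 (the value of c is hypothetical).
# Suppose each A_n in a countably infinite collection of mutually exclusive
# propositions received the same credence c > 0.
c = 0.001

# By finite additivity, the credence of the disjunction of the first n of the
# A_n's is n * c, which exceeds 1 once n > 1/c.
n = 1001
disjunction_credence = n * c

# This violates Probabilism, which caps credences at 1.
assert disjunction_credence > 1
```

Since the same overshoot occurs for any positive $$c$$ (just take $$n > 1/c$$), and negative credences are ruled out directly, only $$c = 0$$ remains.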

5. The theory of regular conditional probability actually involves more ideas than presented in the main text, because this theory is also designed to generalize an equivalent form of the Ratio Formula. More specifically, the Ratio Formula is equivalent to the following formula (assuming Probabilism):

$\Cr(A \cap B) = \sum_{k = 1}^n \Cr(A \mid B_k) \, \Cr(B_k),$

where $$\Cr(B_k)$$ is nonzero for each $$k$$ and propositions $$B_1,$$…, $$B_n$$ form a partition of $$B$$ (i.e., $$B_1,$$…, $$B_n$$ are mutually exclusive and their union is $$B$$). The above equivalent form of the Ratio Formula has a generalized version, which is important for many applications in probability theory and statistics. The generalization replaces the sum with an integral, allows $$\Cr(B_k)$$ to be zero, and hence makes use of conditionalization on a zero-credence proposition. It is this generalization that the theory of regular conditional probability is partly designed for. See Rescorla (2015) for an accessible presentation.
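The finite (sum) form of the formula in note 5 can be checked on a toy credence function. In the sketch below, propositions are modeled as sets of worlds, the credence numbers are hypothetical, and the conditional credence is computed by the Ratio Formula:

```python
from fractions import Fraction as F

# Toy credence function over a four-world space (the numbers are hypothetical).
world_credence = {1: F(1, 4), 2: F(1, 4), 3: F(1, 3), 4: F(1, 6)}

def cr(prop):
    """Credence of a proposition, modeled as a set of worlds."""
    return sum(world_credence[w] for w in prop)

def cr_given(a, b):
    """Ratio Formula: Cr(A | B) = Cr(A ∩ B) / Cr(B), assuming Cr(B) > 0."""
    return cr(a & b) / cr(b)

A = {1, 3}
B = {1, 2, 3}
B1, B2 = {1, 2}, {3}  # B1, B2 are mutually exclusive and their union is B

# Cr(A ∩ B) equals the sum of Cr(A | B_k) * Cr(B_k) over the partition of B.
lhs = cr(A & B)
rhs = cr_given(A, B1) * cr(B1) + cr_given(A, B2) * cr(B2)
assert lhs == rhs
```

Exact fractions are used so the two sides agree exactly rather than up to floating-point error.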

6. If a continuum of hypotheses is considered instead, as is often the case in statistical applications, then the summation $$\sum$$ will have to be replaced by an integral $$\int$$.
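The replacement of $$\sum$$ by $$\int$$ mentioned in note 6 can be sketched with a grid approximation. The example below (the prior, data, and grid size are all illustrative choices, not from the source) approximates the integral in the denominator of Bayes' theorem for a continuum of bias hypotheses $$\theta \in (0,1)$$:

```python
import math

# Hypotheses form a continuum: a coin-bias parameter θ in (0, 1),
# approximated here by a fine midpoint grid.
N = 10_000
thetas = [(i + 0.5) / N for i in range(N)]

def prior(theta):
    """Uniform prior density over (0, 1) — an illustrative choice."""
    return 1.0

def likelihood(theta):
    """Likelihood of hypothetical data: 7 heads in 10 tosses."""
    return math.comb(10, 7) * theta**7 * (1 - theta)**3

# Denominator of Bayes' theorem: the sum over hypotheses becomes the
# integral ∫ likelihood(θ) prior(θ) dθ, here approximated by a Riemann sum.
evidence = sum(likelihood(t) * prior(t) for t in thetas) / N

def posterior(theta):
    """Posterior density at θ, by Bayes' theorem."""
    return likelihood(theta) * prior(theta) / evidence

# Sanity check: the posterior density integrates to (approximately) 1.
total = sum(posterior(t) for t in thetas) / N
assert abs(total - 1.0) < 1e-6
```

With a uniform prior this is the beta-binomial setup, so the approximated evidence should be close to its exact value of $$1/11$$.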

7. Forster and Sober do not just object to the Bayesian approach to Ockham’s razor in statistical model selection. In fact, they also develop their own positive, non-Bayesian view (Forster & Sober 1994), which has been criticized by some Bayesians (Sprenger & Hartmann 2019: ch. 10).

8. While the above presents some worries to the effect that merging-of-opinions theorems are too weak to get the job done for subjective Bayesians, Belot (2013) raises the worry that they are too strong—and too strong for Bayesians in general, not just for subjective Bayesians. He argues that those theorems (as well as some other Bayesian convergence theorems) require one to be somewhat arrogant: fully confident that one is good at certain things, even though one is typically not that good (in a sense of typicality that Belot defines).

9. Interestingly, although Harper’s work (1976, 1978) and Levi’s work (1980: ch. 1–4) on change of certainties belong to Bayesian epistemology, those works actually made an important contribution to the creation of another area of formal epistemology, called belief revision theory. Namely, Harper’s and Levi’s axioms for change of certainties were first reinterpreted as axioms for change of all-or-nothing beliefs, and then adopted as the standard axioms in belief revision theory. Those axioms include, for example, the axioms now known as the AGM axioms and the Levi identity. For the relevant history, see section 1.1 of the entry on logic of belief revision.

10. Although this idea is attributed by Easwaran (2014: sec. 2.4) to Hájek (2003), it can be found only in the former paper, not in the latter.