#### Supplement to Truthlikeness

## The compatibility of the approaches

The three different approaches to the logical problem of truthlikeness are motivated by somewhat different desiderata. An interesting question thus arises as to whether the three approaches are compatible. If they are compatible, then the different desiderata that motivate them might be accommodated in one happy hybrid.

Consider, for example, Hilpinen’s proposal, which is typically located
within the likeness approach. Interestingly, Hilpinen himself thought
of his proposal as a refined and improved articulation of Popper’s
content approach. Popper’s *truth factor* Hilpinen identified
with that world, in the range of a proposition, closest to the actual
world. Popper’s *content* or *information factor* he
identified with that world, in the range of a proposition, furthest
from the actual world. An improvement in truthlikeness involves an
improvement in either the truth factor or the information factor. His
proposal clearly departs from Popper’s in as much as it incorporates
likeness into both of the determining factors but Hilpinen was also
attempting to capture, in some way or other, Popper’s penchant for
content as well as truth. And his account achieves a good deal of
that. In particular his proposal delivers a weak version of the value
of content for truths: namely, that of two truths the logically
stronger cannot be further from the truth than the logically weaker.
It fails, however, to deliver the stronger principle of the value of
content for truths: that the logically stronger of two truths is
closer to the truth.

To answer the compatibility question we need precise characterizations
of the approaches. Zwart (2001) characterized the approaches in terms
of that proposition they judge to be furthest from the truth. Suppose
that \(z\) is the world furthest from the actual world, and let
\(Z\) be a proposition that is true only in \(z\). On all
likeness approaches \(Z\) is the proposition that is furthest from
the truth. Call this principle *Worst*. *Worst* is at least
a necessary condition for a theory to lie within the likeness
approach, though it seems insufficient. Content theorists judge
theories more or less good in terms of two factors: truth value and
content. So the worst theory will have to be false. And presumably it
will also be weak. Consider \({\sim}T\), the negation of the truth. It
is both false, and it is the logically weakest falsehood. So,
according to Zwart (2001), content theorists ought to judge \({\sim}T\) to
be the worst theory on offer. Call this principle *Weakest*.
*Weakest* assumes something like the separability of content and
truth factors in the evaluation of truthlikeness. While that captures
Miller’s and Kuipers’s symmetric difference accounts, it would banish
from the content-based fold those accounts that judge false theories
to be worse the logically stronger they are.

Zwart and Franssen (2007) adopted somewhat stronger characterizations of
the approaches. Their characterization of the content approach is
essentially that it encompass the Simple Truth Content account:
*viz.*, that \(A\) is as close to the truth as \(B\) if
\(A\) entails all of \(B\)’s truth content, and \(A\)
is closer to the truth than \(B\) just in case \(A\) is at
least as close as \(B\), and \(B\) is not at least as close
as \(A\). This guarantees that any articulation of the content
approach will embody the value of content for truths, but it goes
somewhat further as we saw above, guaranteeing the value of content
for falsehoods as well. (It is thus much stronger than
*Weakest*.)

Their characterization of the likeness approach is that it deliver all
the judgments delivered by Hilpinen’s proposal. (This is clearly much
stronger than *Worst*.)

With these characterizations in hand Zwart and Franssen go on to show that Arrow’s famous theorem in social choice theory can be applied to obtain a surprising general result about truthlikeness orderings: that there is a precise sense in which there can be no compromise between the content and likeness approaches, that any apparent compromise effectively capitulates to one paradigm or the other. (Given their characterization of the two approaches, Hilpinen’s apparent compromise is deemed to err on the side of the likeness approach.)

This theorem represents an interesting new development in the truthlikeness debate. As already noted, much of the debate has been conducted on the battlefield of intuition, with protagonists from different camps firing off cases which appear to refute their opponent’s definition while confirming their own. The Zwart-Franssen-Arrow theorem is not only an interesting result in itself, but it represents an innovative and welcome development in the debate, since most of the theorizing has lacked this kind of theoretical generality.

One problem with this incompatibility result lies in Zwart and Franssen’s characterization of the two approaches. If delivering all the judgments that are delivered by the Simple Truth Content account is a necessary condition for a proposal to be welcomed in the content camp, then while the symmetric difference proposals of Miller and Kuipers are ruled in, Popper’s original proposal is ruled out. Further, if delivering all the judgments delivered by Hilpinen’s proposal is stipulated to be a necessary condition for any likeness account, then Tichý’s averaging account is ruled out of the likeness camp. So both characterizations appear to be too narrow. They rule out what are perhaps the central paradigms of the two different approaches.

A rather more liberal characterization of the content approach would count in any proposal that guarantees the value of content for truths. That, at least, was Popper’s litmus test for acceptability and what primarily motivated his original proposal. A more inclusive characterization of the likeness approach would count in any proposal that makes truthlikeness supervene on a measure or ordering of likeness on worlds.

On these more inclusive characterizations, Popper’s theory qualifies as a content account; Tichý’s theory qualifies as a likeness account. And that is as it should be. Further, Hilpinen’s theory falls within the likeness approach, but fails to qualify as a genuine content account. It does not deliver the full value of content for truths. So on these characterizations Hilpinen’s account is not a genuine hybrid.

As we have seen, one shortcoming which Hilpinen’s proposal shares with
Popper’s original proposal is the absolute worthlessness of all
falsehoods: that no falsehood is closer to the truth than any truth
(even the worthless tautology). This defect of Hilpinen’s qualitative
proposal can be remedied by assuming quantitative distances between
worlds, and letting \(A\)’s distance from the truth be a weighted
average of the distance of the closest world in \(A\) from the
actual world, and the distance of the furthest world in \(A\)
from the actual world. This quantitative version (call it
*min-max-average*) of Hilpinen’s account renders all
propositions comparable for truthlikeness, and some falsehoods it
deems more truthlike than some truths.
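This quantitative proposal can be sketched in a few lines, representing each world in a proposition simply by an assumed numerical distance from the actual world. The particular distances and the equal weighting of the two factors below are illustrative assumptions, not part of Hilpinen’s own qualitative account:

```python
def min_max_average(distances, weight=0.5):
    """min-max-average distance of a proposition from the truth: a weighted
    average of the closest and the furthest of its worlds' distances from
    the actual world (0.0 = the actual world itself)."""
    return weight * min(distances) + (1 - weight) * max(distances)

# A "near miss": a false proposition whose only world is very close to actuality.
near_miss = [0.1]               # false: does not contain the actual world
# A weak truth: contains the actual world but also a maximally distant world.
weak_truth = [0.0, 1.0]         # true: contains the actual world

print(min_max_average(near_miss))    # 0.1
print(min_max_average(weak_truth))   # 0.5, so this falsehood beats this truth
```

On these assumed distances the near-miss falsehood comes out closer to the truth than the weak truth, illustrating how the quantitative version escapes the absolute worthlessness of falsehoods.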

Although *min-max-average* falls within the likeness approach
broadly characterized, it too fails to deliver the value of content
for truths. So it does not qualify as a content ordering either.
Moreover, it is not entirely satisfactory from a likeness perspective
either, despite satisfying the rather weak likeness constraint that
truthlikeness supervene on likeness. To illustrate this, let
\(A\) be a true proposition with a number of worlds tightly
clustered around the actual world \(a\). Let \(B\) be a
false proposition with a number of worlds tightly clustered around a
world \(z\) maximally distant from actuality. \(A\) is
highly truthlike, and \(B\) highly untruthlike and
*min-max-average* agrees. But now let \(B+\) be \(B\)
plus \(a\), and let \(A+\) be \(A\) plus \(z\).
Considerations of both continuity and likeness suggest that
\(A+\) should be much more truthlike than \(B+\), but they
are deemed equally truthlike by *min-max-average*.
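The counterexample can be checked numerically. In this sketch the particular distances (worlds near \(a\) at 0.1, worlds near \(z\) at 0.9, \(z\) itself at 1) and the equal weighting of the two factors are illustrative assumptions:

```python
def min_max_average(distances, weight=0.5):
    """min-max-average distance from the truth: a weighted average of the
    closest and furthest worlds' distances from the actual world."""
    return weight * min(distances) + (1 - weight) * max(distances)

# Assumed distances from the actual world a (distance 0.0); z is maximally
# distant (distance 1.0).
A = [0.0, 0.1, 0.1, 0.1]        # true, tightly clustered around a
B = [0.9, 0.9, 0.9, 1.0]        # false, tightly clustered around z

A_plus = A + [1.0]              # A plus the maximally distant world z
B_plus = B + [0.0]              # B plus the actual world a

print(min_max_average(A))       # 0.05: highly truthlike
print(min_max_average(B))       # 0.95: highly untruthlike
print(min_max_average(A_plus))  # 0.5
print(min_max_average(B_plus))  # 0.5: deemed exactly as truthlike as A_plus
```

Once each proposition contains both the closest and the furthest world, *min-max-average* is blind to everything in between, which is why \(A+\) and \(B+\) tie.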

Part of the problem with the *min-max-average* proposal is that the
furthest world in a proposition is, as noted above, a very crude
estimator of overall content. It is precisely for this reason that
Niiniluoto suggests a different content measure: the (normalized)
*sum* of the distances of worlds in \(A\) from the actual
world. As we have seen, *sum* is not itself a good measure of
distance of a proposition from the truth. However, formally,
*sum* is a probability measure, and hence a measure of a kind
of logical weakness. But *sum* is also a content-likeness
hybrid, rendering a proposition more contentful the closer its worlds
are to actuality. Being genuinely sensitive to size, *sum* is
clearly a better measure of logical weakness than the world furthest
from actuality. Hence Niiniluoto proposes a weighted average of the
closest world (the truth factor) and *sum* (the information
factor).

Niiniluoto’s measure, *min-sum-average*, ranks a
*tautology*, \(B+\) and \(A+\) in order of increasing
closeness to the truth. *min-sum-average* also delivers the
value of content for truths: if \(A\) is true and is logically
stronger than \(B\) then both have the same truth factor (0), but
since the range of \(B\) contains more worlds, its *sum*
will be greater, making it further from the truth. So
*min-sum-average* falls within the content approach on this
characterization. On the other hand, *min-sum-average* also
seems to fall within the likeness camp, since it deems truthlikeness
to be a non-trivial function of the likenesses of worlds, in the range
of a proposition, to the actual world.
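A minimal sketch of *min-sum-average*, with illustrative distances and an assumed equal weighting of the truth and information factors, reproduces this ranking:

```python
def min_sum_average(distances, all_distances, gamma=0.5, gamma_prime=0.5):
    """Niiniluoto's min-sum-average: a weighted average of the truth factor
    (distance of the closest world) and the information factor (the
    normalized sum of the proposition's worlds' distances)."""
    truth_factor = min(distances)
    info_factor = sum(distances) / sum(all_distances)   # normalized sum
    return gamma * truth_factor + gamma_prime * info_factor

# Illustrative model: the actual world a (0.0), three worlds near a (0.1
# each), three worlds near the maximally distant z (0.9 each), and z (1.0).
W = [0.0, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 1.0]
A_plus = [0.0, 0.1, 0.1, 0.1, 1.0]    # A (clustered near a) plus z
B_plus = [0.9, 0.9, 0.9, 1.0, 0.0]    # B (clustered near z) plus a
tautology = W                         # true in every world

print(min_sum_average(A_plus, W))     # ~0.16: closest of the three
print(min_sum_average(B_plus, W))     # ~0.46
print(min_sum_average(tautology, W))  # 0.5: furthest of the three
```

All three propositions share the same truth factor (0), so the ranking is driven entirely by the information factor, which rewards \(A+\) for keeping its worlds close to actuality.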

According to *min-sum-average*: all propositions are
commensurable for truthlikeness; the full principle of the value of
content for truths holds provided the content factor gets non-zero
weight; the Truth has greater truthlikeness than any other proposition
provided all non-actual worlds are some distance from the actual
world; some false propositions are closer to the truth than others;
the principle of the value of content for falsehoods is appropriately
repudiated, provided the truth factor gets some weight; if \(A\)
is false, the truth content of \(A\) is more truthlike than
\(A\) itself, again provided the truth factor gets some weight.
*min-sum-average* thus seems like a consistent and appealing
compromise between content and likeness approaches.

This compatibility result may be too quick and dirty for the following
reason. We laid down a somewhat stringent condition on content-based
measures (namely, the value of content for truths) but we have only
required a very lax, supervenience condition for likeness-based
measures (namely, that the likeness of a proposition to the truth be
*some function or other* of the likeness of the worlds in the
proposition to the actual world). This latter condition allows any old
function of likeness to count. For example, summing the distances of
worlds from the actual world is a function of likeness, but it hardly
satisfies basic intuitive constraints on the likeness of a proposition
to the truth. There might well be more demanding but still plausible
constraints on the likeness approach, and those constraints might
block the compatibility of likeness and content. It also has to be
admitted that there is something a little unsatisfactory with the
rather piecemeal method that was used to arrive at an extension from
distance between worlds to distance from the truth. A better way of
proceeding would be to discover some highly plausible general
principles that any likeness theorist would find compelling, which
would ideally uniquely identify the correct extension.

The following three constraints on any extension of distances between worlds to distances of propositions from the truth have been proposed (Oddie 2013).

First, suppose that all the worlds in the range of \(A\) are exactly the same distance from the actual world. What is the overall distance of \(A\) from the actual world? One very plausible answer is that \(A\) is exactly the same distance as the worlds it contains:

*The uniform distance principle*: If the worlds in the range of \(A\) are of a uniform distance \(d\) from the actual world, then the distance of \(A\) from the actual world is also \(d\).

Note that *average* and *min-max-average* both obey
*uniform distance* while *min-sum-average* does not.
*min-sum-average* is based on the intuition that adding new
disjuncts decreases truthlikeness, unless the new disjunct improves
the minimum distance. For example, on *min-sum-average* if it is
hot and rainy, the false proposition
\((h \amp{\sim}r)\vee({\sim}h \amp r)\) is further from
the truth than either of its two false disjuncts, even though both
disjuncts are the same distance from the truth.
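This behaviour, and the resulting violation of *uniform distance*, can be checked numerically. Here the assumed distance of a weather world is the proportion of atomic claims it gets wrong, an illustrative choice:

```python
def average(distances):
    """Plain averaging: mean distance of a proposition's worlds from actuality."""
    return sum(distances) / len(distances)

def min_sum_average(distances, all_distances, gamma=0.5, gamma_prime=0.5):
    """Weighted average of the closest world's distance and the normalized sum."""
    return gamma * min(distances) + gamma_prime * sum(distances) / sum(all_distances)

# Four weather worlds; assumed distance = proportion of atomic errors.
W = [0.0, 0.5, 0.5, 1.0]        # h&r (actual), h&~r, ~h&r, ~h&~r
disjunct = [0.5]                # h&~r alone
disjunction = [0.5, 0.5]        # (h&~r) v (~h&r): uniform distance 0.5

print(average(disjunction))               # 0.5: obeys uniform distance
print(min_sum_average(disjunct, W))       # 0.375: violates uniform distance
print(min_sum_average(disjunction, W))    # 0.5: further than either disjunct
```

Averaging assigns the uniform distance 0.5 throughout, while *min-sum-average* penalizes the extra disjunct even though it lies at exactly the same distance.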

Let \(A^{v/u}\) be any proposition that differs from \(A\) only in
that it contains \(v\) rather than \(u\), and suppose that \(v\) is
closer to the actual world than \(u\). Clearly \(A^{v/u}\) cannot be
*further* from the actual world than \(A\) is.

This gives us:

*The Pareto principle*: If \(v\) is at least as close to the actual world as \(u\) is, then \(A^{v/u}\) is at least as close to the truth as \(A\) is.

If \(v\) is closer to the actual world than \(u\) is then there should be a difference between the distance of \(A\) from the truth and the distance of \(A^{v/u}\) from the truth. What should that difference depend on? Given that \(A^{v/u}\) differs from \(A\) only over the distance from the actual world of worlds \(u\) and \(v\), the difference in closeness to truth of \(A\) and \(A^{v/u}\) can certainly depend on the distance of \(u\) from the actual world and the distance of \(v\) from the actual world. The following argument shows that the difference may also depend on the size of \(A\).

The smaller \(A\) is, the more the replacement changes what we might
call \(A\)’s distance profile. In the limit, if \(A\) is a
singleton (viz., \(\{u\}\)), \(A^{v/u}\) is also a
singleton (viz., \(\{v\}\)). From the uniform distance principle, we
know that the difference between the distances of \(A\) and of
\(A^{v/u}\) in this case is the difference between the
distance of \(v\) and the distance of \(u\) from the actual
world. And that is the largest difference that replacing \(u\) with
\(v\) could make. The larger \(A\) is the less of an impact the
replacement will have. So size of \(A\) may make a difference to
the impact of replacement. However, we don’t have to stipulate any
particular function here, or even that it be a decreasing function of
the size of \(A\) (as indeed it should be). Rather, we merely allow
that the difference between the two distances is *some function or
other* of these three factors.

*The difference principle*: The difference in closeness to the truth of \(A\) and \(A^{v/u}\) is some function or other of at most three factors: the distance of \(u\) from the actual world; the distance of \(v\) from the actual world; and the size of \(A\).

These three extension principles should each be very attractive to a
likeness theorist. And it is easy to check that averaging satisfies
the extension principles. Interestingly, it can also be shown that
averaging is the only extension to do so.
Any other extension will violate one of the three constraints. By
relaxing the implicit assumption that all worlds are of equal weight,
a generalized argument shows that *weighted average distance* is
the only function to satisfy the extension principles.
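A brute-force check that plain averaging obeys the first two principles on sampled propositions (the random sampling is just an illustrative test harness, not part of the uniqueness proof):

```python
import random

def average(distances):
    """Plain averaging: the distance of a proposition from the truth is
    the mean of its worlds' distances from the actual world."""
    return sum(distances) / len(distances)

random.seed(0)

# Uniform distance: a proposition whose worlds all lie at distance d
# from the actual world is itself at distance d.
for d in [0.0, 0.3, 1.0]:
    for size in [1, 2, 5]:
        assert abs(average([d] * size) - d) < 1e-9

# Pareto: replacing a world u by a world v at least as close to the
# actual world never pushes the proposition further from the truth.
# (For averaging the change is (d(v) - d(u)) / |A|, a function of just
# the three factors named by the difference principle.)
for _ in range(1000):
    A = [random.random() for _ in range(random.randint(2, 6))]
    u = random.choice(A)
    v = random.uniform(0.0, u)          # v is at least as close as u
    A_swapped = list(A)
    A_swapped[A.index(u)] = v
    assert average(A_swapped) <= average(A) + 1e-12

print("averaging obeys uniform distance and Pareto on these samples")
```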

Call a distance/likeness function \(\delta\) *flat* provided
\(\delta vw=1\) if and only if \(v\ne w\). A flat
distance function embodies an extreme version of likeness nihilism
– namely, that as a matter of brute necessity no world is more
like the actual world than is any other. It still counts as a possible
view of likeness, albeit an extreme view, one which is perhaps
supported by a generalized language dependence argument (see section
§1.4.4). Given a flat distance function on worlds, together with
weighted averaging, the distance of proposition \(A\) from the
truth is \(1-(P(T)/P(A))\) if \(A\)
is true, and 1 if \(A\) is false. Since this is generated by a
distance function this measure of distance from the truth falls within
the likeness approach broadly construed, and since we used weighted
averaging, it also satisfies the distance extension principles.
Further, since the ordering delivers the value of content for truths
it falls within the content approach, broadly characterized.
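A quick numerical check, assuming uniform weights on worlds: averaging the flat distance function gives each true proposition \(A\) the distance \(1 - P(T)/P(A)\), and every falsehood the maximal distance 1.

```python
def flat_distance(world, actual):
    """Flat likeness nihilism: every non-actual world is equally (and
    maximally) unlike the actual world."""
    return 0.0 if world == actual else 1.0

def avg_distance(proposition, actual):
    """Equal-weight average distance of a proposition from the truth."""
    return sum(flat_distance(w, actual) for w in proposition) / len(proposition)

worlds = ["w0", "w1", "w2", "w3"]          # w0 is the actual world
P = lambda prop: len(prop) / len(worlds)   # uniform probability measure
truth = ["w0"]                             # the Truth: true only at w0

true_A = ["w0", "w1"]                      # a truth with one extra world
false_A = ["w1", "w2"]                     # a falsehood

print(avg_distance(true_A, "w0"))          # 0.5
print(1 - P(truth) / P(true_A))            # 0.5, matching the formula
print(avg_distance(false_A, "w0"))         # 1.0: absolutely worthless
```

Since stronger truths have smaller \(P(A)\) and hence smaller distance, this ordering delivers the value of content for truths, while every falsehood lands at the maximal distance.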

So, it turns out that the content and likeness approaches are compatible. Indeed Popper’s original ordering satisfies the strictures of both content and likeness approaches. It is obviously a content ordering, and, since averaging the flat distance function induces an extension of Popper’s ordering, it falls within the likeness approach as well. Notice that averaging a flat distance function delivers both kinds of worthlessness for falsehoods. It yields the result that no falsehood is closer to the truth than any other, and no falsehood is closer to the truth than the least truthlike truth. Furthermore, this is not just a peculiar feature of the flat distance function, for at least one half of this result is completely general:

Any ordering of closeness to truth which is derived from averaging a likeness function, and which delivers the value of content for truths, deems all falsehoods to be absolutely worthless.

Although the three approaches are, strictly speaking, compatible,
there is still a deep tension between them. If you accept the three
plausible likeness principles (*uniform distance*, *Pareto*,
and *difference*) then you either have to reject the principle of
the value of content for truths or you have to accept the absolute
worthlessness of falsehoods. The latter is not a serious option. Both
*uniform distance* and *Pareto* seem rather compelling. If
this is right, the choice is between rejecting the *difference*
principle and rejecting the value of content for truths.